Recent Trends in Communication and Intelligent Systems: Proceedings of ICRTCIS 2020 (Algorithms for Intelligent Systems) 9811601666, 9789811601668

This book presents the best selected research papers presented at the International Conference on Recent Trends in Communication and Intelligent Systems (ICRTCIS 2020).


English · Pages: 240 [225] · Year: 2021



Table of contents:
Organisation
Hosted by
International Advisory Committee
National Advisory Committee
Organizing Committee
Local Organizing Committee Chairs
Inaugural Ceremony
Chief Guest
Guest of Honor
Valedictory Ceremony
Chief Guest
Guest of Honor
Keynote Speakers
About Arya College of Engineering & I.T.
Preface
Contents
About the Editors
1 Exploring Feature Selection Using Supervised Machine Learning Algorithms for Establishing a Link Between Pulmonary Embolism and Cardiac Arrest
1 Introduction
1.1 Abbreviations
1.2 Dataset and Data Preprocessing
2 Methodology
2.1 Phase I (Establishment of Connectivity Between Pulmonary Embolism and Cardiac Arrest)
2.2 Phase II (Implementation of Machine Learning)
3 Results
3.1 Univariate Feature Selection
3.2 Support Vector Machine
3.3 Ensemble Classifier
3.4 Bagging Classifier
3.5 Comparison of Bagging Technique with Boosting Technique
3.6 Results of Previous Research
3.7 Overall Results of Proposed System
4 Conclusion and Future Scope
References
2 Speech Signal Compression and Reconstruction Using Compressive Sensing Approach
1 Introduction
2 Literature Survey
3 Compressive Sensing Basic Phenomena
4 Result & Analysis
4.1 Signal-to-Noise Ratio (SNR)
4.2 Compression Ratio
5 Conclusion
Acknowledgement
References
3 Breast Cancer Prediction Using Enhanced CNN-Based Image Classification Mechanism
1 Introduction
2 Related Work
3 Research Motivation and Challenges
3.1 Proposed Work
3.2 Process Flow
4 Results and Discussion
5 Simulation of the Time Consumption
6 Simulation of Accuracy
7 Conclusion and Future Scope
References
4 Dual Band Notched Microstrip Patch Antenna with Three Split Ring Resonator Slots
1 Introduction
2 SRR Slots Antenna Design
3 Result
4 Conclusion
References
5 Effective RF Coverage Planning for WMAN Network Using 5 GHZ Backhaul
1 Introduction
2 Configuration
3 Wireless Network Design
3.1 Backbone Network
4 Results
5 Conclusion and Applications
5.1 Applications
References
6 Security Enhancement of E-Healthcare System in Cloud Using Efficient Cryptographic Method
1 Introduction
1.1 Objective of Our LKH Model
2 Comparative Analysis
3 Methodology
4 Result and Discussion
4.1 Running Time of Encryption
4.2 Running Time of Decryption
5 Conclusion and Future Enhancement
References
7 Development of Low-Cost Indigenous Prototype Profiling Buoy System via Embedded Controller for Underwater Parameter Estimation
1 Introduction
2 Hydrodynamic Shape of Profiling Buoy
3 Embedded Controlling Mechanism
3.1 Embedded Controllers
3.2 Linear Actuator
3.3 SD Module
4 Flotation of Profiling Buoy
5 Field Trials
5.1 Iteration1: PVC Housing
5.2 Iteration2: Iron Housing
5.3 Iteration3: Combination of Both PVC and Iron Housing
6 Embedded Setup and Flotation of Buoy
7 Parameters Estimation and Data Analysis
8 Conclusion
References
8 Analysis of Infant Mortality Rate in India Using Time Series Analytics
1 Introduction
2 Related Work on Time Series in Various Domains
3 Results and Discussions
3.1 Infant Mortality Rate (IMR)
3.2 Crude Death Rate (CDR)
4 Conclusion
References
9 Comparison of Bias Correction Techniques for Global Climate Model Temperature
1 Introduction
2 Methodology
3 Result and Discussion
4 Conclusion
References
10 Identification of Adverse Drug Events from Social Networks
1 Introduction
2 Related Works
2.1 Adverse Drug Event Extraction Using Semi-automated Techniques
2.2 Adverse Drug Event Extraction Using Machine Learning Techniques
2.3 Drug Entity Extraction
2.4 Filtering Negated Adverse Drug Events
3 Overall System
3.1 Data Preprocessing
3.2 Drug Entity Extraction Using NER Approach
3.3 Adverse Drug Event Extraction
3.4 Identifying and Extraction of Relationship Between Drug and Adverse Events
4 Conclusion
Acknowledgements
References
11 A Novel Scheme for Energy Efficiency and Secure Routing Protocol in Wireless Sensor Networks
1 Introduction
2 Related Works
3 Problem Formulation
4 Proposed Energy Efficiency and Secure Routing Protocol
5 Performance Metrics
5.1 Energy Consumption
5.2 Packet Delivery Ratio
5.3 Route Length
6 Conclusion
References
12 AlziHelp: An Alzheimer Disease Detection and Assistive System Inside Smart Home Focusing 5G Using IoT and Machine Learning Approaches
1 Introduction
2 Related Work
3 Preliminaries
4 Proposed System
5 Conclusion
References
13 An Adjustment to the Composition of the Techniques for Clustering and Classification to Boost Crop Classification
1 Introduction
2 Literature Survey
3 Experiment Setup
4 Experiments and Their Results
4.1 Experiment I
4.2 Experiment II
4.3 Comparison of Results from Experiment I and Experiment II
5 Conclusion
References
14 Minimization of Torque Ripple and Incremental of Power Factor in Switched Reluctance Motor Drive
1 Introduction
2 Literature Review
3 Proposed Methodology
4 Results and Discussion
5 Conclusion
References
15 Optimization of Test Case Prioritization Using Automatic Dependency Detection
1 Introduction
2 Related Work
3 Proposed Methodology
3.1 Pre-processing
3.2 Dependency-Extraction
3.3 Prioritization
4 Results and Discussion
5 Conclusion and Future Scope
References
16 Model Selection for Parkinson’s Disease Classification Using Vocal Features
1 Introduction
2 Literature Review
3 Proposed Methodology
3.1 Data-Set
3.2 Feature Selection
3.3 Preparing Data for Modelling
3.4 Hyperparameter Optimization
3.5 Classification
3.5.1 Logistic Regression
3.5.2 K-Nearest Neighbor
3.5.3 Random Forest
3.5.4 Bagging Classifier
3.5.5 XGBoost Classifier
3.5.6 Stacking Classifier
4 Results and Discussion
5 Conclusion
References
17 XAI—An Approach for Understanding Decisions Made by Neural Network
1 Introduction
2 Literature Survey
3 Methodology
4 Results
5 Conclusion and Future Scope
References
18 Hybrid Shoulder Surfing Attack Proof Approach for User Authentication
1 Introduction
2 Proposed Hybrid Authentication Approach
2.1 Registration Phase
2.2 Authentication Region Generation Phase
2.3 Authentication Phase
3 Results and Discussion
4 Conclusion
References
19 Redistribution of Dynamic Routing Protocols (ISIS, OSPF, EIGRP), IPv6 Networks, and Their Performance Analysis
1 Introduction
2 Proposed System
2.1 Creation of Network
2.2 Redistribution of OSPF and EIGRP Network
2.3 Redistribution of OSPF and ISIS Network
2.4 Redistribution Between EIGRP and ISIS Network
3 Performance Evaluation
4 Result
5 Network Modelling
5.1 Redistribution Between EIGRP and ISIS
5.2 Redistribution between OSPF and ISIS through EIGRP
6 Conclusion and Future Enhancement
References
20 Single Layer Tri-UWB Patch Antenna for Osteoporosis Diagnosis and Measurement of Vibrational Resonances in Biomedical Engineering for Future Applications
1 Introduction
2 Antenna Design Specifications and Discussion
3 Simulated Parametric Analysis and Study
4 Result and Discussion
5 Resonance Absorption and Radiative Width
5.1 Resonance in Microtubules and Polarization Potential
6 Conclusion
7 Future Work and Scope
References
21 Development of Novel Evaluating Practices for Subjective Answers Using Natural Language Processing
1 Introduction
2 Literature Survey
3 Application of Natural Language Processing
3.1 Sentiment Analysis
3.2 Automatic Summarization
3.3 Email Classification
3.4 Conversational User Interface
4 Problem Definition
5 Proposed Solution
6 Technical Flow
7 Conclusion
8 Future Work
References
Author Index


Algorithms for Intelligent Systems Series Editors: Jagdish Chand Bansal · Kusum Deep · Atulya K. Nagar

Aditya Kumar Singh Pundir · Anupam Yadav · Swagatam Das, Editors

Recent Trends in Communication and Intelligent Systems Proceedings of ICRTCIS 2020

Algorithms for Intelligent Systems Series Editors Jagdish Chand Bansal, Department of Mathematics, South Asian University, New Delhi, Delhi, India Kusum Deep, Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, India Atulya K. Nagar, School of Mathematics, Computer Science and Engineering, Liverpool Hope University, Liverpool, UK

This book series publishes research on the analysis and development of algorithms for intelligent systems with their applications to various real world problems. It covers research related to autonomous agents, multi-agent systems, behavioral modeling, reinforcement learning, game theory, mechanism design, machine learning, meta-heuristic search, optimization, planning and scheduling, artificial neural networks, evolutionary computation, swarm intelligence and other algorithms for intelligent systems. The book series includes recent advancements, modification and applications of the artificial neural networks, evolutionary computation, swarm intelligence, artificial immune systems, fuzzy system, autonomous and multi agent systems, machine learning and other intelligent systems related areas. The material will be beneficial for the graduate students, post-graduate students as well as the researchers who want a broader view of advances in algorithms for intelligent systems. The contents will also be useful to the researchers from other fields who have no knowledge of the power of intelligent systems, e.g. the researchers in the field of bioinformatics, biochemists, mechanical and chemical engineers, economists, musicians and medical practitioners. The series publishes monographs, edited volumes, advanced textbooks and selected proceedings.

More information about this series at http://www.springer.com/series/16171

Aditya Kumar Singh Pundir · Anupam Yadav · Swagatam Das

Editors

Recent Trends in Communication and Intelligent Systems Proceedings of ICRTCIS 2020




Editors Aditya Kumar Singh Pundir Arya College of Engineering & I.T. Jaipur, India

Anupam Yadav Dr B R Ambedkar National Institute of Technology Jalandhar, India

Swagatam Das Electronics and Communication Sciences Unit Indian Statistical Institute Kolkata, India

ISSN 2524-7565 ISSN 2524-7573 (electronic) Algorithms for Intelligent Systems ISBN 978-981-16-0166-8 ISBN 978-981-16-0167-5 (eBook) https://doi.org/10.1007/978-981-16-0167-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Organisation

Hosted by Department of Electronics and Communication Engineering, Arya College of Engineering & I.T., Jaipur

International Advisory Committee
Dr. Bimal K. Bose, University of Tennessee, USA
Dr. S. Ganesan, Oakland University, USA
Dr. L. M. Patnaik, IIS, Bangalore, India
Dr. Ramesh Agarwal, Washington University, St. Louis
Dr. Vincenzo Piuri, University of Milan, Italy
Dr. Ashoka Bhat, University of Victoria, Canada
Prof. Akhtar Kalam, Victoria University, Australia
Dr. M. H. Rashid, University of West Florida, USA
Dr. Fushuan Wen, Zhejiang University, China
Ir. Dr. N. A. Rahim, UM, Kuala Lumpur, Malaysia
Dr. Tarek Bouktir, University of Setif, Algeria

National Advisory Committee
Dr. S. N. Joshi, CSIR-CEERI, Pilani
Dr. Vineet Sahula, MNIT, Jaipur, India
Dr. K. J. Rangra, CSIR-CEERI, Pilani, India
Dr. R. K. Sharma, CSIR-CEERI, Pilani, India
Dr. Vijay Janyani, MNIT, Jaipur, India
Dr. K. R. Niazi, MNIT, Jaipur, India
Dr. V. K. Jain, DRDO, India
Dr. Manoj Kumar Patairiya, Director, CSIR-NISCAIR, India
Dr. Sanjeev Mishra, RTU, Kota
Prof. Harpal Tiwari, MNIT, Jaipur
Dr. S. Gurunarayanan, BITS, Pilani
Dr. Ghanshyam Singh, MNIT, Jaipur
Prof. Satish Kumar, CSIR-CSIO, Chandigarh
Dr. Kota Srinivas, CSIR-CSIO, Chennai Centre
Sh. Anand Pathak, SSME, India
Sh. Ulkesh Desai, SSME, India
Sh. Ashish Soni, SSME, India
Sh. R. M. Shah, SSME, India
Ach. (Er.) Daria S. Yadav, ISTE, Rajasthan and Haryana Section
Er. Sajjan Singh Yadav, IEI, Jaipur
Er. Gautam Raj Bhansali, IEI, Jaipur
Dr. J. L. Sehgal, IEI, Jaipur
Smt. Annapurna Bhargava, IEI, Jaipur
Smt. Jaya Vajpai, IEI, Jaipur
Dr. Hemant Kumar Garg, IEI, Jaipur
Er. Gunjan Saxena, IEI, Jaipur
Er. Sudesh Roop Rai, IEI, Jaipur
Dr. Manish Tiwari, IETE Rajasthan Centre, Jaipur
Dr. Dinesh Yadav, IETE Rajasthan Centre, Jaipur
Dr. Jitendra Kumar Deegwal, Government Women Engineering College, Ajmer

Organizing Committee

Chief Patrons
Smt. Madhu Malti Agarwal, Chairperson, Arya Group
Er. Anurag Agarwal, Group Chairman

Patrons
Prof. Dhananjay Gupta, Chairman, Governing Body
Prof. Arun Kumar Arya, Principal, ACEIT, Jaipur

General Chair
Dr. Anupam Yadav, Dr. B. R. Ambedkar NIT, Jalandhar
Dr. Swagatam Das, ISI Kolkata
Dr. Vibhakar Pathak, ACEIT, Jaipur

Conveners
Dr. Kirti Vyas, ACEIT, Jaipur
Mr. Sachin Chauhan, ACEIT, Jaipur
Er. Ankit Gupta, ACEIT, Jaipur
Er. Vivek Upadhyaya, ACEIT, Jaipur

Organizing Chair and Secretaries
Dr. Rahul Srivastava, ACEIT, Jaipur
Dr. Nitin Sharma, NIT Uttarakhand
Dr. Aditya Kumar S. Pundir, ACEIT, Jaipur

Special Session Chair
Dr. Sarabani Roy, Jadavpur University, Kolkata
Dr. Nirmala Sharma, RTU, Kota
Dr. Irum Alvi, RTU, Kota
Dr. S. Mekhilef, University of Malaya, Malaysia

Local Organizing Committee Chairs
Prof. Manu Gupta, ACEIT, Jaipur
Prof. Akhil Pandey, ACEIT, Jaipur
Prof. Prabhat Kumar, ACEIT, Jaipur
Prof. Shalani Bhargava, ACEIT, Jaipur
Dr. Pawan Bhambu, ACEIT, Jaipur
Shri Ramcharan Sharma, ACEIT, Jaipur

Inaugural Ceremony

Chief Guest
Prof. (Dr.) Dhirendra Mathur, TEQIP III Coordinator and Professor, Rajasthan Technical University, Kota, Rajasthan

Guest of Honor
Dr. Kusum Deep, Professor, Department of Mathematics, Indian Institute of Technology Roorkee, India

Valedictory Ceremony

Chief Guest
Dr. Jagdish Chand Bansal, Associate Professor, Department of Mathematics, South Asian University, India

Guest of Honor
Dr. Swagatam Das, Associate Professor, Electronics and Communication Sciences Unit, Indian Statistical Institute, India


Keynote Speakers
Prof. (Dr.) Jemal H. Abawajy, Full Professor, Faculty of Science Engineering and Built Environment, Deakin University, Australia
Dr. Jagdish Chand Bansal, Associate Professor, Department of Mathematics, South Asian University, India
Dr. Swagatam Das, Associate Professor, Electronics and Communication Sciences Unit, Indian Statistical Institute, India
Dr. Anupam Yadav, Assistant Professor, Department of Mathematics, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, India
Dr. Rajesh Kumar, Professor, Department of Electrical Engineering, Malaviya National Institute of Technology, Jaipur, India
Dr. Harish Sharma, Associate Professor, Rajasthan Technical University, Kota, Rajasthan, India
Prof. (Dr.) Anirban Das, Professor, Computer Science and Engineering, UEM, Kolkata, India

About Arya College of Engineering & I.T.

Arya College of Engineering & I.T. was established in the year 2000, under the aegis of the All India Arya Samaj Society of Higher and Technology Education, by Late Er. T. K. Agarwal Ji, Founder Honorable Chairman. Arya has taken a strong stand among the topmost private engineering institutes in the state of Rajasthan. The group is a blend of innovation, perfection, and creation. The college is spread over a splendid 25 acres of land, providing state-of-the-art infrastructure with well-equipped laboratories, modern facilities, and the finest education standards. For a decade, the "Arya College 1st Old Campus" has been known to set a benchmark with its specialized excellence, innovative approach, participative culture, and academic rigor. The Management of Arya College has the right bent for innovation and an accurate knack for getting these innovative ideas implemented. Globally accredited for its professional approach to technical education, Arya College makes special efforts to recruit trained faculty and runs rigorous admission procedures to select potential prospects across the country, who are then trained to become a pool of skilled intellectual capital for the nation. This fosters a healthy and dynamic exchange which incubates leaders for the corporate world. The strong industry linkages ultimately go a long way in providing a holistic approach to research and education. Arya College is a pioneer in the field of technical education and was the first engineering college in the city of Jaipur, Rajasthan. Arya College of Engineering & I.T. was the first college to start a regular M.Tech. program in the state of Rajasthan, in the year 2006. Arya College of Engineering & Information Technology (ACEIT), Kukas, Jaipur, established in the year 2000, is among the foremost institutes of national significance in higher technical education, approved by AICTE, New Delhi, and affiliated with Rajasthan Technical University, Kota, Rajasthan. It is commonly known as "ARYA 1st OLD CAMPUS" and "ARYA 1st." The institute ranks among the best technological institutions in the state and has contributed to all sectors of technical and professional development. It has also been considered a leading light in the area of education and research.


Preface

This volume comprises papers presented at the 2nd International Conference on Recent Trends in Communication and Intelligent Systems (ICRTCIS 2020), held during November 20–21, 2020. The presented papers cover a wide range of selected topics related to intelligent systems and communication networks, including intelligent computing and converging technologies, intelligent system communication and sustainable design, and intelligent control, measurement, and quality assurance. This volume of Recent Trends in Communication and Intelligent Systems (ICRTCIS 2020), in the Algorithms for Intelligent Systems series, brings together 21 of the presented papers. Each of them presents new approaches and/or evaluates methods on real-world problems and exploratory research describing novel approaches in the field of intelligent systems. ICRTCIS 2020 received 115 submissions across all tracks, and 29 of them were shortlisted for presentation. The salient feature of ICRTCIS 2020 is to promote research with a view to bringing academia and industry closer. The Advisory Committee of ICRTCIS 2020 comprises senior scientists, scholars, and professionals from reputed institutes, laboratories, and industries around the world. The technical sessions featured peer-reviewed paper presentations. In addition, keynote addresses by eminent research scholars and invited talks by technocrats were organized during the conference. The conference is strongly believed to have ignited the minds of young researchers to undertake more interdisciplinary and collaborative research. The editors trust that this volume will be useful and interesting to readers for their own research work.

Jaipur, India
Jalandhar, India
Kolkata, India
December 2020

Aditya Kumar Singh Pundir, Anupam Yadav, Swagatam Das


Contents

1 Exploring Feature Selection Using Supervised Machine Learning Algorithms for Establishing a Link Between Pulmonary Embolism and Cardiac Arrest (Naira Firdous, Sushil Bhardwaj, and Amjad Husain Bhat) 1
2 Speech Signal Compression and Reconstruction Using Compressive Sensing Approach (Vivek Upadhyaya and Mohammad Salim) 11
3 Breast Cancer Prediction Using Enhanced CNN-Based Image Classification Mechanism (Kumar Rahul, Rohitash Kumar Banyal, Vikas Malik, and Diksha) 19
4 Dual Band Notched Microstrip Patch Antenna with Three Split Ring Resonator Slots (Eshita Gupta and Anurag Garg) 29
5 Effective RF Coverage Planning for WMAN Network Using 5 GHZ Backhaul (Chiluveru Anoop, Tomar Ranjeet Singh, Sharma Mayank, and Chiluveru Ashok Kumar) 35
6 Security Enhancement of E-Healthcare System in Cloud Using Efficient Cryptographic Method (N. Rajkumar and E. Kannan) 47
7 Development of Low-Cost Indigenous Prototype Profiling Buoy System via Embedded Controller for Underwater Parameter Estimation (Pedagadi V. S. Sankaracharyulu, Munaka Suresh Kumar, and Ch Kusma Kumari) 57
8 Analysis of Infant Mortality Rate in India Using Time Series Analytics (D. Jagan Mohan Reddy and Shaik Johny Basha) 71
9 Comparison of Bias Correction Techniques for Global Climate Model Temperature (Shweta Panjwani, S. Naresh Kumar, and Laxmi Ahuja) 81
10 Identification of Adverse Drug Events from Social Networks (A. Balaji, S. Sendhilkumar, and G. S. Mahalakshmi) 85
11 A Novel Scheme for Energy Efficiency and Secure Routing Protocol in Wireless Sensor Networks (R. Senthil Kumaran, R. Dhanyasri, K. Loga, and M. P. Harinee) 95
12 AlziHelp: An Alzheimer Disease Detection and Assistive System Inside Smart Home Focusing 5G Using IoT and Machine Learning Approaches (Md. Ibrahim Mamun, Afroza Rahman, M. F. Mridha, and M. A. Hamid) 105
13 An Adjustment to the Composition of the Techniques for Clustering and Classification to Boost Crop Classification (Ankita Bissa and Mayank Patel) 115
14 Minimization of Torque Ripple and Incremental of Power Factor in Switched Reluctance Motor Drive (E. Fantin Irudaya Raj and M. Appadurai) 125
15 Optimization of Test Case Prioritization Using Automatic Dependency Detection (Sarika Chaudhary and Aman Jatain) 135
16 Model Selection for Parkinson’s Disease Classification Using Vocal Features (Mrityunjay Abhijeet Bhanja, Sarika Chaudhary, and Aman Jatain) 145
17 XAI—An Approach for Understanding Decisions Made by Neural Network (Dipti Pawade, Ashwini Dalvi, Jash Gopani, Chetan Kachaliya, Hitansh Shah, and Hitanshu Shah) 155
18 Hybrid Shoulder Surfing Attack Proof Approach for User Authentication (Dipti Pawade and Avani Sakhapara) 167
19 Redistribution of Dynamic Routing Protocols (ISIS, OSPF, EIGRP), IPv6 Networks, and Their Performance Analysis (B. Sathyasri, P. Janani, and V. Mahalakshmi) 179
20 Single Layer Tri-UWB Patch Antenna for Osteoporosis Diagnosis and Measurement of Vibrational Resonances in Biomedical Engineering for Future Applications (Khalid Ali Khan and Aravind Pitchai Venkataraman) 193
21 Development of Novel Evaluating Practices for Subjective Answers Using Natural Language Processing (Radha Krishna Rambola, Atharva Bansal, Parth Savaliya, Vaishali Sharma, and Shubham Joshi) 205

Author Index 219

About the Editors

Dr. Aditya Kumar Singh Pundir (B.E., M.Tech., and Ph.D.) is currently working as Professor and Mentor of Incubation and IPR Activities in the Department of Electronics and Communication Engineering at Arya College of Engineering and IT, Jaipur. His current research interests include IoT-based memory testing, built-in self-test using embedded system design, machine learning, algorithms for CAD, and signal processing. He has published three patents, several book chapters, and three books, and has also authored more than 20 research papers in peer-reviewed refereed journals and conferences. Dr. Pundir was Organizing Chair, Convener, and Member of the steering committee of several international conferences. Dr. Pundir is a Life Member of the Indian Society for Technical Education (ISTE) and the Computer Society of India (CSI), and a Professional Member of the Association for Computing Machinery (ACM) and IEEE.

Dr. Anupam Yadav is Assistant Professor, Department of Mathematics, Dr. B. R. Ambedkar National Institute of Technology Jalandhar, India. His research areas include numerical optimization, soft computing, and artificial intelligence. He has more than ten years of research experience in the areas of soft computing and optimization. Dr. Yadav earned a Ph.D. in soft computing from the Indian Institute of Technology Roorkee and worked as Research Professor at Korea University. He has published more than twenty-five research articles in journals of international repute and more than fifteen research articles in conference proceedings. Dr. Yadav has authored a textbook entitled An Introduction to Neural Network Methods for Differential Equations. He has edited three books published in the AISC series, Springer. Dr. Yadav was General Chair, Convener, and Member of the steering committee of several international conferences. He is a member of various research societies.

Dr. Swagatam Das received the B.E. Tel. E., M.E. Tel. E. (Control Engineering specialization), and Ph.D. degrees, all from Jadavpur University, India, in 2003, 2005, and 2009, respectively. Swagatam Das is currently serving as Associate Professor at the Electronics and Communication Sciences Unit of the Indian Statistical Institute, Kolkata, India. His research interests include evolutionary computing, pattern recognition, multi-agent systems, and wireless communication. Dr. Das has published more than 300 research articles in peer-reviewed journals and international conferences. He is the founding co-editor-in-chief of Swarm and Evolutionary Computation, an international journal from Elsevier. He has also served, or is serving, as associate editor of Pattern Recognition (Elsevier), Neurocomputing (Elsevier), Information Sciences (Elsevier), IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Computational Intelligence Magazine, IEEE Access, and so on. He is an Editorial Board Member of Progress in Artificial Intelligence (Springer), Applied Soft Computing (Elsevier), Engineering Applications of Artificial Intelligence (Elsevier), and Artificial Intelligence Review (Springer). Dr. Das has 16,500+ Google Scholar citations and an H-index of 62 to date. He has been associated with the international program committees and organizing committees of several regular international conferences including IEEE CEC, IEEE SSCI, SEAL, GECCO, and SEMCCO. He has acted as guest editor for special issues in journals such as IEEE Transactions on Evolutionary Computation and IEEE Transactions on SMC, Part C. He is the recipient of the 2012 Young Engineer Award from the Indian National Academy of Engineering (INAE). He is also the recipient of the 2015 Thomson Reuters Research Excellence India Citation Award as the highest cited researcher from India in the Engineering and Computer Science category between 2010 and 2014.

Chapter 1

Exploring Feature Selection Using Supervised Machine Learning Algorithms for Establishing a Link Between Pulmonary Embolism and Cardiac Arrest

Naira Firdous, Sushil Bhardwaj, and Amjad Husain Bhat

1 Introduction

The exponential increase in advancements and improvisations in technology over the past few years has ushered medical representatives into a data-driven era where a huge amount of data is generated every day. This has created a strong demand for analytical and technological upgradation of the existing system. The first attempt to automate medical procedures was made in the form of "expert systems." However, they failed in the field of artificial intelligence and hence were rejected, as they lacked the proficiency of elucidating the logic and inducement behind a decision. A new dawn began in the field of artificial intelligence with the arrival of machine learning. Machine learning algorithms can be employed for the assessment of cardiovascular disorders and for the prediction of cardiovascular events. Machine learning involves artificial intelligence and is used to solve a wide range of problems in data science. Chayakrit Krittanawong et al. [1] have applied AI techniques to inspect novel genotypes and phenotypes in existing diseases. S. Shrestha et al. [2] have worked on the diagnosis of heart failure with preserved ejection fraction using machine learning. Chetankumar et al. [3] have worked on heart rate variability (HRV) parameters to predict cardiac arrest in smokers using machine learning. R. Alizadehsani et al. [4] have used a dataset called Z-Alizadeh Sani with 303 patients and 54 features. G. Hinton et al. [5] show expert systems and graphical models that attempted to automate the reasoning processes of experts. Cowger Matthews et al. [6] have given a circumstantial review of the pathophysiology by studying acute right ventricular failure in the setting of acute pulmonary embolism. Ebrahim Laher et al. [7] have presented a paper on cardiac arrest due to pulmonary embolism (PE), with a mortality rate of 30%. Bizopoulos et al. [8] have presented a review paper on deep learning in cardiology. Kim et al. [9] have proposed a cardiovascular disease prediction model using the Sixth Korea National Health survey. Carlos Cano-Espinosa et al. [10] observed a CAD method for pulmonary embolism (PE). The method of Pawar et al. [11] showed results with improved efficiency for CA. Yao et al. [12] checked the instant heart rate sequence and then fed it to an end-to-end multi-scale CNN which outputs the AF detection result, arriving at better results than previous methods.

1.1 Abbreviations

Systolic blood pressure (ap_hi), diastolic blood pressure (ap_lo), pulse pressure (Pp), stroke volume (sv), pulmonary embolism (pe), support vector machine (SVM), Naive Bayes (NB), K-nearest neighbor (KNN).

1.2 Dataset and Data Preprocessing

In this paper, we worked on a modified UCI dataset, as the UCI dataset for cardiology was not ample to carry out our work. We derived extra input features from the pre-existing features by following all medical protocols, the details of which are mentioned in Phase I of the methodology. This newly created dataset was formed with the help of the UCI cardiology dataset. This is the first study where this database has been used to establish a link between cardiac arrest and pulmonary embolism using machine learning. Previous work has been done on cardiovascular diseases, without their association with pulmonary embolism, with an accuracy of 99%.

2 Methodology

2.1 Phase I (Establishment of Connectivity Between Pulmonary Embolism and Cardiac Arrest)

The major condition of pulmonary embolism is a drop in stroke volume (the amount of blood ejected by the ventricles at the end of the cardiac cycle). This stroke volume is directly proportional to the pulse pressure, the difference between [ap_hi] and [ap_lo].

Fig. 1 Flowchart shows the connectivity between cardiac arrest and pulmonary embolism ((ap_hi) − (ap_lo) → pulse pressure (Pp); if Pp < 25% of (ap_hi), then sv = 0 and pe = 1)

Pp = (ap_hi) − (ap_lo)    (1)

Pp = sv/C    (2)

Pulse pressure is considered critically low if it is less than 25% of the systolic blood pressure (ap_hi). Using the above biological equations, we formulated a medical algorithm. With the addition of the new variables, the attributes in the dataset increased to 18, with 70 K records. We propose our algorithm, as shown in Fig. 1, for describing a link between pulmonary embolism and cardiac arrest; a code sketch of the labeling rule follows the list.
• In the first step, we calculated the pulse pressure (Pp) with the help of the systolic and diastolic blood pressures (Pp = [ap_hi] − [ap_lo]).
• Pp is directly proportional to the stroke volume (sv). So, a decrease in stroke volume will cause a decrease in the amount of blood ejected out of the left ventricle during each systolic cardiac contraction, thereby leading to the formation of emboli, which may result in cardiac arrest.
• The value of stroke volume may be either 0 (low) or 1 (high) depending upon the two conditions on Pp:
1. If Pp < 25% of ap_hi, sv = 0 (low), indicating that the cause of cardiac arrest is pulmonary embolism.
2. If Pp > 25% of ap_hi, sv = 1 (high), indicating that pe is not a cause of CA.
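A minimal Python sketch of this Phase I labeling rule is given below. The column names (ap_hi, ap_lo) follow the abbreviations in Sect. 1.1; the toy rows and the derived column names (pp, sv, pe) are illustrative assumptions, not the authors' exact preprocessing code.

```python
# Minimal sketch of the Phase I rule, assuming a pandas DataFrame with the
# UCI-style blood pressure columns ap_hi and ap_lo (toy rows shown here).
import pandas as pd

df = pd.DataFrame({"ap_hi": [120, 180, 140], "ap_lo": [80, 170, 90]})

df["pp"] = df["ap_hi"] - df["ap_lo"]       # Eq. (1): pulse pressure
critical = df["pp"] < 0.25 * df["ap_hi"]   # critically low pulse pressure
df["sv"] = (~critical).astype(int)         # sv = 0 (low) when Pp is critical
df["pe"] = critical.astype(int)            # pe = 1: embolism-linked arrest
print(df)
```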

2.2 Phase II (Implementation of Machine Learning)

• Univariate Feature Extraction: The univariate method was employed to rank and select a handful of input features. This statistical tool selects the input features that are most correlated with the target variable. This step reduces the training time, thereby improving efficiency (Fig. 2).

Fig. 2 Selection of best input feature with univariate technique (set of input features → univariate technique → best feature)

• Support Vector Machine: The support vector machine falls under the category of supervised machine learning algorithms. A linear kernel was used to train the model. We encountered the problem of overfitting with this model; in order to overcome the overfitting problem, we formulated an ensemble classifier (Fig. 3).

Fig. 3 Correction of overfitting problem by replacing single machine learning model with boosting ensemble classifier (SVM → overfitting → replacement → ensemble classifier)

• Ensemble Classifier (Boosting Algorithm): We created an ensemble classifier in order to make a strong classifier from a number of weak classifiers. We combined the support vector machine with two other machine learning models by means of hard voting. The ensemble model shown in Fig. 4 is built using multiple machine learning algorithms, namely SVM, KNN, and Naive Bayes. Ensemble learning is used to aggrandize the execution of machine learning models by amalgamating several learners. When compared to a single model, this type of learning builds a model with improved efficiency and accuracy, with proper precision in hand.

Fig. 4 Ensemble classifier (75% training data feeds the KNN, SVM, and NB base learners; a hard voting classifier combines their predictions; evaluation is done on the remaining 25% testing data)

• Bagging Technique: We also worked with random forest. In the bagging technique, we have many base learners; in the case of random forest, the default base learners are decision trees.

3 Results

The main aim of this work is to predict whether a patient is likely to have cardiac disease due to pulmonary embolism or not. We performed a comparative analysis of various machine learning algorithms on the medical algorithm.

3.1 Univariate Feature Selection

We started with univariate selection, a feature extraction method. For implementing univariate selection, the SelectKBest class from Scikit-learn was used. The 'K' value in SelectKBest selects the K best attributes with respect to the output. Here, we extracted the ten best features (k = 10) from our dataset with respect to the target output. Internally, a chi-square (χ²) test is performed. After calling the fit function, we obtained the scores_ attribute, which holds the chi-square (χ²) test score of each feature with respect to the target output. The higher the score, the more important the feature is. We observed that stroke volume and pulse pressure showed high correlation with the target output (Fig. 5); a short code sketch follows.
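A minimal sketch of this step, assuming the preprocessed dataset is available as a CSV file with a binary target column; the file name and column name here are hypothetical.

```python
# Univariate feature selection with SelectKBest and the chi-square test,
# keeping the 10 highest-scoring features (chi2 requires non-negative inputs).
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

df = pd.read_csv("cardio_pe_dataset.csv")      # hypothetical file name
X = df.drop(columns=["cardio"])                # hypothetical target column
y = df["cardio"]

selector = SelectKBest(score_func=chi2, k=10)
selector.fit(X, y)

scores = pd.Series(selector.scores_, index=X.columns)
print(scores.sort_values(ascending=False).head(10))  # higher score = stronger link
```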

Fig. 5 Ten features which are very much correlated with the target output (chi-square scores, y-axis 0–30,000)

3.2 Support Vector Machine

We performed a comparative analysis of machine learning algorithms and encountered the problem of overfitting while dealing with SVM. The reason is that SVM, in the training phase, not only considers the original data but also treats noise as useful data.

3.3 Ensemble Classifier

In order to overcome this problem, we proposed an ensemble method in which we combined several machine learning models by using a voting classifier. We trained our KNN with k = 5, our SVM model was trained using a linear kernel, and these were combined with Gaussian Naive Bayes. We arrived at a high efficiency of 99.9% (Table 1).
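A sketch of this hard-voting ensemble under the 75/25 split of Fig. 4. The synthetic data stands in for the feature matrix X and labels y from Sect. 1.2, since the authors' exact pipeline is not shown.

```python
# Hard-voting ensemble of a linear-kernel SVM, KNN (k = 5), and Gaussian
# Naive Bayes, evaluated on a 75/25 train/test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

voter = VotingClassifier(
    estimators=[("svm", SVC(kernel="linear")),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("nb", GaussianNB())],
    voting="hard")                    # majority vote of the three learners
voter.fit(X_tr, y_tr)
print("test accuracy:", voter.score(X_te, y_te))
```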

Table 1 Accuracy of ensemble classifier

Classifier          | Accuracy (%)
Ensemble classifier | 99.99

3.4 Bagging Classifier

We also checked the accuracy by incorporating a bagging technique, random forest, with decision trees as the base learners. We also checked sensitivity and specificity (Fig. 6, Table 2); a code sketch is given after Table 2.

• Sensitivity: proportion of true positives (cardiac arrest due to pulmonary embolism) with a positive test result.

Sensitivity = TP/(TP + FN)    (3)

• Specificity: proportion of true negatives (cardiac arrest due to other factors) with a negative test result.

Specificity = TN/(TN + FP)    (4)

Fig. 6 Pie chart shows the distribution of specificity, sensitivity, and accuracy of bagging classifier

Table 2 Values of sensitivity, specificity, and accuracy of bagging classifier

Classifier         | Sensitivity | Specificity | Accuracy (%)
Bagging classifier | 0.9577      | 0.965       | 0.964

ð4Þ

Comparison of Bagging Technique with Boosting Technique

Table 3 provides the information about the base learners of ensemble and random forest along with their accuracy (Fig. 7).

Table 3 Comparison of accuracy of ensemble and bagging classifier

Classifier

Base Learners

Accuracy (%)

Ensemble classifier Random forest

Svm + NB + Knn Decision trees

99.9 96.4

8

N. Firdous et al.

Fig. 7 Comparison of accuracy of boosting and bagging classifiers

Accuracy 100 99 98 Accuracy

97 96 95 Ensemble Classifier

3.6

Bagging Classifier

Results of Previous Research

Previous work has been done on heart failure but without its association with pulmonary embolism. Results of heart failure and pulmonary embolism have been displaced in the two tables mentioned below (Tables 4 and 5). Authors have attained an accuracy of 99%. Work has been carried on CSV dataset as well as on image files for detection of cardiovascular diseases and pulmonary embolism. Table 4 Results of previous research on pulmonary embolism Author

Technique

Accuracy (%)

Rucco et al. [13]

Introduced an approach for the analysis of partial and incomplete dataset based on Q analysis Used ANN for prediction of pulmonary Embolism Deep learning CNN model Deep learning CNN model Deep learning CNN model

94

Agharezaei et al. [14] Chen et al. [15] Weifang et al. [16] Remy et al. [17]

93.23 99 92.6 99

Table 5 Results of previous research on heart failure Author

Technique

Accuracy (%)

R. Kannan et al. [18] Rahma et al. [19] Divya et al. [20]

Logistic regression, random forest, stochastic gradient boosting, SVM Hard voting ensemble method

86, 80, 84 79

Random forest, decision tree, KNN

Liaqat et al. [21] Ashier et al. [22]

Linear SVM RSA based random forest

96.80, 93.33, 91.52 90 90

90

1 Exploring Feature Selection Using Supervised Machine Learning … Table 6 Overall accuracy of proposed system

3.7

Classifier

Accuracy (%)

Ensemble(hard voting) Bagging

0.993 96.4

9

Overall Results of Proposed System

In this work, two medical fields have been combined in order to extract out the cause and effect relationship between them using machine learning (Table 6).

4 Conclusion and Future Scope This paper contributes the correlative application and analysis of distinct machine learning algorithms in the Python software. To sum up, preprocessing is a crucial step in machine learning which helps to attain accurate results. The aim of this work was the application and the comparison of machine learning algorithms with divergent operational metrics and to ameliorate the efficiency by addressing the overfitting problem due to the SVM algorithm. We concluded that boosting and bagging algorithms performed well and showed promising results as compared to single machine learning algorithm. Implementation of machine learning for identifying pulmonary embolism as a cause of cardiac arrest can greatly prove helpful in saving a number of lives. People particularly belonging to lower section of the society need not to go for further investigations viz CT scans and other expensive medical examinations, so that the poor people who are not in a position to undertake such expensive medical treatments or investigations are saved from incurring heavy financial expenditure. Automating the medical procedure will also help medical representatives from maintaining bulk of data in the form of records of patients on this account. With the help of UCI dataset for cardiology, we created our own dataset to carry out this research on pulmonary embolism, a cause of heart failure using machine learning. In future, we will implement reinforcement learning on our dataset.

References 1. ChayakritKrittanawong (2017) Artificial Intelligence in Precision Cardiovascular Medicine. J Am CollE Cardiol 30 May 2. Shrestha S, Sengupta PP (2018) Machine learning for nuclear cardiology: The way forward 3. Shashikant R, Chetankumar P (2019) Predictive model of cardiac arrest in smokers using machine learning technique based on Heart Rate Variability parameter. J Appl Comput Inform 22 June

10

N. Firdous et al.

4. Alizadehsani R, Habib J, Javad Hosseini M, Hoda Mashayekhi R, Boghrati (2013) A data mining approach for diagnosis of coronary artery disease 5. G. Hinton (2018) Deep learning—a technology with the potential to transform health care. JAMA 6. Cowger Matthews J, McLaughlin V (2018) Acute Right Ventricular Failure in the Setting of Acute Pulmonary Embolism or Chronic Pulmonary Hypertension. Bentham Science Publication, February 7. Ebrahim Laher E (2018) Cardiac arrest due to pulmnory embolism–Science Direct. Indian Hear J, October 8. Bizopoulos P, Koutsouris D (2019) Deep Learning in Cardiology. IEEE Review 9. Kim J, Kang U, Lee Y (2017) Statistics and deep belief network based cardiovascular risk prediction. Healthc Inform Res 23(3):169–175 10. Cano-Espinosa C, Cazorla M, Gonzalez G (2020) Computed Aided Detection of Pulmonary Embolism Using Multi-Slice Multi-Axial Segmentation, MDPI 11. Singh S, Pandey S, Pawar U, Ram Janghel R (2018) Classification of ECG arrhythmia using recurrent neural networks. Science Direct 12. Yao Z, Zhu Z, Chen Y (2017) Atrial fibrillation detection by multiscale convolution neural networks. In: Information Fusion (Fusion), 2017, 20th International Conference IEEE 13. Rucco M, Sousa-Rodrigues D, Merelli E, Johnson JH (2015) A Neural hypernetwork approach for Pulmonary Embolism diagnosis. BMC Res Notes 8(1):617 14. Agharezaei L, Agharezaei Z, Nemati A, Bahaadinbeigy K, Keynia F, Baneshi MR (2016) The Prediction of the risk level of Pulmonary Embolism & Deep Venus Thrombosis through Artificial Neural Network. Acta Information Med 24(5):354–359 15. Chen MC, Ball RL, Yang L, Moradzadeh N, Chapman BE, Larson DB, Langlotz CP, Amrhein TJ, Lungren MP (2017) Deep learning to classify Radiology free-text reports. Radiology 286(3):845–852 16. Liu W, Liu M (2020) Evaluation of acute Pulmonary Embolism & Clot burden on CTPA with deep learning. In Imaging Informatics & Artificial Intelligence, Springer 17. Remy-Jardin M, Faivre JB (2020) Machine Leraning & Deep Neural Network Application in Thorax. J Thorac Imaging 18. Kannan R, Vasanthi V (2018) Machine Learning Algorithms with ROC Curve for Predicting &Diagnosing the heart disease. In Springer Briefs in Applied Science and Technology 19. Atallah R, Al-Mousa A (2019) Heart Disease Detection using Machine Learning Majority Voting Ensemble Method. In 2019 IEEE 20. Krishnani D, Kumari A, Dewangan A (2019) Prediction of Coronary Heart Disease Using Supervised Machine Learning Algorithm 2019 IEEE 21. Ali L, Ullah Khan S (2019) Early detection of Heart Failure by Reducing the time complexity of Machine Learning based predictive Model. In: 1st International Conference on Electronics & Computer Engineering 22. Ashier SZ, Yongjian L (2019) An Intelligent learning System based on Random Search Algorithm & Optimized Random Forest Model for Improving heart Disease detection. In: IEEE Explore

Chapter 2

Speech Signal Compression and Reconstruction Using Compressive Sensing Approach

Vivek Upadhyaya and Mohammad Salim

1 Introduction

Recently, compressed sensing (CS) has gathered a lot of attention in applied mathematics, electrical engineering, and computer science by suggesting the possibility of surpassing the traditional limits of sampling theory. This section gives a review of the main theory underlying CS, starting, after a historical review, with a discussion of sparsity and other low-dimensional signal models. Sensing of a signal along with its compression is a technique that is very popular nowadays, because compression is used in every area of data processing and storage. By using a sensing technique, we can easily extract information from a source of information [1, 2]. A digital camera, which captures an image and can easily give the value of each pixel, also has sensing capability. As sensing is a very important process, it also imposes some requirements. The data we obtain by sensing has to undergo processing in a digital system, and the governing principle for that processing is the Nyquist theorem; this theorem is considered the basic benchmark in digital signal processing for reconstructing a sampled signal properly. As stated by Nyquist, the sampling frequency should be twice the maximum frequency component present in the signal. A bandwidth chosen as per the Nyquist principle allows the compressed or sampled signal to be reconstructed with high accuracy and very few noisy components. But this traditional theory has some complexity and limitations; the limitation which poses a severe problem with Nyquist is that the bandwidth or sampling rate required to reconstruct the signal is much higher than the maximum frequency component of the original signal. This is not feasible in every case, because the amount of data to be handled also becomes large if we want to transmit it. To overcome this problem, a new approach named compressive sensing is taken into consideration [3, 4]. This approach can take very few samples/measurements from the original signal and reconstruct the original signal without any noise, so by using this approach we do not have to store a large amount of data. The basic concept behind compressive sensing is sparsity. Sparsity and compressibility are two fundamental concepts that play a significant role in various fields of science and mathematics. Sparsity can be considered a guide for approximation theory, which is very efficient, and algorithms named shrinkage and thresholding also depend upon sparsity. Sparsity provides a very good estimate of the achievable compression level. The exactness of transform coding depends upon the sparsity level of the signal. The reduction of dimensions with effective modeling can also be done through sparsity. Sparsity can also be used in data acquisition, where it provides efficient protocols [5, 6]. If one wants to picture compressive sensing, consider a digital camera which has millions of sensors for imaging; the pixel quality is very high, but finally the picture is encoded in a few hundred KB. A very relevant question arises here: if most signals show a compressible nature, why do we have to collect the whole data if only a small portion of it is needed and the rest will be discarded [7–9]?

2 Literature Survey

In [10], the goal of the authors is to reduce background noise in speech processing (also called speech enhancement). The methodology proposed by the authors is based on a quasi-signal-to-noise-ratio criterion. To guarantee the value of the sparsity, a redundant K-SVD dictionary is used. OMP is used to find the desired number of atoms for a given sparsity level. SNR and PESQ values are compared in this analysis. The authors of [11] propose an approach which is used to encrypt and compress the speech signal in a single step. A compressive sensing technique is used for the compression after the encryption. To enhance the sparsity of the signal, contourlet transforms are used. The chaotic map method is used to enhance the key size for encryption and decryption purposes. Both SNR and PESQ are calculated to show the effectiveness of the proposed approach. Thong T. Do and Lu Gan introduce a new strategy called the structurally random matrix (SRM), defined as a product of three matrices [12]. The signal is randomized by scrambling the sample locations or flipping the sample signs, after which the randomized signal is transformed and the transform coefficients are subsampled to form the sensing matrix. SRM is more capable for real-time compressive sensing applications

as compared to other random matrices. In their paper, the authors encapsulate several approaches regarding SRM, as it has fast numerical calculation and satisfies the theory. In [13], the authors investigated smoothed l0 (SL0) algorithms, applied them to an audio signal, and reconstructed the signal. A comparison is then carried out with the l1 and OMP algorithms. They use the MDCT for sparsification and a Gaussian matrix for sensing. The efficiency of the proposed algorithm is assessed using the SNR value and the CPU running time. According to the authors, SL0 is the best algorithm for audio signals [13]. In [14], the authors proposed an algorithm for audio coding based on smooth signal analysis using graph theory. They found that their method provides better separation at the decoder side for lower bitrates than other methods. The graph used is prepared from vectors computed using non-negative matrix factorization, and they assume that only one source is active at a time at the encoder side [14]. In this paper, our main objective is to apply a modified compressive sensing approach to different speech words spoken by the same person. We then analyze how the different words are reconstructed using compressive sensing after compression. We also analyze how compressive sensing is more efficient than the traditional coding approach. In the next section, we elaborate on some fundamental compressive sensing algorithms and approaches.

3 Compressive Sensing Basic Phenomena

Minimum l1 Norm Reconstruction

l1 minimization is an efficient reconstruction approach that resolves the issue associated with recovery by using a convex approach. The formula given

3 Compressive Sensing Basic Phenomena Minimum l1 Norm Reconstruction l1 minimization is an efficient reconstruction approach that resolves the issue associated with recovery by using a convex approach. The formula which is given

Fig. 1 (a) Subspaces include two sparse vectors in the R3 (b) A visual image of the l2 minimization (5) to detect the non-sparse point, which is located between the l2 ball (hypersphere, in red) and the translated measurement matrix null space (in green). (c) A visual image of the 1 minimization solution that collects the sparse point-of-contact with high accuracy [15]

14

V. Upadhyaya and M. Salim

in Eq. (1) is associated with the basis pursuit algorithm, and it will provide a sparse solution to the recovery problem (Fig. 1). ^ ¼ arg mink Ak A 1

ð1Þ

A2RM

4 Result & Analysis 4.1

Signal-to-Noise Ratio (SNR)

Signal-to-noise ratio is the quality estimation parameter for the speech signal. The value of SNR is directly proportional to the quality of the reconstruction approach.   P  2 10  log10 Oi m SNR ¼ P ½Oi ðmÞ  Po ðmÞ2

ð2Þ

m

Oi = Original speech signal components, Po = Reconstructed speech signal components, m = Total number of iterations used for reconstruction.

4.2

Compression Ratio

Compression ratio is the parameter that is used to define the number of samples used for the reconstruction to the total number of samples present in the original signal. C=R ¼

G H

ð3Þ

G = Number of speech signal samples used to reconstruct the original signal, H = Total number of samples present in the speech signal. In this work, our prime concern is to find out the quality parameters SNR and MSE for the ten different speech signals. The compressive sensing approach is applied to compress and reconstruct the speech signals. For the analysis purpose, ten different speech signals (different words were spoken by a female) are considered. Each speech signal is confined to 2 s time intervals. Each signal is compressed at different compression levels. For each compression level, the value of SNR and MSE is also calculated with the help of MATLAB-2016(b). The whole analysis is tabulated in four different Tables 1, 2, 3, and 4 given below. There is much variation in each signal’s SNR and MSE value when we are comparing all.

Table 1 Compression ratio v/s SNR (SNR values for different speech signals)

C/R | Bad       | Banana    | Barb       | Bard      | Bay
0.1 | 2.5030166 | 3.6463137 | 5.6194883  | 3.3566587 | 3.6977950
0.2 | 6.962920  | 9.5236302 | 10.7770811 | 7.5159280 | 9.481130
0.3 | 11.834715 | 16.440649 | 15.0771930 | 11.652871 | 17.629221
0.4 | 15.897123 | 20.135123 | 18.8588763 | 16.857393 | 23.291182
0.5 | 20.044345 | 22.47172  | 21.812831  | 22.446685 | 27.57716
0.6 | 23.410238 | 24.435344 | 25.279019  | 26.527872 | 30.289184
0.7 | 26.421997 | 28.003451 | 28.181587  | 29.975368 | 33.299278
0.8 | 30.377872 | 30.802543 | 32.428682  | 34.526248 | 36.649319

Table 2 Compression ratio v/s SNR (SNR values for different speech signals)

C/R | Bead      | Bear      | Bed       | Bid       | Bird
0.1 | 7.6812576 | 8.2530491 | 5.7895234 | 5.0146367 | 4.0290873
0.2 | 12.228389 | 12.051983 | 12.905361 | 9.7798845 | 8.6524427
0.3 | 14.806995 | 16.294646 | 19.864358 | 15.043645 | 12.774565
0.4 | 16.390090 | 19.35613  | 23.685627 | 19.624523 | 16.418182
0.5 | 18.778797 | 21.979936 | 26.788658 | 23.640526 | 18.892487
0.6 | 21.405137 | 25.211911 | 29.780509 | 26.925781 | 22.128858
0.7 | 23.509027 | 28.556372 | 32.737613 | 31.042776 | 25.189951
0.8 | 26.175427 | 32.470742 | 36.668427 | 35.054572 | 29.706545

Table 3 Compression ratio v/s MSE (MSE values for different speech signals)

C/R | Bad | Banana | Barb | Bard | Bay
0.1 | 0.0098503 | 0.008222 | 0.007698 | 0.008657 | 0.0061091
0.2 | 0.003527 | 0.0021244 | 0.002347 | 0.003322 | 0.001613
0.4 | 4.51E-04 | 1.85E-04 | 3.65E-04 | 3.87E-04 | 6.71E-05
0.5 | 1.74E-04 | 1.08E-04 | 1.85E-04 | 1.07E-04 | 2.50E-05
0.6 | 7.99E-05 | 6.86E-05 | 8.33E-05 | 4.17E-05 | 1.34E-05
0.7 | 4.00E-05 | 3.01E-05 | 4.27E-05 | 1.89E-05 | 6.70E-06
0.8 | 1.61E-05 | 1.58E-05 | 1.61E-05 | 6.61E-06 | 3.10E-06

Tables 1 and 2 present the actual values of signal-to-noise ratio after proper reconstruction. As the tables show, the SNR value is quite different for each speech signal: the word Bird has the highest and Bead the lowest SNR value. The mean square error values for all cases are given below, and the comparison can also be made using the MSE values (Tables 3 and 4, Figs. 2 and 3).


Table 4 Compression ratio v/s MSE (MSE values for different speech signals)

C/R | Bead | Bear | Bed | Bid | Bird
0.1 | 0.001950 | 0.001934 | 0.004763 | 0.0052985 | 0.0049171
0.2 | 6.85E-04 | 8.07E-04 | 9.25E-04 | 0.0017685 | 0.0016958
0.3 | 3.78E-04 | 3.04E-04 | 1.86E-04 | 5.26E-04 | 6.56E-04
0.4 | 2.63E-04 | 1.50E-04 | 7.73E-05 | 1.83E-04 | 2.84E-04
0.5 | 1.51E-04 | 8.20E-05 | 3.78E-05 | 7.27E-05 | 1.60E-04
0.6 | 8.27E-05 | 3.90E-05 | 1.90E-05 | 3.41E-05 | 7.62E-05
0.7 | 5.10E-05 | 1.80E-05 | 9.62E-06 | 1.32E-05 | 3.76E-05
0.8 | 2.76E-05 | 7.32E-06 | 3.89E-06 | 5.25E-06 | 1.33E-05

Fig. 2 Compression ratio v/s SNR curve

5 Conclusion

Compressive sensing is an approach based upon the sparsity level of a signal: if the sparsity of the signal is high, then the reconstruction quality is also high. Both the SNR and MSE parameters can justify the quality of reconstruction. From the data given in the tables and curves, we conclude that the performance of compressive sensing depends upon which type of signal is considered. A dataset of ten speech signals is considered for the analysis, all with a 2-second time frame. Concluding remarks are as follows.
• Compressive sensing is highly dependent on the sparsity level of a signal.
• The same kind of signal behaves differently under compression and reconstruction, as depicted in the tables and figures.


Fig. 3 Compression ratio v/s MSE curve

• As the compression ratio increases, the SNR value improves markedly, because the number of measurements increases.
• MSE shows the opposite trend to the SNR.
• The sparsity of a signal also depends on which domain is considered for the compression.

Acknowledgement This research is supported by the Visvesvaraya PhD Scheme, MeitY, Govt. of India, with unique awardee number “MEITY-PHD-2946”. Recipient: Mr. Vivek Upadhyaya.

References 1. Candès EJ, Romberg J, Tao T (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inform Theory 52(2):489–509 2. Donoho D (2006) Compressed sensing. IEEE Trans Inform Theory 52(4):1289–1306 3. Baraniuk R (2007) Compressive sensing. IEEE Signal Processing Mag 24(4):118–120, 124 4. Shukla UP, Patel NB, Joshi AM (2013) A survey on recent advances in speech compressive sensing. In: IEEE International Multi-Conference on Automation, Computing, Communication, Control and Compressed Sensing (iMac4s), pp 276–280, March 5. DeVore RA (1998) Nonlinear approximation. Acta Numerica 7:51–150 6. Kotelnikov V (1933) On the carrying capacity of the ether and wire in telecommunications. In Izd. Red. Upr. Svyazi RKKA, Moscow, Russia 7. Nyquist H (1928) Certain topics in telegraph transmission theory. Trans. AIEE 47:617–644 8. Shannon C (1949) Communication in the presence of noise. Proc Institute of Radio Engineers 37(1):10–21 9. Whittaker E (1915) On the functions which are represented by the expansions of the interpolation theory. Proc Royal Soc Edinburgh, Sec A 35:181–194


10. Wang Jia-Ching, Lee Yuan-Shan, Lin Chang-Hong, Wang Shu-Fan, Shih Chih-Hao, Chung-Hsien Wu (2016) Compressive sensing-based speech enhancement. IEEE/ACM Trans on Audio, Speech, and Lang Processing 24(11):2122–2131 11. Al-Azawi MKM, Gaze AM (2017) Combined speech compression and encryption using chaotic compressive sensing with large key size. IET Sig Proces 12(2): 214–218 12. Do TT, Gan L, Nguyen NH, Tran TD (2012) Fast and efficient compressive sensing using structurally random matrices. IEEE Trans on Sig Proces 60(1): 139–154 13. Mahdjane K, Merazka F (2019) Performance Evaluation of Compressive Sensing for multifrequency audio Signals with Various Reconstructing Algorithms. In: 6th International Conference on Image and Signal Processing and their Applications (ISPA), pp. 1–4. IEEE 14. Puy G, Ozerov A, Duong N, Pérez P (2017) Informed source separation via compressive graph signal sampling. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 1–5 15. Candes E, Romberg J, Tao T (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete Fourier information. IEEE Trans Info Theory 52(2): 489–509

Chapter 3

Breast Cancer Prediction Using Enhanced CNN-Based Image Classification Mechanism Kumar Rahul, Rohitash Kumar Banyal, Vikas Malik, and Diksha

1 Introduction In the case of breast cancer [1], cancer occurs when a cell divides over and over again in a process that runs out of control. There are two kinds of cancer, known as non-invasive and invasive. In the non-invasive situation, the ducts are filled with cancer cells; this case is referred to as in situ. The stroma is the fatty portion of the breast. Cancer may spread from the lymph glands, after which it may spread to other parts of the body. Mostly, it has been seen that breast cancer begins in the ducts, which transfer milk to the nipple; such cancer is referred to as ductal cancer [2]. On the other hand, another kind of cancer begins in the glands that make breast milk; this is lobular cancer. Breast cancer can also occur in men. There are various types of breast cancer, such as ductal carcinoma, lobular carcinoma, invasive ductal carcinoma, subtypes of invasive ductal carcinoma, invasive lobular carcinoma, etc. In this research, the CNN mechanism has been used to classify images: the network breaks an image down into features, from which the class is reconstructed and predicted at the end. K. Rahul (&) Department of Basic and Applied Science, NIFTEM, Sonipat 131028, India e-mail: [email protected] R. K. Banyal Department of Computer Science and Engineering, Rajasthan Technical University, Kota 324010, India e-mail: [email protected] V. Malik · Diksha CSE & IT, Bhagat Phool Singh Mahila Vishwavidyalaya, Sonipat 131305, India e-mail: [email protected] Diksha e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_3


Table 1 Treatment distribution in case of invasive female breast cancer patients aged 20 years or more with local/regional diagnosis [11]

Percent of patients (most recent estimate) | 95% confidence interval
37.0 | 36.7–37.2
47.0 | 46.8–47.3
16.0 | 15.8–16.1

The proposed work is based on a content-based image identification process where a sample breast cancer image is considered to identify cancer in the input sample. Moreover, edge detection helps in eliminating the useless portion of the image, reducing its size and making the comparison easier to perform.

2 Related Work A great deal of research has been done on the prediction of breast cancer using several techniques. Researchers have proposed novel techniques for the detection of breast cancer, introducing Bayesian networks and SVM. In 2019, Y. Jiang et al. [1] discussed breast cancer histopathological image categorization using CNN, proposing the design of a novel convolutional neural network. In 2019, D. A. Ragab et al. [2] explained breast cancer detection with CNN, also considering support vector machines. In 2019, E. Kontopodis et al. [3] investigated the role of model-based biomarkers alongside model-free graphical biomarkers; the results suggested that model-free DCE-MRI IBs are a more robust alternative, although such graphical biomarkers are very difficult. In 2018, H. Lin et al. [4] explained Fast ScanNet: fast and dense examination of multi-gigapixel whole-slide images, through which the growth of cancer cells was discovered. For the identification of breast cancer, the growth of the lymph node is a significant sign that can easily be noticed by pathologists with the help of a microscope. For the transformation of the model, a fresh method in which layers are held securely was submitted by them in this work. In 2018, V. Chaurasia et al. [5] forecast benign and malignant breast cancer with the help of data mining techniques. Breast cancer comes second in the list of leading cancers by which females are affected in comparison with different types of cancer; from historical records, it was found that there were nearly 1.1 thousand cases in 2004. In 2018, P. Chauhan et al. [6] worked on breast cancer forecasting by adopting an ensemble approach derived from a genetic algorithm. The identification of breast cancer is an open region of study; it is an analytical difficulty that can be addressed by the adoption of machine learning models like decision trees, random forests and SVM. In 2018, B. Fu et al. [7] predicted

3 Breast Cancer Prediction Using Enhanced CNN-Based …

21

invasive disease-free survival for early-stage breast cancer patients, achieved by utilizing clinical data records. Women in China are dangerously exposed to breast cancer, with high morbidity and death rates, and it was almost impossible for doctors to organize a suitable treatment plan due to the lack of robust forecasting models that might make the life of the patient somewhat longer. In 2018, D. Kaushik [8] provided a novel concept for the post-surgical survival forecasting of breast cancer patients. Cancer is one of the most common death-causing diseases, and the chance of this disease is higher in women than in men. The period after breast cancer surgery is very challenging, and it is very difficult to decrease the death rate after surgery. In 2018, M. Ma et al. [9] proposed a novel two-stage deep technique used for mitosis estimation in breast cancer histology graphics. It is very necessary to make the detection accurate; along with this, the counting of mitoses has been determined to be essential for computer-aided diagnosis, and it has traditionally been performed manually by a pathologist.

3 Research Motivation and Challenges There have been several pieces of research addressing breast cancer detection [5]. Such research is beneficial for capturing the symptoms of breast cancer in a patient and plays a significant role in predicting the probability of breast cancer. Applications of convolutional neural networks [2] can be found in medical imaging since the 1990s. "Transferability" is an important aspect of a pre-trained convolutional neural network. According to earlier research, transfer learning in the field of medical imaging is divided into two groups. In the first, the pre-trained network is used to extract features. In the second group, the pre-trained network is used as in the first, except that a logistic layer is used in place of the fully connected layer.

3.1 Proposed Work

The proposed work is defined through the following phases:
Phase 1 The image base of benign, in situ, and invasive samples is created.
Phase 2 Apply the traditional convolution neural network classifier to check the space and time consumption.
Phase 3 Apply the edge detection mechanism on the image set.
Phase 4 Apply the proposed convolution neural network classifier to check the space and time consumption.
Phase 5 Compare the performance and space consumption of the traditional and proposed work.

3.2 Process Flow

After getting the data set, the sample for prediction is taken. In the proposed work, the edge detection model is applied before the CNN model. Then the comparison of the traditional work with the proposed work is made, considering factors such as size, accuracy and time. In Fig. 1, the process flow of the proposed work is explained: after getting the data set, samples are taken for prediction on the basis of the training set. One copy of the sample is processed by the edge detection mechanism and then passed to the CNN, while another copy is passed directly to the CNN model. The comparison of time taken, accuracy and space consumption is made afterward. A minimal sketch of this preprocessing step is given below.
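The sketch below shows the edge detection preprocessing in MATLAB (assuming the Image Processing Toolbox; the file name is a hypothetical sample, and the built-in Canny detector stands in for the paper's own canny helper):

% Edge-detect a sample before classification: the binary edge map is far
% smaller than the RGB original, which is what reduces the CNN's space
% and time consumption ('i1.jpg' is a hypothetical sample file)
img   = imread('i1.jpg');
gray  = rgb2gray(img);              % assumes an RGB input image
edges = edge(gray, 'canny');        % logical (binary) edge map
imwrite(edges, 'i1_edges.png');     % reduced-size input for the CNN stage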

4 Results and Discussion The simulation has been divided into two sections: the first without edge detection and the second with edge detection. Figures 2, 3, and 4 represent benign, invasive and in situ samples before edge detection. These figures are more space consuming, and moreover the CNN takes more time to predict in these cases.

Fig. 1 Proposed work


Fig. 2 Benign

Fig. 3 Invasive

Fig. 4 In situ

In the second phase, edge detection has been applied to the graphical contents to reduce the time consumption; moreover, the content stored in images is classified rapidly. Figures 5, 6, and 7 represent images after edge detection. These figures are comparatively less space consuming, and the CNN model takes less time during processing. Moreover, after the elimination of rendering, the accuracy during detection increases.


Fig. 5 Benign

Fig. 6 Invasive

Fig. 7 In situ

5 Simulation of the Time Consumption Due to the integration of edge detection, the time consumption is reduced. During simulation, it has been observed that the ratio of time in the case of normal CNN to edge-based CNN is 10.849004:1.78971. This has been simulated in Fig. 8 using MATLAB. The chart compares the time consumption of the traditional and proposed approaches: the red line presents the time consumption for a normal image, while the green line shows the time consumption for images where edge detection has been applied. By integrating edge detection, the space consumed by graphical content is also minimized. During simulation, it has been observed that the ratio of size in the case of


Fig. 8 Chart of time consumption in case of normal image and edge-based image in CNN

Fig. 9 Chart for space consumption in case of normal and edge-based image

normal CNN to edge-based CNN is 9:4. Figure 9 compares the space consumption of the traditional and proposed approaches: the red line presents the size for a normal image, and the green line presents the size for an edge-detected image.


6 Simulation of Accuracy Here, the simulation of accuracy before and after edge detection is performed. The existing in situ sample has been taken to check the accuracy during the comparison process.
Step 1: Read the sample data sets (with slight modification) using the imread function.
>> abcd = imread('i1.jpg');
>> pqrd = imread('i2.jpg');
Step 2: Compare both matrices using the image comparison module, which finds the modification in the sample data set.
>> ait_picmatch(abcd, pqrd);
Step 3: The mismatch result, if edge detection is not applied, is shown below.
ans = 95.4436
Step 4: Now apply edge detection on the sample data set.
>> abcd1 = canny(abcd, 1, 1, 1);
>> pqrd1 = canny(pqrd, 1, 1, 1);
Step 5: Perform the comparison of both edge-detected data sets.
>> ait_picmatch(abcd1, pqrd1)
Step 6: The difference/mismatch is shown below.
ans = 83.4231
Step 7: Get the difference of the two matching percentages.
>> 95 − 83
ans = 12
Step 8: The following equation can be used for finding the accuracy improvement:
((old matching% − new matching%) × 100)/new matching% = ((95 − 83) × 100)/83 = 1200/83 = 14.4578
Figure 10 represents the simulation of accuracy. The ratio of accuracy is 12:14.4578 for the normal image versus the edge-based image. The useless content has been removed by edge detection, and due to this the level of accuracy increases. Here, the green line shows the accuracy of comparison for the normal image, while the red line shows the accuracy for edge-based images. Table 2 shows the comparison of size, accuracy and space consumption in both cases.

7 Conclusion and Future Scope Here, edge detection mechanism is discussed. Proposed work is found more accurate, and it would improve efficiency and decisions through CNN. The research would provide the study of the existing research in breast cancer prediction field. It


Fig. 10 Chart of accuracy comparison in normal image and edge-based image using CNN

Table 2 Comparison chart

 | Size (ratio) | Time (ratio) | Accuracy (%)
Normal CNN | 9 | 10.849004 | 12
Edge-based CNN | 4 | 1.78971 | 14.4578

would study the present research objectives and their benefits. The research work would investigate the limitation of existing researches. Convolution neural network-based edge detection has been used to predict breast cancer. It would perform a simulation to present the output of the proposed work. The research would perform a comparative analysis of tradition work with proposed work to represent how the proposed model is better than previous.

References 1. Jiang Y, Id LC, Zhang H, Xiao X (2019) Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module, pp 1–21 2. Ragab DA, Sharkas M, Marshall S, Ren J (2019) Breast cancer detection using deep convolutional neural networks and support vector machines, pp 1–23 3. Dencks S, Piepenbrock M, Opacic T, Krauspe B, Stickeler E, Kiessling F (2018) Relative Blood Volume Estimation from Clinical Super-Resolution US Imaging in Breast Cancer, no. 1, pp 1–4 4. Lin H, Member S, Chen H, Graham S, Member S (2018) Fast ScanNet: Fast and Dense Analysis of Multi-Gigapixel Whole-Slide Images for Cancer Metastasis Detection. IEEE Trans. Med. Imaging, vol. PP, no. c, p. 1, 2018 5. Chaurasia V, Pal S, Tiwari BB (2018) Prediction of benign and malignant breast cancer using data mining techniques 6. Chauhan P, Swami A, (2018) Breast Cancer Prediction Using Genetic Algorithm Based Ensemble Approach. 9th Int Conf Comput Commun Netw Technol, pp 1–8


7. Fu B, Liu P, Lin J, Deng L, Hu K, Zheng H (2018) Predicting Invasive Disease-Free Survival for Early-stage Breast Cancer Patients Using Follow-up Clinical Data. vol. 9294, no. c 8. Kaushik D (2018) Post-Surgical Survival forecasting of breast cancer patient : a novel approach. 2018 Int Conf Adv Comput Commun Informatics, pp 37–41 9. Ma M, Shi Y, Li W, Gao Y, Xu J (2018) A Novel Two-Stage Deep Method for Mitosis Detection in Breast Cancer Histology Images. In 2018 24th Int Conf Pattern Recognit, pp 3892–3897 10. https://progressreport.cancer.gov/treatment/breast_cancer 11. Chang M, Dalpatadu RJ, Phanord D, Singh AK, Harrah WF (2018) Breast Cancer Prediction Using Bayesian Logistic Regression, vol. 2, pp 2–6

Chapter 4

Dual Band Notched Microstrip Patch Antenna with Three Split Ring Resonator Slots Eshita Gupta and Anurag Garg

1 Introduction Ultra-wideband (UWB) frameworks require enormous data transfer capacity and a low-cost antenna with a suitable radiation pattern [1]. The use of UWB has increased markedly over the past decade, following the Federal Communication Commission (FCC) decision in 2002 to permit the use of the licensed 3.1–10.6 GHz band for commercial use, and it has many points of interest for UWB communication [2]. The microstrip patch antenna offers simplicity of integration, light weight, small size, and compactness, which is why it is mostly utilized for UWB antenna designs [3]. The frequency range of the S-band is 2–4 GHz; this band is used in the mobile satellite service (MSS), deep space research, some communication systems, etc. WiMAX and WLAN utilize the 3–4 GHz and 5–6 GHz band ranges, respectively [4, 5]. Worldwide interoperability for microwave access (WiMAX), as indicated by the IEEE 802.16 standard, permits transmission of information utilizing multiple wireless frequency ranges [6]; presently, it uses the 2.5–2.69 GHz, 3.4–3.69 GHz, and 5.25–5.85 GHz ranges as licensed frequencies. Wireless Local Area Network (WLAN), according to the current 802.11 document, allots five different frequency ranges: 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, and 5.9 GHz. The frequency range of the C-band is 4–8 GHz, and it is used in some Wi-Fi devices [7], weather radars, some satellite communication systems [8], etc. Satellite communication mostly uses the 3.7–4.2 GHz and 5.925–6.425 GHz frequency ranges as downlink and uplink, respectively. It also includes 5.725–5.875 GHz as the ISM band, used in medical and industrial applications [9]. E. Gupta (&) · A. Garg Engineering College, Ajmer, Rajasthan, India e-mail: [email protected] A. Garg e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_4


So, here, a rectangular patch antenna with three split ring resonator slots is introduced for use in different wireless applications. As the demand for wireless communication is increasing, we need an efficient antenna with S11 < −10 dB, VSWR < 2, high bandwidth, a proper radiation pattern, significant gain, etc. Therefore, with the help of this paper, we introduce a new antenna configuration for wireless communication. This antenna radiates at dual band frequency ranges with large bandwidth, which broadens its applications: this single antenna serves WiMAX, WLAN, satellite communication, and many other uses operating in these two resonant frequency ranges. The reason behind this shape and size is that we came across a paper with two SRR slots, but in that paper the applications and gain were not explained and only the notched bands were discussed. Therefore, we design three complementary SRR slots to improve the S11 and VSWR characteristics, and we also explain the resonant frequencies, gain, and applications of this antenna, which we expect to contribute to the wireless communication applications explained above.

2 SRR Slots Antenna Design The proposed antenna is a microstrip patch antenna which consists of a patch, ground, substrate, and feedline. The length is represented as 'X' and the width by 'Y' in the front and back views of the antenna displayed in Figs. 1 and 2. The antenna substrate is made of RogerRT5880 (lossy) of height 1.6 mm, loss tangent 0.009, and dielectric constant 2.2. The patch, ground, and feedline are made of copper (lossy) material. The rectangular patch is imprinted on one surface of the substrate with a 50 Ω microstrip feedline, and a one-third ground plane is placed on the reverse surface. The patch consists of three rings, known as SRR-1, SRR-2, and SRR-3, which give an S11 characteristic less than −10 dB, a VSWR less than 2, and very significant gain. Table 1 shows the parameters of the designed antenna.

3 Result Return Loss (S11) Bandwidth is defined by the frequency range for which the return loss is less than −10 dB. Here, the designed antenna resonates at two different frequencies, i.e., 3.92115 GHz (3.2703–4.5412 GHz) and 7.28275 GHz (5.7439–8.8118 GHz), and the bandwidths achieved are 1.2709 GHz and 3.0679 GHz, respectively, with reference to the −10 dB line. These resonant frequencies indicate that these frequency ranges can be used for wireless applications, and the plot also shows the

Fig. 1 Front view

Fig. 2 Back view


Table 1 Parameters of antenna

Variable | Dimension (mm)
Xp (patch) | 13.5
Yp (patch) | 14
X1 | 9
X2 | 5
X3 | 1
X4 | 0.5
Y1 | 11
Y2 | 7
Y3 | 3
Y4 | 1
Xf (feed line) | 12
Yf (feed line) | 4.971
Xs (Roger RT5880 substrate) | 30
Ys (Roger RT5880 substrate) | 28
Xg (ground) | 10
Yg (ground) | 28

amount of power reflected from the antenna. Figure 3 demonstrates the return loss plot of the antenna. Voltage Standing Wave Ratio (VSWR) For better antenna performance, the VSWR value should be small. The VSWR of an antenna defines the frequency range over which it is matched with the feed line within a specific limit. So, the VSWR of the designed antenna should be less than 2 over the resonant bandwidth. Figure 4 demonstrates the VSWR characteristic. Radiation Pattern This shows the graphical representation or mathematical function of the antenna's radiation property. The antenna radiates at two different frequencies, i.e., 3.92115 GHz and 7.28275 GHz. The radiation pattern

Fig. 3 Return loss plot


Fig. 4 VSWR plot

at 3.92115 GHz is directional, and at 7.28275 GHz it is bidirectional. Figure 5a, b demonstrates the radiation patterns of the antenna. Gain Figure 6 shows the very significant gain of the configured antenna: the gain fluctuates between 6.67 and 0.7 dB in the first resonant band and between 0 and 3.3 dB in the second resonant band. A quick numeric check relating the S11 and VSWR criteria used above is given below.
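The following MATLAB lines (standard transmission-line relations, not figures from the paper) show why the S11 < −10 dB bandwidth criterion and the VSWR < 2 criterion agree:

% Convert a return loss value to reflection coefficient and VSWR
S11_dB = -10;                       % band-edge criterion used above
gamma  = 10^(S11_dB/20);            % |reflection coefficient|, ~0.316
vswr   = (1 + gamma)/(1 - gamma);   % ~1.92, i.e. just under VSWR = 2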

Fig. 5 Radiation pattern at resonant frequency (a) 3.92115 GHz and (b) 7.28275 GHz


Fig. 6 Gain Plot

4 Conclusion The antenna is a dual band antenna because it resonates at two different frequencies, i.e., 3.92115 GHz and 7.28275 GHz; the bandwidths achieved are 1.2709 GHz and 3.0679 GHz, respectively, with reference to the −10 dB line, with VSWR less than 2. The designed antenna has significant gain. This type of antenna can be used in S-band/C-band/WiMAX/WLAN applications.

References 1. Meena P, Garg A (2017) UWB antenna with dual band notched characteristics having SRR on patch. In: 2017 International Conference on Computer, Communications and Electronics (Comptelix). IEEE 2. Khan S, et al. (2019) Novel Patch Antenna Design for Wireless Channel Monitors. In: 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT) 3. Anurag G et al. (2015) A novel design dual band-notch small square monopole antenna with enhanced bandwidth for UWB application. In: 2015 International Conference on Computer, Communication and Control (IC4) 4. Deshmukh, AA, Singh D, Ray KP (2019) Modified designs of broadband E-shape microstrip antennas. Sādhanā 44(3): 64 5. Bao-Shan Y et al (2016) Dual-band microstrip antenna fed by coaxial probe. In: 2016 11th International Symposium on Antennas, Propagation and EM Theory (ISAPE). IEEE 6. Kamma A, et al. (2014) Reconfigurable dual-band notch UWB antenna. In: 2014 Twentieth National Conference on Communications (NCC). IEEE 7. Ai-ting Wu, Bo-ran Guan (2015) Design and Research of an Ultra-Wideband Antenna with triple Band-notched Characteristic. Jof microwaves 31(2):15–19 8. Jalil YE, Chakrabarty CK, Kasi B (2014) A compact ultra-wideband antenna with band-notched design. In: 2014 IEEE 2nd International Symposium on Telecommunication Technologies (ISTT). IEEE 9. Anitha P, Reddy ASR, Giri Prasad MN (2018) Design of a compact dual band patch antenna with enhanced bandwidth on modified ground plane. Int J of Applied Eng Res 13(1): 118–122

Chapter 5

Effective RF Coverage Planning for WMAN Network Using 5 GHZ Backhaul Chiluveru Anoop, Tomar Ranjeet Singh, Sharma Mayank, and Chiluveru Ashok Kumar

1 Introduction Wireless is a catch-all word used to describe telecommunications in which electromagnetic waves, rather than some form of wire, carry the signals. Wireless technology refers to a broad range of technologies that provide mobile communications for “in-building wireless” or extended mobility around the work area, city, campus or business complex. It is also used to mean “cellular” for in-building or out-of-building mobility services. Broadband wireless is revolutionary, as it can enable high bandwidth connections directly to key people and needed information from anyplace at any time. Wi-Fi is a mature and robust technology, with many OEM manufacturers offering intercompatible and low-cost products. Wi-Fi is typically used for indoor/outdoor WLANs because of its high data rates; however, it has a very small range when compared to the other technologies [1]. CDMA and related cellular broadband technologies are more suitable for wireless metropolitan area networks (WMAN) and wireless wide area networks (WWAN) because of their relatively low throughput and high ranges. A metropolitan area network is a network spread over an area or region that lies between the coverage area of a local area network (LAN) and a wide area network (WAN) to connect users with computer resources. This term is generally used to C. Anoop (&) · T. Ranjeet Singh · S. Mayank ITM University, Gwalior 475001, Madhya Pradesh, India e-mail: [email protected] T. Ranjeet Singh e-mail: [email protected] S. Mayank e-mail: [email protected] C. Ashok Kumar Wireless Expert, MSPL, Singapore 628773, Singapore e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_5


interconnect the small networks in a city into a single larger network (which is referred to as a wide area network). It is also used to mean the interconnection of several local area networks by bridging them with backbone lines. Wi-Fi is a best-of-breed wireless technology compatible with all major manufacturers, as it is an open environment which can blend in seamlessly with existing technologies and protocol-compatible third-party clients [2]. Some of the key requirements for wireless networks are that the network infrastructure is decentralized, to avoid a central point of failure and control, and that the technology used is both cheap enough and simple enough that it can be maintained and expanded with limited technology experience [2]. This project excels on these counts in designing a city-wide wireless metropolitan area network using a star topology design. In this project, we have designed a public Wi-Fi network for the city of Gwalior to provide coverage for major areas in the city, using different radio bands for different networks, namely the 2.4 and 5.8 GHz unlicensed bands, with a star topology design and a multi-hop wireless link for better coverage rather than a single direct link between the central base station and remote nodes (Fig. 1).

2 Configuration The coverage area spreads over around 20–25 km, providing Wi-Fi connectivity to major areas in the city of Gwalior. To cover the area, we require two networks: one is the backbone network, which provides connectivity from the central base station to remote base stations, and the other is the access network, which provides

Fig. 1 Wireless metropolitan area network


connectivity to the local end users in the area. For the backbone network, we used 14 radio units, out of which one is used at the central base station with a single-feed omnidirectional antenna, and 12 radio units with corner antennas are used at remote base stations to receive the signal from the central base station and feed it into the access network radio units, while the other radio unit with an omnidirectional antenna in the backbone network is used to extend the connectivity from the central base station. Here, the concept of a multi-hop wireless link is used rather than direct links between the central base station and the nodes far away from the central node. Multi-hop wireless links have their own benefits over single direct links: as a matter of fact, transmission over multiple short links requires less power compared to transmission over longer links [3]. Multi-hop links enable high data rates, resulting in high throughput and efficient use of the wireless spectrum [1]. They can avoid wide deployment of cables besides extending the network coverage area and improving connectivity. Hence, radio units with corner antennas in the central base station range are pointed toward the central base station, and the remaining radio units in the extended range are pointed toward the radio unit used for extending connectivity. In the access network, 13 radio units, all with omnidirectional antennas, are used to provide coverage in the local area. So, a total of 27 radio units with a mix of omnidirectional and corner antennas are used to cover the area. Each of the radio units has been placed at a different altitude to comply with local geographical conditions, and they are calibrated for expected losses to provide an accurate prediction of the coverage. The unlicensed 2.4 and 5.8 GHz radio bands have been used for the access and backbone networks, respectively, for strong connectivity with the central node. The 5.8 GHz band has its own bandwidth benefits, ranging up to 1300 Mbps, whereas the 2.4 GHz band tops out at speeds of 450–600 Mbps [4]. In the present situation, most of the wireless devices in houses and industries rely on the 2.4 GHz band, which makes it congested for any further use. The backbone connectivity should be strong enough to avoid any interference with adjacent bands, which leads to signal losses and noise. The 5.8 GHz band, which is a newer standard, is less commonly used in household devices and hence is less likely to see interference. The 5.8 GHz band has 23 channels to use compared to 11 channels on the 2.4 GHz band; however, the number of available channels is confined to the standards of the regulatory organization in the area. Although the 5.8 GHz band has a smaller coverage area, it can provide better speeds, which is highly essential in today's digital world [4]. Hence, manufacturers are starting to develop dual-band wireless devices. The quality of coverage depends on two parameters, i.e., receive strength and transmit strength. Though there is good coverage of signal from the Wi-Fi transmitters, low-powered CPEs, such as those built into lower-end or older laptops, may not be able to transmit the signal back, especially through obstacles. Thus, it is necessary either to standardize on more remote nodes rather than more access nodes with a single feed, or to increase redundancy in coverage by deploying more access nodes with multiple feeds, so that even low-powered CPEs are functional all over the area.


3 Wireless Network Design The first step in deploying a wireless network virtually is to determine the locations of the radio units to be placed to cover the required area, considering the given bandwidth and coverage area specifications. An RF site survey is done for this purpose before the network equipment is deployed physically. The height at which these radio units are to be placed should also be determined to ensure clear line-of-sight communication, considering the local geographical conditions of the area. The Radio Mobile software is used for this purpose; it can retrieve original maps with actual geographical conditions from the Internet, which is helpful for designing a real-world network virtually [5]. The second step in the design is to choose the type of antennas and the power needed to provide wireless connectivity in the required area. Two types of antennas, omnidirectional antennas and corner antennas, are used in this design. Omnidirectional antennas radiate the supplied power in all directions, whereas corner antennas are directional antennas with a specific sector angle which radiate only in the intended direction rather than in all directions. These are used when wireless coverage is needed in only a specific direction while ignoring all other directions, radiating the available power only in the intended direction, which provides better bandwidth in that specific area. As the metropolitan area network of Gwalior city designed here uses a star topology, it requires two sub-networks. Taking its easy deployment, maintenance, and cost factors into account, the star topology is used over the mesh topology [6]. One sub-network is the backbone network, which provides connectivity between the central node and remote nodes, and the other is the access network, which takes the feed from the backbone network and provides wireless connectivity in the local area.

3.1 Backbone Network

The backbone network is used for remote node connectivity with the central node. The designed network consists of 14 radio units with a mix of omnidirectional and corner antennas. The Internet feed is given at the central node, and the same feed is wirelessly transmitted to all other remote nodes. The central radio is terminated to a 360° omnidirectional antenna, and the antenna height is determined to optimize the radio coverage to the other nodes. Each remote radio unit acts as a backbone point in the network. The remote radio is terminated to a corner antenna and is oriented and pointed toward the central node to optimize the radio coverage. From Fig. 2, it can be observed that there are two star sub-networks: one from the central base station to a few remote base stations, and the other from one of the remote base stations to other remotely located nodes which are not in the immediate range of the central base station. It is designed so because the central node cannot


Fig. 2 Backbone network

directly cover all the remote nodes. Therefore, considering the cost factors, we have given connectivity to some remote nodes through a third remote node which is in the immediate range of the central node. The sample link between two radios in the backbone network is shown below in Fig. 3. It can be observed from the link that we have used the 5.8 GHz unlicensed band for the backbone network to provide better connectivity between the nodes. We have set the receiver threshold at −87 dBm, which is a stronger signal than the conventional −107 dBm. In the RF link below, the Rx level is −78.5 dBm, which is well above the set threshold Rx level of −87 dBm. This implies that the RF link between these two radios is strong and can transmit and receive the signals efficiently without much loss; the link margin arithmetic is sketched after this paragraph. In Fig. 2, the central node, CBB, provides connectivity to R3BB, a remote node directly in contact with the central node. The output signal from R3BB is then given as a feed to RCBB1, which is placed at the same location as R3BB, through a network switch. RCBB1 further provides wireless connectivity to RCBB2, RCBB3, R8BB, and R9BB, which are not directly in contact with the central node. The two sub-networks are the central backbone network, which provides connectivity from CBB to remote nodes, and the remote backhaul network, which provides connectivity from a remote node to other remote nodes which are not in the direct range of CBB.
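A minimal MATLAB sketch of that link margin check (the quoted levels come from the sample link above; the hop distance is an assumed illustrative value, and the free-space path loss expression is the standard formula, not a figure from the paper):

% Fade margin on the sample backbone hop
rx_level  = -78.5;                  % dBm, simulated receive level
threshold = -87;                    % dBm, configured receiver threshold
margin    = rx_level - threshold;   % 8.5 dB of fade margin
% Standard free-space path loss for a 5.8 GHz hop of d_km kilometres
f_mhz = 5800; d_km = 2;             % d_km is illustrative only
fspl_db = 32.44 + 20*log10(f_mhz) + 20*log10(d_km);   % ~113.7 dB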


Fig. 3 Backbone active units

Fig. 4 Backbone link budget

The network report with the radio sub-system signal strengths and the locations and elevations of the antennas can be observed in Fig. 3 (Figs. 4 and 5). Access Network: The access network is used for wireless access to connect the laptop (CPE). The network consists of 13 single-feed radio units, all with omnidirectional antennas, used to provide coverage in the local area. Each of the radio units in the access network uses the feed from the radio unit in the backbone network placed on the same tower, which receives the signal from the central base station (Figs. 6 and 7).


Fig. 5 Remote backbone active units

Fig. 6 Access network with coverage units

Fig. 7 Access network active units


As seen above, the access network covers almost 80–90% of the major areas in the city, which allows residents to access the network in these particular areas.

4 Results Finally, the metropolitan area network for the city of Gwalior is successfully designed and simulated using the Radio Mobile software. The results of the simulation are stated in this section and show that the network is well designed and accessible to users in the vicinity of the access network radio unit coverage area. The coverage diagram of the access network is shown below, with different signal levels shown in dBm against particular colours. Here, the laptop is used as the transmitting station, and the network units are used as the receive stations. The reason for this is the antenna height of the laptop, which is just 2 m, while the height of the receiving stations is 15/12/10 m. The laptop can easily receive the signal from the network stations, as the network stations are powerful (600 mW RF output), but the RF output of the laptop is lower (100 mW). So, the aim is for the network stations to be able to receive the weak signals of the laptop, i.e., the UPLINK, which is what we optimize; for the DOWNLINK, we need not worry, as the laptop will be able to receive the signal anyway. It can be clearly observed from the map below, showing the network footprint of the designed network, that it conforms to the standards set by the ITU and provides better coverage with better signal quality to avoid users experiencing disruptions. However, this is only a design of the network; the actual bandwidths and speeds totally depend upon the specifications of the Internet connection provided at the central base station. A connection with proper orientation and radiation power of the antennas keeps up with the network as designed (Figs. 8 and 9).

5 Conclusion and Applications While deploying a wireless network, radio coverage planning plays a significant role in determining the network performance. Earlier, backbone networks were designed using the 2.4 GHz band, which has lately become congested [7]. In this project, we have designed the backbone network effectively using the 5 GHz band, which is less congested than the former, to avoid latency and disruptions. Many difficulties, like short-range transmission and maintaining proper connectivity between the radios in spite of fast attenuation, have been solved with effective planning in this project to provide seamless connectivity over a wide area. A WMAN integrates all the city services on a single physical access infrastructure, which has many advantages: there are obvious cost savings as all communications are moved onto a single platform, and complete mobility enhances communication as it


Fig. 8 Access network footprint

makes it easy for people to communicate with others no matter where they are in the city [8]. Existing Scenario in the City of Gwalior: At present, there is no fixed public Wi-Fi network in Gwalior, even if the government wants to install and monitor CCTV systems all over the city. This public Wi-Fi network can provide Wi-Fi connectivity to people in the city as well as support monitoring with this fixed network infrastructure readily available, which will not force the government to use different networks for different system implementations in the city, something that would require a large budget and would not be easy to maintain.

5.1 Applications

1. Wireless Internet & VOIP: Wireless Internet Access & VOIP is an integrated network of public Internet communication. Wireless high-speed data connections are used to provide point-to-point and point-to-multipoint Internet connectivity for the region having wireless metropolitan area networks (WMANs). 2. Security: A city has a tremendous responsibility to ensure the welfare and safety of its citizens who are young and sheltered. A city-wide video surveillance system can offer protection without infringing upon the privacy of the public.


Fig. 9 Access network coverage area

A central switch provides the ability to monitor all video feeds from one location and archive the feeds for future use. A wireless IP-based surveillance system offers many benefits over a traditional CCTV. 3. Digital Billboards: Modern digital billboards are electronic picture screens that rotate several static ads. Larger outdoor billboards exist alongside roadways, whereas smaller indoor billboards exist in entertainment centers such as sports stadiums. Advertising companies can change ads on such billboards remotely by accessing the billboard machine through the wireless mobile phone network. Digital billboards show 6–10 s of ads, with as many as eight businesses sharing one billboard. 4. Passenger Information System: A passenger information display system is an electronic device for providing public transportation users with information about the existence and status of a public transportation service through the use of visual, voice or other media. A distinction can be made among the information given by these systems.


References 1. Braun T, Kassler A, Kihl M, Veselin R. Multi hop wireless networks 2. Wireless Metropolitan Area Networking (WMAN). http://wikid.io.tudelft.nl/WikID/index.php/ Wireless_Metropolitan_Area_Networking_(WMAN) 3. Karloff H, Subbaraman R. Designing wireless metropolitan-area networks using mathematical optimization. In: IEEE 15203052 2015 wireless telecommunications symposium (WTS), 15– 17 April 2015 4. Osborn R. https://www.sabaitechnology.com/blog/24-ghz-vs-5-ghz-wifi. 3rd Nov 2017 5. Radio mobile software. https://www.Radiomobile.com 6. Korkakakis N, Vlachos K. Building wireless metropolitan networks 7. Laiho J, Wacker A, Novosad T. Radio network planning and optimization 8. Wi-Fi solutions and digital communications. http://wifi-plus.com

Chapter 6

Security Enhancement of E-Healthcare System in Cloud Using Efficient Cryptographic Method N. Rajkumar and E. Kannan

1 Introduction Data security has now become a significant part of data communication, as users spend much of their time connected to the network. One of the key reasons for the success of intruders is that much of the data they receive from a device is in a form that can be interpreted and understood. Various methods are used to maximize the security of the data being transmitted. The important method used to provide confidentiality is cryptography, the art and science of protecting information from undesirable individuals by converting it into a form indiscernible to its attackers while it is stored and transmitted. It is a fundamental building block for information systems, and it relates to the study of mathematical techniques associated with aspects of information security such as confidentiality, data integrity, and authentication of the data. Data that can be interpreted and understood without any special steps is called plaintext or simple text in cryptographic terminology. Encryption is the process of disguising plaintext in such a way as to conceal its content; encrypting plaintext results in unreadable output called ciphertext. The method of recovering the plaintext from the ciphertext is called decryption. A device or product that provides encryption and decryption is called a cryptosystem. A symmetric cryptosystem utilizes the same key for both encryption and decryption, so the key ought to be exchanged between source and destination and managed securely. In this technique, the key is overseen by the clients, so the issue of key administration is redressed. The security of the single key in symmetric key encryption is most significant, so the key is transferred using a separate secure protocol. N. Rajkumar (&) · E. Kannan Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_6


At present, the health service sector requires an environment that minimizes the tedious and overpriced activities needed to acquire a patient's complete clinical information, and that steadily incorporates these various assortments of clinical information to convey them to the healthcare framework. The electronic health record (EHR) has been broadly adopted to enable medical care providers and patients to access medical services data from any place and at any time. Cloud organizations provide significant infrastructure at low cost and good quality, and cloud computing used in the medical care division decreases the expense with improved effectiveness and quality. Nevertheless, nowadays we face issues in storing data in the cloud. Here, we present some of the preceding work of researchers. Kim et al. [1] have proposed a trusted model for efficient reconfiguration and allocation of computing resources depending on the user's request; to achieve reliability, confidence calculations are made. Yang et al. [2] have proposed a shared firewall confidence model focused on cloud computing. A protocol has been proposed by Ahmed et al. [3] to create confidence and confidentiality when accessing data. Brodkin [4] has identified seven security threats that need to be addressed before companies make decisions about moving to a cloud computing model. According to Chen and Zhao [5], cloud computing as an approach poses new threats, impacts some, and magnifies others. Grobauer [6] has clarified these risks and their effect on security risks and vulnerabilities. Mehmood and colleagues have explored the usage of grid and cloud computing in healthcare [7, 8], transport [9–11], and distance learning [12] in previous work. In this paper, we address healthcare cloud security concerns and suggest architectures to protect healthcare cloud data. In the LKH model, both a one-way hash technique and a reverse hash technique are implemented. With the hashing technique, data of any size can be converted to a fixed size, so the output length can be kept small; the hash function computation time is also relatively short compared with a symmetric algorithm. A one-way hash function is used in the LKH model, so it is hard to reverse the process and recover the data that was hashed.

1.1 Objective of Our LKH Model

The main objectives of our proposed work are as follows:
(i) Group-based data sharing is used in the cloud environment, maintaining forward and backward secrecy using one-way hash and reverse hash techniques.
(ii) To eliminate the group key compromise attack.
(iii) To consume less computation time for key creation.
(iv) To reduce the key maintenance cost.


2 Comparative Analysis In this section, a comparative analysis of different encryption algorithms with regard to different attributes, such as key size, block size, number of rounds, degree of protection, known attacks, and encryption speed, is discussed. Table 1 presents the comparative evaluation among symmetric and asymmetric algorithms in tabular form for clean and quick analysis.

3 Methodology The RSA, Blowfish, AES, and LKH algorithms are selected and implemented. Public key cryptosystem safety is based on the difficulty of factoring the product of large prime numbers [12, 13]. The RSA public key cryptosystem is used for key exchange, data encryption, or digital signatures. It produces a pair of keys: a public key for encryption and a private key for decryption. The algorithm includes different stages: the preliminary step is key generation, producing the keys to be used to encrypt and decrypt data; the second step is encryption; and the third step is decryption. Key lengths range from 1024 to 4096 bits. The public and private keys are created with the help of two prime numbers and are utilized for the encryption and decryption processes. The sender encrypts the message with the receiver's public key, and the message is sent to the recipient; the receiver decrypts the message using the private key [14]. RSA thus breaks down into three broad steps: key creation, encryption, and decryption. The US National Institute of Standards and Technology (NIST) called for an Advanced Encryption Standard to replace the Data Encryption Standard; the algorithm selected was the symmetric key block cipher invented in 1998 by Joan Daemen and Vincent Rijmen. AES is a 128-bit block cipher that uses variable key lengths of 128, 192, and 256 bits. AES encryption is fast and versatile, particularly on small devices, and can be implemented on different platforms; it has also been carefully checked for several security applications [16–18]. Schneier [18] designed the Blowfish algorithm and made it available in the public domain; the algorithm has existed since 1993. Blowfish has a key size of 32–448 bits. This symmetric key block cipher additionally has a 64-bit block and a Feistel network. Due to its small size, it can be implemented efficiently in hardware applications. The algorithm consists of two parts: a key expansion part and a data encryption part. The key expansion part converts a key of at most 448 bits into several sub-key arrays totaling 4168 bytes [18]. The data encryption takes place via a 16-round Feistel network [19]. Blowfish is only suitable for applications where the key does not change often, such as a communication link or automatic file encryption. When implemented on 32-bit microprocessors with large data caches, it is substantially faster than most encryption algorithms.
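To make the three RSA stages concrete, here is a toy MATLAB walk-through with the classic textbook primes p = 61 and q = 53 (illustrative only; real RSA moduli are 1024–4096 bits, and powmod/modinv are hypothetical helpers defined below, not library calls; runs as one script in MATLAB R2016b or later):

% Toy RSA: key creation, encryption, decryption (textbook-sized numbers)
p = 61; q = 53; n = p*q;            % n = 3233, the public modulus
phi = (p-1)*(q-1);                  % phi = 3120
e = 17;                             % public exponent, coprime to phi
d = modinv(e, phi);                 % private exponent, d = 2753
m  = 65;                            % plaintext encoded as an integer < n
c  = powmod(m, e, n);               % encryption: c = m^e mod n = 2790
m2 = powmod(c, d, n);               % decryption recovers m = 65

function r = powmod(b, e, m)        % square-and-multiply modular power
r = 1; b = mod(b, m);
while e > 0
    if mod(e, 2) == 1, r = mod(r*b, m); end
    e = floor(e/2); b = mod(b*b, m);
end
end

function d = modinv(a, m)           % modular inverse (brute force, toy sizes)
d = find(mod((1:m)*a, m) == 1, 1);
end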

Table 1 Comparison of traditional encryption and decryption algorithms

Algorithm | Key size (bits) | Block size (bits) | Rounds | Attacks found | Level of security | Encryption speed
DES | 56 | 64 | 16 | Brute-force attack | Not adequate | Very slow
3DES | 112 or 168 | 64 | 48 | Men-in-middle attack, differential analysis | Vulnerable | Very slow
AES | 128, 192, 256 | 128 | 10, 12, 14 | Brute-force attack, differential analysis, side-channel attack, key recovery attack | Excellent | Fast
RSA | Key length depends on the number of bits in the modulus | Not fixed | Nil | Brute-force attack, timing attack | Very high | Very fast
Blowfish | Variable length, i.e. 32–448 | 64 | 16 | Not yet broken, but prone to key-related attack | High | Very fast
Twofish | 128, 192, 256 | 128 | 16 | Boomerang attack, differential attack, related-key attack | Good | Fast
Threefish | 256, 512, 1024 | 256, 512, 1024 | 72 for 256 and 512, 80 for 1024 | Improved related-key boomerang attack | Good | Fast
RC5 | 0–1024 (128 suggested) | 32, 64, 128 (64 suggested) | 1–255 (64 suggested) | Correlation attack, timing attack | Good | Slow
ECC | Smaller but effective key | Stream size is variable | 1 | Doubling attack | High | Very fast
IDEA | 128 | 64 | 8 | Linear attack | Good | Fast


Fig. 1 Encryption runtime of text files (text file encryption time in milliseconds versus file size, 1–20 MB, for RSA, BF, AES, and the proposed scheme)

The Logical Key Hierarchy (LKH) model is based on this method of key generation. In the LKH model, a multicast community that dynamically varies is generated for the storage of all users; the subset may differ based on a user's exile from the group or a new user's inclusion in the group. Here, the server broadcasts community information that can be accessed by all members of the group. Since the information is transmitted over an insecure channel, other parties that are not in the community could access it; to add privacy to the data, we develop two hash-based cryptographic techniques that are available to the users in the community and to the server [20]. In existing cryptosystems, the key generation process has taken more time; our proposed strategy takes very little time for key creation and has less computation cost compared with the existing techniques.

Algorithm for Key Generation
Step 1: Generate a random value x and a secret value s.
Step 2: Generate the group key by applying a reverse hash technique and XOR operations on the random value and the secret value.
Group Key: KGni = h^(−i)(x) ⊕ h^i(s)
Step 3: Generate the session key by performing a one-way hash technique and XOR operations on the group key.
3.1: Create the round key: Kcr = H(KG11 ⊕ KG12)
3.2: Create the seed value: Sdcr = Kcr ⊕ s
Session Key: Ks11 = H(Kcr ⊕ Sdcr)
Step 4: Generate the file key by performing encryption with the session key.
Final Key: Kf11 = E(Kcr ⊕ Ks11)
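A minimal MATLAB sketch of this hash-and-XOR derivation, saved as demo_lkh_keys.m (our reading of the algorithm: the reverse hash chain h^(−i)(x) is modelled by precomputing a forward SHA-256 chain and consuming it from the end, s is hashed before the XOR with Kcr so the lengths match, and the final encryption step E(·) is omitted; all values are illustrative):

function demo_lkh_keys
% Group, round, and session key derivation per the algorithm above,
% using SHA-256 via the JVM bundled with MATLAB.
x = uint8('random-value'); s = uint8('secret-value'); n = 8;
chain = cell(n,1); chain{1} = sha256(x);
for i = 2:n, chain{i} = sha256(chain{i-1}); end   % forward chain of x
KG11 = bitxor(chain{n},   hashpow(s,1));   % h^-1(x) XOR h^1(s)
KG12 = bitxor(chain{n-1}, hashpow(s,2));   % h^-2(x) XOR h^2(s)
Kcr  = sha256(bitxor(KG11, KG12));         % round key
Sdcr = bitxor(Kcr, sha256(s));             % seed (s hashed to match length)
Ks   = sha256(bitxor(Kcr, Sdcr));          % session key Ks11
fprintf('session key: %s\n', sprintf('%02x', Ks));
end

function d = hashpow(m, i)                 % i-fold one-way hash h^i(m)
d = sha256(m);
for k = 2:i, d = sha256(d); end
end

function d = sha256(m)                     % SHA-256 of a byte vector
md = java.security.MessageDigest.getInstance('SHA-256');
d  = typecast(md.digest(typecast(uint8(m(:)), 'int8')), 'uint8');
end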


4 Result and Discussion The outputs of the different algorithms were compared when applied to text files. Encryption time, decryption time, and throughput are the performance metrics. The experiment was conducted on a 64-bit Windows laptop with an i5 processor, a 3.2 GHz CPU, and 4 GB RAM. The AES/RSA/BF algorithms were run with a 128-bit key. The encryption/decryption was replicated 10 times for each of the data blocks, and the time requirement was recorded for each run. The average time taken was then calculated and used for each algorithm's throughput measurement; a sketch of this measurement procedure follows. The LKH model is a tree-based key creation technique. It uses a smaller number of hash functions and XOR operations for the final key creation, as shown in Table 2. In the existing work, the mutual authenticated key agreement protocol (MAKAP), an ECC authenticated protocol and a robotic key agreement protocol are utilized for key creation. In the proposed work, the final key is generated at the third level itself, compared with the existing schemes.
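The timing procedure just described can be sketched as follows; the Fernet (AES-based) cipher from the `cryptography` package is only a stand-in for the algorithms actually benchmarked, and the file sizes mirror those in Figs. 1–3.

```python
# Sketch of the timing procedure described above: each encryption is
# repeated 10 times, the mean time is taken, and throughput is computed
# as data size over mean time (KB/ms). Fernet is a stand-in cipher.
import os, time
from cryptography.fernet import Fernet

def benchmark(data: bytes, runs: int = 10):
    f = Fernet(Fernet.generate_key())
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        f.encrypt(data)
        times.append((time.perf_counter() - start) * 1000)  # ms
    mean_ms = sum(times) / len(times)
    throughput = (len(data) / 1024) / mean_ms               # KB/ms
    return mean_ms, throughput

for size_mb in (1, 2, 5, 10, 20):
    mean_ms, tput = benchmark(os.urandom(size_mb * 1024 * 1024))
    print(f"{size_mb} MB: {mean_ms:.1f} ms, {tput:.2f} KB/ms")
```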

4.1 Running Time of Encryption

The amount of time needed to perform the encryption process using the chosen algorithm is known as the encryption time of the system. In order to demonstrate the efficiency of the implemented data sharing scheme, the encryption execution time for text files is reported in Fig. 1. In this figure, the X-axis shows the different sizes of the input files, and the Y-axis shows the amount of time consumed to encrypt the input text file. The proposed system uses less time for file encryption, according to the provided results.

Table 2 Time consumption with the existing techniques for key creation

Protocols | Client | Server | Total
MAKAS (ms) | 7Th + 4TXOR = 0.0035 | 5Th + 6TXOR = 0.0025 | 12Th + 10TXOR ≈ 0.006
ECCAP (ms) | 3Tm + 9Th = 0.063075 | 2Tm + 6Th = 0.12615 | 5Tm + 15Th ≈ 0.189225
RKAP (ms) | 4Th + 3TXOR = 0.002 | 5Th + 6TXOR = 0.0025 | 9Th + 9TXOR ≈ 0.0045
Proposed scheme (ms) | 3Th + 2TXOR = 0.0015 | 2Th + 3TXOR = 0.001 | 5Th + 5TXOR ≈ 0.0025


The findings also show that the amount of time consumed depends on the amount of data supplied for execution. In addition, when using the proposed data protection, the security of file sharing between different parties is improved.

4.2 Running Time of Decryption

The amount of time required to perform the decryption process using the chosen algorithm is termed the decryption time of the system. The efficiency of the system in terms of milliseconds is shown in Fig. 2. The yellow line indicates the performance of the proposed algorithm, illustrating the performance of the secure data sharing scheme. In Fig. 2, the X-axis shows the different sizes of the input files and the Y-axis shows the amount of time consumed by the decryption process. As per the generated outcomes, the encryption time is greater than the decryption time in the system; moreover, the decryption time of the proposed algorithm scales well, and after secure sharing the client can download the file to their system. From the tabular results of Tables 2 and 3, we have concluded that LKH takes less time to encrypt the text file and RSA takes very little time to decrypt the text file. In the case of the encryption scheme, throughput is determined from the average encryption time; as the throughput increases, the power consumption decreases. As seen from Fig. 3 and Table 5, the throughput of LKH for text file encryption is better than that of the other three algorithms.


Fig. 2 Decryption runtime of text files



Fig. 3 Throughput of text files (throughput in KB/ms plotted against the cryptographic algorithm: RSA, BF, AES and the proposed method)

5 Conclusion and Future Enhancement In this paper, we addressed the problem of privacy protection and sharing large records in the remote cloud. From the presented simulation results, it was inferred that our proposed technique performs better than the other algorithms with regard to the key creation cycle, throughput, and encryption and decryption time. Compared to the existing methods, our method takes less computation time for the key generation process. Our proposed method achieved less computation time, a lower computation cost, more security, and good performance. Consequently, the utilization of cloud computing in the healthcare framework makes health services more affordable, as well as helping the country to achieve health equity.

References
1. Kim H, Lee H, Kim W, Kim Y (2010) A trust evaluation model for QoS guarantee in cloud systems. Int J Grid Dis Comput 3(1):1–10
2. Yang Z, Qiao L, Liu C, Yang C, Wan G (2010) A collaborative trust model of firewall-through based on cloud computing. In: The 2010 14th international conference on computer supported cooperative work in design (IEEE), pp 329–334
3. Ahmed M, Xiang Y, Ali S (2010) Above the trust and security in cloud computing: a notion towards innovation. In: 2010 IEEE/IFIP international conference on embedded and ubiquitous computing (IEEE), pp 723–730
4. Brodkin J (2008) Gartner: seven cloud-computing security risks. Infoworld, pp 1–3
5. Chen D, Zhao H (2012) Data security and privacy protection issues in cloud computing. In: 2012 international conference on computer science and electronics engineering (IEEE), pp 647–651
6. Grobauer B, Walloschek T, Stocker E (2010) Understanding cloud computing
7. Altowaijri S, Mehmood R, Williams J (2010) A quantitative model of grid systems performance in healthcare organisations. In: 2010 international conference on intelligent systems, modelling and simulation. IEEE, pp 431–436
8. Mehmood R, Faisal MA, Altowaijri S (2015) Future networked healthcare systems: a review and case study. In: Handbook of research on redesigning the future of internet architectures. IGI Global, pp 531–558
9. Alazawi Z, Alani O, Abdljabar MB, Altowaijri S, Mehmood R (2014) A smart disaster management system for future cities. In: Proceedings of the 2014 ACM international workshop on wireless and mobile technologies for smart cities, pp 1–10
10. Alazawi Z, Altowaijri S, Mehmood R, Abdljabar MB (2011) Intelligent disaster management system based on cloud-enabled vehicular networks. In: 2011 11th international conference on ITS telecommunications (IEEE), pp 361–368
11. Mehmood R, Alam F, Albogami NN, Katib I, Albeshri A, Altowaijri SM (2017) UTiLearn: a personalised ubiquitous teaching and learning system for smart societies. IEEE Access 5:2615–2635
12. Panda M (2016) Performance analysis of encryption algorithms for security. In: 2016 international conference on signal processing, communication, power and embedded system. SCOPES—IEEE, pp 278–284
13. Altowaijri SM (2020) An architecture to improve the security of cloud computing in the healthcare sector. In: Smart infrastructure and applications. Springer, pp 249–266
14. Rivest RL, Shamir A, Adleman L (1978) A method for obtaining digital signatures and public-key cryptosystems. Commun ACM 21(2):120–126
15. Singh G (2013) A study of encryption algorithms (RSA, DES, 3DES and AES) for information security. Int J Comp Appl 67(19)
16. Abd Elminaam DS, Abdual-Kader HM, Hadhoud MM (2010) Evaluating the performance of symmetric encryption algorithms. Int J Netw Secur 10(3):216–222
17. Singh G, Kinger S (2013) Integrating AES, DES, and 3-DES encryption algorithms for enhanced data security. Int J Sci Eng Res 4(7):2058
18. Schneier B (1993) Description of a new variable-length key, 64-bit block cipher (Blowfish). In: International workshop on fast software encryption. Springer, Berlin, Heidelberg, pp 191–204
19. Nie T, Zhang T (2009) A study of DES and Blowfish encryption algorithm. In: Tencon 2009–2009 IEEE region 10 conference, pp 1–4
20. Rajkumar N, Kannan E (2020) Attribute-based collusion resistance in group-based cloud data sharing using LKH model. J Circ Syst Comput 29(02):2030001–2030020

Chapter 7

Development of Low-Cost Indigenous Prototype Profiling Buoy System via Embedded Controller for Underwater Parameter Estimation Pedagadi V. S. Sankaracharyulu, Munaka Suresh Kumar, and Ch Kusma Kumari

1 Introduction A globally observed fact is that the sea level is rising at a rate of nearly 3 mm per year. Melting of ice and extreme weather events are causing a change in the lifestyle of humans as well as nature. All these changes are due to long-term climate variability, and its impact leads to many adverse effects like severe droughts, coastal flooding, frequent heat waves, and tropical cyclones. Consequently, there is a need for understanding the changes in both atmosphere and ocean to cater to the needs that arise due to climate variability [1]. Especially, observing the ocean currents and their movements is an important task to predict the changes in the climate and ocean behavior. Parameters like temperature, pressure, salinity, circulation patterns, sea level, and pH are used to predict the behavior of water bodies. An indigenous “Low-cost profiling float” design is attempted in this paper. The motivation behind this design is to implement the emerging embedded technology, which is giving solutions to many problems in real-time applications. Section 2 describes the shape of the buoy system considered for the prototype design. Sections 3 and 4 describe the embedded control and flotation mechanism. Section 5 presents the field trials conducted and the problems encountered while

P. V. S. Sankaracharyulu (&) · Ch. Kusma Kumari Electronics and Communication Department, Gayatri Vidya Parishad College of Engineering (A), Madhurawada, Visakhapatnam 530048, India e-mail: [email protected] Ch. Kusma Kumari e-mail: [email protected] M. Suresh Kumar Electronics & Information Technology Government of India, Gambheeram, Anandapuram, Visakhapatnam 531163, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_7


designing the buoy system. Finally, in Sect. 6, the observations of temperature and pressure that are measured using the prototype are displayed.

2 Hydrodynamic Shape of Profiling Buoy Profiling buoy systems are generally used to find the underwater parameters [2, 3]. In this design, a special type of setup is arranged to move under the water, which can gather information about the surface and sub-surface parameters like temperature and pressure. Compared to the existing systems, this design can be considered a low-cost system. The profiling buoy is 1.52 m long and weighs 15.6 kg; it is a freely drifting automatic device that adjusts its depth by changing the buoyant force acting on it. The present prototype is programmed to drift at a nominal depth of 5 m. The schematic diagram of the profiling buoy system is shown in Fig. 1.

3 Embedded Controlling Mechanism The novelty and outcome of this research lie in arriving at an improved model of a profiling float that addresses problems like the cost of ocean access systems, maintenance of the float equipment, recovery of the buoy networks, etc.

Fig. 1 Schematic diagram of profiling buoy


The presently developed low-cost profiling buoy is built around the commercially available Arduino [4] Nano (ATmega328P). The profiling buoy provides a platform to mount the Arduino, the temperature and pressure sensor (MS5540C), the SD module and their accessories along with batteries. All the modules in the design receive power from a lithium-ion battery circuit [5]. Hence, the explanation here focuses on the flotation; the details about the sensors can be found in the literature [6]. The block diagram of the profiling buoy is shown in Fig. 2. The modules in the buoy system are described as follows.

3.1 Embedded Controllers

In this prototype, two Arduino controllers are used to achieve compatibility with all the peripherals. The temperature/pressure sensor is interfaced with Arduino2 through serial peripheral interface (SPI) communication. The SD module is interfaced with Arduino1 through SPI communication.

3.2 Linear Actuator

A linear actuator is used as a single-stroke pump to transfer the liquid, which happens to be oil in this case, into the external bladder. The bladder used in this design is a rubber bladder. Arduino2 drives the linear actuator through the driver (L298N) and a four-channel relay system.

3.3 SD Module

The micro SD module [7] used in this design supports memory cards of up to 4 GB. The surface and sub-surface data obtained from the MS5540C sensor are stored in the SD card through Arduino2 and Arduino1.

Fig. 2 Block diagram of profiling buoy


4 Flotation of Profiling Buoy The flotation of the buoy (Fig. 3) is based on Archimedes' principle and specific gravity calculations. Any object sinks in water if its specific gravity is greater than 1 and floats on the surface if its specific gravity is less than 1. Specific gravity is calculated as the ratio of the density of the object to the density of water [8]. Density values of the buoy in the sinking and rising modes are calculated from Eqs. (1) and (2):

D1 = M / V1 (1)

D2 = M / (V1 + V2) (2)

where M represents the mass of the profiling buoy system, V1 represents the volume of the buoy, V2 represents the volume of the bladder, D1 represents the density of the buoy before pumping the oil into the bladder (sinking mode) and D2 represents the density of the buoy system after pumping the oil into the bladder (rising mode). The sequence of operation of the designed buoy system is represented through a flowchart in Fig. 4. The methodology for the flotation of the buoy is organized as follows (a small numerical sketch of the density calculations is given after the step list):
Step-1: Initialization of the embedded controllers.
Step-2: Arduino2 drives the linear actuator. The actuator pumps the oil through the hydraulic piston to the bladder.
Step-3: As the volume of the bladder changes, the density of the buoy also changes (D2), which leads to upward movement.
Step-4: The sensor collects the temperature and pressure profiles [9] and sends that data from Arduino2 to Arduino1. The data is stored in the SD card.

Fig. 3 Movement of profiling buoy


Fig. 4 Flowchart representation of the system

Step-5: After the buoy reaches the surface, the bladder deflates and the buoy returns to its original density.
Step-6: Again, the sensor collects the data and stores it in the SD card. Once the buoy reaches its original density (D1), it starts sinking in the downward direction.
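As a worked illustration of Eqs. (1) and (2), the short sketch below computes the buoy's density and specific gravity in both modes. Only the 15.6 kg mass comes from Sect. 2; the hull and bladder volumes are assumed illustrative values.

```python
# Numerical sketch of the density calculations in Eqs. (1) and (2).
# The hull and bladder volumes are assumed illustrative values; the
# mass (15.6 kg) is taken from the buoy description in Sect. 2.
M = 15.6                 # mass of the buoy system (kg)
V1 = 0.0152              # assumed hull volume (m^3), illustrative
V2 = 0.0008              # assumed bladder volume when inflated (m^3)
RHO_WATER = 1000.0       # density of fresh water (kg/m^3)

D1 = M / V1              # Eq. (1): bladder deflated (sinking mode)
D2 = M / (V1 + V2)       # Eq. (2): bladder inflated (rising mode)

for label, d in (("deflated", D1), ("inflated", D2)):
    sg = d / RHO_WATER   # specific gravity relative to water
    state = "sinks" if sg > 1 else "floats/rises"
    print(f"{label}: density {d:.1f} kg/m^3, SG {sg:.3f} -> buoy {state}")
```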

5 Field Trials The hydrodynamic shape and weight of the buoy play an important role in the upward and downward movements. Some designs that have been used, with various housing models and weight conditions, are discussed in the following iterations [10].

5.1 Iteration 1: PVC Housing

In the preliminary model, a prototype based on a complete PVC housing was considered. The major issue observed was density related: due to the low density/weight of PVC, proper sinking was not achieved, and the buoy stayed on the surface of the water body.

5.2 Iteration 2: Iron Housing

In the next iteration, a complete iron housing was considered. The inner surface area of the iron housing is rough, and the piston moves forward and backward over that rough surface. Due to the resulting friction, the piston cap got damaged, resulting in leakage of oil/air into the actuator.

5.3 Iteration 3: Combination of Both PVC and Iron Housing

The final design is the combination of both iron and PVC housing (Fig. 5), in which the friction and density problems are avoided. The use of PVC and iron housing gave good performance in the sinking and rising of the buoy.

6 Embedded Setup and Flotation of Buoy This paper describes an implementation strategy and justification for a new form of low-cost Do It Yourself (DIY) profiling buoy system using an Arduino-based microcontroller. The proposed system is designed to monitor and analyze the underwater parameters. The PCB board is designed (Fig. 6), and all the components are placed on it as per the circuit diagram. The hardware board is the main controller setup to operate the profiling buoy in both the upward and downward directions. All the components have been enclosed in an airtight space to prevent water leakage and make effective use of the buoyant system. Temperature and pressure profiles of the water body have been collected from the sensor, which is externally attached to the buoy system. To assess the strength and accuracy of collecting the data, the proposed design is tested in a water body which has 5 m depth. Figure 7 describes the position of the buoy system in sinking mode. As the bladder is in a deflated position, the buoy moves in a downward direction. Figure 8 shows the upward movement of the experimental setup. This condition of operation is initiated with the expansion of the bladder.


Fig. 5 Final model (combination of both PVC and iron housing)

Fig. 6 Overall hardware setup of profiling buoy

Figure 9 shows the complete rise of the buoy system to the surface level. The data collected during the ascending and descending operations is stored in the SD card.

7 Parameters Estimation and Data Analysis The experimental results of the profiling buoy are collected in two different scenarios: one set of results during a sunny day and the other set during a cloudy day. Figure 10 describes the measurements obtained in hot sunny weather. Figure 12 describes the variation of pressure with depth, and Fig. 13 describes the variation of temperature with depth. From the plots, it is observed that while the pressure increases with depth, the temperature, on the other hand, decreases with depth.


Fig. 7 Profiling buoy in sinking mode

Fig. 8 Profiling buoy in an upward movement

Fig. 9 Profiling buoy raised to the surface of the water

From Fig. 12, the pressure varies uniformly during the ascending and descending movements. But from Fig. 13, there is a variation in the temperature scale between the upward and downward movements. At depths of 140–190 cm, the temperature scale is the same in both directions. At the surface levels of the water, the variation in the temperature is clearly observed. This variation in the temperature is due to the hot weather during the experimentation.


Fig. 10 Underwater parameter data during sunny weather

From the results, it is observed that, as the depth increased, the pressure also increased gradually, and there is also a decrease in the temperature profile along with the depth. It is also observed that at the surface level and at deeper levels of water, the variation in the temperature is nearly constant, similar to the pressure. This constant temperature is due to the cloudy weather during the experimentation. The temperature and pressure profiles collected using this embedded controlled buoy system are observed to be similar to those of the existing systems.


Fig. 11 Underwater parameter data during cloudy weather

Another set of data was collected during a cloudy day. Figure 11 describes the data profile in both upward and downward movements, plotted as shown in Figs. 14 and 15. Figure 15 shows that the stepwise slope is constant (1° per 10 cm), but the average slope is 5° per 150 cm, so there is a threefold difference between the instantaneous slope and the overall slope. This could be due to (a) insensitivity of the temperature sensor, and (b) the residence time of the instrument at a given location not being enough for the temperature to stabilize.


Fig. 12 Pressure versus depth graph on sunny day

Fig. 13 Temperature versus depth graph on sunny day


Fig. 14 Pressure versus depth graph on cloudy day

Fig. 15 Temperature versus depth graph on cloudy day

In this prototype, the communication of sensors with the buoy is tethered. But, in the commercial product, the entire system will be embedded into the profiling float, and the data transmission to the external world is done through new wireless transmission mechanisms.


8 Conclusion The prototype profiling buoy designed and developed in this paper is primarily used to monitor underwater parameters. The novel low-cost system finds application in collecting data from water bodies like river, sea and ocean environments. The design is quite flexible in the sense that extra sensors may be added to the proposed system to measure a few additional parameters like turbidity, conductivity, salinity, etc. In the proposed hardware setup, the collected data is stored in the SD module. In the future, the collected data in the SD module may be transmitted via wireless transmission either to a ground station or to a satellite receiver to monitor the real-time underwater parameters in an online mode. The cost of making the prototype is around a few thousand rupees, whereas the existing profiling floats cost lakhs of INR. Based on the cost of the prototype, it is estimated that the cost of a commercial product will definitely be less than that of the existing ones. The idea, as coined, has been demonstrated with a working prototype in a laboratory setup. Comparison with a commercial instrument will be taken up in the future scope of the work.

References
1. Banlue P et al (2018) Aerial-to-surface communication and data transferring system for environmental survey. In: 22nd international computer science and engineering conference (ICSEC). IEEE
2. Riser SC et al (2016) Fifteen years of ocean observations with the global Argo array. Nat Clim Change 6(2):145
3. Kaliyaperumal P et al (2015) Design analysis and installation of offshore instrumented moored data buoy system. J Shipp Ocean Eng 5:181–194
4. Langis DP (2015) Arduino based oceanographic instruments: an implementation strategy for low-cost sensors
5. Chang HI et al (2017) The NTU buoy for typhoon observation, part 1: system. OCEANS-Aberdeen. IEEE
6. Echert DC et al (1989) The autonomous ocean profiler: a current-driven oceanographic sensor platform. IEEE J Ocean Eng 14(2):195–202
7. Smith R et al (2009) Acoustic doppler current profiler wave sentry buoy. Oceans, IEEE
8. Dhanak M et al (1999) Using small AUV for oceanographic measurements. In: Oceans MTS/IEEE. Riding the crest into the 21st century. Conference and exhibition. Conference proceedings (IEEE Cat. No. 99CH37008), vol 3
9. Le Menn M et al (2019) Development of surface drifting buoys for fiducial reference measurements of sea-surface temperature. Front Mar Sci 6:578
10. Aracri S et al (2016) Trials of an autonomous profiling buoy system. J Oper Oceanogr 9(sup1):s176–s184

Chapter 8

Analysis of Infant Mortality Rate in India Using Time Series Analytics D. Jagan Mohan Reddy

and Shaik Johny Basha

1 Introduction Time series data is defined as a successively ordered series of numerical data points. For analyzing such data in sequential analysis, different state-space methods have been developed [1]. Time series analysis involves extracting the significant information, patterns, or attributes needed to characterize the behavior of the series. One of the important and useful models is regression analysis, which develops a model based on at least one independent variable to forecast the result on continuous variables. A univariate time series model comprises a single observation recorded over a continuous period or for a long time. For example, the monthly shampoo sale shown in Fig. 1 demonstrates 36 months of monthly sales of shampoos in India. The sales of shampoos are obtained from “Datamarket—An online repository” (https://datamarket.com/data/set/22r0/sales-of-shampoo-over-a-three-year-period). Various models have been proposed for forecasting in univariate time series analysis [2–5]. The univariate model is a proper tool for forecasting, deriving informative variables which may or may not be directly observable. This has been observed in hospital patient records, which gave insights into various factors like sickness, thyroid, accident, and so on, which are empirically not quantifiable [6]. More than one observation collected over a period is known as a multivariate time series. One example is the crude death rate (CDR) and infant mortality rate (IMR) in India from 1971 to 2013, which is shown in Fig. 2. The data is available online at the Govt. of India online repository (https://data.gov.in/resources/time-series-data-crude-death-rate-and-infant-mortality-rate-india-1971-2013). In this example, CDR and IMR are two observations captured over time, which can be treated as a multivariate analysis. CDR is statistically analyzed in [7–9]. D. Jagan Mohan Reddy (&) · S. J. Basha Department of CSE, Lakireddy Bali Reddy College of Engineering, Mylavaram, Krishna 521230, Andhra Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_8


Fig. 1 Sales of shampoo over a period of 36 months

Fig. 2 CDR and IMR in India over 43 years

Compared to univariate analysis, multivariate time series analysis is tougher and more challenging. Designing and correlating multivariate series across hierarchical levels may vary from system to system. To deal with such time series data, one must use factor analysis, i.e., reduce the attribute space from a huge count of attributes to the lowest count of factors. The authors of [10–14] have proposed many machine learning models that can forecast and predict the future. The remaining paper is divided into three sections. Section 2 details the time series data in various application domains proposed by researchers. In Sect. 3, we discuss the results obtained after performing the analysis on the time series data. Finally, the paper concludes, and further directions for this research are discussed in Sect. 4.


2 Related Work on Time Series in Various Domains For the last twenty years, the evaluation of new methods for time series forecasting has been done with the help of the Box–Jenkins ARMA model. The authors of [15] proposed a combinational approach which combines linear ARMA and ANNs. In their work, the methodology has various justifications for automating ARMA modeling: (i) deep knowledge is required for building an ARMA model, which is complex; (ii) it is difficult to build an ARMA model, as it requires training in statistical analysis and knowledge of the application domain; and (iii) when dealing with large datasets, an automatic system is needed. The prediction error (RMSE) rate was 0.0025 [15], which is low compared with various other methods on 500 training samples. The authors of [16] proposed an asymmetric subsethood-product fuzzy neural inference system for predicting electricity prices in time series prediction, a novel method for accurate time series prediction using a neuro-fuzzy inference system. Lee and Roberts [1] carried out research work based on multivariate incomplete datasets using a dynamic multi-autoregressive model. The work focused on solving single- and multiple-sensor problems. In their work, they considered time series data of global temperature from 1880 to 2007. Lin [6] proposed a novel approach to model and forecast hospital patient movements, with two objectives: one is to forecast the patient movement, and the other is to develop a decision-making system that can automatically determine the forecast. Box–Jenkins univariate and Tiao–Box multiple time series models were used by the authors to forecast the in-patient movements in the hospital. Opare [17] studied the mortality rate of children under five years. The study focused on time series data ranging from the year 1961 to the year 2012. To analyze the mortality rate, standard time series models such as the random walk with drift, the Bayesian dynamic linear model, and Box–Jenkins (ARIMA) were used. Jayanthi and Iyyanki [18] examined the crude birth rate and mortality rate in India. As per the economic survey 2017–2018, the mortality rate declined from 6.5 to 0.6%. The experimental analysis was based on statistical analysis; their work reports that the 95% confidence interval (CI) of the birth rate was 32.08–39.22 in 1984 and had fallen to 20.68–25.24 by 2011. The authors of [19] conducted a real-time study, collecting data from around 86 locations during 2011–2013, and presented experimental results for various diseases using statistical analysis. The study reveals that the death rate was 8.5% under the confidence interval 8.1–8.9. The highest death rate, 20.8%, was due to cardiovascular disorders, and 18.4% was due to parasitic disorders. The majority of the death causes were infections and maternal issues.


Compared to global levels, mortality due to road accidents was high across various Indian states [20]. The authors observed the data from 1990 to 2017. Their experimental results show the causes of death obtained from autopsies, calculated based on various input parameters. To the best of our knowledge, this paper covers these popular and novel research works, which are studied in detail as summarized in Table 1. Furthermore, there are several tools to predict or forecast time series accurately. Although this is not a clear research objective, it is interesting to be able to develop more real-time forecasting algorithms and tools.

3 Results and Discussions This work analyzes the mortality rate in India using time series analysis over more than three decades. The dataset is obtained from the Govt. of India for 1971–2013. To forecast the IMR per 1000 births and the crude death rate per 100,000, the experimental results are obtained using the popular ARIMA model written in Python. The ordered differencing is calculated for the first and second order to determine any lags in the datasets.

3.1 Infant Mortality Rate (IMR)

The ARIMA (p, q, d) model needs to be tuned according to the selected data over time; p, q and d are the parameters that need to be optimized to find the most accurate model. In our results, we have identified the quality of the individual models based on the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Table 2 describes the parameter tuning of p, q and d (a short script reproducing this grid search is sketched after the table).

Table 1 Summary of research work presenting time series

Studies | Variables | Approach | Application domain
[15] | Multivariate | ARMA + NN | General time series
[16] | Univariate | Neuro-fuzzy | Electricity
[21] | Multivariate | Rough set + NN | Stock price
[6] | Multivariate | Box–Jenkins + Tiao-Box | Medical
[17] | Univariate | ARIMA | Mortality rate
[18] | Univariate | Statistical analysis | Birth and death rates
[19] | Univariate | Statistical analysis | Death rates
[20] | Univariate | Statistical analysis | Death rates

Table 2 Parameter tuning of p, q and d to minimize the AIC and BIC value for IMR

ARIMA (p, q, d) | AIC | BIC | Time in s
(1, 1, 1) | 239.697 | 246.648 | 0.623
(0, 1, 0) | 244.050 | 247.525 | 0.031
(1, 1, 0) | 238.871 | 244.084 | 0.232
(0, 1, 1) | 238.434 | 243.647 | 0.199
(0, 1, 0) | 251.512 | 253.249 | 0.043
(0, 1, 2) | 237.444 | 244.395 | 0.321
(1, 1, 2) | 239.268 | 247.957 | 0.589
(0, 1, 3) | 237.259 | 245.948 | 0.427
(1, 1, 3) | 236.824 | 247.250 | 0.494
(2, 1, 3) | 236.134 | 248.297 | 1.407
(2, 1, 2) | 234.160 | 244.586 | 0.698
(2, 1, 1) | 234.085 | 242.774 | 0.467
(2, 1, 0) | 237.630 | 244.581 | 0.164
(3, 1, 1) | 234.410 | 244.836 | 0.417
(3, 1, 0) | 233.259 | 241.947 | 0.289

The best model is ARIMA (3, 1, 0), with an AIC value of 233.259 for the model built on 43 observations; a log-likelihood value of −111.630 is obtained. For the forecasting of IMR, the intercept and the coefficients of ar.L1, ar.L2 and ar.L3 are −2.5902, −0.4825, −0.0863 and 0.4082, respectively. The diagnostic plot of our results is shown in Fig. 3. The prediction of the IMR is estimated till 2023, and our predictions show that the death rate is going to approach zero per 1000 births. The darker line with the gray shaded region is the prediction for the next 10 years, demonstrated in Fig. 4 with a 95% confidence interval.

3.2 Crude Death Rate (CDR)

The crude death rate is calculated per 100,000 of the entire population. The overall death rate is going to increase in India by 2022, and it is not going to stabilize, unlike the infant mortality rate. The experimental study is done with the ARIMA model, and the results demonstrate the parameter tuning in this model. The best AIC and BIC values are shown in Table 3. The ARIMA (0, 1, 1) model with p = 0, q = 1 and d = 1 is the optimum, with AIC and BIC of −217.893 and −212.680, respectively. The log-likelihood value observed is 111.947. The forecasting can be obtained from the intercept, ma.L1 and sigma2, which are −0.0087, −0.5966 and 0.0003, respectively. The confidence interval (CI) is [0.025–0.975]; the maximum likelihood value lies within the CI −1.008 to −0.185. The diagnostic plot of our results is shown in Fig. 5. The prediction of the CDR is estimated till 2023, and our predictions show that the death rate is going to be a log of 0.7–0.8 per 100,000 people. The darker line with the gray shaded region is the prediction for the next 10 years, demonstrated in Fig. 6 with a 95% confidence interval.


Fig. 3 Diagnostic pot of ARIMA (3, 1, 1) model

Fig. 4 Forecast of infant mortality rate with ARIMA (3, 1, 1) model


Table 3 Parameter tuning of p, q and d to minimize the AIC and BIC value for CDR

ARIMA (p, q, d) | AIC | BIC | Time in s
(1, 1, 1) | −216.078 | −209.128 | 1.597
(0, 1, 0) | −210.578 | −207.103 | 0.189
(1, 1, 0) | −212.971 | −207.758 | 0.312
(0, 1, 1) | −217.893 | −212.680 | 0.635
(0, 1, 0) | −205.894 | −204.157 | 0.171
(0, 1, 2) | −216.371 | −209.420 | 1.716
(1, 1, 2) | −216.823 | −208.135 | 1.433

Fig. 5 Diagnostic plot of ARIMA (0, 1, 1) model

4 Conclusion Many real-time services have been enabled by the enormous growth of time series applications. While there is a rapid increase in applications, it also raises many challenges. To achieve good accuracy in forecasting, we must have a well-formulated mechanism for time series analysis. There are lots of advantages and disadvantages with the autoregressive (AR), moving average (MA) and autoregressive–moving-average (ARMA) methods, which have an impact on time series analysis.


Fig. 6 Forecast of CDR with ARIMA (0, 1, 1) model

Later, the Box–Jenkins method was proposed to accurately forecast time series using ARIMA. This article presented various univariate as well as multivariate time series predictions and discussed several existing techniques. The objective of the proposed work deals with various time series models and the enhancements in the mortality study. The analysis of the related work evidences a clear shift of researchers, over the past couple of years, toward time series forecasting, driven by the challenges created by new algorithms. Furthermore, the evolution of time series shows an increasing concern with both univariate and multivariate variables. The study further continues to investigate time series data using advanced mechanisms such as deep learning to accurately forecast in real time. Our experiments demonstrated that the infant mortality rate in India is going to approach zero by 2023, using ARIMA with the best tuning parameters p = 3, q = 1 and d = 1 and a running time of 0.28 s for the experiment. The crude death rate of India is going to be 7–9% by 2023, using ARIMA with the best tuning parameters p = 0, q = 1 and d = 1 and a running time of 0.635 s.

References
1. Lee SM, Roberts SJ (2008) Multivariate time series forecasting in incomplete environments. Technical Report PARG-08-03, University of Oxford. Available at www.robots.ox.ac.uk/~parg/publications.html
2. Chatfield C (2000) Time-series forecasting. Chapman and Hall/CRC
3. De Gooijer JG, Hyndman RJ (2006) Int J Forecast 22(3):443
4. Tay FE, Cao L (2001) Omega 29(4):309
5. Zhang G, Patuwo BE, Hu MY (1998) Int J Forecast 14(1):35
6. Lin WT (1989) Int J Forecast 5(2):195
7. Saikia N, Choudhury L (2017) Indian J Public Health Res Dev 8(3):28
8. Dhillon PK, Mathur P, Nandakumar A, Fitzmaurice C, Kumar GA, Mehrotra R, Shukla D, Rath G, Gupta PC, Swaminathan R et al (2018) Lancet Oncol 19(10):1289
9. Prabhakaran D, Jeemon P, Sharma M, Roth GA, Johnson C, Harikrishnan S, Gupta R, Pandian JD, Naik N, Roy A et al (2018) Lancet Glob Health 6(12):e1339
10. Cao L, Mees A, Judd K (1998) Physica D 121(1–2):75
11. Chakraborty K, Mehrotra K, Mohan CK, Ranka S (1992) Neural Netw 5(6):961
12. Chen SM, Tanuwijaya K (2011) Expert Syst Appl 38(8):10594
13. Han M, Wang Y (2009) Expert Syst Appl 36(2):1280
14. Yazdanbakhsh O, Dick S (2015) In: Fuzzy information processing society (NAFIPS) held jointly with 2015 5th world conference on soft computing (WConSC), 2015 annual conference of the North American. IEEE, pp 1–6
15. Rojas I, Valenzuela O, Rojas F, Guillen A, Herrera LJ, Pomares H, Marquez L, Pasadas M (2008) Neurocomputing 71(4–6):519
16. Narayan A, Hipel KW, Ponnambalam K, Paul S (2011) In: IEEE international conference on systems, man, and cybernetics (SMC), 2011. IEEE, pp 2121–2126
17. Opare PE (2015) Time series models for the decrease in under-five mortality rate in Ghana, case study 1961–2012. Ph.D. thesis
18. Jayanthi P, Iyyanki M (2020) Crude birth rate and crude mortality rate in India: a case of application of regression in healthcare, aging-life span and life expectancy. IntechOpen, London
19. Kalkonde Y, Deshmukh M, Kakarmath S, Puthran J, Agavane V, Sahane V, Bang A (2019) A prospective study of causes of death in rural Gadchiroli, an underdeveloped district of India (2011–2013). J Glob Health Rep 3
20. Dandona R, Kumar GA, Gururaj G, James S, Chakma JK, Thakur J, Srivastava A, Kumaresh G, Glenn SD, Gupta G et al (2020) Lancet Public Health 5(2):e86
21. Watanabe H, Chakraborty B, Chakraborty G (2007) In: ICICIC'07 second international conference on innovative computing, information and control, 2007. IEEE, pp 40–40

Chapter 9

Comparison of Bias Correction Techniques for Global Climate Model Temperature Shweta Panjwani, S. Naresh Kumar, and Laxmi Ahuja

1 Introduction Global climate model (GCM)-derived temperatures have been corrected using several bias correction techniques, as these GCMs carry a lot of uncertainty [1, 2]. Different crop models like InfoCrop, DSSAT and APSIM use these bias-corrected climate scenarios for crop yield prediction [3]. Biased climate change scenarios may result in uncertain predictions in various impact studies [4], so the selection of an appropriate bias correction method is necessary to reduce uncertainties from the GCMs. Globally, several studies have compared various bias correction techniques for temperature/precipitation. Evaluation of bias correction techniques has been done for rainfall for hydrological studies over North America [5] and the Myanmar river basin [6]. For the Indian region also, several bias correction techniques, i.e., scaling, power transformation, quantile mapping, etc., have been compared for the summer monsoon [7, 8]. But these studies did not compare the crop model-specific climatic parameters. Keeping this in view, this study aims to compare bias correction techniques for global climate model temperatures (maximum and minimum) for an Indian location, with the analysis performed in the statistical tool R.

S. Panjwani (&) · L. Ahuja Amity Institute of Information Technology, Amity University, Noida, UP, India e-mail: [email protected] S. Naresh Kumar Centre for Environment Science and Climate Resilient Agriculture, Indian Agricultural Research Institute, New Delhi 110 012, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_9


2 Methodology The CCCma climate model daily temperature (maximum and minimum) data are used to compare the bias correction methods, i.e., scaling and empirical quantile mapping, for the Delhi location. The simulated model data are downloaded from the CMIP5 data Web site (https://esgf-node.llnl.gov/projects/cmip5/). For this, the bias in the temperature data is removed using both approaches against the observed data (India Meteorological Department (IMD)) during the period 1971–2005. The scaling method performs scaling between the observed and simulated data using an additive or multiplicative approach [9]; this study used the additive scaling method for temperature. The empirical quantile mapping approach applies the empirical probability distribution function to the observed and simulated values. Bias correction is done by calculating the inverse of the cumulative distribution function (CDF) of the observed values with respect to the model output CDF at the particular value [10]. The bias-corrected value Q can be calculated as:

Q = FY^−1(FF(F)) (1)

Here, FY and FF are the CDFs of the observed data and of the ensemble mean of the model, respectively, and F is the model value being corrected. Then, the yearly average was calculated for the bias-corrected temperatures (maximum and minimum). Finally, the overall performance of both approaches was evaluated for the temperatures based on statistics; for this, the correlation coefficient and RMSE were calculated for maximum and minimum temperature individually.

3 Result and Discussion Uncertainties (bias) in the GCM-generated temperature (maximum and minimum) data have been reduced using the scaling and quantile mapping methods against the observed data for the 35-year period in R. The average maximum temperature per year varies between 29–32.5 and 26.3–29.5 °C for the observed and simulated (biased) data, respectively. After removing the bias using scaling and quantile mapping, it varied between 30 and 33.2 °C for both approaches (Fig. 1). Similarly, the models overestimate the average minimum temperature per year (simulated, i.e., 17.6–21 °C) as compared to the observed data (16.9–18.8 °C). But after bias correction, the minimum temperature ranges between 16.4 and 19.8 °C (Fig. 1). The overall performance of both methods was examined using statistics for the temperatures. The correlation coefficient values for maximum (0.817) and minimum (0.908) temperatures are higher for the data bias-corrected using quantile mapping than for the scaling method. This indicates that the quantile mapping method is able to correct the temperatures to be much more similar to the observed data across the 35-year period. Similarly, the values of another statistical parameter, RMSE, are found


Fig. 1 Mean temperatures (maximum and minimum) per year for observed, biased (uncorrected), bias-corrected (BC_Scaling and BC_eqm) data

Table 1 Comparison of bias correction methods based on statistics for maximum and minimum temperature

Parameter | Biased (simulated) | BC_Scaling | BC_EQM
Correlation coefficient, TMAX | 0.815 | 0.815 | 0.817
Correlation coefficient, TMIN | 0.888 | 0.888 | 0.908
Root-mean-square error, TMAX | 5.481 | 4.082 | 4.027
Root-mean-square error, TMIN | 3.704 | 3.521 | 3.239

to be lower for quantile mapping for the maximum (4.027) and minimum (3.239) temperatures. So, the quantile mapping method can correct the temperature data with less bias as compared to the scaling method (Table 1).


4 Conclusion Climate change impact studies use global climate model (GCM) data as such, without preprocessing, which may result in uncertainties. So, GCM-derived data must be bias-corrected using bias correction techniques, and for this purpose an appropriate method should be selected. This study compared the scaling and empirical quantile mapping methods for maximum and minimum temperature with respect to the observed IMD data for 35 years. Both methods are able to correct the temperature close to the observed data, but the temperatures corrected by the quantile method follow a trend much more similar to the observed data across the 35 years, with lower RMSE values. So, it can be concluded from this study that quantile mapping can perform better than the scaling method.

References
1. Mandal S, Simonovic SP (2019) Quantification of uncertainty in the assessment of future streamflow under changing climate conditions. Hydrol Process 31(11):2076–2094
2. Shen M, Chen J, Zhuan M, Chen H, Xu CY, Xiong L (2018) Estimating uncertainty and its temporal variation related to global climate models in quantifying climate change impacts on hydrology. J Hydrol 556:10–24
3. Lobell DB, Sibley A, Ortiz-Monasterio JI (2012) Extreme heat effects on wheat senescence in India. Nat Clim Change 2(3):186–189
4. Asseng S, Ewert F, Rosenzweig C, Jones JW, Hatfield JL, Ruane AC et al (2013) Uncertainty in simulating wheat yields under climate change. Nat Clim Change 3(9):827
5. Chen J, Brissette FP, Chaumont D, Braun M (2013) Finding appropriate bias correction methods in downscaling precipitation for hydrologic impact studies over North America. Water Resour Res 49(7):4187–4205
6. Ghimire U, Srinivasan G, Agarwal A (2019) Assessment of rainfall bias correction techniques for improved hydrological simulation. Int J Climatol 39(4):2386–2399
7. Acharya N, Chattopadhyay S, Mohanty UC, Dash SK, Sahoo LN (2013) On the bias correction of general circulation model output for Indian summer monsoon. Meteorol Appl 20(3):349–356
8. Choudhary A, Dimri AP (2019) On bias correction of summer monsoon precipitation over India from CORDEX-SA simulations. Int J Climatol 39(3):1388–1403
9. Santander Meteorology Group (2015) downscaleR: climate data manipulation and statistical downscaling. R package version 0.6-0
10. Gudmundsson L, Bremnes JB, Haugen JE, Engen-Skaugen T (2012) Technical note: downscaling RCM precipitation to the station scale using statistical transformations—a comparison of methods. Hydrol Earth Syst Sci 16:3383–3390. https://doi.org/10.5194/hess-16-3383-2012

Chapter 10

Identification of Adverse Drug Events from Social Networks A. Balaji, S. Sendhilkumar, and G. S. Mahalakshmi

1 Introduction Research related to the identification, extraction, and detection of ADEs for drug safety surveillance has increased in the past decade and is still growing. Gradually, Web sites, social blogs, and forums have emerged as a major platform and a promising source hosting enormous discussions about health-related issues and their treatments, usage of drugs and their effects, and adverse drug event reporting. Such discussions, comments, and reviews posted in social media include information about the use of drugs for the treatment of a particular medical condition and the Adverse Drug Events or medical conditions/symptoms occurring when a drug is used. With the increase in usage and users of social media applications like forums, blogs, etc., social networks have become an important arena for patients to share their treatment experiences, medicines consumed, potentially related drugs, their combinations and adverse reactions. Patient social media comprises a very large and diversified population encompassing large volumes of discussions about medications in the form of unstructured data, especially text. Analyzing these discussions could provide new medical knowledge which helps in identifying the adverse drug events for a drug. This helps in capturing ADEs to understand more about patients from their own perspective based on their experiences. A. Balaji Department of Computer Science and Engineering, KCG College of Technology, Chennai, India e-mail: [email protected] S. Sendhilkumar (&) Department of Information Science and Technology, Anna University, Chennai, India e-mail: [email protected] G. S. Mahalakshmi Department of Computer Science and Engineering, Anna University, Chennai, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_10


Identifying patient reports about ADEs manually from health-related social blogs, Web sites and forums is nearly impossible. Value-added, business-critical data may be generated by analyzing such social media posts about ADEs. A new perspective toward understanding drug effectiveness and its side effects may be derived and used for improving the practice of pharmacovigilance. There are multiple challenges faced while mining for adverse drug events from social media data, for the following reasons: (1) patients are not always aware of correct medical terms and their usage, and (2) patients/users use imaginary phrases, their own descriptions of symptoms with limited knowledge, and reviews with vernacular/colloquial expressions. Adverse drug events and indications are difficult to distinguish, as patient reviews are informal and may contain abbreviated information, wrongly spelt terms, grammatically incorrect phrases and errors. The research work reported in this paper is an attempt to extract adverse drug events which are reported by patients on health social network sites and forums. Patient forum data is collected from multiple data repositories. The collected data is initially normalized and then preprocessed using regular NLP techniques like segmentation, stop word removal, POS tagging, stemming, etc., using NLP tools. This is followed by extraction of medical entities and adverse drug events using MetaMap, which uses multiple lexicon sources such as the FDA Adverse Event Reporting System (FAERS), the Unified Medical Language System (UMLS) and the Consumer Health Vocabulary (CHV). Associations between drugs and adverse events are extracted using the Apriori algorithm, and the association rules generated from the Apriori algorithm help in identifying potential adverse drug events of a drug (a small sketch of this mining step is shown below). Negated adverse drug events, which do not cause any harm or in some cases are not even ADRs, are filtered out using a semantic filtering algorithm. Finally, standard information retrieval metrics, like F-measure, precision, and recall, are used for evaluating the overall performance of the proposed system.
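As an illustration of the association mining step, the sketch below applies the mlxtend implementation of Apriori to a few hypothetical per-post (drug, adverse event) mention sets; the item names, thresholds and the drug:/ade: prefixes are assumptions for readability, not the system's actual configuration.

```python
# Sketch of mining drug/adverse-event associations with Apriori.
# The transactions are hypothetical stand-ins for per-post mention
# sets produced by the upstream extraction steps.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

posts = [
    ["drug:atorvastatin", "ade:muscle pain"],
    ["drug:atorvastatin", "ade:muscle pain", "ade:fatigue"],
    ["drug:metformin", "ade:nausea"],
    ["drug:atorvastatin", "ade:fatigue"],
    ["drug:metformin", "ade:nausea", "ade:headache"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(posts), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

# Keep only drug -> adverse-event rules as candidate ADE signals
drug_rules = rules[
    rules["antecedents"].apply(lambda s: all(i.startswith("drug:") for i in s))
]
print(drug_rules[["antecedents", "consequents", "support", "confidence"]])
```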

2 Related Works

2.1 Adverse Drug Event Extraction Using Semi-automated Techniques

Carlo Piccinni et al. [1] and Yu Zhang et al. [2] have developed human expert-based annotations. Carlo et al. [1] developed a semantic approach-based pharmacovigilance framework for effectively monitoring the adverse effects of drug usage in social media. In this work, a thesaurus is built using a semi-automatic approach involving human experts for the manual mapping of codes identifying drugs and adverse events. Many heterogeneous data sets related to adverse events were developed and merged in the Web-based platform as part of the work reported by Carlo et al. Yu Zhang et al. [2] identified ADRs of hypolipidemic drugs from Chinese adverse event reports. They performed manual


annotation of ADR tags using the WHO Adverse Reaction Terminology (WHO-ART) and human experts. The manually annotated data is then mapped to WHO-ART. To distinguish the seriousness of the ADRs in the 579 records, they built an additional classification measure for the seriousness of ADRs based on the WHO four-level standard of ADR seriousness, and afterward they graded the seriousness of the adverse drug reactions.

2.2 Adverse Drug Event Extraction Using Machine Learning Techniques

Shantanu Dev et al. [3] proposed a machine learning approach to classify adverse drug events. They used Bag-of-Words to vectorize the input data. The algorithms normally utilized in text classification are maximum entropy models, support vector machines, and tree-based models. To avoid the problem of exploding or vanishing gradients in standard RNNs, long short-term memory (LSTM) and various other variants were used. A large-scale drug side-effect prediction system using a collaborative filtering approach was developed by Diego Galeano and Alberto Paccanaro [4]. Their methodology gave recommendations for drug safety experts. The proposed latent factor model was built using 1,525 drugs and 2,050 drug-effect associations extracted from public safety data. Li et al. [5] proposed a joint extraction framework to extract adverse drug events. Training the model and decoding using a beam search algorithm are done using a structured perceptron. Structured learning approaches mostly use the beam search algorithm to improve the decoding efficiency: the beam search algorithm performs a search by defining a beam range and cutting off the parts of the solution space with lower confidence.

2.3 Drug Entity Extraction

Khuri et al. [6, 7] used MetaMap to extract drug entities and adverse drug events. MetaMap provides a weight quantifying the strength of the mapping between any biomedical text and the meta-thesaurus; the higher the weight, the stronger the mapping of the tweet to drug effects. The extracted drug and its associated adverse events are then compared with open-access Web sources like MedlinePlus Drug Information. Anni Coden et al. [8] developed an unsupervised pattern matching approach that examines the word patterns around drug names mentioned in very large clinical corpora. Repeated occurrences of these patterns in the corpus result in a higher probability that a term is a drug name. Fei Li et al. [9] developed a transition-based model for extracting drugs and diseases and mining adverse drug events. They used Named Entity Recognition and the Stanford CoreNLP toolkit for


extraction of drug names and adverse drug events. In this research work of Fei Li et al., training was done using a structured perceptron, and a multiple-beam search algorithm was used for decoding. Jiang and Zheng [10] classified potential drug events from tweets by using supervised machine learning classifiers. Liu et al. [11] and Moh et al. [12] applied both content-based and collaborative classification using NLP techniques and sentiments from data collected from social media for effective classification of adverse events.

2.4 Filtering Negated Adverse Drug Events

Xiao Liu [13] used a semantic filtering algorithm to filter out negated adverse drug events. The algorithm makes use of context-based (semantic) information from a drug safety database and eliminates the drug indications. Negated ADEs are filtered using the linguistic rules from the NegEx Python code, an open-source text-processing algorithm for negation detection from discharge summaries.

3 Overall System The overall system highlighting all the processes involved in identifying drugs and potential drug events from social media data is given in Fig. 1. The system comprises the following components: (1) data preprocessing, (2) extraction of medical entities, (3) extraction of ADEs and (4) classification of reports and evaluation.

Fig. 1 System architecture


The work reported in this paper uses MetaMap [10], a highly configurable knowledge-intensive approach. The drug terms matching the medical lexicon sources are recognized and extracted using MetaMap. Extensive medical knowledge and complex linguistic rules are required to decode ADE discussions in forums, as these are highly informal and colloquial. Such informal unstructured texts are handled using supervised learning methods and context-based semantic filtering methods. A rule-based Python module, NegEx, is used for the detection of negated medical events. The work reported in this paper uses a language-based filtering algorithm that represents knowledge about medical events in the form of rules inferred from a drug safety database. Moreover, elimination of negated ADEs is achieved using rules generated from the negation detection tool.

3.1 Data Preprocessing

The data collected from different health forums is cleaned by removing unwanted content using open natural language processing tools. Text cleaning is done with the usual NLP techniques: tokenization, regular expressions, stop word removal, stemming, etc. A sample of pre-processed data after cleaning and sentence boundary detection is displayed in Fig. 2. Each user review about a drug is segmented into separate sentences using a sentence tokenizer.
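As an illustration of this cleaning stage, the following minimal sketch (our own, assuming NLTK as the open NLP toolkit; the sample review is hypothetical) performs regex cleaning, sentence boundary detection, stop word removal, and stemming:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)      # sentence/word tokenizer models
nltk.download("stopwords", quiet=True)  # English stop word list

def preprocess(review):
    """Clean one user review and split it into stemmed, stop-word-free sentences."""
    review = re.sub(r"[^a-zA-Z0-9\s.]", " ", review.lower())  # regex cleaning
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    result = []
    for sent in sent_tokenize(review):  # sentence boundary detection
        tokens = [stemmer.stem(t) for t in word_tokenize(sent)
                  if t.isalnum() and t not in stop]
        result.append(tokens)
    return result

print(preprocess("I started Lipitor last month. It gave me muscle pain!"))
```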

3.2 Drug Entity Extraction Using NER Approach

Named Entity Recognition (NER) is an NLP process that discovers concept types in a text corpus. The proposed system relies on multiple lexicons of drug names from UMLS and the FDA to extract known drugs and adverse events from the user text data collected from social media. The drug terms matching the medical lexicon sources are recognized and extracted using MetaMap. The NER-based approach followed in this work is given in Algorithm 1.

Fig. 2 Sample pre-processed data with extracted drug names


Algorithm 1: Drug Entity Extraction Using NER Approach
Input: Text data collected from social health forums and blogs
Output: Labeled tokens
Steps:
1. Preprocessing, as explained in Sect. 3.1.
2. Locating and extracting drug names from the collected data using the lexicon developed from UMLS and FDA resources.
3. Aggregating word and sentence features to construct feature vectors.

The word features considered in this work are: token, word stem, part-of-speech tags, capitalization patterns, numbers, punctuation, and prefix and suffix characters. Matching drug lexicon characters/words to the left and right of the current word is one example of the sentence features used in this work. A sample set of extracted drug entities is given in Fig. 2. The drug terms matching the medical lexicon sources in MetaMap are recognized and extracted.
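A minimal sketch of this feature construction is given below (our illustration; the toy lexicon stands in for the UMLS/FDA lexicon, and POS tags would be added with a tagger in the same way):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
DRUG_LEXICON = {"lipitor", "metformin"}  # toy stand-in for the UMLS/FDA lexicon

def word_features(tokens, i):
    """Feature dictionary for token i, using the word features listed above."""
    w = tokens[i]
    return {
        "token": w.lower(),
        "stem": stemmer.stem(w.lower()),
        "is_capitalized": w[:1].isupper(),
        "is_number": w.isdigit(),
        "prefix3": w[:3].lower(),
        "suffix3": w[-3:].lower(),
        # sentence features: lexicon match of the neighbouring words
        "prev_in_lexicon": i > 0 and tokens[i - 1].lower() in DRUG_LEXICON,
        "next_in_lexicon": i + 1 < len(tokens) and tokens[i + 1].lower() in DRUG_LEXICON,
    }

tokens = "I take Lipitor every night".split()
print(word_features(tokens, 2))  # features for "Lipitor"
```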

3.3 Adverse Drug Event Extraction

Adverse drug events are extracted using MetaMap. The Consumer Health Vocabulary (CHV) is mapped with MetaMap to extract adverse drug events from the data. Adverse drug events for a drug are extracted from multiple reviews by patients. MetaMap is mapped with lexicon sources such as FAERS and CHV, and the adverse drug terms matching these lexicon sources are recognized and extracted. Interpretation of informal and colloquial ADE discussions is done using linguistic rules and rules derived from a medical knowledge base. These issues are addressed by deploying supervised learning methods and a concept-based filtering method (Algorithm 2).

Algorithm 2: Concept-Based Semantic Filtering
Input: Extracted adverse drug events in CSV file format and a rule base
Output: ADEs filtered out from negated ADEs
Steps:
1: for rule in rule base: do
2:   if instance 'i' in csv file matches rule(s) in rule base: then
3:     return <drug, event> = negated adverse drug events;
4:   end if
5:   return <drug, event> = adverse drug events;
6: end for
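A minimal Python sketch of this filtering step is given below (our illustration; the cue words are a toy stand-in for the NegEx-derived rule base, and the CSV column names are assumptions):

```python
import csv

# Illustrative negation cues standing in for the NegEx-derived rule base
NEGATION_RULES = ["no", "not", "never", "without", "denies"]

def filter_negated(csv_path):
    """Split extracted <drug, event> rows into affirmed and negated ADEs."""
    affirmed, negated = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: drug, event, sentence
            words = row["sentence"].lower().split()
            if any(cue in words for cue in NEGATION_RULES):
                negated.append((row["drug"], row["event"]))
            else:
                affirmed.append((row["drug"], row["event"]))
    return affirmed, negated
```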


The extracted adverse drug entities are displayed in Fig. 3. The symptoms of any disease (after taking a drug) or findings are extracted from the data and classified with respect to the drug.

3.4 Identifying and Extracting the Relationship Between Drugs and Adverse Events

Determining the existence of a drug and an associated medical event in a given text is an important task in ADE extraction [14]. Hence, the proposed system requires relationships between the drug names identified using the NER method discussed in Sect. 3.2 and drug attributes/relations such as severity, manner/route, reasons for prescription (labeled as Reason), dosage, duration, frequency, and ADEs (labeled as Adverse). The relationship extraction was built in two phases: frequent relation detection and relation classification. The frequent relation detection is achieved using the Apriori algorithm, and the relation classification is done using a random forest model. The Apriori algorithm, shown in Algorithm 3, is used to detect frequent and potential associations between drugs and medical events from the posts collected from patient forums. The association rules generated using the Apriori algorithm help in identifying potential adverse drug events of a drug. After extraction of the frequent <drug, attribute> pairs, these are given as input to a random forest classifier to be labeled with a relation: Severity, Route, Reason, Dosage, Duration, Frequency, or Adverse. A sample frequent adverse drug event report is shown in Fig. 4.
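A minimal sketch of the second (relation classification) phase is shown below (our illustration; the context features, training pairs, and labels are hypothetical):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Hypothetical context features for frequent <drug, attribute> pairs
pairs = [
    {"attr_has_unit": True,  "token_distance": 1, "attr_len": 1},
    {"attr_has_unit": False, "token_distance": 4, "attr_len": 2},
]
labels = ["Dosage", "Adverse"]  # relation labels from the scheme above

vec = DictVectorizer()
X = vec.fit_transform(pairs)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify a new frequent pair into one of the relation labels
test = {"attr_has_unit": False, "token_distance": 2, "attr_len": 2}
print(clf.predict(vec.transform([test])))
```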

Fig. 3 Sample adverse drug events extracted


Fig. 4 Adverse drug event report

Algorithm 3: Apriori-Based Drug-Adverse Reaction Association Mining
Input: Drug_Named_Entities (DNE) and Drug_Attributes (DA) dataset; min_sup
Output: Frequent Drug → Drug_Attribute associations
Steps:
1. Read the DNEs and DAs from the input dataset.
2. Calculate the frequency of the candidate itemsets (Ck) comprising DNEs and DAs.
3. Find the frequent itemset Lk from Ck, the set of all candidate itemsets, using min_sup.
4. Form Ck+1 from Lk.
5. Prune the candidates by removing itemsets from Ck+1 whose (k-1)-element subsets are not frequent under min_sup.
6. Mark all non-frequently occurring DNEs and DAs in any of the candidates in Lk.
7. Check the number of DAs (ST) in each record and remove the record from the dataset if ST <= k.
8. Repeat steps 4-7 until Ck is empty or the transaction database is empty.
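To make the frequent-relation detection concrete, the sketch below (our illustration with toy posts; min_sup is an assumption) counts candidate drug → attribute pairs across posts and keeps those meeting the minimum support, i.e., the first pass of an Apriori-style mining:

```python
from collections import Counter
from itertools import product

# Each post reduced to the drug entities (DNEs) and attributes (DAs) it mentions
posts = [
    {"dne": {"lipitor"}, "da": {"muscle pain", "10mg"}},
    {"dne": {"lipitor"}, "da": {"muscle pain"}},
    {"dne": {"metformin"}, "da": {"nausea"}},
]
min_sup = 2  # minimum number of posts supporting a pair

# Count candidate drug -> attribute pairs (2-itemsets, the C2 of Algorithm 3)
counts = Counter(pair for p in posts for pair in product(p["dne"], p["da"]))

# Keep only the frequent pairs (L2): potential drug -> ADE associations
frequent = {pair: c for pair, c in counts.items() if c >= min_sup}
print(frequent)  # {('lipitor', 'muscle pain'): 2}
```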


Fig. 5 Negated ADRs

Table 1 Precision and recall scores for relation extraction

Relation    Precision  Recall
Dosage      80.7       82.1
Frequency   78.1       75.4
Route       86.2       82.8
Severity    85.4       86.7
Duration    79.6       75.2
Reason      78.4       70.3
Adverse     82.3       80.9

Negated adverse drug events are then filtered out using the concept-based semantic filtering algorithm, a linguistic rule-based method. The algorithm, shown in Algorithm 2, filters out adverse drug events that do not cause any harm. The rules for negation detection are specified, and the algorithm determines the negation status of each adverse drug event using those rules. The negation status of each adverse drug event in a review is displayed in Fig. 5: affirmed indicates that the adverse drug event is valid, and negated indicates that the adverse drug event can be filtered out. Performance metrics of the relation extraction model, built using 350 posts collected from social blogs and forums, are shown in Table 1. The proposed framework is evaluated with an average precision of 72.1, an average recall of 53.4, and an F1 score of 61.2.
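For reference, the F-measure reported in Table 1 and in the overall evaluation is the harmonic mean of precision and recall:

\[
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
\]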

4 Conclusion

The experimental results show that the system can effectively extract medical drug entities and adverse drug events from patient reviews. The adverse drug event report is evaluated against the ADRs of the drugs obtained from drug Web sites, and it achieved an accuracy of 61.2%. Filtering out negated adverse drug events significantly improved the accuracy of the adverse drug event reporting framework from social


media data. Future work can adopt metadata-based learning for aggregating ADEs from heterogeneous sources. Metadata-based learning may be used to predict drugs with high risks of adverse effects, and thereby the overall performance of the proposed system may be improved.

Acknowledgements This publication is an outcome of the R&D work undertaken in the project under the Visvesvaraya PhD Scheme (Unique Awardee Number: VISPHD-MEITY-2959) of the Ministry of Electronics and Information Technology, Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).

References

1. Piccinni C, Poluzzi E, Orsini M (2017) Pharmacovigilance surveillance through semantic web-based platform for continuous and integrated monitoring of drug-related adverse effects in open data sources and social media. In: IEEE 3rd international forum on research and technologies for society and industry, pp 1–5
2. Zhang Y, Wang X, Shen L, Hou Z, Guo Z, Li J (2018) Identifying adverse drug reactions of hypolipidemic drugs from Chinese adverse event reports. In: IEEE international conference on healthcare informatics workshop, pp 72–75
3. Dev S, Zhang S, Voyles J, Rao AS (2017) Automated classification of adverse events in pharmacovigilance. In: IEEE international conference on bioinformatics and biomedicine, pp 1562–1566
4. Galeano D, Paccanaro A (2018) A recommender system approach for predicting drug side effects. In: International joint conference on neural networks, pp 1–8
5. Ning X, Shen L, Li L (2017) Predicting high-order directional drug-drug interaction relations. In: IEEE international conference on healthcare informatics, pp 33–39
6. Wu L, Moh T-S, Khuri N (2015) Twitter opinion mining for adverse drug reactions. In: IEEE international conference on Big Data, pp 1570–1574
7. DrugRatingz: find, rate and review drugs and medications. http://www.drugratingz.com
8. Coden A, Gruhl D, Lewis N, Tanenblatt M (2015) SPOT the drug! An unsupervised pattern matching method to extract drug names from very large clinical corpora. In: IEEE second international conference on healthcare informatics, imaging and systems biology, pp 33–39
9. Li F, Ji D, Wei X, Qian T (2015) A transition-based model for jointly extracting drugs, diseases and adverse drug events. In: IEEE international conference on bioinformatics and biomedicine, pp 599–602
10. Mahata D, Friedrichs J, Sha RR (2018) Detecting personal intake of medicine from Twitter. In: IEEE intelligent systems, pp 87–95
11. Liu X, Chen H (2016) AZDrugMiner: an information extraction system for mining patient-reported adverse drug events in online patient forums. In: Smart health. Springer
12. Peng Y, Moh M, Moh T-S (2016) Efficient adverse drug event extraction using Twitter sentiment analysis. In: IEEE/ACM international conference on advances in social networks analysis and mining, pp 1011–1018
13. Liu X, Chen H (2015) Identifying adverse drug events from patient social media: a case study for diabetes. In: IEEE intelligent systems, pp 44–51
14. Sampathkumar H, Wen Chen X, Luo B (2016) Mining adverse drug reactions from online healthcare forums using hidden Markov model. BMC Med Inform Decis Making

Chapter 11

A Novel Scheme for Energy Efficiency and Secure Routing Protocol in Wireless Sensor Networks

R. Senthil Kumaran, R. Dhanyasri, K. Loga, and M. P. Harinee

R. Senthil Kumaran (&) · R. Dhanyasri · K. Loga · M. P. Harinee
Department of Electronics and Communication Engineering, IFET College of Engineering, Villupuram, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_11

1 Introduction

A network which comprises numerous sensor nodes and an access point is known as a wireless sensor network (WSN). The sensor nodes communicate with each other in a multi-hop manner towards a base station or access point; the information is sensed and transferred to the base station (a central location) using the sensor nodes. WSNs enable wireless networking in environments where cellular or wired infrastructure does not exist or is not cost-effective or adequate. A WSN is more complex than other types of wireless networks, such as wireless local area networks or cellular networks, due to the absence of a base station and central coordinator. Security is one of the most important concerns in WSNs, because they are more vulnerable to attacks than conventional wireless or wired networks. Designing security protocols for WSNs is a very challenging task due to their unique characteristics: physical vulnerability, insecure operating environments, broadcast radio channels, lack of trust among associated users, and limited resource availability. Network performance and energy efficiency are improved in wireless networks that exploit the broadcast nature of the wireless medium using an emerging technique called network coding. By combining two packets into a single packet, increased throughput and decreased transmission energy are achieved by minimizing the number of transmissions. At the destination node, packets destined for other nodes are overheard so that the coded packet can be decoded.

2 Related Works

In WSNs, different solutions have been presented to increase the packet delivery ratio and network throughput while maintaining the coverage area with low-cost sensors. The authors of [1] proposed the ESMR routing protocol to improve energy efficiency, secure multi-hop transmission, and routing overhead; however, delay may occur while transmitting data in uncovered areas. In [2], EEUC is used for the selection of cluster heads. This process works in two main stages. First, a cluster head is randomly selected based on energy; the final cluster head is then selected depending on the hop count, range, and node degree. In the second stage, due to the use of a static sink and continuous monitoring, the path for routing data to the cluster head is fixed, which leads to the hot spot problem. In the LEACH method, clusters are randomly formed and involve numerous data transmissions. The authors of [3] proposed a solution to increase energy efficiency and network maintenance, compared with traditional algorithms, in which the cluster head is rotated over a fixed time period. However, the network load is not distributed uniformly among clusters; the delay ratio is the main problem, and the method is not suitable for nodes of vast networks.

A novel encryption scheme based on the dynamic nature of clustering secures data transmission in WSNs using ECC and homomorphic encryption. Private and public keys are generated for the sensor nodes using the ECC method. Combining the ECC key, each sensor node holds a 176-bit encrypted key together with the distance to its cluster head and its identification. Homomorphic encryption allows the cluster head (CH) to aggregate the encrypted data of cluster members without decrypting it. The final message is sent to the sink node, preserving the energy efficiency of the cluster head and resisting attacks by a compromised cluster head [4]. The heterogeneous encryption method is used so that messages can be aggregated without a decryption step; this allows the cluster heads to aggregate with lower aggregation delay and better energy efficiency. ECC is used to provide high security with small key sizes and storage space, overcoming the complexity of conventional encryption schemes in WSNs. Data aggregation is secured at the CH using tiny-PEDS, which is proposed for heterogeneous encryption with privacy; this method is vulnerable to node compromise attacks because of its shorter storage. It is used to prevent HELLO flooding attacks, sinkhole attacks, and selective forwarding attacks. However, LEACH has memory size constraints, lifetime shortening, and conventional encryption-based methods that degrade network performance [5].

The major concept in the AODV routing protocol scheme named Collect Route Reply Table (CRRT) is used to save the packet sequence number and the received


time in a table. The first request is checked based on a threshold value and arrival time, and then route validity is checked. Lower routing overhead and shorter delay determine the improvement in packet delivery ratio. However, when the source node is present at some interval, there is an upsurge in the end-to-end delay due to the malicious node [6]. A new security routing (NSR) protocol has been implemented with a multi-hop technique in wireless sensor networks. This protocol is used for secure data delivery by establishing a secure routing path. However, such networks become useless under security attacks on the user community: the first attack is on the wireless medium and the second on the sensor nodes. The major limitations of these networks are low bandwidth, reduced memory, and limited computational energy. Although performance is satisfactory, high-level security protocols are avoided because of their energy hunger [7-9]. The Fr-AODV protocol has been presented to reduce the threats formed in the network and identify the malicious nodes over the routing path; node identity and node reputation factors are used to compute the trust value. However, the route maintenance strategy results in breakage and route re-transmission [10]. As sensor nodes transmit data, malicious nodes can receive the forwarded data from the source and interrupt the data transmission; the malicious node then sends false data to the destination. Furthermore, different solutions have been presented to secure the data using multi-hop data routing protocols. However, computational overhead may occur, and security threats arise because intermediate nodes (forwarders) may forward the data to a malicious node [11, 12]. A node is considered malicious, and is called a black-listed node, if the sequence number value of the RREP is greater than the threshold value. If a node detects a malicious node, it sends a duplicate packet to its neighbours. After that, the black-listed node can be checked spontaneously to overcome the problem in receiving RREP packets [13, 14].

3 Problem Formulation

The dynamic movement of the nodes in the network leads to many attacks in WSNs. Hence, a network coding protocol has been proposed to increase energy efficiency and secure routing in WSNs, and the black hole attack is mitigated in the ad hoc on-demand distance vector (AODV) routing protocol. It is applied to networks consisting of various nodes with different numbers of malicious nodes and different initial energy levels.


4 Proposed Energy Efficiency and Secure Routing Protocol

The proposed network coding routing protocol incorporates three different techniques, namely location-based routing, flooding restriction, and clustering. The network region is divided into four quadrants. Cluster formation is done randomly within each quadrant, and a cluster head is then assigned to each cluster, as shown in Fig. 1 (a small sketch of this step follows the figure caption). The cluster head plays an important role in updating and transferring information through central locations. In the proposed system, the encoding scheme is used for energy efficiency; it also combines two packets into a single packet, which minimizes the delay in the network. Malicious nodes send a fake reply to the destination node on receiving a request over a short route. Figure 2 describes the overall process involved in the proposed system architecture. Initially, the network is constructed by cluster formation and by finding the Active Path Set (APS). After determining the APS for end-to-end secure data transfer using the network coding method, the data packets are dispersed; limited redundancy is added to the messages based on the redundancy factor. The dispersed messages are then transmitted simultaneously over the multiple paths. Message forwarding and reception from source to destination are assured using a 2-hop ACK scheme. After successful reception of the encrypted messages, reconstruction or decryption takes place; the network then adapts if a vulnerability is present in the path. The network coding method involves energy and delay constraints referred to as energy efficient (EE), delay reduction (DR), and quasi general (QG). The expressions for average energy usage and delay are derived for each proposed method provided by network coding and compared with the results of traditional routing.

Fig. 1 Quadrant formation of cluster by using network coding scheme
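The following minimal sketch (our illustration; the 100 m x 100 m region, node count, and random cluster-head choice are assumptions) shows the quadrant-based cluster formation described above:

```python
import random

def quadrant(node, cx, cy):
    """Map a node (x, y) to one of the four quadrants around centre (cx, cy)."""
    x, y = node
    return (x >= cx, y >= cy)  # four (bool, bool) combinations

# Hypothetical 100 m x 100 m region with randomly placed sensor nodes
random.seed(0)
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

# Group nodes by quadrant, then randomly assign a cluster head per quadrant
clusters = {}
for n in nodes:
    clusters.setdefault(quadrant(n, 50, 50), []).append(n)
heads = {q: random.choice(members) for q, members in clusters.items()}
print(heads)
```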


Fig. 2 System architecture

The fundamental relationship between packet delay and transmission energy is calculated: accurate packet arrival at the destination and decreased transmission energy lead to more packets being forwarded to the destination. A queuing buffer at the relay nodes stores the received packets. A relay node sends a packet with probability λ when the buffer is not empty (upper-bounded by the allowed transmission probability p). Using the network coding scheme, the packets received from the source nodes are held by the relay node in two virtual buffers; the buffers are assumed to have infinite length in order to characterize the stability region. In Fig. 3, when both queues are estimated to be non-empty, the head-of-line packets of the two virtual queues are combined together at the relay node by performing a bitwise XOR, and the relay node transfers the coded packet with probability λ. When one queue i of the virtual buffer is non-empty and the other queue is empty, a packet is transmitted by the relay node with probability λi. The parameters λ, λ1, and λ2 are upper-bounded by the allowed transmission probability p. The simulation is based on route lifetime and link stability; route overhead was not considered. The average route hop length is increased by using a square area with a relatively small number of nodes. Anomalous nodes are modeled by modifying the routing protocol, since the research focuses on routing attacks.


Fig. 3 Opportunistic network coding scheme
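A minimal sketch of the XOR coding step (our illustration; the packet contents are hypothetical and the two packets are assumed equal in length) is shown below. Each destination decodes the coded packet by XORing it with the packet it overheard:

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"DATA-FROM-NODE-A"   # head-of-line packet of virtual queue 1
p2 = b"DATA-FROM-NODE-B"   # head-of-line packet of virtual queue 2

coded = xor_bytes(p1, p2)  # the relay sends one coded packet instead of two

# Node B overheard p1 earlier, so it recovers p2 (and vice versa for node A)
assert xor_bytes(coded, p1) == p2
assert xor_bytes(coded, p2) == p1
```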

The network coding protocol is implemented for energy efficiency and secure routing of messages in both the route discovery phase and the data transmission phase. Node trust is used as a metric for routing. The routing protocol incorporates the proposed network coding method for efficient establishment of shorter routes.

5 Performance Metrics

The following metrics are defined below and evaluated through simulation studies.

5.1 Energy Consumption

The amount of energy used by the nodes for sensing, transmitting, and receiving is known as energy consumption. Figure 4 analyzes the proposed EESR protocol in comparison with the ESMR solution in terms of network energy efficiency; the EESR protocol is on average 65% more efficient than the ESMR protocol.

5.2 Packet Delivery Ratio

Figure 5 illustrates the improvement in network lifetime of the EESR protocol in comparison with the existing method. The simulation results show that the EESR protocol can improve the network lifetime by an average of 94% compared to ESMR.


Fig. 4 Number of nodes versus energy consumption

Fig. 5 Simulation time (sec) versus packet delivery ratio



Fig. 6 Number of nodes versus route overhead

5.3 Route Length

Figure 6 shows the improvement in route length of the EESR protocol compared with the ESMR protocol. The simulation results show that the EESR protocol improves route maintenance by an average of 64% compared to ESMR.

6 Conclusion

This paper proposes the network coding method for an energy efficiency and secure routing (EESR) protocol in WSNs, to achieve a packet arrival process with minimal delay and low energy consumption for the successful transmission of a coded packet. The EESR protocol involves quadrant-based clustering techniques, namely location-based routing, flooding restriction, and clustering. Using the location-based routing scheme, the network region is divided into four quadrants; clusters are formed within each quadrant, and a cluster head is randomly assigned to each cluster. The proposed method uses a network coding scheme for mixing two different packets into a single packet; it can limit the number of data transmissions, reduce the time taken by a node, increase throughput by lowering the data transmission power, and it


achieves the desired compromise between energy efficiency and delay. The network performance of the proposed system is increased compared to the existing system. For future work, the open medium and wide distribution of nodes make the nodes prone to malicious attacks; hence, there is a need to develop an efficient mechanism to detect and prevent such attacks. An in-depth attack analysis should be performed on the EESR protocol, and common vulnerabilities of the protocol's security need to be identified. Further advancement of the EESR protocol is needed to improve WSN security.

References

1. Haseeb K, Islam N, Almogren A, Almajed HN (2019) Secret sharing based energy aware and multi-hop routing protocol for IoT based WSNs. Mobile Edge Comput 7:79980–79988
2. Alagirisamy M, Chow C-O (2018) An energy based cluster head selection unequal clustering algorithm with dual sink (ECH-DUAL) for continuous monitoring applications in wireless sensor networks. Cluster Comput 21(8)
3. Batra PK, Kant K (2016) LEACH-MAC: a new cluster head selection algorithm for wireless sensor networks. Wirel Netw 22:49–60
4. Elhoseny M, Elminir H, Riad A, Yuan X (2016) A secure data routing schema for WSN using elliptic curve cryptography and homomorphic encryption. J King Saud Univ Comput Inf Sci 28(3):262–275
5. Kumar KA, Krishna AVN, Chatrapati KS (2017) New secure routing protocol with elliptic curve cryptography for military heterogeneous wireless sensor networks. J Inf Optim Sci 38(2):341–365
6. Rani S, Talwar R, Malhotra J, Ahmed SH, Sarkar M, Song H (2015) A novel scheme for an energy efficient Internet of Things based on wireless sensor networks. Sensors 15(11):28603–28626
7. Meng T, Wu F, Yang Z, Chen G, Vasilakos AV (2017) Spatial reusability aware routing in multi-hop wireless networks. IEEE Trans Comput 23:345–354
8. Miranda C, Kaddoum G, Bou-Harb E, Garg S, Kaur K (2020) A collaborative security framework for software-defined wireless sensor networks. IEEE Trans Inf Forensics Secur 15(13):2602–2615
9. Liu Z, Liu W, Ma Q, Liu G, Zhang L, Fang L, Sheng VS (2019) Security cooperation model based on topology control and time synchronization for wireless sensor networks. IEEE J Commun Netw 21(5):469–480
10. Babber K, Randhawa R (2016) Energy efficient clustering with secured data transmission technique for wireless sensor networks. In: Proceedings of the 3rd international conference on computing for sustainable global development (INDIACom), Mar 2016
11. Krishnan AM, Kumar PG (2016) An effective clustering approach with data aggregation using multiple mobile sinks for heterogeneous WSN. Wirel Pers Commun 90(2):423–434
12. Senthil Kumaran R, Nagarajan G (2017) Energy efficient clustering approach for distributing heavy data traffic in wireless sensor networks. In: Association for the advancement of modelling and simulation techniques in enterprises (AMSE, France): Advances D, vol 22, no 1, pp 98–112


13. He D, Chan S, Guizani M (2017) Cyber security analysis and protection of wireless sensor network for smart grid monitoring. IEEE Wirel Commun 24(6):98–103
14. Ali R, Pal AK, Kumari S, Karuppiah M, Conti M (2018) A secure user authentication and key-agreement scheme using wireless sensor networks for agriculture monitoring. Future Gener Comput Syst 84:200–215

Chapter 12

AlziHelp: An Alzheimer Disease Detection and Assistive System Inside Smart Home Focusing 5G Using IoT and Machine Learning Approaches

Md. Ibrahim Mamun, Afroza Rahman, M. F. Mridha, and M. A. Hamid

Md. Ibrahim Mamun (&) · A. Rahman
University of Asia Pacific, Dhaka, Bangladesh
e-mail: [email protected]

M. F. Mridha
Bangladesh University of Business and Technology, Dhaka, Bangladesh
e-mail: fi[email protected]

M. A. Hamid
King Abdulaziz University, Jeddah, Saudi Arabia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_12

1 Introduction

Alzheimer disease (AD) is most common among aged people. AD patients typically show symptoms of forgetting recent events, have difficulty remembering people they have met previously, and forget their intended day-to-day tasks [1]. AD is considered a chronic disease that results in the death of brain cells [2]. In this paper, we have focused on aged people and their common daily activities inside their homes. Our system monitors specific activities that a person usually performs in his/her home. The system collects data on a person's position and time using IoT-enabled smart devices and stores the data quickly over a 5G wireless network. Data for a particular time period is treated as the regular action for that time. Hence, after training our model on the collected data, it can detect whether that person has AD or not. Our proposed system may also assist an AD patient by predicting a task that the patient may want to do at a particular time but is unable to remember. AD is categorized as a progressive loss of cognitive functions [3]; it is also an irreparable brain disease that should be identified as early as possible before it worsens [4]. The Internet of things (IoT) is contributing to the creation of smart homes and enriching healthcare monitoring systems. Detection of AD using IoT devices inside smart homes is not yet a popular method around the world. Because of IoT, the healthcare sector is blending with Information and Communication Technology (ICT) [5]. In our proposed system, IoT devices are used to detect Alzheimer patients without any prior examination. Through IoT devices, the position and actions of a particular person can easily be tracked and detected, as can mismatches between actions over time. In this way, the early symptoms of AD can be detected using smart IoT devices. According to our proposed system, the IoT-enabled devices can also assist people who have AD by suggesting their intended daily tasks in the particular time range in which they usually do them.

In a 5G wireless network, IoT devices can easily communicate with each other using the high data rates of 5G, and 5G supports massive connections of wireless devices. The ultra-reliable and low-latency communication (uRLLC) of 5G allows data collection, data transfer, data storage, and data analysis within seconds. The quality of healthcare applications and services that use IoT devices can also be improved through the use of a 5G wireless network [6]. Hence, in a 5G-covered smart home, IoT devices can produce data and transfer it to specific data storage very quickly using the characteristics of 5G. Healthcare, transportation, and manufacturing industries will benefit from the low-latency applications of 5G [7]. To get the benefits of 5G, healthcare systems should be distributed in real time over TCP/UDP-based protocols of the fifth generation of cellular networks [8].

Machine learning (ML) is contributing more and more to human healthcare analysis systems. In our proposed system AlziHelp, we have used k-nearest neighbor (K-NN), an ML algorithm, to classify a particular person's actions. Based on the classification result, our system can detect whether that person has AD or not. AD can be a life-threatening problem for some people, because this disease can lead a person into danger without that person's realization [9]. Hence, it is very important to detect AD as early as possible. As elderly people remain at home most of the time, an AD detection system inside the home using smart IoT devices can play a vital role. Figure 1 shows how smart IoT devices connected to a 5G wireless network can be used to collect data from a person; it also shows the flow of data to the particular data storage where the data can be analyzed to detect AD.

The rest of the paper is organized as follows. In Sect. 2, we discuss related work, considering both present and previous research. In Sect. 3, we discuss the system preliminaries. In Sect. 4, we present our proposed AlziHelp system, with appropriate figures and algorithms. Finally, we conclude the paper in Sect. 5.


Fig. 1 IoT-enabled smart home inside 5G wireless network environment

2 Related Work In this section, we will discuss both recent and previous works related to AD detection and assistive system for AD patients focusing IoT, ML, and 5G. In [10] authors proposed iCare which is a project that offer benefits to AD patients. The project requires IoT devices with a mobile application. But we did not find any AD detection system or application that focuses 5G wireless network environment inside a smart home. In [11], authors introduced a mobile app Alzimo that provide functionalities to AD patients about the safe-zone. But we did not find any AD detection system that uses IoT devices and ML approaches. In [12] [13], authors presented ICT4LIFE that uses multimodal fusion to extract features with low-level data capturing technique to monitor and improvement the life of an AD patient and also multimodal behavior analysis system to detect patients who have AD. But no application was presented there that focuses to assist the patients who have AD so that they can perform their day-to-day life activities. In [14], authors have explained the aspects of using sensing technologies for elderly people through capturing behavior changes. But we did not find any system related to AD detection or assist the AD patients using those sensing technologies. In [15], authors proposed an android-based mobile application to monitor the symptoms of Alzheimer’s patient. But there were no proposed system using IoT-enabled smart devices capable to collect and transfer data using 5G wireless network and capable to both detect AD and assist an AD patient. In [16], authors presented a novel 5G system architecture for healthcare system. In [17], authors examined the relation between 5G and wireless body area networks (WBANs). In [18], authors described a new prototype system using machine learning to detect the demeanor of patients. But there were no system or application proposed to detect AD using IoT devices and ML approaches. But we have included IoT and machine learning focusing 5G to address AD detection and an assistive system.


3 Preliminaries

We have considered the following preliminaries for our proposed system AlziHelp. We have adopted a complete 5G deployment scenario in which various smart IoT devices are available everywhere at low cost. A 5G-enabled smart watch, smart mobile, and smart shoe are connected to the 5G wireless network and able to collect and transfer data using the faster data rates of 5G. Smart 5G wireless LED signals and a sonic buzzer are also connected to the network and act in real time. The ultra-reliable low-latency communication (uRLLC) and massive machine-type communication (mMTC) of 5G support reliability.

4 Proposed System

Considering all scenarios, in this paper we propose AlziHelp: an Alzheimer disease detection and assistive system inside a smart home focusing on 5G, using IoT and machine learning approaches. Our proposed system has two separate but connected parts: one part detects AD, and the other part assists an AD patient. Figure 2 shows the structure of the AlziHelp system. To detect whether a person has AD or not, we have used a smart watch, smart mobile, and smart shoe, which are smart IoT devices that collect data from a person inside the smart home environment. These devices collect data on a person's position and on the actions related to that position at a particular time. After collecting the data and storing it as a data set, our system can analyze the mismatches in actions and in the timing of positions and actions of a person. If the mismatch value is noticeably greater than a predefined range of values, our system can detect the presence of AD for that person.

Fig. 2 Structure of AlziHelp


After the detection of AD, the system updates a flag so that the second part is activated. Figure 3 shows the AD detection flow chart of AlziHelp. With the flag updated, the system turns on its other part, which can assist a person who has AD in performing daily tasks using smart wireless LED signals and a sonic buzzer. The system determines the best possible action using k-nearest neighbor (K-NN), an ML algorithm, from the stored previous data values for the given input time. Figure 3 also shows the flow chart of the assistive procedure of AlziHelp.

Algorithm 1: AlziHelp-Detection of Alzheimer Disease (AD)
Input: D1, D2, D3 and D4 (data from smart watch (D1), smart mobile (D2), smart shoe (D3) and wireless motion sensor (D4)) and times T1, T2, T3 and T4
Output: FLAG = 0 (no AD) or FLAG = 1 (AD)
1. D1 = Collect_Data_From_Smart_Watch(position)
2. T1 = Collect_Current_Time_for_D1(time_1)
3. D2 = Collect_Data_From_Smart_Mobile(screen on/off, position)
4. T2 = Collect_Current_Time_for_D2(time_2)
5. D3 = Collect_Data_From_Smart_Shoe(position, steps_count)
6. T3 = Collect_Current_Time_for_D3(time_3)
7. D4 = Collect_Data_From_Motion_Sensor(position, motion_detection_count)
8. T4 = Collect_Current_Time_for_D4(time_4)
9. X = Check_Data_Collection_Status(D1, D2, D3, D4, T1, T2, T3, T4)
10. IF (X == NULL) {REPEAT from Step 1} ELSE {store data in Vector[M] and increment vector pointer M}
11. IF (M == Predefined_Range) {Y = Calculate_Mismatch(Vector[P], Vector[Q])} ELSE {REPEAT steps}
12. IF (Y > MAX_VAL) {FLAG = 1, AD detected} ELSE {FLAG = 0, AD not detected}

According to Algorithm 1, data from IoT devices such as the smart watch, smart mobile, smart shoe, and wireless motion sensor is collected and transferred to data storage using the 5G wireless network. The data, such as an individual's position at a particular time, actions at a particular time, and the duration of time, is then treated as an individual data set. After analyzing the mismatches between the actions and positions, the system can detect whether that individual has AD or not. If AD is detected, the system updates the flag value.
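A minimal sketch of the mismatch check in steps 10-12 is given below (our illustration; the encoding of the routine as a room label per time slot and the threshold MAX_VAL are assumptions):

```python
def calculate_mismatch(baseline, current):
    """Fraction of time slots where the current position/action differs
    from the learned routine for the same slot."""
    diffs = sum(1 for b, c in zip(baseline, current) if b != c)
    return diffs / len(baseline)

# Hypothetical routine: room visited in each 5-minute slot of a morning
baseline = ["kitchen", "kitchen", "living", "bath", "living"]
current  = ["kitchen", "bath",    "bath",   "bath", "kitchen"]

MAX_VAL = 0.4  # assumed mismatch threshold
FLAG = 1 if calculate_mismatch(baseline, current) > MAX_VAL else 0
print("AD detected" if FLAG else "AD not detected")
```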


Fig. 3 AlziHelp: flow chart of AD detection procedure (left) and flow chart of the procedure to assist and an AD patient (right)


Algorithm 2: AlziHelp-Assist AD Patient by Predicting Actions at a Particular Time
Input: FLAG, T, P (flag, time, position)
Output: LED = High/Low, BUZZER = High/Low (wireless LED light and buzzer signal)
1. T = Collect_Current_Time(time)
2. P = Collect_Current_Position(time)
3. X1 = Predict_Action_Based_Using_K-NN(T)
4. X2 = Predict_Action_Based_Using_K-NN(P)
5. IF (X1 == X2) {Action_Correct()} ELSE {REPEAT steps}
Action_Correct() { LED_Signal_at_Position(P); BUZZER_Signal_at_Position(P) }

According to Algorithm 2, after getting the updated flag value, the system collects the current position of the individual at a specific time in order to assist the AD patient. Using the K-NN algorithm, the system determines the nearest suitable action for that time (Table 1). Using K-NN,

\[ \mathrm{Value}_1 = \sqrt{(x_{Dset1} - x_{Dset2})^2 + (y_{Dset1} - y_{Dset2})^2} \quad \text{(for time)} \]

X1 = K-NN action for Value1

\[ \mathrm{Value}_2 = \sqrt{(x_{Dset1} - x_{Dset2})^2 + (y_{Dset1} - y_{Dset2})^2} \quad \text{(for position)} \]

X2 = K-NN action for Value2

[When X1 == X2, predict that action]
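A minimal sketch of this K-NN prediction with scikit-learn is shown below (our illustration; encoding time as minutes since midnight and position as a room id, as well as the training examples, are assumptions):

```python
from sklearn.neighbors import KNeighborsClassifier

# Training data: (minutes since midnight, room id) -> usual action
X = [[600, 1], [605, 1], [720, 2], [725, 2], [1200, 3]]
y = ["take_medicine", "take_medicine", "eat_lunch", "eat_lunch", "watch_tv"]

# K-NN with the Euclidean distance given above
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Current time 10:02 A.M. in room 1 -> suggest the nearest usual action
print(knn.predict([[602, 1]]))  # ['take_medicine']
```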

Table 1 AlziHelp: format of data preparation table
(Position columns: Room-1 ... Room-N; Action columns: Act-1 ... Act-N)

No.  Time                 Room-1  Room-2  ...  Room-N  Act-1  Act-2  ...  Act-N
1    10:00 A.M.           Low     High    ...  Low     Low    Low    ...  Low
2    10:05 A.M.           Low     Low     ...  High    High   Low    ...  Low
...  .......              ...     ...     ...  ...     ...    ...    ...  ...
N    P.Q:R.S (A.M./P.M.)  Low     Low     ...  High    Low    Low    ...  High


5 Conclusion

Detection of Alzheimer disease (AD) can be done easily using IoT devices inside the home instead of traditional methods such as MRI. AD patients cannot do things properly because the disease involves memory loss due to the death of brain cells. Besides, AD patients can create harmful situations for themselves as well as others without even knowing the situation and its consequences. In this paper, we have proposed AlziHelp: an Alzheimer disease detection and assistive system inside a smart home focusing on 5G, using IoT and machine learning approaches. In our system, various 5G-enabled IoT devices such as a smart watch, smart mobile, smart shoe, wireless LED signal, and buzzer are used to collect data on a person's positions at particular times. After a predefined number of repeated cycles, by analyzing the mismatched actions among the data sets using K-NN over a particular time, the system can detect whether that person has AD or not. AlziHelp can also assist an AD patient by predicting actions for any particular time: 5G-enabled wireless LED signals and a sonic buzzer assist the AD patient by suggesting the best possible action that the person usually does at that particular time. A future version of this work will include a complete test analysis with real-life data and a comparison of more ML algorithms.

References

1. Fuse H, Oishi K, Maikusa N, Fukami T, Initiative JADN (2018) Detection of Alzheimer's disease with shape analysis of MRI images. In: 2018 joint 10th international conference on soft computing and intelligent systems (SCIS) and 19th international symposium on advanced intelligent systems (ISIS), Toyama, Japan, pp 1031–1034
2. Thakare P, Pawar VR (2016) Alzheimer disease detection and tracking of Alzheimer patient. In: 2016 international conference on inventive computation technologies (ICICT), Coimbatore
3. Roopaei M, Rad P, Prevost JJ (2018) A wearable IoT with complex artificial perception embedding for Alzheimer patients. In: 2018 world automation congress (WAC), Stevenson, WA
4. Khan A, Usman M (2015) Early diagnosis of Alzheimer's disease using machine learning techniques: a review paper. In: 2015 7th international joint conference on knowledge discovery, knowledge engineering and knowledge management (IC3K), Lisbon, pp 380–387
5. Sigwele T, Hu YF, Ali M, Hou J, Susanto M, Fitriawan H (2018) Intelligent and energy efficient mobile smartphone gateway for healthcare smart devices based on 5G. In: 2018 IEEE global communications conference (GLOBECOM), Dec 2018, Abu Dhabi, UAE
6. Shamim Hossain M, Muhammad G (2018) Emotion-aware connected healthcare big data towards 5G. IEEE Internet Things J 5(4), Aug 2018
7. Lema MA, Laya A, Mahmoodi T, Cuevas M, Sachs J, Markendahl J, Dohler M (2017) Business case and technology analysis for 5G low latency applications. IEEE Access PP(99)
8. Aldaej A, Tariq U (2018) IoT in 5G Aeon: an inevitable fortuity of next generation healthcare. In: 2018 1st international conference on computer applications and information security (ICCAIS), Apr 2018, Riyadh, Saudi Arabia. https://doi.org/10.1109/cais.2018.8441986


9. Surendran D, Janet J, Prabha D, Anisha E (2018) A study on devices for assisting Alzheimer patients. In: 2018 2nd international conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, pp 620–625
10. Aljehani SS, Alhazmi RA, Aloufi SS, Aljehani BD, Abdulrahman R (2018) iCare: applying IoT technology for monitoring Alzheimer's patients. In: 2018 1st international conference on computer applications and information security (ICCAIS), Riyadh, pp 1–6
11. Helmy J, Helmy A (2016) The Alzimio app for dementia, autism & Alzheimer's: using novel activity recognition algorithms and geofencing. In: 2016 IEEE international conference on smart computing (SMARTCOMP), St. Louis, MO, pp 1–6
12. Alvarez F et al (2017) Multimodal monitoring of Parkinson's and Alzheimer's patients using the ICT4LIFE platform. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS), Lecce, pp 1–6
13. Alvarez F et al (2018) Behavior analysis through multimodal sensing for care of Parkinson's and Alzheimer's patients. IEEE MultiMedia 25(1):14–25, Jan–Mar 2018
14. Mainetti L, Patrono L, Rametta P (2016) Capturing behavioral changes of elderly people through unobtrusive sensing technologies. In: 2016 24th international conference on software, telecommunications and computer networks (SoftCOM), Split, pp 1–3
15. Sharma J, Kaur S (2017) Gerontechnology—the study of alzheimer disease using cloud computing. In: 2017 international conference on energy, communication, data analytics and soft computing (ICECDS), Chennai, pp 3726–3733
16. Din S, Paul A, Ahmed A, Rho S (2016) Emerging mobile communication technologies for healthcare system in 5G network. In: 2016 IEEE 14th international conference on dependable, autonomic and secure computing, 14th international conference on pervasive intelligence and computing, 2nd international conference on big data intelligence and computing and cyber science and technology (DASC/PiCom/DataCom/CyberSciTech), Aug 2016, Auckland, New Zealand
17. Jones RW, Katzis K (2018) 5G and wireless body area networks. In: 2018 IEEE wireless communications and networking conference workshops (WCNCW), Apr 2018, Barcelona, Spain. https://doi.org/10.1109/wcncw.2018.8369035
18. Healy M, Walsh P (2017) Detecting demeanor for healthcare with machine learning. In: 2017 IEEE international conference on bioinformatics and biomedicine (BIBM), Nov 2017, USA

Chapter 13

An Adjustment to the Composition of the Techniques for Clustering and Classification to Boost Crop Classification

Ankita Bissa and Mayank Patel

1 Introduction

India is, today, the second-largest agricultural producer in the world. Demographically, agriculture is the largest economic sector and plays an important part in India's overall socio-economic fabric. The output of agricultural crops depends on several factors related to climate and economy: soil, climate, planting, irrigation, fertilizers, temperature, precipitation, crops, weeds, pesticides, and other factors all underlie agricultural production. Data mining technology is a key element in the analysis of this data. Data mining is a process through which patterns are discovered in large data sets using approaches from artificial intelligence, machine learning, statistics, and database systems. Two distinct types of data mining learning strategies are unsupervised learning (clustering) and supervised learning (classification). Clustering is the mechanism by which data points are collected and grouped into clusters according to some distance measure. The intention is that data points in the same cluster are a small distance from each other, while data points in separate clusters are a wide distance apart. Cluster analysis separates data into well-formed categories; well-formed clusters should capture the "natural" structure of the data. In addition to these algorithms, we use the WEKA tool to identify the crop with different classifiers, namely Naïve Bayes, SMO, Decision Table, and J48, applied to both the non-clustered data and the clustered (k-means) dataset. The evaluation measures used for the experiments consist of correctly classified instances, TP rate, FP rate, precision, recall, and F-measure.

A. Bissa (&) · M. Patel
Geetanjali Institute of Technical Studies, Udaipur, Rajasthan, India
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
A. Singh Pundir et al. (eds.), Recent Trends in Communication and Intelligent Systems, Algorithms for Intelligent Systems, https://doi.org/10.1007/978-981-16-0167-5_13


2 Literature Survey

A large body of research is aimed at developing and improving supervised learning algorithms. Examples of such supervised approaches are decision trees, support vector machines, logistic regression, and neural networks [1]. Alternatively, various unsupervised learning algorithms built on different heuristics have been proposed for clustering items in a feature space, for example k-means clustering, hierarchical clustering, DBSCAN, and expectation-maximization (EM) for Gaussian mixtures [2]. Researchers gradually began to consider how supervised learning can benefit from the performance of unsupervised methods, which led to transductive learning [3] and semi-supervised learning [4]. Current semi-supervised algorithms are hard to adopt, as they typically take into account only one supervised model and one unsupervised model. A graph-based semi-supervised approach to sentiment analysis was recommended by Goldberg and Zhu [5]; their method, however, cannot incorporate numerous supervised and unsupervised sources. A comparative study of several machine learning techniques was performed by Zaminur Rahman et al. using Bangladesh data, taking into account the soil statistics of six regions and using environmental features for classification. They compared the results of three algorithms, k-nearest neighbor, bagged tree, and SVM, and developed a model to identify soil types and the right crops to be cultivated for each specific type of soil [6]. A comparative analysis of data mining algorithms was carried out by Leisa J. Armstrong; a large dataset from the Agriculture and Food Department of Australia (AGRIC) was used for the examination [7]. In an approach by Jay Gholap, a soil classification system based on fertility was developed; the data was collected from the soil testing labs of Pune District, and the WEKA platform was used to automate model development [8]. Applying data mining procedures under diverse climatic situations, several researchers have achieved strong prediction results [9]. According to our review of the survey papers, the key ones being presented in this section, this manuscript is the first that concentrates on the use of machine learning for the crop classification challenge. The literature was not extensively scrutinized in the existing survey studies, and most of them discussed the particular problem of crop yield prediction. In this article, we have also reviewed 30 in-depth research studies and discussed which learning algorithms have been used.

3 Experiment Setup

When dealing with crop classification, the data must be pre-processed before applying the classification algorithm. To reduce the dataset's dimensionality, the proposed approach utilizes feature selection. Data from the 2007–2017 state farm statistics of Rajasthan was collected for this report; the dataset contains 2343 instances and 12 features. The WEKA tool is used to implement the classifiers (Table 1).


Table 1 Dataset attributes

Attribute                     Description
Year                          Year
District                      Names of district
N                             Nitrogen/ppm
P                             Phosphorus/ppm
K                             Potassium/ppm
Saline soil (Ha)              Soil type
Sodic or alkali soil (Ha)     Soil type
Annual normal rainfall in mm  Rainfall
Area                          Farmland area value
Production                    Crop production value
Yield                         Yield value
Crop                          Crop names

All the steps involved are standard data mining procedures, except for the application of clustering to perform feature re-selection prior to training the classifier. Before applying the classification models, the analysis utilized k-means clustering to cluster the data collection. k-means clustering is a simple and understandable method that groups data into a predefined number of clusters. The clustering technique is used to maximize the performance of the classification model, and the proposed technique is used to overcome the complexity of high-dimensional information. The Naïve Bayes (NB), SMO, Decision Table, and J48 classifiers are used for the analysis.
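A minimal scikit-learn sketch of this pipeline is given below (our illustration; the data is synthetic, and DecisionTreeClassifier stands in for WEKA's J48, which implements C4.5). The cluster id produced by k-means is appended as an extra feature before classification:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 11))        # 11 numeric attributes (N, P, K, rainfall, ...)
y = rng.integers(0, 4, 200)      # 4 crop classes (synthetic stand-in)

# Step 1: k-means clustering; the cluster id becomes an extra feature
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])

# Step 2: classification with and without the clustering step
j48 = DecisionTreeClassifier(random_state=0)  # J48 (C4.5) analogue
print("plain    :", cross_val_score(j48, X, y, cv=5).mean())
print("clustered:", cross_val_score(j48, X_aug, y, cv=5).mean())
```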

4 Experiments and Their Results

4.1 Experiment I

In the first experiment, we applied the following algorithms to classify the chosen attribute: Naïve Bayes, SMO, Decision Table, and J48. The feature selection process was used to identify the particular features that most affect the class (crop). The effects of the classifiers used are evaluated and estimated according to the evaluation parameters. Table 2 shows the accuracy of each classifier in classifying the crop in the dataset without applying any clustering algorithm. Compared to the other three classifiers, the J48 model classifies the crop with the highest accuracy. The analysis of the TP rate, FP rate, precision, recall, F-measure, and accuracy of the different classifiers without clustering is shown in Figs. 1 and 2. As seen in the figures, the accuracy of J48 is higher and its FP rate is lower.


Table 2 Accuracy of different classification algorithms

Classifier      Correctly classified (%)  Incorrectly classified (%)  TP rate  FP rate  Precision  Recall  F-measure
SMO             39.91                     60.09                       0.399    0.086    0.392      0.399   0.384
Naïve Bayes     43.24                     56.76                       0.432    0.081    0.131      0.432   0.399
Decision Table  59.54                     40.46                       0.595    0.058    0.607      0.595   0.591
J48             64.02                     35.98                       0.640    0.051    0.641      0.640   0.634

Fig. 1 Evaluation of different classifiers

4.2 Experiment II

In the next experiment, the following classification algorithms were implemented in WEKA to classify the selected attributes after applying the k-means clustering method: Naïve Bayes, SMO, Decision Table, and J48. The selection process was used to decide which characteristics influence the class (crop) most. The effects of the classifiers used are tested with the same parameters as in Experiment I, and the outcomes of the two experiments are then compared. Table 3 shows the accuracy of each classifier after applying the k-means clustering algorithm before classifying the crop in the dataset. Compared to the other three classification models, J48 is still the best one for classifying the crop.


Fig. 2 Correctly classified instances and incorrectly classified instances

Table 3 Efficiency of various cluster classification algorithms

Classifier      Correctly classified (%)  Incorrectly classified (%)  TP rate  FP rate  Precision  Recall  F-measure
Naïve Bayes     55.78                     44.22                       0.558    0.055    0.604      0.558   0.548
SMO             62.69                     37.31                       0.627    0.059    0.620      0.627   0.619
Decision Table  66.41                     33.60                       0.664    0.049    0.676      0.664   0.664
J48             79.59                     20.41                       0.796    0.032    0.796      0.796   0.795

Figures 3 and 4 show the performance of the classifiers after applying k-means clustering prior to classification. As shown in the figures, the accuracy of J48 is higher than that of the other classifiers, and its precision is increased. By applying the clustering process to the dataset before classification, the accuracy of a classifier can be enhanced.

4.3 Comparison of Results from Experiment I and Experiment II

The comparison between the performance of the different classifiers on clustered and non-clustered data is shown in Figs. 5, 6 and 7. The variations in accuracy, precision and recall are of particular interest. The values in the tables demonstrate that clustering enhanced the performance of every technique.


Fig. 3 Comparison of classifiers after applying k-means clustering prior to classification

Fig. 4 Correctly classified instances of various k-means clusters and classification

Compared with Naïve Bayes, SMO and the Decision Table in both experiments, J48 has superior results. On the dataset without clustering, the average prediction accuracy was 39.90% for SMO, 43.24% for Naïve Bayes, 59.54% for the Decision Table and around 64.03% for J48. The best accuracies, 64.03% and 59.54%, were obtained by J48 and the Decision Table, respectively.


Fig. 5 Output of various classifiers with clustered and unclustered data

Fig. 6 Precision measure of different classifiers

With k-means clustering applied prior to classification, the average prediction accuracy using J48 is 79.59%, compared with 66.41% using the Decision Table.


Fig. 7 Recall measure of different classifiers

5 Conclusion

This article presents the results of crop classification on the agricultural dataset of Rajasthan state. Four classification models combined with the k-means clustering algorithm have been tested. Clustering algorithms can be used to further increase the accuracy of classification algorithms. A new approach for improving classification accuracy is proposed. The experimental findings indicate that it is advantageous to apply the clustering technique before the classification algorithm.




Chapter 14

Minimization of Torque Ripple and Incremental of Power Factor in Switched Reluctance Motor Drive

E. Fantin Irudaya Raj and M. Appadurai

1 Introduction

Nowadays, the Switched Reluctance Motor (SRM) is used in many industrial applications because of its simple construction and robust nature. The SRM has windings on the stator and no windings on the rotor; thus, the control of the SRM becomes easier. However, the SRM also has the following disadvantages. Because of its construction, it creates more ripple in the torque production as well as acoustic noise. Also, compared with other motor drives, the power factor of the Switched Reluctance Motor drive is low.

Figure 1 shows the control structure of the conventional SRM drive. The three-phase alternating current (AC) supply is given to the rectifier for AC-to-direct current (DC) conversion. The converter of the SRM drive gets its input from this converted DC and is connected to all the phase windings of the SRM. A rotor position sensor (RPS) is placed on the shaft of the SRM. The RPS measures the actual speed of the SRM and provides the controller with information about the rotor position. Based on the details received from the RPS, the controller appropriately turns on or turns off the respective power semiconductor switches. The current signal is also fed back to the controller to maintain the current within permissible limits.

There are various converter topologies available to drive the SRM, among which the conventional asymmetric bridge converter is the most widely used.


Fig. 1 Conventional SRM drive

Fig. 2 Alternate asymmetric converter topology

It is suitable for high-speed operation and makes shoot-through faults insignificant. However, the converter needs a larger number of power electronic switches to energize the phase windings, which increases the switching losses. The alternate asymmetric converter has been proposed in the literature to overcome these losses. Figure 2 shows the alternate asymmetric bridge converter topology. SRM operation is based on the principle of minimum reluctance: the rotor always tries to align with the stator poles along the path of minimum reluctance. Unidirectional current pulses energize the SRM phase coils, which contributes to the


continuous operation of the SRM. The current pulses are characterized by their amplitude and the timing of turn-on and turn-off; the shape of the current pulses determines the speed of the SRM [1]. At intermediate and low rotor speeds, the voltage source imposes a rectangular current pulse through the exciting coil because of the lower back electromotive force (emf). When the SRM is operated at high speeds, the back emf gets boosted and the current pulse becomes triangular [2]. The pulsating AC input current and the voltage switching in the SRM phase winding contribute to high harmonics, torque ripple, and a low power factor. In the present work, we use a Vienna-type rectifier along with the SRM converter to enhance the performance of the SRM drive. This SRM drive configuration addresses the issue of power quality, minimizes the torque ripple, and enhances the power factor.

2 Literature Review

A Quasi-Z-Source (QZS) converter-based topology has been presented for improving the efficiency of the SRM drive; this configuration achieves torque ripple minimization and is also useful for power factor correction [3]. Using a pair of Buck-Boost converters, a new SRM drive configuration was implemented that allows the system to draw a sinusoidal input current and increases the source voltage. It also gives the system the ability to raise or reduce the stator winding current quickly, and thus offers good voltage regulation as well. The main drawback of that system is that its practical application is limited by its very complex structure and equipment cost [4]. For improving the power factor, a configuration with a rectifier that includes energy storage capacitors has been recommended. The author considered the harmonic problems caused in the power grid and introduced an active filter configuration and a passive input filter for harmonic reduction; however, the switching losses and the cost of the switches are raised by this configuration [5]. In [6], the author proposes adding a passive boost converter to the conventional converter of the SRM. For magnetization, the DC-link voltage is applied to the SRM machine terminal through the passive boost converter, and during demagnetization the DC-link voltage is doubled; however, this does not perform power factor correction in the SRM. In [7], the author proposes a simplified neural network-based control for the SRM drive, which cuts down the complexity of the control section, improves the power factor and reduces the torque ripple. In [8], a power factor correction (PFC) converter is proposed that focuses on increasing the power quality on the AC side. The converter proposed in that paper is intended to operate in the discontinuous conduction mode. This topology intensifies the voltage stress across the semiconductor switches and the inductor peak current, which leads to higher conduction losses, and it also uses a larger number of switches. In [9], the author explains the application of the SRM for an aircraft engine fuel pump, with various simplifications and approximations proposed to enhance the PF of the SRM.


Another author presents a new converter topology for the improvement of the power factor that also provides DC voltage regulation [10]. In [11], a nonlinear numerical analysis of the SRM drive system is described, along with the volt-ampere requirements for maintaining a good power factor of the system. In [12], the author proposes a buck converter-fed SRM drive and its mathematical modeling to suppress the torque ripple and improve the PF. In [13], the author suggests a segmented rotor for the SRM to provide high torque and low torque ripple. In [14], a single-phase Vienna rectifier is proposed. The Vienna rectifier is a combination of a full-bridge rectifier, an inductive filter on the AC side, and a boost converter. From this work, we can infer that the Vienna rectifier delivers good voltage regulation and PF correction. Its inductor filter provides a smooth and continuous input current from the AC supply side. The configuration needs only one switch, which is its main advantage compared with other converters of the same performance; the reduced number of components leads to cost savings and loss reduction. In [15], the author proposes the conventional asymmetric converter topology along with a Vienna rectifier to feed the SRM, and the paper claims that it improves the power factor and minimizes the torque ripple of the system. In [16], it is noted that the presence of semiconductor components on the AC input side makes the supply current strongly distorted and contributes to a high Total Harmonic Distortion (THD). This non-sinusoidal current results in increased power loss, a low PF, and high voltage stress; insulation and thermal problems also need to be taken into consideration. In that work, the author proposed a Vienna rectifier, which raises the power factor and decreases the harmonics. The proposed structure also imposes low voltage stress across the switching elements and contributes to a reduction in switching losses. In [17], the author presents a Vienna rectifier-based system for improving power quality. Nowadays, induction heating technology is widely used in medical, residential, and industrial applications. High-frequency harmonics have an intrinsic tendency to flow back toward the source side, which deteriorates the power quality on the supply side; a single-phase Vienna rectifier device has been proposed to address this power quality issue.

3 Proposed Methodology

From the literature, we can identify that the Vienna rectifier-based topology has more advantages than the conventional converter topology. In the present work, we combine the Vienna rectifier with the alternate asymmetric converter topology of the SRM drive. With this configuration, the power quality of the SRM drive is enhanced, and the torque ripple is reduced. The entire work is carried out in the MATLAB/Simulink software package.


Fig. 3 Vienna rectifier combined with alternate asymmetric converter fed SRM drive

Fig. 4 MATLAB – Simulink model of proposed SRM drive methodology

Figure 3 shows the configuration of the switched reluctance motor drive system that incorporates the alternate asymmetric H-bridge converter with the Vienna rectifier. Figure 4 shows the MATLAB simulation model of the nonlinear model of the SRM and the proposed configuration. The equivalent model of the conventional SRM with the Vienna rectifier and its different operating modes are shown in Fig. 5. The torque ripple can be evaluated by

\text{Torque Ripple} = \frac{T_{\max} - T_{\min}}{T_{\mathrm{avg}}} \times 100\% \qquad (1)


Fig. 5 a Equivalent model of SRM with Vienna rectifier, b switch ON mode, c switch OFF mode

The THD can be calculated by

\mathrm{THD} = \sqrt{\frac{\sum_{i=3,5,7,\ldots}^{n} V_i^2}{V_1^2}} \times 100\% \qquad (2)

where V_i is the RMS value of the ith harmonic voltage and V_1 is the RMS value of the fundamental voltage.
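As a minimal illustration of Eqs. (1) and (2), the following Python sketch computes both metrics from sampled waveforms; the authors perform these computations in MATLAB/Simulink, and the sampling parameters and harmonic range used here are assumptions.

import numpy as np

def torque_ripple(torque):
    # Eq. (1): (Tmax - Tmin) / Tavg * 100%
    return (torque.max() - torque.min()) / torque.mean() * 100.0

def thd(voltage, fs, f1):
    # Eq. (2): harmonic content (i = 3, 5, 7, ...) relative to the fundamental, in %
    spectrum = np.abs(np.fft.rfft(voltage))
    freqs = np.fft.rfftfreq(len(voltage), d=1.0 / fs)

    def mag_at(f):
        # magnitude of the spectral bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    v1 = mag_at(f1)
    harmonics = [mag_at(i * f1) for i in range(3, 16, 2)]  # odd harmonics up to the 15th
    return np.sqrt(sum(h ** 2 for h in harmonics) / v1 ** 2) * 100.0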

4 Results and Discussion

The proposed methodology provides a smoother input current waveform. Figure 6 shows the phase current and phase voltage of the SRM drive with and without the Vienna rectifier. From Fig. 6, we can see the difference in the input current waveform: with the Vienna rectifier, the input current of the switched reluctance motor drive is smoother than that of the drive without it.

Fig. 6 a Phase voltage and phase current of conventional SRM drive, b phase voltage and phase current of Vienna rectifier fed SRM drive


Fig. 7 a Torque output of an SRM drive with Vienna rectifier, b torque output of a conventional SRM drive

If the waveform of the input current gets smoother, the torque waveform also becomes smoother, which helps in torque ripple minimization. The torque output is shown in Fig. 7. A pulsating input current contributes to high ripple in the torque profile; it also contributes to a low power factor and triggers more harmonics in the system. By using the Vienna-type rectifier, the current waveform gets smoother, which improves the power factor and reduces the harmonics. Table 1 shows the torque ripple values with and without the Vienna rectifier; we can observe that the Vienna rectifier-fed SRM drive has a low torque ripple. Table 2 shows the power factor and THD values for different load currents. The SRM drive with the Vienna rectifier possesses a good power factor and less THD. As per the IEC standard (IEC 61000-3-2), the current harmonics of the motor drive must be within the prescribed values. Figure 8 displays the harmonic spectra of the switched reluctance motor drive with and without the Vienna rectifier. From Fig. 8, we can conclude that the input current in the SRM drive without the Vienna rectifier is highly distorted; therefore, its THD value gets higher, resulting in a low PF and torque ripple in the system.

Table 1 Torque ripple comparison—with and without Vienna rectifier

Sl. No.   State of the SRM drive                 Torque ripple (%)
1         SRM drive with Vienna rectifier        12.1
2         SRM drive without Vienna rectifier     33.6

Table 2 THD and PF values with and without Vienna rectifier

Load current (A)   THD (%)                                      Power factor
                   Without Vienna rect.   With Vienna rect.     Without Vienna rect.   With Vienna rect.
4                  106.3                  15.7                  0.48                   0.78
6                  115.2                  12.0                  0.57                   0.81
7                  111.5                  13.6                  0.64                   0.84
7                  101.3                  12.7                  0.66                   0.86
2                  106.8                  10.6                  0.68                   0.91

Fig. 8 Comparison of the SRM drive's harmonic spectra (harmonic amplitude in A versus order of harmonics, up to the 15th) without the Vienna rectifier, with the Vienna rectifier, and against the standard IEC 61000-3-2 Class A limits

By using the conventional SRM converter topology along with the Vienna-type rectifier, we can obtain a good PF and low harmonics. The configuration also minimizes the torque ripple.

5 Conclusion

The present work focuses on a Vienna-type rectifier along with an alternate asymmetric H-bridge converter-fed SRM drive. The entire setup is designed and simulated in the MATLAB/Simulink environment. By using the alternate asymmetric converter topology, the switching losses are reduced compared with the conventional one. The paper compares the Vienna rectifier-based SRM drive with a traditional SRM drive. From the analysis, the new topology proposed in the manuscript provides torque ripple minimization, a good power factor and less THD in the SRM drive. The configuration provides satisfactory results, and it can further be implemented in a real-time environment.

References

1. Krishnan R (2017) Switched reluctance motor drives: modeling, simulation, analysis, design, and applications. CRC Press
2. Rabinovici R (2005) Torque ripple, vibrations, and acoustic noise in switched reluctance motors. HAIT J Sci Eng B 2(5–6):776–786
3. Mohamadi M, Rashidi A, Nejad SMS, Ebrahimi M (2017) A switched reluctance motor drive based on quasi Z-source converter with voltage regulation and power factor correction. IEEE Trans Ind Electron 65(10):8330–8339


4. Rim GH, Kim WH, Kim ES, Lee KC (1994) A choppingless converter for switched reluctance motor with unity power factor and sinusoidal input current. In: PESC, pp 500–507. IEEE
5. Caruso L, Consoli A, Scarcella G, Testa A (1996) A switched reluctance motor drive operating at unity power factor. In: IAS, pp 410–417. IEEE
6. Anand A, Singh B (2017) PFC-based half-bridge dual-output converter-fed four-phase SRM drive. IET Electr Power Appl 12(2):281–291
7. Raj EFI, Kamaraj V (2013) Neural network based control for switched reluctance motor drive. In: 2013 IEEE international conference on emerging trends in computing, communication and nanotechnology (ICECCN), pp 678–682. IEEE
8. Anand A, Singh B (2019) Modified dual output cuk converter-fed switched reluctance motor drive with power factor correction. IEEE Trans Power Electron 34(1):624–635. https://doi.org/10.1109/TPEL.2018.2827048
9. Radun AV (1995) Design considerations for the switched reluctance motor. IEEE Trans Ind Appl 31(5):1079–1087
10. Rim GH, Kim WH, Kim ES, Lee KC (1994) A choppingless converter for switched reluctance motor with unity power factor and sinusoidal input current. In: Proceedings of 1994 power electronics specialist conference (PESC'94), vol 1, pp 500–507. IEEE
11. Miller TJ (1985) Converter volt-ampere requirements of the switched reluctance motor drive. IEEE Trans Ind Appl 5:1136–1144
12. Jing J (2020) A power factor correction buck converter-fed switched reluctance motor with torque ripple suppression. Math Prob Eng
13. Kondelaji MAJ, Mirsalim M (2020) Segmented-rotor modular switched reluctance motor with high torque and low torque ripple. IEEE Trans Transp Electrification 6(1):62–72
14. Kolar JW, Zach FC (1997) A novel three-phase utility interface minimizing line current harmonics of high-power telecommunications rectifier modules. IEEE Trans Ind Electron 44(4):456–467
15. Sadeghi Z, Saghaiannezhad SM, Rashidi A (2020) Power factor correction and torque ripple reduction in SRM drive based on Vienna-type rectifier. In: 2020 11th Power electronics, drive systems, and technologies conference (PEDSTC), pp 1–6. IEEE
16. Thangavelu T, Shanmugam P, Raj K (2015) Modelling and control of Vienna rectifier a single phase approach. IET Power Electron 8(12):2471–2482
17. Kumar A, Sarkar D, Sadhu PK (2020) Power quality improvement in induction heating system using Vienna rectifier based on hysteresis controller. In: Electric power components and systems, pp 1–14

Chapter 15

Optimization of Test Case Prioritization Using Automatic Dependency Detection

Sarika Chaudhary and Aman Jatain

1 Introduction

Over the last decade, software has captured almost every part of our lives and society. The tremendous growth in technology and continuously emerging work scenarios have led organizations to modify, link, and test software concurrently while assuring software quality. The most important activity that contributes to assuring software quality is software testing. It mirrors the actual review of the requirement specification, design, and implementation steps. Further, testing takes up about 50% of the overall resources assigned to the SDLC [1]. Testing targets the goal of generating the minimum number of test cases with maximum fault identification. In industry, testing may take several days if performed manually, so it is always advisable to optimize testing, i.e., to design and execute test cases automatically. Optimization in testing can be achieved by various methods, and the most significant one is the prioritization of test cases. Test case prioritization is an effective way to assist regression testing (RT) with limited time and budget, and it aids in detecting faults at an early stage [2]. Detecting faults earlier accelerates early debugging, which finally results in reduced testing cost as well as timely delivery of the product. Prioritization is an optimization technique for RT that lets testers organize test cases sequentially according to some predefined criteria, so that higher-priority test suites are executed before lower-priority ones. While prioritizing, various factors (coverage information, requirement data, time information, etc.) need to be considered to generate all the feasible test case design combinations. This process guarantees that no redundant


test cases are taken into account during testing. However, arranging test cases in a sequence is a difficult task and should be carried out carefully, because inappropriate sequencing increases the probability of failure-proneness [3]. Inaccurate sequencing may also result in increased testing overheads, delayed testing deadlines and cost, which ultimately lead to poor effort and schedule estimation [4]. Therefore, some criterion is required to explicitly state the proper sequence of tasks in the software system. To define a sequencing approach, the functional and architectural aspects need to be understood. The functional design defines the different types of functions performed by the submodules of a software system and their association with each other, whereas the architectural design illustrates the relationships or interactions among the various components and subcomponents of the system. The interaction between the different modules of a software system is termed a dependency. Ryser and Glinz [5] categorize software dependencies/coupling as logical, functional, abstract, temporal, and data dependencies. Various studies in the literature prove that if dependencies are not perceived by the developers, or not detected and debugged by the testers, they cause faults to trigger and lead to failures [6–8]. After all, dependencies have a high repercussion on the testability, reusability, reliability, and maintainability of the software. Hence, testers need to keep a rigorous check to analyze the behavior of the different components of the software system as changes occur, in order to preserve software quality. This paper presents a methodology for ranking test cases based on automatic dependency detection by examining coverage and requirement factors. Section 2 narrates related work in this field. Section 3 describes the suggested methodology. Results and discussions are presented in Sect. 4, followed by the conclusion and future scope in Sect. 5.

2 Related Work

In the literature, a wide variety of techniques have been recommended that describe the usefulness of adopting test case prioritization for improving the defect identification rate. Most of them utilize function-level or statement-level prioritization methods for ordering test cases, which treat test cases independently. Significantly fewer approaches exploit dependencies, functional or non-functional, among the test cases in a test suite for prioritization. Moreover, existing dependency-based test case prioritizations feature functional dependencies, which can be generated only after implementation of the system. They do not take into account requirement dependencies, which, if considered, might aid in discovering faults at early stages of the SDLC. Furthermore, researchers have proposed methods that compute dependencies manually, which is undesirable when trying to achieve the performance goal of RT because of the extra time consumed. Therefore, we aim to


compute dependencies between test cases automatically in order to enhance the rate of early fault detection during RT. A few major contributions in this field are cited in this subsection. It is evident from past studies that most of the complexity observed in software results from the interactions that occur among the submodules of the system [9, 10]. It is therefore logical to claim that executing the most strongly coupled parts of the software earlier in the testing process would improve the fault detection rate. In this context, test case prioritization helps in optimizing the fault detection rate by discovering hidden bugs, and one of the efficient ways of uncovering those errors is dependency structure extraction. Initially, Haidry and Miller [11] discussed a family of dependency structure prioritization (DSP) algorithms that allocate rankings based on graph-coverage values. For a test case, the graph-coverage value is defined as an estimate of the complexity of the number of dependents of the test case. They described that open and closed dependency structures should be treated individually because of their inherent characteristics. Two different criteria were defined to assess the graph-coverage values: the exact count of dependents of a test case, and the largest route among the direct and indirect dependents. Based upon these, the weight of each test case is computed, and the highest priority is assigned to the test case with the greatest weight. The outcomes revealed that the rate of early defect detection was improved; the only issue was documenting test dependencies in a way that aids easy extraction of dependency information in the future. Tahvili [12] proposed a manual dependency detection testing approach conducted across six studies, with the main aim of defining effective methods for manual integration testing. Tahvili et al. [8] narrated a novel method based on NLP and deep learning to derive interactions among test cases from specification documents written in natural language. First, feature vectors are derived according to the semantics using Doc2Vec; then, the vectors are clustered using the HDBSCAN and FCM algorithms. The results account for the effectiveness of HDBSCAN with an accuracy level of 80%. Kayes [13] presented a prioritizing technique based upon the dependencies between faults at execution time. This technique claimed improved quick feedback and an improved rate of early debugging, thus helping programmers maintain the reliability of RT. A new metric, the average percentage of fault dependency detected, was also proposed, with a value ranging from 0 to 100. The analysis conducted proves an improvement in the defect detection rate provided all faults have identical severity and run time, which is not feasible in the real world. Following this, Indumathi and Selvamani [14] discussed an automated method to calculate the exact number of test case dependents, thus optimizing the convergence rate of finding dependency structures; for validation, the Siemens test suite was considered. Ultimately, these techniques aid in improving fault identification and fixing at an early stage. Promoting the dependency criterion further, Kaur and Ghai [15] discussed a hill-climbing method for test case prioritization utilizing functional dependency. The outcomes generated account for the effectiveness of the hill-climbing technique and its compatibility with real-world problems.


3 Proposed Methodology

This section illustrates the methodology adopted to reduce the negative impact of the high-dimensional data obtained by generating the set of open dependencies among test cases. High coupling between the submodules of a software system results in more complexity. The proposed approach is therefore based on the assumption that testing highly coupled submodules first can improve the fault detection rate, and it is illustrated in Fig. 1.

3.1 Pre-processing

This step is performed to understand the basic structure of the data set: the test reports from two different pools (XML format), based on code coverage and requirement factors, are transformed into CSV format using the respective IDs (report-ID) of the test cases.

Fig. 1 Process flow of proposed methodology

3.2 Dependency-Extraction

After that, the dependencies among the functions are derived based on the time of execution (when-tag) and the occurrence of faults from them; finally, the exact number of dependent faulty functions is obtained. Pseudocode for 'automatic dependency extraction' is presented below for ease of understanding.
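The original pseudocode figure is not reproduced here. As a rough illustration only, one plausible reading of the step (pairing each report with earlier reports whose opening times fall within a fixed window) is sketched below in Python; the authors' actual implementation is in Java (JDK 7), and the window heuristic is an assumption, not their exact rule.

from dataclasses import dataclass

@dataclass
class Report:
    report_id: int
    opening_time: int            # epoch seconds taken from the 'when' tag

def extract_dependencies(reports, window=3600):
    # Pair each report with every earlier report opened within `window` seconds.
    deps = []
    ordered = sorted(reports, key=lambda r: r.opening_time)
    for i, later in enumerate(ordered):
        for earlier in ordered[:i]:
            if later.opening_time - earlier.opening_time <= window:
                deps.append((earlier.report_id, later.report_id))
    return deps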

3.3 Prioritization

In this stage, the extracted dependent test cases are executed before the 'independent' test data.


4 Results and Discussion

To evaluate the proposed methodology, it is implemented on four products of the Eclipse defect tracking dataset fetched from the GitHub repository [16]: Platform, PDE (Plug-in Development Environment), JDT (Java Development Tools), and CDT (C/C++ Development Tools). Eclipse is a leading Java integrated development environment and one of the best tools for developing projects based on the Eclipse platform. Table 1 enumerates the selected products along with the absolute number of components and the number of reports obtained from each incremental modification carried out in the lifecycle of the software system. The number of reports is simply the number of bugs extracted with respect to modifications in the products. Each product contains ten separate XML files in which the bug attributes are stored. The files selected for testing are illustrated with descriptions in Fig. 2. Every file is associated with the priority to fix the bug, the severity level of the bug, the software application and the version of that application to which the bug is related, the submodules of the system and the operating system for which the bug was found, the current state of the bug, the resolution of the bug, and the identifier of the bug. The attributes 'report id', 'opening-time' (the time when the bug was reported) and 'assigned_to' remain unchanged during the complete life cycle of the bug.

Table 1 Eclipse products with corresponding number of components and reports

Sr. No.   Product    Number of components   Number of reports/bugs
1         Platform   22                     24,775
2         PDE        5                      5655
3         JDT        6                      10,814
4         CDT        20                     5640

Fig. 2 Different attributes for products in ‘Eclipse Defect Tracking’ dataset


Generating Dependencies. During the pre-processing stage, all the XML files are converted into CSV format based on the 'report id' and the 'when' tag, which records the reporting time of a bug. After that, dependencies are generated automatically between the bugs using the most stable information about any bug, i.e., 'opening_time' and 'report id', as both remain unchanged throughout the whole life cycle of a bug. Table 2 represents the output generated after applying the dependency structure formation algorithm to all products; for ease of understanding, data for only a few report ids are presented here. All the steps are implemented in JDK 7.

Prioritization Stage. This subsection shows the outcomes before and after applying dependency formation. All the dependent test cases identified in the previous step are assigned higher priority and executed before the other non-dependent test cases. The metric used for assessment of the described methodology is the average percentage of faults detected (APFD). Table 3 depicts the average count of faults identified when 'no-ordering' and 'dependency-based prioritization' are applied. The results in Fig. 3 also confirm an increased number of detected defects with the proposed approach.
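The paper does not restate the APFD definition; for reference, the standard formulation from the test case prioritization literature is

\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \cdot m} + \frac{1}{2n}

where n is the number of test cases, m is the number of faults, and TF_i is the position in the prioritized suite of the first test case that reveals fault i.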

5 Conclusion and Future Scope

In this work, a novel approach for automatically discovering dependencies among test cases is illustrated. The suggested approach is implemented by extracting test cases from the Eclipse SDK environment for four products (CDT, JDT, PDE and Platform). The idea behind detecting interactions between test cases is to enhance the rate of fault identification during RT, because there is always a high possibility of severe faults occurring between coupled test cases. Also, if testers are aware of the level of coupling in advance, they can plan the execution of test cases timely and efficiently, which optimizes RT performance. Finally, the extracted dependent test cases are assigned higher priorities than the independent ones. The proposed technique is analyzed and compared with the 'no-ordering' technique using APFD, and the observed results account for an average 47% improvement in the rate of defect detection. In the future, the discussed algorithm can be integrated with clustering techniques to improve the run time of the overall prioritization process.

Table 2 Dependency extraction

Report ID    Assigned_to                     Bug_status   CC     OP_sys       Product   Severity   Priority   Resolution   Version   Component
1136246610   Null                            New          Null   Windows XP   JDT       major      Null       Null         3.2       Core
1136246657   Null                            RESOLVED     Null   Null         Null      Null       Null       INVALID      Null      Null
1136258575   Null                            New          Null   Windows XP   JDT       normal     Null       Null         3.2       UI
1136262170   Null                            RESOLVED     Null   Null         Null      Null       Null       FIXED       Null      Null
1136271526   [email protected]     New          Null   Windows XP   JDT       normal     Null       Null         3.2       Text
1136273412   Null                            RESOLVED     Null   Null         Null      Null       Null       REMIND      Null      Null
1136279650   Null                            REOPENED     Null   Null         Null      Null       Null       Null        Null      Null
1136279674   [email protected]    New          Null   Null         Null      Null       Null       Null        Null      Core


Table 3 APFD w.r.t. 'no-ordering' and the proposed approach

Products   APFD (dependency-based prioritization)   APFD (no-ordering)
CDT        0.76                                     0.38
JDT        0.54                                     0.23
PDE        0.67                                     0.54
Platform   0.71                                     0.53

Fig. 3 Line chart showing the average number of faults detected for the Eclipse products (APFD with dependency-based prioritization versus APFD with no-ordering)

References

1. Chaudhary S (2018) Findings and implications of test case prioritization techniques for regression testing. Int J Tech Innov Mod Eng Sci (IJTIMES) 4(5):1259–1266
2. Chaudhary S, Jatain A (2020) A systematic review: software test case prioritization techniques. Int J Adv Sci Technol 29(7):12588–12599
3. Lam W, Shi A, Oei R, Zhang S, Ernst MD, Xie T (2020) Dependent-test-aware regression testing techniques. In: ISSTA 2020—Proceedings of the 29th ACM SIGSOFT international symposium on software testing and analysis, pp 298–311
4. Cataldo M, Mockus A, Roberts JA, Herbsleb JD (2009) Software dependencies, work dependencies, and their impact on failures. IEEE Trans Softw Eng 35(6):864–878
5. Ryser J, Glinz M (2000) Using dependency charts to improve scenario-based testing. In: 17th International conference on testing computer software, TCS 2000, pp 1–10
6. Podgurski A, Clarke LA (1989) Implications of program dependences for software testing, debugging, and maintenance, pp 168–178
7. Tahvili S, Saadatmand M, Larsson S, Afzal W, Bohlin M, Sundmark D (2016) Dynamic integration test selection based on test case dependencies. In: Proceedings of IEEE international conference on software testing, verification and validation workshops, ICSTW 2016, pp 277–286
8. Tahvili S, Hatvani L, Felderer M, Afzal W, Bohlin M (2019) Automated functional dependency detection between test cases using Doc2Vec and clustering. In: Proceedings 2019 IEEE international conference on artificial intelligence testing, AITest, pp 19–26
9. Lew KS, Dillon TS, Forward KE (1988) Software complexity and its impact on software reliability. IEEE Trans Softw Eng 14(11):1645–1655
10. Sathya C, Karthika C (2015) A study on dependency optimization using machine-learning approach for test case prioritization 6(4):4–7
11. Haidry S, Miller T (2013) Using dependency structures for prioritization of functional test suites. IEEE Trans Softw Eng 39(2):258–275
12. Tahvili S (2018) Multi-criteria optimization of system integration testing, Dec
13. Kayes I (2014) Test case prioritization for regression testing based on fault dependency, pp 48–52
14. Indumathi C, Selvamani K (2015) Test cases prioritization using open dependency structure algorithm. Procedia Comput Sci 48(2015):250–255


15. Kaur S, Ghai S (2016) Performance enhancement in hill-climbing approach for test case prioritization using functional dependency technique. Int J Softw Eng Appl 10(11):25–38
16. Lamkanfi A, Pérez J, Demeyer S (2013) The Eclipse and Mozilla defect tracking dataset: a genuine dataset for mining bug information. In: IEEE international working conference on mining software repositories, pp 203–206

Chapter 16

Model Selection for Parkinson's Disease Classification Using Vocal Features

Mrityunjay Abhijeet Bhanja, Sarika Chaudhary, and Aman Jatain

1 Introduction

Parkinson's disease (PD) is a progressive malady among people aged 60 and above. It impacts the central nervous system by endangering the dopaminergic neurons, often weakening the patient's motor abilities. PD also causes speech impediments, mood swings and depression among the elderly. The early symptoms of PD include anosmia, cramped handwriting, changes in vocal features and a stooped posture. The major long-term symptoms of PD include tremors while resting, slow movement, stiffness of muscles and posture issues which lead to imbalance while standing. Other derived symptoms are depression and anxiety, hallucinations and dementia. There is no concrete diagnosis for PD; it belongs to a spectrum of disorders called parkinsonism. Accurate diagnosis is time consuming and needs a broad understanding of the disease and its effects. Revett et al. [1] mention that 90% of PD patients suffer from speech difficulties, which can be used as a criterion for early diagnosis (at very early stages). With the advent of machine learning and state-of-the-art data-driven approaches, building highly scalable, reliable and accurate classification models has become the new norm, where these algorithms promise to deliver accuracies of over 90% in correctly classifying the presence of a disease well before humans could manually find out. In this paper, we deploy various machine learning algorithms on the characteristic vocal features of suspected patients to detect the presence of PD


before the arrival of major motor failure symptoms, and perform a detailed analysis of the top-performing algorithm.

2 Literature Review

Ramani and Sivagami [2] surveyed a few knowledge discovery techniques and used data mining to classify the presence of Parkinson's disease. They analysed feature relevance and accuracy to come up with the best classification rule; the random tree classifier gave a perfect recall of 100%. Bind et al. [3] surveyed the work of many researchers who used various machine learning algorithms, such as k-nearest neighbours (kNN), neural networks (NN), support vector machines (SVM) and decision trees (DT) among others, to classify PD and summarised their results. The performance metrics noted by them, albeit not in every case, were accuracy, sensitivity, and specificity only. Ene [4] used probabilistic neural networks (PNN) to classify PD patients depending on their medical attributes, establishing a benchmark for autonomous classification and its superiority compared with early diagnosis done by humans; he achieved a maximum test accuracy of 81.28%. Ozcift and Gulten [5] used feature selection followed by various machine learning algorithms to check their performance, and then created ensemble models based on rotation forests (RF). Their study showed that the RF classifier algorithm enhances the classification performance of base machine learning algorithms. Can and Sciences [6] made use of boosting by filtering and a majority voting scheme to power a back-propagation neural network, achieving true positive rates of over 90% for each class (positive and negative) in the dataset. Das [7] compared a few models, such as neural networks, DM neural analysis, regression analysis and decision trees, to classify PD patients; neural nets gave the highest accuracy of 92.9%, with the analysis carried out using SAS software. Caglar and Cetisli [8] proposed two types of NN: a radial basis function network and a multilayer perceptron. They also used an adaptive neuro-fuzzy classifier with linguistic hedges, which gave the best accuracy of 94.72% on the test dataset.

3 Proposed Methodology

Previous analyses of the PD dataset were carried out using traditional machine learning algorithms and neural networks. Traditional algorithms were infamous for their low accuracy and precision, while neural nets were far too complicated to implement. The proposed system in this paper involves feature selection and the implementation of various machine learning models, ranging from the most basic ones to the latest and most advanced ones. Feature selection is carried out


using correlation and the variance inflation factor (VIF). The reduced set of features is then used to train various models. A type of gradient boosting method, called XGBoost [9], is used, which had never been applied to this dataset before. First, the complete dataset is checked for missing values. It is then passed to a decision tree-based classifier, which is invariant to scaling. The tree model fetches the importance of each variable in the analysis, and we can then proceed to drop attributes that are of little to no value to our analysis. The resulting pruned dataset is used to fit various models, such as logistic regression and kNN among others, whose performance is checked on the basis of their corresponding k-fold accuracy, precision and AUC (Fig. 1).

3.1 Data-Set

The data-set used in this study is obtained from the UCI Machine Learning Repository [10]. It consists of the following variables with their corresponding Pearson correlation values with the target variable (Fig. 2).

3.2 Feature Selection

Before applying classification algorithms to the dataset, it is important to check for irrelevant features, which increase the dimensionality unnecessarily; adding irrelevant features hampers the performance of the model. The XGBoost algorithm is implemented on the complete dataset. Feature relevance is extracted from the trained model and, upon checking the cross-validation score and VIF after reducing the number of attributes, an optimal set of the 19 most important features was

Fig. 1 Proposed model flow design


Fig. 2 Individual feature correlation with target variable

selected for analysis. By removing the least relevant columns before training the classifier, we ensure our model is trained exclusively on data that is useful for classification (Fig. 3).
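As a minimal sketch of this selection step (the file name, column names and XGBoost settings are assumptions; the 19-feature cut-off follows the text):

import pandas as pd
from xgboost import XGBClassifier
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("parkinsons.csv")          # UCI Parkinsons data (file name assumed)
X, y = df.drop(columns=["status", "name"]), df["status"]

model = XGBClassifier(eval_metric="logloss").fit(X, y)
ranked = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
selected = ranked.head(19).index.tolist()   # keep the 19 most important features

# Check multicollinearity among the retained features via VIF.
vif = [variance_inflation_factor(X[selected].values, i) for i in range(len(selected))]
print(pd.Series(vif, index=selected))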

3.3 Preparing Data for Modelling

The dataset is split into X and y, X being the set of 19 independent variables we picked for modelling and y being the output class, consisting of the values 0 and 1. Both the independent and dependent variables were split in a ratio of 70:30, with 136 rows in the train set and 59 rows in the test set. All the data points were first scaled using the train-set parameters, followed by transforming the test-set points, so as to keep the test set unseen by our classifier until the need for validation arises.
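Continuing the previous sketch, the split-and-scale step described here might look like:

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(
    X[selected], y, test_size=0.3, random_state=42)   # 70:30 split, random state 42

scaler = StandardScaler().fit(X_train)   # fit on the train set only, keeping the test set unseen
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)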


Fig. 3 Feature importance horizontal bar plot of all the features

3.4 Hyperparameter Optimization

Hyperparameter optimization was done using the GridSearchCV function from scikit-learn. With five-fold cross-validation, each classifier was trained over a set of parameters to find the combination giving the best accuracy. The best set of parameters for each classifier was used to train the classifier on the train set, and the result was validated on the test set.
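A minimal sketch for one classifier (kNN), with an illustrative parameter grid that is an assumption rather than the authors' exact grid:

from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [3, 5, 7, 9], "weights": ["uniform", "distance"]},
    cv=5, scoring="accuracy")            # five-fold cross-validation, as described above
grid.fit(X_train_s, y_train)
print(grid.best_params_, grid.best_score_)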

3.5 Classification

3.5.1 Logistic Regression

Logistic regression is considered to be one of the most fundamental classification algorithms. It is based on the sigmoid function, which takes any real number as input and projects it onto a scale of 0 to 1; it is mainly used when working with probabilities.
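For reference, the sigmoid function mentioned above is

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma : \mathbb{R} \to (0, 1)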

3.5.2 K-Nearest Neighbor

kNN is based on learning by analogy. It compares a given test tuple with the training tuples and then decides its class. When a new data point is introduced into an n-dimensional space, the kNN algorithm checks its proximity to the training points and


assigns the new tuple a class based on the closeness of the individual training tuples around it and the value of k. Closeness is defined in terms of a distance metric.

3.5.3 Random Forest

It is an ensemble technique comprising many decision tree-based classifiers. Individual trees are created by bootstrapping samples of data from the original dataset. It uses both random variable selection and bagging for building the individual trees. A decision tree develops a set of if-then rules based on which it partitions the dataset until the desired number of nodes is reached.

3.5.4 Bagging Classifier

Sometimes referred to as bootstrap aggregating, bagging is an ensemble learning algorithm designed to improve the accuracy and performance of machine learning models. It works by dividing the dataset into smaller sets with repeated (duplicate) samples and aggregates the individual predictions to obtain an accuracy score.

3.5.5 XGBoost Classifier

It is an ensemble machine learning algorithm that converts weak learners into strong ones. It is an optimized version of gradient boosting designed to be highly flexible and portable, and it focuses primarily on reducing bias.

3.5.6 Stacking Classifier

This ensemble learning technique combines various classification models to obtain the overall model accuracy. The individual classification models are trained on the complete dataset, after which a meta-classifier is fitted on their outputs.
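A minimal sketch of such a stack over the base classifiers named above, reusing the variables from the earlier sketches; the choice of logistic regression as the meta-classifier is an assumption:

from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=42)),
                ("xgb", XGBClassifier(eval_metric="logloss"))],
    final_estimator=LogisticRegression(),   # meta-classifier choice is an assumption
    cv=5)
stack.fit(X_train_s, y_train)
print("Test accuracy:", stack.score(X_test_s, y_test))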

4 Results and Discussion

This section illustrates the performance analysis of the proposed system. 70% of the dataset was used for training and 30% for testing, with a random state of 42. The adjustable parameters of each algorithm were tuned as per the GridSearchCV best-parameters list. Accuracy analysis was done on the basis of the confusion matrix of each classifier along with its recall, accuracy score and F1 score.

(Confusion matrices were produced for each model: logistic regression, k-nearest neighbours, random forest, bagging classifier, XGBoost and stacking classifier.)

The confusion matrix is a 2 × 2 matrix which shows the errors in properly classifying the output class. It consists of a set of four values, namely true positives, true negatives, false positives, and false negatives. XGBoost appears to have


Table 1 Performance metric table for all algorithms on the feature-selected dataset

Model           Accuracy (%)   Precision (%)   Recall (%)   F1 score (%)   AUC (%)
Logistic reg.   88.13          87.75           97.72        92.47          88.18
kNN             93.2           95.45           95.4         95.45          97.87
Random forest   91.52          89.79           100          94.62          96.66
Bagging         89.83          89.58           97.72        93.47          94.01
XGBoost         94.91          93.61           100          96.7           96.51
Stacking        83.05          90.47           86.36        88.37          87.72

predicted the most true positives, i.e. correctly identifying people who have PD. It also ensured that nobody who did not have PD was marked as sick (Table 1). The best model based on accuracy and recall is XGBoost. Recall is a measure that tells us what proportion of patients who actually had PD was diagnosed by the algorithm as having PD, and we achieved a recall of 100%. XGBoost and random forest show exceptional recall values, partly because they are based on the same concept of trees. kNN has the highest value of AUC-ROC, which means it is best at distinguishing between the output classes. The F1 score is the harmonic mean of precision and recall, both of which are of utmost importance when it comes to the medical domain (Fig. 4). The receiver operating characteristic (ROC) curve is a probability curve, and the area under the curve (AUC) depicts the degree of separability. It is an indicator of how well our model can distinguish between the various classes present in the output variable; a good model is one whose AUC-ROC is closest to 1. In Fig. 4, each colour denotes a classifier, with kNN performing the best, followed by XGBoost.

Fig. 4 AUC-ROC curve of all algorithms on feature-selected dataset


5 Conclusion

This paper verified the robustness of classifiers in predicting the presence of PD in a patient before the onset of the severe motor symptoms. The results obtained by following the pipeline of feature selection, feature engineering and model application demonstrated the effectiveness of such classifiers in the healthcare domain. Of all the well-performing algorithms, XGBoost turned out to be the most promising. It is invariant to the scale of the features by virtue of its fundamental architecture, making it robust and flexible. Hyperparameter optimization played an important role in making the best of each model. It is concluded that XGBoost was the best algorithm for classification, giving an accuracy of 94.91% on a dataset split in a 70:30 ratio, with an AUC of 96.51% and a recall of 100%. Future work would involve better hyperparameter optimization along with the use of neural networks to obtain higher accuracy and recall.

References

1. Revett K, Gorunescu G, Mohamed S (2009) Feature selection in Parkinson's disease: a rough sets approach. In: Proceedings of the international multiconference on computer science and information technology, IMCSIT'09, vol 4, pp 425–428
2. Ramani R, Sivagami G (2011) Parkinson disease classification using data mining algorithms. Int J Comput Appl 32(9):17–22
3. Bind S, Tiwari A, Sahani A (2015) A survey of machine learning based approaches for Parkinson disease prediction. Int J Comput Sci Inf Technol 6(2):1648–1655
4. Ene M (2008) Neural network-based approach to discriminate healthy people from those with Parkinson's disease. In: Annals of the University of Craiova—Mathematics and computer science series, vol 35, pp 112–116
5. Ozcift A, Gulten A (2011) Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms. Comput Meth Programs Biomed 104(3):443–451
6. Can M, Sciences N (2014) Boosting committee machines to detect the Parkinson's disease by neural networks
7. Das R (2010) A comparison of multiple classification methods for diagnosis of Parkinson disease. Expert Syst Appl 37(2):1568–1572
8. Caglar MF, Cetisli B (2009) Automatic recognition of Parkinson's disease from sustained phonation tests using ANN and adaptive neuro-fuzzy classifier
9. Introduction to boosted trees—xgboost 1.3.0-SNAPSHOT documentation (2020). https://xgboost.readthedocs.io/en/latest/tutorials/model.html
10. UCI Machine Learning Repository (2020) Parkinsons data set. https://archive.ics.uci.edu/ml/datasets/parkinsons

Chapter 17

XAI—An Approach for Understanding Decisions Made by Neural Network

Dipti Pawade, Ashwini Dalvi, Jash Gopani, Chetan Kachaliya, Hitansh Shah, and Hitanshu Shah

1 Introduction

A convolutional neural network (CNN) works on unstructured data such as images, audio clips and text to perform tasks such as classification and detection. It consists of two main parts, namely the convolutional part and the fully connected part. The first part uses filters to extract usable features from images: at preliminary levels it extracts textures and colours, and at later layers it extracts complex features like eyes, beaks, ears, etc. The second part uses the identified features to make decisions. Neural networks provide better accuracy than conventional ML models. Unfortunately, their inherent complexity and sophisticated nature do not make them very interpretable. Even with great accuracy, many high-stakes decisions cannot be taken without human validation. One such case is a malaria detection system in which pigmented blood cells of patients are examined: just providing a positive or negative label is not sufficient, and an explanation for the output is required. This paper presents an approach that extends the existing local interpretable model-agnostic explanations (LIME) and enables us to analyse neural network activations without necessarily having to understand the underlying architecture. As an implementation of our approach, we explain an already-built CNN classifier that classifies microscopic images of cells as being infected by the malarial parasite or not. This approach allows the examiner to go through the layers individually, observe neuron activations and detect the parts of the input image that trigger neurons the most, in order to assess the decision made by the neural network. Microscopic images of blood smears can have irregularities, and sometimes, although not that often, a blot on the smear might not be due to the presence of parasites but just a


pigment of the reagent used. We aim at identifying which features or characteristics (parts of the input image) were in support of the decision provided by the network.

2 Literature Survey

It has been considered important to separate the model explanation aspect from the design aspect, as the model designer might not have the subject knowledge required to understand the model well enough to explain its predictions. This has led to the idea of model-agnostic, or model-independent, approaches to explainability. Model-agnostic techniques effectively treat the model as a black box and focus solely on correlations between inputs and model outputs as an approach to explainability [1]. A partial dependence plot (PDP) analyses the changes in the predictions by evaluating the correlation between a feature and the prediction [2]; it illustrates the marginal effect of one or two features on the predicted outcome of a machine learning model [3]. The marginalizing used in a PDP has its own sources of inaccuracy, as equal weight is given to feature tuples that have very little likelihood of occurring together, and correlation between data parameters is not accounted for [4]. Another technique used for explainability is the individual conditional expectation (ICE) plot. It is quite similar to the PDP but displays one line per instance, showing how the prediction of that instance changes when a feature changes [5]. An ICE plot visualizes the dependence of the prediction on a feature for each instance separately, resulting in one line per instance, compared with a single overall line in a partial dependence plot. An ICE plot is more intuitive and can uncover heterogeneous relationships; however, the other disadvantages of PDPs still apply. Another approach is the accumulated local effects (ALE) plot, which explains how features, on average, affect a machine learning model's prediction [6]. ALE plots can be described as how model predictions change in a small window of the features around the point of interest, for the data instances in that window. The fact that full-fledged marginalization is avoided makes ALE better at handling unlikely tuples and at compensating for correlated data variables; however, it is much less intuitive and less stable, and it can shake drastically due to an instance of noise or an outlier in the window. Also, ALE works best with easily human-understandable variable descriptions [7]. Apart from these statistical methods, permutation feature importance provides a relatively novel take on the explainability approach: the value of a feature is determined by calculating the increase in the model's estimation error after the feature has been permuted. The permutation feature importance calculation was implemented for random forests by [8]; based on this idea, a model-agnostic version of feature importance was proposed [9]. The next approach is surrogate models [10]. Surrogate models are model-agnostic, as we do not need to know the type of the original model or any of its parameters [11]. But surrogate models are often quite inaccurate when it comes to emulating the behaviour of very complex neural networks; they may be able to approximate them over certain feature intervals but


The local surrogate model (LIME) [12] is the closest and best-performing interpretability model when it comes to ease of understanding complex features. First, a point of interest is picked for training a LIME model. The next step is to perturb the dataset and obtain predictions for the new data points. The new samples are then weighted according to their proximity to the point of interest. This yields a dataset of variations on which an interpretable model is trained, which then helps to interpret the prediction [13]. This technique is already implemented as a library in Python and as a package in R. The advantage of LIME is that it does not depend on the underlying model. LIME is very useful but is also quite slow on image data, so it is best suited to high-stakes decisions that do not need to be taken quickly [14, 15]. Another major issue is that the explanations can be unreliable: in [16], the authors showed that, in a simulated setting, the explanations of two very similar points differed greatly, and that repeating the sampling procedure can produce different explanations. Such instability makes the explanations hard to trust, and one should be particularly alert to it. Understanding neural networks with a model-agnostic technique like LIME reveals how, or which parts of, the inputs are positively or negatively correlated with the classification of a particular class; however, it does not throw enough light on which features the model has actually learnt or how they combine to produce a prediction [17]. Thus, our approach focuses not only on how inputs affect outputs but also on the features a neural network extracts, which layers/neurons represent them, and how they combine to give a particular prediction. Such explanations also help uncover incumbent biases and allow decision makers to ignore them or to use that knowledge to improve the model. Besides, most convolutional neural networks are extensions of state-of-the-art architectures, e.g. ImageNet-trained networks built to recognize objects in 1000 categories, further specialized for a specific use case such as detecting plants or diseases; explaining such networks matters precisely because they were not purpose-built for the specific tasks they are reused for.
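For reference, a hedged sketch of how the LIME library mentioned above is typically invoked on image data. `poi` (the input image as an H×W×3 array) and `classifier_fn` (a batch-of-images to class-probability function) are assumed to be supplied by the user, and the exact API may differ between library versions.

```python
from lime import lime_image

# poi: H x W x 3 numpy array; classifier_fn: images -> class probabilities
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    poi, classifier_fn, top_labels=2, num_samples=1000)
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
# `mask` marks the superpixels that most supported the predicted class
```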

3 Methodology

We commonly refer to non-explainable models as black boxes, as we do not know what goes on inside; we just receive the output. In our approach, we focus on understanding what happens inside the black box. We use the following terms: (a) Point of interest (POI), i.e. an input image fed to the model for which we want an explanation. (b) Pre-trained model: the black-box model (CNN) we want to understand, for which the POI serves as an input.


(c) XAI model: our interpretable model used for drawing correlations between the inputs and outputs of the black-box model. (d) Hyper-parameters: tuneable variables used to fit the XAI model so as to generate concrete explanations. (e) Target layer: the layer of the black-box model in which the user is interested, whose neurons the user wants analysed. (f) Activation graph: the graph of activation vs. neuron number for the target layer when the POI is fed to the black-box model. (g) Superpixels: a superpixel is a group of pixels that share common characteristics (such as pixel intensity); superpixels can be regarded as regions of an image having a similar colour, texture or pattern.

For our use case:

1. Pre-trained model: a CNN model built on top of the ResNet34 architecture with six additional fully connected sequential layers and a sigmoid activation function (a minimal sketch of such a model follows this module's steps below). The model is trained on a dataset of segmented-cell images from the thin blood smear slide images of the Malaria Screener research activity. The images are of parasitized (infected) and uninfected cells, 13,800 images of each class. The train-test-validation ratio is 7:2:1. The validation accuracy of the model was about 85.32%.

2. XAI model: a regularized linear regression model (ridge regression) that accepts a pre-trained model as input, dissects it and provides an explanation of the significance of a neuron in the target layer. The feature space consists of Boolean values that indicate whether a particular superpixel (refer to the visualizing neuron module) is present in the POI, and the labels are classification probabilities. The maximum number of classes for which our model gives probabilities is a hyper-parameter; for our use case, we kept the number of classes at 10, since we were interested only in the neurons with the top 10 activation values.

For simplicity of understanding, we break our system into the following four modules.

1. Feed Input Module: the objective of this module is to accept the pre-trained model, feed it the POI and obtain the output the user wants an explanation for. To train the model, we used a pre-trained ResNet34 model and modified it by attaching our own fully connected layers that give output probabilities for the 2 classes, namely infected and uninfected. The major steps involved in this module are:
• Provide the pre-trained malaria model as an input to our system.
• Feed the POI image to the pre-trained model.
• The pre-trained model processes the POI and returns an output that is used by the other modules.
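The following is a minimal sketch of the kind of pre-trained model described in module 1: a ResNet34 backbone whose classification head is replaced by additional fully connected layers ending in a sigmoid. The layer widths and the number of layers here are illustrative assumptions, not the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

def build_malaria_model():
    backbone = models.resnet34(pretrained=True)   # ImageNet-initialised backbone
    backbone.fc = nn.Sequential(                  # replace the 1000-class head
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 64), nn.ReLU(),
        nn.Linear(64, 2),                         # parasitized vs. uninfected
        nn.Sigmoid())
    return backbone
```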


Fig. 1 Input and output for pre-trained model

Figure 1 depicts a sample input image (POI) being fed to the pre-trained model, which processes it and classifies it as parasitized, inferring that the input blood cell image contains the malaria parasite.

2. Output Explanation Module: this module provides the user with a high-level explanation of the output received in the previous module. The explanation technique is applied to the pre-trained model as a whole, which ultimately returns the correlation between specific attributes of the POI and the output.

3. Layer Activation Module: this module returns a layer activation graph for the target layer. This helps the user obtain finer explanations at the neuron level and identify the neurons that were activated the most (refer to the results section for examples).
• Choose a hidden layer (the target layer) whose activation graph is needed.
• A layer activation graph for the selected layer is generated by storing and plotting the activation values of the neurons inside the selected hidden layer.
In order to obtain activations for intermediate layers, one option would be to stop forward propagation at the selected layer. However, this is not viable, as it might break the input model, and we might not be able to obtain activations for any other layer without reloading the model. Looking at forward propagation in detail, the flow of control passes from layer to layer, starting at the input layer and ending at the output layer. Thus, we use special functions called hook functions in PyTorch. A hook function can be attached to any layer and is passed an inline function as an argument, which is executed just before control passes to the layer after the hooked layer, i.e. the next layer. We use these inline functions to store the activations of the hooked layer and to generate the layer activation graph by plotting the stored values. Based on this activation graph, any neuron with a significant activation can be chosen for visualization (a sketch of this hooking mechanism follows the module list below).

4. Visualizing Neuron Module: this module returns a correlation between the POI attributes and the activation of a particular neuron.
• Choose a neuron from the target layer by referring to the layer activation graph generated in the previous module.
• Our explanation technique is then applied to the selected neuron internally to obtain the correlation between POI attributes and the activation of the selected neuron (refer to the results section for examples).
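Below is a minimal PyTorch sketch of the hook mechanism described in the layer activation module. `model`, `target_layer` (a module inside `model`) and `poi_batch` are assumed to exist; the softmax normalization follows the suggestion made later in the conclusion.

```python
import torch
import matplotlib.pyplot as plt

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # store a detached copy; softmax keeps values comparable across layers
        activations[name] = torch.softmax(output.detach().flatten(), dim=0)
    return hook

handle = target_layer.register_forward_hook(make_hook("target"))
with torch.no_grad():
    _ = model(poi_batch)          # normal forward pass on the point of interest
handle.remove()                   # remove the hook; the model is left unchanged

plt.plot(activations["target"].cpu().numpy())   # activation vs. neuron number
plt.xlabel("Neuron number"); plt.ylabel("Activation")
plt.show()
```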


Our approach focuses on generating data points (images, in this case) that are close to the POI and fitting our interpretable model (XAI model) to understand the correlation between segments of the image (POI) and the neuron activation. To generate training images, the POI is segmented into superpixels. Multiple new samples are generated by turning superpixels on or off, i.e. keeping a superpixel intact or replacing it with black. These samples are fed to the pre-trained model, and the corresponding activations obtained at the target layer are saved as labels for training our XAI model. In order for explanations to be local, the sample weights are not random but are set so as to reflect similarity to the POI. While generating samples, each superpixel has a 50% probability of being included. Consequently, the distance metric is the ratio of the number of superpixels included in the sample image to the total number of image segments, as this ratio denotes the extent of similarity between the sample and the POI. The categorical Boolean values representing the inclusion or exclusion of each superpixel form the input to our XAI model, the corresponding activations form the labels, and the distance metric forms the sample weights for the XAI model. We prefer ridge regression, to reduce overfitting. For linear regression, the cost function of Eq. 1 is used:

$$\text{Cost Function} = \sum_i \left[ Y_i - F(X_i) \right]^2 \quad (1)$$

where, for each data instance i, Y_i is the actual output label, X_i is the input feature vector and F is the model. The number of input features to our XAI model depends on the number of superpixels, and an increase in their number may lead to overfitting. We therefore use ridge regression with regularization parameter λ, which changes the cost function to Eq. 2:

$$\text{Cost Function} = \sum_i \left[ Y_i - F(X_i) \right]^2 + \lambda \, \lVert W \rVert^2 \quad (2)$$

Here, we optimize not only the difference between the labels and the outputs but also shrink the coefficient vector W, which reduces overfitting. Thus, by training the XAI model with custom sample weights, we ensure that the model converges to a local optimum that signifies an optimal local explanation. The coefficients of our XAI model correspond to the importance of a particular superpixel to the activation of the chosen neuron.
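A hedged sketch of the XAI-model training loop described above, using SLIC superpixels and scikit-learn's ridge regression. `poi` (the POI as an H×W×3 float image) and the helper `neuron_activation` (returning the target neuron's activation for one image) are illustrative assumptions, as are the segment and sample counts.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

segments = slic(poi, n_segments=50, start_label=0)   # superpixel label per pixel
n_seg = segments.max() + 1
rng = np.random.default_rng(0)

X, y, w = [], [], []
for _ in range(500):
    mask = rng.integers(0, 2, n_seg)                 # each superpixel kept with p = 0.5
    sample = poi.copy()
    sample[np.isin(segments, np.flatnonzero(mask == 0))] = 0  # black out "off" superpixels
    X.append(mask)
    y.append(neuron_activation(sample))              # assumed helper
    w.append(mask.sum() / n_seg)                     # similarity of the sample to the POI

xai = Ridge(alpha=1.0)
xai.fit(np.array(X), np.array(y), sample_weight=np.array(w))
importance = xai.coef_    # per-superpixel contribution to the neuron's activation
```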

4 Results

For demonstration purposes, we illustrate the explainability for the sample infected (parasitized) and uninfected cells shown in Fig. 2. Figures 3, 4, 5, 6, 7 and 8 each consist of a sample input image, the neuron activation graph for a particular layer and the output image.


Fig. 2 Sample input images (infected and uninfected)

The neuron activation graph is plotted with the neuron number on the X-axis and its activation on the Y-axis. For analysis, the parasitized cell image is considered first. The application displays the number of layers and the neurons in each. Figures 3, 4, 5, 6 and 7 show the most activated neuron from each layer for the same input (POI). It is evident from all the images that the region shown in green superpixels contains the parasite and triggers the neurons the most in all the layers, where layer 2 is the second outermost layer and layer 6 is the innermost layer of the neural network.

Fig. 3 Sample input image, neuron activation graph for layer 2 and output image

Fig. 4 Sample input image, neuron activation graph for layer 3 and output image


Fig. 5 Sample input image, neuron activation graph for layer 4 and output image

Fig. 6 Sample input image, neuron activation graph for layer 5 and output image

Fig. 7 Sample input image, neuron activation graph for layer 6 and output image

To detect an uninfected cell, our tool has to explain that no peculiar stain can be seen. Looking at our model explanations for an uninfected cell, we made the following observation. For the uninfected cell in Fig. 8, the second neuron (1) is not activated, whereas the first neuron (0) is activated in the second-last layer. Another explanation of an uninfected cell (Fig. 9) paints a similar picture.


Fig. 8 Sample input image 1 (uninfected), neuron activation graph for layer 2 and output image

Fig. 9 Sample input image 2 (uninfected), neuron activation graph for layer 2 and output image

Neuron 0 is activated again and is affected mainly by areas spanning almost the entire body of the cell. The fact that vast uniformly coloured areas, i.e. areas without stain, contribute to the triggering of neuron 0 points to the conclusion that neuron 0 does not represent detection of a stain. The explanations are subjective, and there is no method that can gauge the correctness of an explanation; explainability requires a person with domain knowledge to consider the explanation provided and take action. As far as the scope of malaria detection is concerned, multiple observations point to the fact that cell images with distinct coloration are parasitized, and the explanations generated point in the same direction.

5 Conclusion and Future Scope

According to our hypothesis, to uncover features deeper in the neural network, we need to restrict forward propagation or save a snapshot of the model output at a user-defined layer. This can be done in PyTorch with the help of hook functions that can be made to activate during a forward or a backward pass. This has to be


done without making permanent changes to the model layers, so as not to affect the general functionality of the model and to be able to reuse it later to obtain intermediate outputs at a different layer. Further, in order to maintain uniformity across the different activation functions that might be used, we apply the softmax function to the intermediate outputs to convert the activations to probabilities. We can then segment our point-of-interest image into a specific number of segments and permute those segments to generate random images. Those images are fed to our network, and the intermediate outputs of the user-specified layer are stored. An interpretable function is then fitted to the image-segment (superpixel) data and the intermediate values in order to uncover relationships between parts of the image and the neurons in that particular layer. Hereafter, this project could help in understanding the significance of a model at the neuron level. Furthermore, it could help in understanding biases prevalent in the model or its sensitivity to certain input features, and in gaining intuition about the design of neural networks, i.e. increasing or decreasing the complexity of the network structure based on the clarity of the explanations. Hence, with expert validation, this approach can support high-stakes decisions in medicine, diagnosis, finance, defence and weaponry.

References

1. Molnar C (2019) Interpretable machine learning. Available online: https://christophm.github.io/interpretable-ml-book/
2. Ribeiro MT, Singh S, Guestrin C (2016) Why should I trust you? Explaining the predictions of any classifier. arXiv:1602.04938v3 [cs.LG]
3. Montavon G, Samek W, Muller KR (2017) Methods for interpreting and understanding deep neural networks. arXiv:1706.07979 [cs.LG]
4. Green DP, Kern HL (2010) Modeling heterogeneous treatment effects in large-scale experiments using Bayesian additive regression trees. In: Proceedings of the Annual Summer Meeting of the Society for Political Methodology, pp 1–40
5. Seifert C et al (2017) Visualizations of deep neural networks in computer vision: a survey. In: Cerquitelli T, Quercia D, Pasquale F (eds) Transparent data mining for big and small data. Studies in Big Data, vol 32. Springer, Cham
6. Qin et al (2018) How convolutional neural network see the world: a survey of convolutional neural network visualization methods. arXiv:1804.11191 [cs.CV]
7. Goldstein A, Kapelner A, Bleich J, Pitkin E (2015) Peeking inside the black box: visualizing statistical learning with plots of individual conditional expectation. J Comput Graph Statist 24(1):44–65. https://doi.org/10.1080/10618600.2014.907095
8. Chalkiadakis (2018) A brief survey of visualization methods for deep learning models from the perspective of explainable AI. Heriot-Watt University
9. Guidotti et al (2018) A survey of methods for explaining black box models. arXiv:1802.01933 [cs.CY]
10. Nguyen et al (2019) Understanding neural networks via feature visualization: a survey. arXiv:1904.08939v1 [cs.LG]
11. Bastani O, Kim C, Bastani H (2017) Interpretability via model extraction. Available: https://arxiv.org/abs/1706.09773


12. Gilpin et al (2019) Explaining explanations: an overview of interpretability of machine learning. arXiv:1806.00069 [cs.AI]
13. Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160
14. Das A, Rad P (2020) Opportunities and challenges in explainable artificial intelligence (XAI): a survey. arXiv:2006.11371
15. Tjoa E, Guan C. A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/tnnls.2020.3027314
16. Townsend J, Chaton T, Monteiro JM (2020) Extracting relational explanations from deep neural networks: a survey from a neural-symbolic perspective. IEEE Trans Neural Netw Learn Syst 31(9):3456–3470. https://doi.org/10.1109/TNNLS.2019.2944672
17. Lipton (2016) The mythos of model interpretability. arXiv:1606.03490 [cs.LG]

Chapter 18

Hybrid Shoulder Surfing Attack Proof Approach for User Authentication

Dipti Pawade and Avani Sakhapara

1 Introduction

With the improvement of technology, thousands of websites and applications have come to market, serving various purposes for customers. A single user has access to many such applications, and it becomes essential for the user to register for each application with a unique username and password. A person could use a single username-password pair for different applications, but choosing the same password for all of them is not considered good practice: if an attacker gets hold of that common password, he can access all the applications the user has registered for. So, for security purposes, it is recommended to use different passwords for most accounts. Even if one wants to keep the same password everywhere, each application enforces different username and password constraints, so one ends up with different username-password pairs anyway. It is usually recommended to use strong text passwords that are difficult to crack. A complicated textual password can withstand a brute-force attack, but it is still vulnerable to a shoulder-surfing attack [1], in which an attacker records the user's password while the user is typing, either using a device or by observation. Shoulder-surfing attacks can be of two types, weak shoulder-surfing and strong shoulder-surfing, as explained in Table 1. With the advent of smartphones, a shoulder-surfing attack through video recording can easily be carried out against textual passwords. Keyloggers are another major threat to textual passwords. A keylogger can be hardware or software intended to track and store every keystroke so that the key logs can be secretly transferred to an attacker over the Internet [2].


Table 1 Types of shoulder-surfing attack

Weak shoulder-surfing
Concept: An attacker just peeps in to observe a user's password, without recording it using a video camera or any other device.
Example: When a user enters his password at an ATM, he should be alert to any onlooker who might steal his password.

Strong shoulder-surfing
Concept: An attacker makes use of video recording equipment to record a user's password.
Example: An onlooker records the username and password entered in a Gmail account through a spy camera.

Memorizing complicated passwords is also a difficult task. One way to overcome the memorization drawbacks of textual passwords is to use images as passwords instead of text. It has been proven that humans memorize images better than text; for example, it is easier to understand a difficult concept from a diagram or flowchart than from a purely written explanation. In the most basic algorithm using graphical passwords, the user selects an image as a password in the registration phase; during the login phase, the user just clicks on the image that he selected during registration to log into his account [3]. If an attacker records this session with a video camera, he can easily find the user's password image and log into the system. This is the major drawback of this algorithm and makes it vulnerable to the shoulder-surfing attack. Other graphical password techniques are summarized in Table 2.

Table 2 Graphical password techniques

Cued click points [4]
Working description: Instead of clicking on an image as a whole, the user clicks on certain points on the image (cued click points) as the password.
Features: Improved usability and security; increases the workload for attackers.
Limitations: It is difficult for the user to click within the tolerance square of the click point; a shoulder-surfing attack is possible, as the attacker can video-record the click-point locations.

Convex hull [5]
Working description: The user selects three images out of all the images, visualizes the area formed by joining the center points of the selected images, and then clicks within that region for authentication.
Features: Comparatively more secure against the shoulder-surfing attack; secure from the brute-force attack because of the large number of images on the screen.
Limitations: It is difficult for the user to visualize and create a convex hull in the mind; a shoulder-surfing attack is possible for large regions.

Image-based recognition technique [6]
Working description: The user selects a set of images from a 2D grid; during login, the user has to click on the same set of images in the same sequence. The positions of the images displayed in the 2D grid change in each pattern.
Features: It is easier for the user to remember visual passwords than textual passwords; it is user friendly.
Limitations: Not at all robust against the shoulder-surfing attack; a brute-force attack is also possible, as the number of images is limited.

Pass points technique [7]
Working description: The user can click anywhere on the image within the tolerance region.
Limitations: It is not user-friendly, as the user needs to get acquainted with clicking inside the tolerance region; not completely robust against the shoulder-surfing attack.

Draw-a-secret technique [8]
Working description: The user has to write a password in a 2 × 2 grid. During login, the user has to rewrite the password within the same cells of the grid as at registration, but the password is not displayed on the grid while being written.
Features: Robust against the shoulder-surfing attack, but not user friendly.
Limitations: It is difficult for the user to write the password in invisible form and ensure that it is entered in the same cells of the grid as at registration.

GeoPass [9]
Working description: Uses a sequence of places as click points on Google Maps.
Features: Easy to use.
Limitations: Not robust against the shoulder-surfing attack.

To ensure maximum security, it becomes essential to make use of strong algorithms that can withstand the brute-force attack, the shoulder-surfing attack and other such attacks on the system. Our idea comes from the convex hull algorithm [5] and the ColorDots algorithm [10], which is an intersection-analysis algorithm. The Hybrid Authentication Approach (HAA) algorithm mainly focuses on securing the system against the brute-force attack, the shoulder-surfing attack and keylogger attacks. HAA makes use of animated objects and alpha-numeric characters over a user interface having a 3 × 3 grid of images. The authentication region is determined by the system by forming a triangle joining the 3 images selected by the user.


2 Proposed Hybrid Authentication Approach

In this section, we propose and implement a new shoulder-surfing-proof authentication algorithm, which improves on the approach proposed by Tzong-Sun Wu, Ming-Lun Lee, Han-Yu Lin and Chao-Yuan Wang [11]. We have adopted the Convex Hull Click Scheme to find the authentication region, but we have changed the way passwords are entered. Our Hybrid Authentication Approach (HAA) system consists of three phases: (1) registration phase, (2) authentication region generation phase and (3) authentication phase.

2.1 Registration Phase

Step 1: The user enters his personal details such as full name, username, date of birth and email-id, as shown in Fig. 1.
Step 2: As shown in Fig. 2, the user selects 3 images in such a way that a triangle can be formed. This triangle will form the authentication region of the user. The user does not have to memorize these images.
Step 3: The user chooses a color, as shown in Fig. 3. The user must remember this chosen color.
Step 4: The user chooses 3 objects, as shown in Fig. 4. The user has to remember these objects and the sequence in which they are chosen.
Step 5: The user chooses 3 alpha-numeric characters, as shown in Fig. 5. The user has to remember these characters and the sequence in which they are chosen.
The 3 objects, 3 alpha-numeric characters and the color of the triangle chosen by the user are stored in the database in encrypted form to ensure security.
Fig. 1 User personal details


Fig. 2 User selecting three images

Fig. 3 User selects one colour

2.2 Authentication Region Generation Phase

Step 1: In step 2 of the registration phase, the user selected 3 images. The midpoints of these 3 images are joined to form a triangle.
Step 2: The triangle's border color is set to the color selected by the user in step 3 of the registration phase. This triangle and its border color form the authentication region of the user, as shown in Fig. 6 (an illustrative sketch of this computation is given after Fig. 6).

Fig. 4 User selecting 3 objects

Fig. 5 User selects 3 alpha-numeric characters

Fig. 6 Authentication region generation
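The following is an illustrative Python sketch (under assumed data structures) of the authentication-region computation: the triangle joining the midpoints of the three selected images, together with the standard same-side point-in-triangle test used later to decide whether a moving object is inside the region. `selected_image_boxes` is a hypothetical name for the three chosen grid images.

```python
# selected_image_boxes: list of three (x, y, w, h) tuples, in pixels
def midpoint(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def _sign(p, a, b):
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, a, b, c):
    d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)   # all signs agree => p is inside

a, b, c = (midpoint(box) for box in selected_image_boxes)  # authentication region
```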

2.3 Authentication Phase

Step 1: The user enters his unique username.
Step 2: A complex graphical password interface is displayed, having many differently colored triangles in the background and objects as well as alpha-numeric characters moving around in the foreground, as shown in Fig. 7.
Step 3: The user identifies his authentication region as the triangle whose border color is the same as the color he chose while registering.
Step 4: Whenever an object or alpha-numeric character chosen by the user in the registration phase enters the authentication region of the user, the user presses the "spacebar" key. This object and its sequence are then authenticated against the database (a sketch of this check follows below).
Step 4 is repeated 6 times: 3 times for the 3 objects and another 3 times for the 3 alpha-numeric characters. If the user completes this step correctly, he is authenticated and thus logged into the system. The flow of the HAA system is shown in Fig. 8.
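A hedged sketch of the final check: the ordered sequence of symbols captured on the user's spacebar presses is compared against the stored secret. The paper states only that the secret is stored in encrypted form; the hash-based comparison below is an illustrative assumption, using the User1 sample from Table 3.

```python
import hashlib
import hmac

def digest(symbols):
    return hashlib.sha256("|".join(symbols).encode()).hexdigest()

def authenticate(captured, stored_digest):
    # captured: the 6 symbols, in order (3 objects, then 3 characters),
    # recorded on the user's "spacebar" presses
    return hmac.compare_digest(digest(captured), stored_digest)

stored = digest(["Diamond", "Square", "Triangle", "y", "9", "G"])  # User1, Table 3
print(authenticate(["Diamond", "Square", "Triangle", "y", "9", "G"], stored))  # True
```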

3 Results and Discussion

In this section, the password space and the user usability results of the proposed HAA system are discussed. Password space plays an important role in determining the security of a password scheme against the brute-force attack. In graphical passwords, the objects cannot be repeated; this leads to a smaller password space for graphical passwords than for textual passwords. The password space S_basic of graphical passwords is given by Eq. (1):

$$S_{basic} = \binom{N}{k} \quad (1)$$

Fig. 7 Graphical password interface


Fig. 8 Flow of HAA system

where N = total number of objects and k = number of objects selected by the user, which is also referred to as the password length (L). For N = 9 and k = 7, the password space is $\binom{9}{7} = 36$.

Consider the proposed HAA system. A password of length L consists of 1 color, k selected objects and T alpha-numeric characters. Let C be the total number of colors and N be the total number of objects. The total number of alpha-numeric characters is 62. The password space S_hybrid is given by Eq. (2):

$$S_{hybrid} = C \cdot \binom{N}{k} \cdot 62^{T} \quad (2)$$

For L = 7, C = 9, N = 9, k = 3 and T = 3, the password space is $9 \cdot \binom{9}{3} \cdot 62^{3} \approx 1.8 \times 10^{8}$.

Comparing Eqs. (1) and (2), the password space of HAA is increased to a great extent. It can be increased further by increasing the total number of colors and the total number of objects (a numerical check of Eqs. (1) and (2) is given after Table 3). The proposed HAA system was implemented and tested with different users. Table 3 shows a sample set of combinations of color, objects and alpha-numeric characters (chosen by the users at registration) for successful login.

Table 3 Sample set for different combinations of passwords

SN | User | Color | Obj1 | Obj2 | Obj3 | Alpha-numeric
1 | User1 | Yellow | Diamond | Square | Triangle | y, 9, G
2 | User2 | Brown | Circle | Square | Triangle | a, 4, R
3 | User3 | Green | Pentagon | Circle | Star | b, y, 0
4 | User4 | Red | Circle | Square | Star | r, 0, 8
5 | User5 | Pink | Diamond | Triangle | Pentagon | p, 0, 1

Figure 9 shows the time taken by each user for 5 successful login sessions.
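As a quick numerical check of Eqs. (1) and (2), a minimal Python sketch using the paper's values:

```python
from math import comb

S_basic = comb(9, 7)               # Eq. (1) with N = 9, k = 7
C, N, k, T = 9, 9, 3, 3
S_hybrid = C * comb(N, k) * 62**T  # Eq. (2)
print(S_basic, S_hybrid)           # 36 180175968, i.e. about 1.8e8
```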

Fig. 9 Variation in the login time of 20 users for 5 successful login sessions (time in seconds vs. number of successful logins, one line per user)

In general, it is observed that initially the user takes a longer time to login, but this time reduces as he gets trained. In some cases, for example User1's second login attempt, the time required is higher than for the first attempt; this is because the user's waiting time for the selected objects to enter the authentication region was longer. Considering all conditions, the mean login time is 17.97 s. This time can vary from user to user and can reduce further as the user's proficiency with the system increases.

4 Conclusion

The HAA system proposed in this paper uses a novel method of entering passwords and authenticating users using graphical passwords. It proposes a complex graphical user interface that displays triangles of unique colors, with objects and alpha-numeric characters moving around. This makes it difficult for an attacker to understand when and why the user pressed the "spacebar" key. HAA overcomes the attacks against mouse-clicking graphical password schemes. In many of the proposed graphical password methodologies, the user has to imagine the authentication region; our system, on the other hand, provides a user-friendly way in which the user can view the authentication region. On the basis of the simulations carried out, we have observed that HAA is resistant to both shoulder-surfing and brute-force attacks. HAA can be enhanced further by reducing the user's waiting time for the selected objects to enter the authentication region. A user's inclination towards a particular color or object can create hotspots, which should be avoided. To make the HAA system more secure and robust, these problems should be addressed.

References

1. Bandawane Reshma B, Gangadhar Mahesh M, Kumbhar Dnyaneshwar B (2014) Data security using graphical password and AES algorithm for e-mail system. Int J Eng Dev Res
2. Pawade D, Lahigude A, Reja D (2015) Review report on security breaches using keylogger and clickjacking. Int J Adv Found Res Comput 2(NCRTIT2015):55–59
3. Gao H, Liu X, Wang S, Liu H (2009) Design and analysis of a graphical password scheme. IEEE Computer Society, Washington, DC
4. Moraskar V, Jaikalyani S, Saiyyed M, Gurnani J, Pendke K (2014) Cued click point technique for graphical password authentication. Int J Comput Sci Mobile Comput (IJCSMC)
5. Samleti S, Kumar C, Prakash V, Kumar N, Kumar S (2014) Shoulder surfing resistant password authentication mechanism (using convex hull click scheme). Int J Adv Res Comput Commun Eng (IJARCCE)
6. Aishwarya N, Purva Suryavanshi D, Pratiksha Navarkle R (2018) Survey on graphical password authentication techniques. Int Res J Eng Technol 5(2):26–28
7. Renaud K (2009) On user involvement in production of images used in visual authentication. J Visual Lang Comput 20(1):1–15


8. Lin TH, Lee CC, Tsai CS, Guo SD (2010) A tabular steganography scheme for graphical password authentication. Comput Sci Inf Syst 7(4):824–841
9. Menga W, Zhu L, Li W, Han J, Li Y (2019) Enhancing the security of FinTech applications with map-based graphical password authentication. Future Gener Comput Syst 101, Elsevier
10. Jim L (2012) ColorDots: an intersection analysis resistant graphical password scheme for the prevention of shoulder-surfing attack. University of North Florida
11. Wu TS, Lee ML, Lin HY, Wang CY (2013) Shoulder-surfing-proof graphical password authentication scheme. Springer

Chapter 19

Redistribution of Dynamic Routing Protocols (ISIS, OSPF, EIGRP), IPv6 Networks, and Their Performance Analysis

B. Sathyasri, P. Janani, and V. Mahalakshmi

1 Introduction

The network here is an IPv6 network enabled with dynamic routing protocols such as OSPF, ISIS, RIP and EIGRP, so that routing information is updated dynamically and packets can be forwarded from source to destination. This routing concept is explained in this project, and the composition of the routing tables in the routers is also discussed by measuring the performance of the routing instances [1]. The goal is to design a new form of interconnection network with various protocols, using the methods of route selection and redistribution [3], permitting the configuration of an efficient routing system and measuring its performance metrics (Fig. 1).

2 Proposed System

Route redistribution [4] allows the exchange of routing information between different networks. In a network, every protocol has a routing information base (RIB), which stores its routing information [5]. Here, we create redistribution between the routing protocols (OSPF, EIGRP and ISIS).


Fig. 1 Combination of nodes and routers

2.1 Creation of Network

The proposed system uses a wired scenario in which the source and destination devices (routers) are connected [6]. Three different networks are created, and the protocols OSPF, EIGRP and ISIS are established on these networks, respectively. The networks are connected using a switch [7]. The network also includes a router and a host [8].

2.2 Redistribution of OSPF and EIGRP Network

Initially, the routes between the OSPF and EIGRP networks are redistributed, so that the connecting router holds in its routing table the routing information used for the route redistribution mechanism [9]. In EIGRP, timer variation results in less packet loss and improved convergence time, whereas in OSPF, timer variation has only a small impact on convergence time and packet loss [2].

2.3 Redistribution of OSPF and ISIS Network

While it is attractive to run a single routing protocol throughout an entire IP inter-network, multi-protocol routing is common for various reasons, including organization mergers and multiple divisions managed by different network administrators [10]. In these cases, OSPF and ISIS are used [11]. On detection of a node failure, the routing data is updated quickly by choosing an alternate path [12].

2.4 Redistribution Between EIGRP and ISIS Network

Redistribution is made necessary by the multi-protocol environment. The routes between ISIS and EIGRP are redistributed, and the entire network is connected with the respective protocols [13]. Now, the three different networks are connected using a switch [14].

3 Performance Evaluation

Latency: the mean time a data packet takes to travel from one point to another, measured in milliseconds (ms):

$$\text{Latency} = \text{Round Trip Time} / 2 \quad (1)$$

Round trip time (RTT), also called round trip delay, is the time required for a data packet to travel to a particular node and back again.

Throughput: the maximum amount of data that can be delivered to the destination within a given time constraint:

$$\text{Throughput} = \text{Packet Size} / \text{Latency} \quad (2)$$

It is given by the maximum of total data that can be delivered to the destination within the given time constrain. Packet Loss: In a network when one or more packets are said to be sent from source to destination, the measure of the ratio of packet loss is said by packet loss. Convergence Time This is a combination of routers which has similar topological data over the Web in which they function. Convergence time is the proportion of how quick a gathering of switches arrives at the condition of intermingling [15]. It is one of the primary plan’s objectives and a significant presentation pointer for


routing protocols, which implement mechanisms that allow all routers running the protocol to converge quickly and reliably:

$$\text{Convergence Time} = \text{Packet Loss} \times \text{RTT} \quad (3)$$
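The three metrics can be reproduced directly from the measured RTT, packet size and packet loss; the minimal sketch below recomputes the first row of Table 1.

```python
def link_metrics(rtt_ms, packet_size, packets_lost):
    latency = rtt_ms / 2.0                   # Eq. (1)
    throughput = packet_size / latency       # Eq. (2)
    convergence = packets_lost * rtt_ms      # Eq. (3)
    return latency, throughput, convergence

# First row of Table 1 (OSPF-EIGRP, packet size 500, loss 1, RTT 475 ms):
print(link_metrics(475, 500, 1))             # (237.5, 2.105..., 475)
```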

4 Result

See Tables 1, 2, 3, 4, 5 and 6.

Table 1 OSPF–EIGRP performance evaluation

Data packet size | Loss | RTT (ms) | Latency (ms) | Throughput (Mbps) | Convergence time (ms)
500 | 1 | 475 | 237.5 | 2.105 | 475
2000 | 2 | 535 | 267.5 | 7.479 | 1070
5000 | 1 | 610 | 305 | 16.939 | 610
10,000 | 3 | 786 | 393 | 25.445 | 2358
14,000 | 2 | 996 | 498 | 28.112 | 1992
18,000 | 1 | 1234 | 617 | 29.1734 | 1234

Table 2 OSPF–EIGRP latency-throughput graph


Table 3 OSPF–ISIS performance evaluation

Data packet size | Loss | RTT (ms) | Latency (ms) | Throughput (Mbps) | Convergence time (ms)
500 | 2 | 784 | 337.5 | 3.105 | 1568
2000 | 2 | 770 | 367.5 | 7.479 | 1540
5000 | 1 | 1061 | 405 | 16.939 | 1061
10,000 | 3 | 1324 | 493 | 25.445 | 3972
14,000 | 4 | 1494 | 598 | 28.112 | 5976
18,000 | 4 | 1548 | 717 | 29.1734 | 6192

Table 4 OSPF–ISIS latency-throughput graph

Table 5 EIGRP–ISIS performance evaluation

Data packet size | Loss | RTT (ms) | Latency (ms) | Throughput (Mbps) | Convergence time (ms)
500 | 1 | 460 | 230 | 2.115 | 460
2000 | 2 | 589 | 267.5 | 6.479 | 1178
5000 | 1 | 611 | 305.5 | 10.939 | 611
10,000 | 3 | 956 | 478 | 15.445 | 2868
14,000 | 2 | 980 | 490 | 20.112 | 1960
18,000 | 1 | 1034 | 517 | 22.1734 | 1034


Table 6 EIGRP–ISIS latency-throughput graph

5 Network Modelling

The IPv6 fabric is a combination of the protocols discussed above: OSPF (Figs. 2 and 3), EIGRP (Fig. 4) and ISIS (Figs. 5 and 6).

5.1 Redistribution Between EIGRP and ISIS

See Fig. 7.

5.2 Redistribution Between OSPF and ISIS Through EIGRP

Here, the connection between the two protocols OSPF and ISIS is established through EIGRP (Fig. 8).

Performance evaluation: OSPF–EIGRP network (Figs. 9 and 10), OSPF–ISIS network (Figs. 11 and 12), EIGRP–ISIS network (Figs. 13 and 14).


Fig. 2 Enabling IP address on each port (OSPF)

Fig. 3 Enabling OSPF protocol



Fig. 4 Enabling EIGRP protocol

Fig. 5 Reallocation between OSPF and EIGRP

Fig. 6 Redistribution between OSPF and EIGRP



Fig. 7 Redistribution between EIGRP and ISIS

Fig. 8 Redistribution of OSPF and ISIS through EIGRP

Fig. 9 OSPF–EIGRP packet transfer with varying packet size



Fig. 10 Tracing path of convergence (OSPF–EIGRP)

Fig. 11 OSPF–ISIS varying packet size and datagram size



Fig. 12 Tracing path of convergence (OSPF–ISIS)

Fig. 13 EIGRP–IS–IS packet transfer with varying packet size




Fig. 14 Tracing path of convergence (EIGRP–ISIS)

6 Conclusion and Future Enhancement

The main aim of our work is to provide an efficient approach for overcoming deficiencies in the network path, using an IPv6-configured network built with the user-friendly software GNS3. In our proposed type of interconnection, accuracy is implicit, versatility is achievable, and the end-to-end paths traversed by data packets can adapt to the performance metrics. From our survey, we can conclude that this approach in an IPv6 network is highly secure and offers a large address space (roughly 3.4 × 10^38 IP addresses). In this approach, the network was created using the robust protocols OSPF, EIGRP and ISIS; it increases throughput and reduces packet loss compared to the existing model.

References

1. Mukmin C, Antoni D, Surya Negara E (2018) Comparison of route redistribution on dynamic routing protocols (EIGRP into OSPF and EIGRP into IS-IS). ICIBA
2. Fiade A, Amelia (2017) Performance evaluation of routing protocols RIPv2, OSPF, EIGRP with BGP. IEEE
3. Ahmad Jaafar ANH, Salim S, Tiron LA, Mohd Hussin Z (2017) Performance evaluation of OSPFv3 and IS-IS routing protocols on IPv6 networks. ICE2T
4. Mohammad Z, Abusukhon A (2019) Performance analysis of dynamic routing protocols based on OPNET simulation. IJACSA
5. Masruroh SM, Robby F, Hakiem N (2016) Performance evaluation of routing protocols RIP, OSPF, and EIGRP in an IPv6 network. IEEE


6. Dey GK, Ahmed MM (2015) Performance analysis and redistribution among RIP, EIGRP & OSPF routing protocols in an IPv4 network. IEEE
7. Le F, Xie GG (2007) On guidelines for safe route redistributions. In: Proceedings of the SIGCOMM workshop on internet network management, pp 274–279
8. Le F, Xie GG, Zhang H (2007) Understanding route redistribution. In: Proceedings of ICNP, pp 81–92
9. Mahajan R, Wetherall D, Anderson T (2007) Mutually controlled routing with independent ISPs. In: Proceedings of the 4th USENIX symposium on networked systems design and implementation, pp 355–368
10. Sobrinho JL, Quelhas T (2012) A theory for the connectivity discovered by routing protocols. IEEE/ACM Trans Netw 20(3):677–689
11. Route selection in Cisco routers. Cisco Systems, San Jose, CA, USA, Doc. ID: 8651 (2008)
12. Gouda MG, Schneider M (2003) Maximizable routing metrics. IEEE/ACM Trans Netw 11(4):663–675
13. Benson T, Akella A, Maltz D (2009) Unraveling the complexity of network management. In: Proceedings of the 6th USENIX symposium on networked systems design and implementation, pp 335–348
14. Maltz DA, Xie GG, Zhan J, Zhang H, Hjálmtýsson G, Greenberg A (2004) Routing design in operational networks: a look from the inside. In: Proceedings of SIGCOMM, pp 27–40
15. Redistributing routing protocols. Cisco Systems, San Jose, CA, USA, Doc. ID: 8606 (2012)

Chapter 20

Single Layer Tri-UWB Patch Antenna for Osteoporosis Diagnosis and Measurement of Vibrational Resonances in Biomedical Engineering for Future Applications

Khalid Ali Khan and Aravind Pitchai Venkataraman

1 Introduction

In recent years, treatment and diagnosis based on microwave radio technologies have been rising exponentially in medical science. Over the last few decades, a variety of innovations and research efforts have targeted microwave-based treatment because of its non-ionizing, skin-depth and penetrating behavior. If the recent history of biomedical science and engineering is investigated, it comes to light that only the lower part of the microwave frequency range is widely used here. For example, the microwave range from 0.402 to 0.405 GHz (a bandwidth of 3 MHz) is reserved as the medical implant communication service (MICS) band because of its low body-tissue attenuation [1, 2]. 2400–2483.5 MHz is allotted to the Industrial, Scientific and Medical (ISM) band, whereas 2.369–2.4 GHz is designated as the MedRadio band by the U.S. FCC (Federal Communications Commission). A novel microwave system was designed by Le et al. [3] for microwave tomography, operational at 500 MHz–3 GHz; using 3 GHz as the maximum measurement frequency is the traditional choice for taking biological measurements and imaging of the human body (bone volume fraction, breast cancer, brain strokes, etc.). Other systems have also been designed for breast imaging and for collecting the transmitted signal with the help of a 4–10 GHz microwave antenna [4]. But it is a scientific fact that any microwave antenna fabricated on a high-permittivity substrate or medium behaves like a leaky-wave


structure. This leaky-wave effect can be minimized significantly at higher frequencies such as 4, 6 and 8 GHz. Therefore, the antenna developed and recommended in this paper allows operation of the transceiver signal over the upper end of the higher frequency band. This may be useful in osteoporosis diagnosis, vibrational resonance measurement and breast imaging systems for future use. Basically, this antenna is a physically small, single-layer, tri-band cum ultra-wide band (UWB) patch antenna designed on a single substrate. It covers the frequency band between 3.5 and 6 GHz for biomedical applications, especially osteoporosis diagnosis and biological vibrational resonance analysis. Osteoporosis is a big health issue in old age. Microwave-frequency antennas for human body analysis and measurement face various technical challenges in terms of impedance matching, compactness and radiation characteristics. Compactness and impedance matching can be achieved by the immersive antenna technique, in which a high-permittivity material such as water [5] or canola oil is used as the immersion liquid [6]. Water-based high-permittivity dielectric resonator antennas [7, 8] are also available. But using liquids inside the antenna system increases the volume of the device and complicates the clinical sanitation of these fluids. An alternative is an antenna fabricated on a white-gray mixture (a high-permittivity material). The proposed single-layer tri-UWB patch antenna is a modified form of the EGRH antenna [9], giving better results in the same biomedical treatment. Resonance at microwave frequencies in a biological system can be investigated when the system is coupled with an electromagnetic (EM) field radiator [10]. An EM field can rotate or translate charged particles, polar molecular structures or cellular components of a biological system. Microtubules in a cell, and other polar molecules, are strongly damped by the EM field [11]. In fact, water is a polar molecule and the major constituent of biological tissues; it undergoes oscillating motion by absorbing EM energy. Polar molecules and macromolecules (proteins, DNA) align themselves with the electromotive force (EMF) generated by the field and are damped together. The energy associated with this damping can be accumulated in rotational or vibrational modes. Rotational motion of the dipole molecules is the cause of tissue heating, whereas the under-damped mode (with decay time greater than the period of the RF oscillation) gives resonance absorption; in the over-damped mode, resonance absorption does not occur. In the study of resonance absorption, it has been shown that the radiated power (P), the scattering width (τ_S) and the relaxation time (τ) of an oscillating electric dipole can be calculated by the formulas given in Eqs. 1, 2 and 3, respectively [12]:

$$P = \frac{dW}{dt} = \frac{d_0^{2}\,\omega^{4}}{12\pi \varepsilon_0 c^{3}} \quad (1)$$


where d_0 = qa is the maximal oscillating dipole moment, q is the electronic charge and a is the oscillation amplitude. The energy associated with the oscillation of an electronic charge of mass m is W = ½ m ω² a².

$$\tau_S = \frac{P}{W} = \frac{q^{2}\omega^{2}}{6\pi \varepsilon_0 m c^{3}} \quad (2)$$

$$\tau = \frac{1}{\tau_S} \quad (3)$$

where ω = 2πc/λ, c is the speed of light in free space, λ the wavelength, m the mass of the charge, q the charge on the dipole and ε_0 the permittivity of free space.
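As a numerical check of Eqs. (2) and (3), the short sketch below reproduces the first row of Table 2 for q = e and m = 1.67 × 10⁻²⁷ kg (the values used later in Sect. 5).

```python
import math

e = 1.602e-19      # electronic charge, C
m = 1.67e-27       # proton mass, kg
c = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12   # permittivity of free space, F/m

def tau_s(f_hz):
    """Scattering width of Eq. (2) at frequency f."""
    w = 2 * math.pi * f_hz
    return (e**2 * w**2) / (6 * math.pi * eps0 * m * c**3)

ts = tau_s(3.37e9)
print(ts, 1 / ts)  # ~1.53e-6 s^-1 and ~6.5e5 s, the first row of Table 2
```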

2 Antenna Design Specifications and Discussion

As explained previously, the proposed antenna is designed to interact directly with the human body and exhibits three windows of ultra-wide operating bandwidth between 3.7 and 6.0 GHz (for a reflection coefficient under −6.5 dB). In order to match the average dielectric constant (permittivity) of bone, brain and other biological systems with a rich amount of water, the relative permittivity (ε_r) of the substrate has been taken to be 50, with a loss tangent of 0.001. This substrate is a bio-dielectric material, a white-gray compound composed of a specific mixture of corn flour, gelatin and distilled water [9]. The geometry of the proposed antenna is depicted in Fig. 1(a, b); the optimized inter-Dig-Cap geometry is sandwiched between two vertical rectangular plates. The proposed structure has two finger pairs, whose finger width, finger spacing and finger-overlap dimensions can be used to adjust the radiation characteristics of the antenna as per the scientific need and application. Apart from the finger pairs, other dimensions such as the terminal width and end gap also play a key role in controlling the size and radiation mechanism of the antenna. The geometrical dimensions of the inter-Dig-Cap and its optimized structure are given in Table 1 and Fig. 2, respectively. 0.035 mm thick copper is used as the radiating element, with a conductivity of 5.80 × 10⁷ S/m. The size of the rectangular patch is only 27.25 × 51 mm, and it is fed by a 50 Ω coaxial cable.


Fig. 1 a Proposed antenna geometry. b Dimensions of antenna geometry

Table 1 Specifications

Inter-Dig-Cap specification | Value
Terminal width | 21 mm
Finger width | 3 mm
Finger spacing | 3 mm
Finger overlap | 5 mm
End gap | 3 mm
Finger pairs | 2

The multiple strips or finger pairs (a double finger pair is used here) along the inter-Dig-Cap edge are the controlling elements for the magnetic field and its magnitude [13]. It is also well known that multiple slots or strips created in a rectangular patch antenna result in multiple-band characteristics with acceptable values of the reflection coefficient. However, such multiple-band patch antennas are usually fabricated on substrates with low relative permittivity, and the available bands may be wide or very narrow, on the order of 30–50 MHz; a narrow band is not very useful if it falls outside the FCC recommendation. It is a big technical challenge to design a patch antenna on a high-dielectric-constant material for medical applications in the lower frequency range (0.5–3.0 GHz) with S11 less than −10 dB, because a thicker substrate with a higher dielectric constant furnishes a narrower bandwidth.


Fig. 2 Inter Dig-Cap structure

3 Simulated Parametric Analysis and Study

The Sonnet Lite 15.53 software is used for the simulation. In this section, the EM-wave radiation performance of the antenna is evaluated on the basis of the reflection coefficient/return loss (S11) and the voltage standing wave ratio (VSWR), whereas the generated surface current density and surface charge density at the respective resonance frequencies are used to estimate the resting potential. This resting potential generates the oscillation in the cell membrane, as we will see later. The simulated value of S11 (reflection coefficient) is depicted in Fig. 3. In this antenna design, the bandwidth (BW) is taken where the reflection coefficient or return loss is below −7.0 dB. The return-loss simulation graph of the proposed antenna confirms its ultra-wide band (UWB) features in three different frequency bands. The first band extends from 3.73 to 4.27 GHz (13.11%) with a bandwidth of 540 MHz; the second band from 4.32 to 5.21 GHz (19.34%) with a bandwidth of 890 MHz; and the third band from 5.23 to 6.0 GHz (13.41%) with a bandwidth of 770 MHz. A specific frequency in each band is observed at


4.2, 4.60 and 5.74 GHz, where the minimum S11 is −25.24, −26.49 and −39.05 dB, respectively. Meanwhile, the plotted graph of the voltage standing wave ratio (VSWR) in Fig. 4 shows that the VSWR in this tri-band lies between 2.3 and 1.023; at the frequencies of minimum S11 discussed above, it is 1.116, 1.099 and 1.023, respectively. These values are very close to 1.0, which theoretically satisfies the performance of an ideal antenna. The simulated current distribution for the proposed antenna at the central resonance frequency of each band is depicted in Fig. 5. At the central resonance frequencies of the first band (4.20 GHz), second band (4.60 GHz) and third band (5.74 GHz), the maximum magnitude of the current distribution is 1.40, 1.50 and 1.50 A/m, respectively. This maximum makes a node at the center of the patch, with zero current at the edges of the patch (on both sides of the center node, at odd multiples (2n + 1)λ/4 of distance, where n = 0, 1, 2, …). Consequently, an electric-field node and a magnetic-field antinode exist at the center of the patch. A similar behaviour can be observed at the other resonance frequencies in the other bands. Figure 6 illustrates the charge distribution on the antenna surface, which is maximum at the high resonance frequency with low return loss. At the resonance frequency of 5.74 GHz, the maximum accumulation of charge per unit surface area at the node and anti-node points is 3.0 × 10⁻⁸ and 2.7 × 10⁻¹⁵ C/m², respectively.
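The quoted VSWR values follow from S11 through the standard conversion VSWR = (1 + |Γ|)/(1 − |Γ|) with |Γ| = 10^(S11/20); the minimal sketch below reproduces the three simulated values.

```python
def vswr(s11_db):
    gamma = 10 ** (s11_db / 20.0)     # reflection-coefficient magnitude from S11
    return (1 + gamma) / (1 - gamma)

for s11 in (-25.24, -26.49, -39.05):  # minimum S11 of the three bands, dB
    print(round(vswr(s11), 3))        # 1.116, 1.099, 1.023
```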

Fig. 3 Simulated value of S11 (reflection coefficient)


Fig. 4 Simulated VSWR of proposed antenna

Fig. 5 a Surface current density at 4.2 GHz. b Surface current density at 4.60 GHz. c Surface current density at 5.74 GHz

Fig. 6 a Surface charge density at 4.2 GHz. b Surface charge density at 4.60 GHz. c Surface charge density at 5.74 GHz


4 Result and Discussion

The simulated reflection coefficient and VSWR of the designed antenna on a material of high dielectric constant (equivalent to the relative permittivity of bone) show good agreement with a UWB patch antenna (BW > 500 MHz) over the desired tri-band frequencies. Every band of the tri-band antenna contains a series of resonance frequencies at which the reflection coefficient is very low (S11 < −20 dB), appearing as sub-bands within it. The simulated surface current distribution around the finger slots, observed at the resonance frequencies, cannot be neglected. The charge distribution on the antenna surface and its ground plate is produced by the coaxial-feeding excitation, and finally the TM10 resonant mode of EM-wave propagation exists. The maximum charge distributions at the central resonance frequencies of 4.2, 4.6 and 5.74 GHz of the respective bands are recorded as 3.2 × 10⁻⁸, 3.2 × 10⁻⁸ and 3.0 × 10⁻⁸ C/m², respectively. Referring to Fig. 5(a–c), when the proposed antenna operates in the lower frequency band (i.e. the first band), the largest portion of the patch area is covered by a high surface current density, and vice versa. Similar results are observed for the surface charge density as well.

5 Resonance Absorption and Radiative Width

A biological system absorbs radio-frequency signals like a broadband receiver [14] and is much smaller than the microwave wavelength. Hence, the coupling of its electric dipole moment to the radiated field is very small, and consequently the maximal absorption by the biological system is also very small. For typical values of charge q = e and mass m = m_p = 1.67 × 10⁻²⁷ kg, the scattering width and relaxation time can be calculated from Eqs. 2 and 3 at the central resonance frequencies of all bands, including their lower and upper limits, as given in Table 2.

Table 2 Resonance absorption and radiative width

Frequency (GHz)   τS (s)          T (s)          Band (GHz)
3.37              1.53 × 10⁻⁶     6.53 × 10⁵     3.37–4.28
4.2               2.37 × 10⁻⁶     4.21 × 10⁵     3.37–4.28
4.28              2.46 × 10⁻⁶     4.06 × 10⁵     3.37–4.28
4.32              2.51 × 10⁻⁶     3.98 × 10⁵     4.32–5.22
4.6               2.84 × 10⁻⁶     3.52 × 10⁵     4.32–5.22
5.22              3.66 × 10⁻⁶     2.73 × 10⁵     4.32–5.22
5.24              3.69 × 10⁻⁶     2.71 × 10⁵     5.24–6.0
5.74              4.43 × 10⁻⁶     2.25 × 10⁵     5.24–6.0
6.0               4.48 × 10⁻⁶     2.23 × 10⁵     5.24–6.0


Table 2 shows that the radiative width due to interaction with the electromagnetic field is very short (on the order of microseconds), whereas the radiative decay lifetime of such a state is very long. This indicates that the resonant mode is strongly damped, affecting the biology through absorption of the microwave energy.

5.1 Resonance in Microtubules and Polarization Potential

Pokorny et al. [11] verified that frequencies in the higher ranges (MHz and GHz) generate mechanical resonance in microtubules. It was further observed that such oscillation of the cell membrane may affect its biology. Substituting the specific capacitance of the cell membrane, Cm = 0.01 F/m², the resting potential at the central resonance frequencies of the tri-band antenna is calculated as

    Vm = Q / Cm                                           (3)

where Q is the charge density. Hence, the maximum resting potentials at the respective resonance frequencies are

    Vm = 3.2 × 10⁻⁸ / 0.01 = 3.2 × 10⁻⁶ V   at 4.2 GHz    (4)

    Vm = 3.2 × 10⁻⁸ / 0.01 = 3.2 × 10⁻⁶ V   at 4.6 GHz    (5)

    Vm = 3.0 × 10⁻⁸ / 0.01 = 3.0 × 10⁻⁶ V   at 5.74 GHz   (6)

The resting potential of Vm ≈ 3 μV is the polarization potential across the membrane arising from the dipole moment; it is generated by the electric charges accumulated at the node positions of the patch antenna at its resonance frequencies. The charge-density plots in Fig. 6a–c also verify that the charge on a characteristic sector of membrane with a surface area of 10⁻¹⁰ m² will be around 3.2 × 10⁻¹⁸ C over the full range of resonance frequencies in all bands.
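As a quick numerical check of Eqs. (3)–(6) and of the membrane-sector charge quoted above, the following minimal Python sketch uses the values given in the text; the variable names are ours.

```python
C_M = 0.01            # specific membrane capacitance, F/m^2 (from the text)
SECTOR_AREA = 1e-10   # characteristic membrane sector area, m^2

# Maximum surface charge densities (C/m^2) at the central resonance frequencies
charge_density = {"4.2 GHz": 3.2e-8, "4.6 GHz": 3.2e-8, "5.74 GHz": 3.0e-8}

for freq, q in charge_density.items():
    v_m = q / C_M               # Eq. (3): resting (polarization) potential
    q_sector = q * SECTOR_AREA  # charge on one characteristic membrane sector
    print(f"{freq}: V_m = {v_m:.1e} V, sector charge = {q_sector:.1e} C")
# -> V_m of about 3 microvolt and a sector charge of about 3.2e-18 C,
#    matching Eqs. (4)-(6) and the discussion above.
```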


6 Conclusion

A single-layer tri-band patch antenna with a double-finger Inter-Dig cap (interdigital capacitor) structure on a high-dielectric material has been developed in this work for biomedical measurements, especially UWB microwave body-scope (osteoporosis diagnosis) and biological vibration measurement applications. The antenna bandwidth in every band is larger than 500 MHz with S11 < −7 dB. The antenna VSWR varies from 1.099 to 2.2 over all operating bands, which lie between 3.70 and 6.0 GHz. The generated oscillating charge produces an oscillating dipole moment; because of this, a strong damping with a very short radiative period arises and a polarization potential of about 3 μV is generated across the cell membrane when the antenna is coupled with the body. Future work may extend this UWB antenna to operate over a higher frequency range (3–10 GHz) for microwave imaging and scanning in human and animal biomedical tests.

7 Future Work and Scope

In addition to the above results and conclusions, we recommend improving all parameters, such as the reflection coefficient (S11), VSWR, input impedance (Zi), surface current density, and surface charge density, by varying the end gap and terminal width of the same patch for the other bands.

References
1. El-Saboni Y, Magill MK, Conway GA, Cotton S, Scanlon WG (2017) Measurement of deep tissue implanted antenna efficiency using a reverberation chamber. IEEE J Electromagn RF Microw Med Biol 1(2):90–97
2. Yoo H, Cho Y (2016) Miniaturised dual-band implantable antenna for wireless biotelemetry. Electron Lett 52(12):1005–1007
3. Li D, Meaney PM, Raynolds T, Pendergrass SA, Fanning MW, Paulsen KD (2004) Parallel-detection microwave spectroscopy system for breast imaging. Rev Sci Instrum 75(7):2305–2313
4. Gibbins D, Klemm M, Craddock IJ, Leendertz JA, Preece A, Benjamin R (2010) A comparison of a wide-slot and a stacked patch antenna for the purpose of breast cancer detection. IEEE Trans Antennas Propag 58(3):665–674
5. Latif SI, Flores-Tapia D, Pistorius S, Shafai L (2015) Design and performance analysis of the miniaturised water-filled double-ridged horn antenna for active microwave imaging applications. IET Microw Antennas Propag 9(11):1173–1178
6. Latif SI, Flores Tapia D, Rodriguez Herrera D, Solis Nepote M, Pistorius S, Shafai L (2015) A directional antenna in a matching liquid for microwave radar imaging. Int J Antennas Propag 2015
7. Qian Y-H, Chu Q-X (2017) A broadband hybrid monopole-dielectric resonator water antenna. IEEE Antennas Wirel Propag Lett 16:360–363


8. Wang M, Chu Q-X (2018) High-efficiency and wideband coaxial dual-tube hybrid monopole water antenna. IEEE Antennas Wirel Propag Lett 17:799–802
9. Rashid S, Jofre L, Garrido A, Gonzalez G, Ding Y, Aguasca A, O'Callaghan J, Romeu J (2019) 3D printed UWB microwave bodyscope for biomedical measurements. IEEE Antennas Wirel Propag Lett. https://doi.org/10.1109/lawp.2019.2899591
10. Adair RK (1995) Effects of weak high-frequency electromagnetic fields on biological systems. In: Radiofrequency radiation standards. Plenum Publishing Corp., New York, pp 207–222
11. Pokorny J, Jelenek F, Trkval V, Lamprecht I, Holtzel R (1997) Vibrations in microtubules. Biophys J 48:261–266
12. Adair RK (2002) Vibrational resonances in biological systems at microwave frequencies. Biophys J 82:1147–1152
13. Kamal S, Mohammed AS, Ain MF, Najmi F, Hussin R, Ahmad ZA, Ullah U, Othman M, Ab Rahman MF (2020) A novel lumped LC resonator antenna with air-substrate for 5G mobile terminals. Prog Electromagn Res Lett 88:75–81
14. Adair RK (2002) Vibrational resonances in biological systems at microwave frequencies. Biophys J 82(3):1147–1152

Chapter 21

Development of Novel Evaluating Practices for Subjective Answers Using Natural Language Processing

Radha Krishna Rambola, Atharva Bansal, Parth Savaliya, Vaishali Sharma, and Shubham Joshi

Department of Computer Engineering, SVKM's NMIMS University MPSTME Shirpur, Dhule, India

1 Introduction

There are numerous ways to evaluate a student's knowledge. A major one has been the subjective examination, which evaluates a student's knowledge at a high level: the student expresses his or her opinion in multiple sentences in response to a question. Each university and educational institution has its own examination pattern based on subjective examinations. In the present Web-based era, most examinations are taken online, and computer-based evaluation of student performance plays a vital role worldwide. An automatic descriptive-response evaluation system would be very helpful for universities and educational institutions in assessing student performance efficiently [1]. In general, online education courses use an objective response framework that is very simple to assess and sustain, whereas university-level courses need to measure a student's conceptual understanding in line with conventional educational methods. For online objective-based analysis, proper evaluation techniques are readily


accessible, but subjective analysis requires a specific evaluation methodology. Subjective or descriptive examination plays a significant role. Automation can partially relieve today's mentors of tedious jobs such as setting question papers and the mundane checking of answer scripts. Even though there has been a tendency to move toward objective-type questions to reduce labor in this field, most scholars still consider such methods an inappropriate evaluation procedure [12]. First, the true skills and depth of knowledge of the students are not appropriately challenged. Second, there is huge room for examinees to take unfair measures. Third, instead of focusing on learning, students spend most of their time analyzing how to break down objective-type questions [12]. Many researchers believe that evaluation of essays by both evaluation tools and human graders contributes to a wide variance in measured student performance [2]. Certain tests are evaluated by checking for particular concepts: marks are allocated only if specific concepts are present; otherwise the answer is marked as incorrect [2]. So, a new framework is suggested to solve this problem. The objective of this paper is to introduce a system that assesses the performance of students at a higher level by evaluating descriptive questions. The system will also offer feedback to students in order to help them increase their grades. The presented work evaluates subjective answers automatically using natural language processing. Natural language processing (NLP) is a field of artificial intelligence that deals with the interaction between computers and human languages [3]. NLP is commonly used for question answering, automated text summarization, machine translation, answer-script evaluation, etc. Evaluating subjective answers using NLP will help resolve the difficulties faced in manual assessment. Instead of the manual process, NLP-based strategies are suitable for summary generation: the summarized text is fed as input in the form of keywords and phrases in order to compute different measures of similarity [13]. A similarity measure quantifies, in terms of semantic and syntactic structure, how similar two phrases or keywords are [13]. To extract the significant keywords from the model answer given by staff before evaluation, a PoS (part-of-speech) tagger is introduced. The extracted keywords are categorized as subordinate keywords, mandatory keywords, and technical keywords. The WordNet tool is used to map the literal words among the subordinate keywords to their associated synonyms. The written response of a student is given as input, and after assessment the system automatically assigns marks [4]. The method considers all relevant variables for scoring, such as spelling errors, grammatical mistakes, and different similarity measures. Natural language processing techniques make handling the English language much simpler.
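As a concrete illustration of the PoS tagging and WordNet synonym expansion just described, the following is a minimal sketch using NLTK; the function names and the example sentence are our own, not taken from the paper.

```python
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import wordnet

# One-time downloads of the required NLTK resources
for res in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(res, quiet=True)

def extract_keywords(text):
    """Keep nouns, verbs and adjectives as candidate keywords."""
    tags = pos_tag(word_tokenize(text))
    return [w.lower() for w, t in tags if t.startswith(("NN", "VB", "JJ"))]

def synonyms(word):
    """Collect WordNet synonyms so paraphrased answers can still match."""
    return {lemma.name().replace("_", " ")
            for syn in wordnet.synsets(word) for lemma in syn.lemmas()}

model_answer = "Photosynthesis converts light energy into chemical energy."
print(extract_keywords(model_answer))  # e.g. ['photosynthesis', 'converts', ...]
print(synonyms("convert"))             # synonym set used during matching
```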


2 Literature Survey

A survey was carried out by Xu and Reynolds in 2012 to evaluate students' written responses to a teacher leadership dilemma [5]. The primary objective of the study was to identify the accuracy of the categories generated by the IBM SPSS text analytics survey. Kudi and Manekar conducted a review on evaluating online descriptive responses with short-text matching. This approach uses machine learning to solve issues through text mining and focuses on matching short answers [6]. In 2010, Siddhartha and Sameen created the Automated Essay Grading (AEG) program [7]. The purpose of the system was to address local-language influence while correcting essays and to provide proper feedback to authors. AEG was based on natural language processing and machine learning techniques. In 2011, Maiga and Laurie [8] produced an auto-assessor with the goal of automatically marking students' short answers based on their semantic significance. The auto-assessor is supported by natural language processing and built on a component-based architecture. The components reduce phrases to their canonical form during preprocessing of both the given correct answers and the student responses; each word of the canonical correct answer is then compared with the canonical words of the student response, and marks are eventually awarded. Ade-Ibijola, Wakama, and Amadi developed an expert system, Automated Essay Scoring (AES), to score text answers. AES is based on information extraction (IE) [9] and consists of three basic modules: a knowledge base, an inference engine, and working memory [9]. The inference engine uses natural language processing to enforce pattern matching from the inference rules to the knowledge-base data; its NLP module includes a lexical analyzer, a filter, and a synonym-handler module. Using two parameters, percentage match and assigned mark, the accuracy evaluation is carried out by a fuzzy model that produces the student's response score. John and Jana produced C-rater to automatically grade student responses against concepts that C-rater has already defined. C-rater is supported by natural language processing and knowledge-representation methods. In this system, model responses are generated from the provided concepts, the students' responses are processed by NLP techniques, the matched concepts are listed, and grades are ultimately given. However, the method has limitations: no separate concepts are defined, and it is sensitive to spelling errors, unexpected comparable lexicons, and more [10]. Nandini and Uma Maheswari proposed a syntactical-relation-based feature extraction technique to evaluate descriptive-type answer scripts [1].


Their approach includes steps such as classification of questions, classification of responses, and evaluation of students' subjective responses, grading them with an appropriate score [1]. Patil and Patil did research work on "evaluating student descriptive answers using natural language processing (NLP)" [2]. The method evaluates the paper using NLP tools; the proposed system tries to consider the collective meaning of multiple phrases and also checks the student's response for grammatical and spelling errors. Aziz et al. [11] proposed a design that parses each sentence and establishes grammatical connections for it. The system aimed to evaluate short responses according to both textual and grammatical characteristics, with a basic limit of 200 words for a short text; it works with supervised learning methods that use training and testing phases. Roy and Chaudhuri [12] proposed a semi-automated evaluation method in which subjective questions are augmented with model answer points; their model also offers reward schemes and penalty provisions. Rahman and Hasan Siddiqui [13] suggested a method for evaluating answer scripts using natural language processing (NLP), employing a keyword-based summarization technique to generate an answer-script summary.

3 Applications of Natural Language Processing

3.1 Sentiment Analysis

For machines, understanding natural language is particularly difficult when it comes to opinions, given that people often use sarcasm and irony. Sentiment analysis can, however, recognize subtle nuances in emotions and opinions and determine how positive or negative they are. By analyzing sentiment in real time, you can measure customer reactions to a recent marketing campaign or product launch, get an overall sense of how customers feel about your business, and track social media mentions (and manage negative comments before they escalate). You can also carry out sentiment analysis periodically to understand what customers like and dislike about specific aspects of your business. These insights help you make smarter decisions, as they show exactly what needs to be improved.
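As a hedged illustration only (the chapter itself does not implement sentiment analysis), NLTK's bundled VADER analyzer shows how such polarity judgments can be obtained in a few lines:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

analyzer = SentimentIntensityAnalyzer()
for review in ("The new release is fantastic!",
               "Support never replied; I'm quite disappointed."):
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    print(review, "->", analyzer.polarity_scores(review)["compound"])
```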

3.2 Automatic Summarization

Information overload is a real problem when we need to access a specific, important piece of information from a huge knowledge base. In addition to summarizing the meaning of documents and information, it is also important to automatically summarize their emotional meaning, for instance when collecting social media data in order to understand it. Automatic summarization is especially relevant for providing an overview of a news item or blog post while avoiding redundancy across multiple sources and increasing the diversity of the content obtained.
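A minimal frequency-based extractive summarizer of the kind alluded to here can be sketched as follows; this is our own toy example, not the authors' implementation.

```python
import nltk
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

for res in ("punkt", "stopwords"):
    nltk.download(res, quiet=True)

def summarize(text, n_sentences=2):
    """Score each sentence by summed word frequency; keep the top n."""
    stop = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text)
             if w.isalpha() and w.lower() not in stop]
    freq = Counter(words)
    sents = sent_tokenize(text)
    ranked = sorted(sents, reverse=True,
                    key=lambda s: sum(freq[w.lower()] for w in word_tokenize(s)))
    top = set(ranked[:n_sentences])
    # Preserve the original sentence order in the summary
    return " ".join(s for s in sents if s in top)
```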

3.3 Email Classification

Running in the same vein is email classification, which you will be familiar with if you are a Gmail user. When you look at your Gmail inbox, your emails are categorized into three tabs: Primary, Social, and Promotions. Your personal emails go to Primary, notifications from social media platforms go to Social, and the company newsletters you signed up for land in Promotions. The system, however, is not 100% foolproof, which is why some newsletters may be filtered into your Primary tab (especially those containing more text than images).
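A toy version of such an email classifier can be trained with NLTK's Naive Bayes classifier; the labels and messages below are invented purely for illustration:

```python
import nltk
from nltk import NaiveBayesClassifier
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)

def features(text):
    # Bag-of-words presence features
    return {w.lower(): True for w in word_tokenize(text)}

train = [
    (features("Meeting notes attached for tomorrow"), "Primary"),
    (features("Project report deadline reminder"), "Primary"),
    (features("Alice tagged you in a photo"), "Social"),
    (features("Your friend sent you a message"), "Social"),
    (features("50% off sale ends tonight"), "Promotions"),
    (features("Limited time discount on shoes"), "Promotions"),
]
clf = NaiveBayesClassifier.train(train)
print(clf.classify(features("Huge discount this weekend")))  # likely 'Promotions'
```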

3.4 Conversational User Interface

A conversational user interface (CUI) is a computer interface that emulates a conversation with a real human being; a chatbot, for instance. A chatbot offers an interface that allows machines and users to interact via text. The history of chatbots and their evolution has been impressive: they have come a long way from performing the role of a customer-service agent with a predefined set of Q/A to becoming an alternative to mobile apps. As a Q/A platform, a chatbot is a text-based CUI that enables users to place orders, check the status of their orders, sort data, book flight tickets, perform financial transactions, enhance marketing campaigns, and much more. To perform complex tasks, a chatbot needs to understand user input, interpret it, and respond accordingly. This is where NLP has a significant role to play.
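A minimal rule-based sketch conveys the idea of mapping user input to intents; the intents and phrases below are invented for illustration and are far simpler than a production chatbot:

```python
# Map each hypothetical intent to trigger phrases and a canned response.
INTENTS = {
    "order_status": (("status", "track", "where is my order"),
                     "Your order is out for delivery."),
    "book_flight": (("flight", "ticket", "book"),
                    "Sure, which route would you like to book?"),
}

def reply(user_text):
    text = user_text.lower()
    for phrases, answer in INTENTS.values():
        if any(p in text for p in phrases):
            return answer
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("Can you track my order?"))  # -> "Your order is out for delivery."
```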


4 Problem Definition

In this age of online education, evaluations take place through online media as part of teaching. Institutions are moving toward automated evaluation systems, which mainly evaluate multiple-choice or one-word-answer questions. Various online evaluation tools exist in the market at the moment, but their major drawback is that they support only objective-type questions such as multiple-choice or single-line answers. These evaluate the depth of student understanding only at a low level, and they also fail to check students' spelling and grammatical errors [2]: even if words are in the wrong order, answers are given marks for the trivial presence of those words in the student's answer. To address these challenges, a system will be developed to evaluate descriptive responses from students by considering the collective meaning of multiple sentences. The proposed scheme will also be able to provide students with feedback to help them improve their academic results [2].

5 Proposed Solution

The purpose of this research is to evaluate descriptive answer text automatically and assign marks to the specific question [13]. Most existing systems check only multiple-choice questions and single words, so the proposed system avoids this issue by taking into account the collective meaning of multiple sentences [2]. To achieve this, the answer is taken as input, NLP is used to extract the text from the answer, and the data is evaluated. Various similarity measures are calculated and used as parameters to assign marks. Figure 1 shows the flow of the system, from which the whole answer-evaluation process can be understood.

Proposed System Modules [14]: The system consists of four modules: the login module, the information extraction module, the weightage module, and the score generation module [14].

(A) Login Module
The login module is used to authenticate students and faculty. Once authenticated, they can proceed with their individual activities.

1. Faculty Login
The faculty member is authenticated using his or her ID and password. Once authentication is done, faculty can add questions and their respective model answers to the database, and can also add tests and subjects for the students. The faculty provide the questions and answers in the form of keywords and phrases, along with the weightage of all the parameters.


Fig. 1 Flow of system


Students will be shown the question, and their responses will be stored in the database. Each stored response is then compared with the faculty's model response, which is stored in the form of keywords and phrases [14].

2. Student Login
The student is authenticated using his or her ID and password and, provided all credentials are valid, is redirected to the page where the questions and an answer text box are displayed. The student can write the answer to the displayed question and, after completing the answers, submit them for evaluation [14].

(B) Information Extraction Module
In the information extraction module, all the keywords are extracted from the students' responses and from the model response stored by the faculty. The keywords convey the key concepts of the answer and are of great significance. Keywords that are repeated very often in the answer are given less value, while keywords that appear infrequently carry great importance [14].

(C) Weightage Module
The major factors considered for evaluation are:

1. Length: Length is a factor that most examiners consider vital during evaluation; a student is often required to explain a number of concepts in a single answer, which cannot be done without writing a particular number of words. The examiner therefore defines the required number of words, and the student's answer is evaluated against it.

2. Keyword: There are some words without which the answer cannot be marked as complete; the examiner defines such keywords as mandatory keywords. Beyond these, there are words that enhance the relevance of an answer and relate to its domain; these are defined as supporting keywords. The student's response is checked for these keywords, and marks are allocated depending on which keywords are present [14] (Table 1).

3. Grammar: Grammar defines the meaning of natural language through rules for its structure. Wrong grammar can change the meaning of an answer or make it hard to understand; examiners therefore consider grammar very important while evaluating answers. The percentage of marks allocated to the grammar of the answers is determined by the examiners.


Table 1 Keywords-based marking scheme [14]

Keywords matched in percentage   Marks obtained out of max marks for length and grammar
90–100                           100% of max marks
80–90                            90% of max marks
60–80                            80% of max marks
40–60                            50% of max marks
20–40                            30% of max marks
1–20                             10% of max marks
0                                0% of max marks

4. Accuracy: To achieve the required length, students may write irrelevant details in their submitted answers. To overcome this, the system checks what percentage of the submitted answer matches the mandatory and supporting keywords and generates an accuracy percentage. Examiners can define the required accuracy percentage based on the subject or the type of question.

(D) Score Generation Module
The final score for a student's answers is produced by adding the length, keyword, grammar, and accuracy marks from the individual sections (Table 2; a small code sketch of this computation is given after the table).

Faculty Interface
Faculty members are directed to the login screen, which has two fields: faculty ID and password. After correct authentication, the faculty member is directed to a page listing the subjects he or she currently teaches. The faculty member creates a test for a subject and writes its questions based on the course material, with the option to update or delete any question later. For each question, the faculty member stores the important keywords and the expected length of the answer. After storing all the questions and answers, the faculty member provides the weightage of the four factors (keywords, grammar, length, and accuracy), according to which the marks are calculated.

Table 2 Score generation criteria [14]

Criteria    Percentage of marks allocated
Length      10
Keywords    50
Grammar     20
Accuracy    20
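Combining the two tables, a possible implementation of the scoring rule might look like the following sketch; the function and variable names are our own, and the band boundaries are resolved upward where Table 1's ranges overlap.

```python
# Factor weights from Table 2, expressed as fractions of the total.
WEIGHTS = {"length": 0.10, "keywords": 0.50, "grammar": 0.20, "accuracy": 0.20}

def keyword_fraction(match_pct):
    """Map the keyword-match percentage to a fraction of max marks (Table 1)."""
    bands = [(90, 1.00), (80, 0.90), (60, 0.80),
             (40, 0.50), (20, 0.30), (1, 0.10)]
    for lower, fraction in bands:
        if match_pct >= lower:
            return fraction
    return 0.0

def final_score(length_pct, keyword_match_pct, grammar_pct, accuracy_pct,
                max_marks=10):
    """Weighted total per Table 2, with keywords banded per Table 1."""
    fractions = {
        "length": length_pct / 100,
        "keywords": keyword_fraction(keyword_match_pct),
        "grammar": grammar_pct / 100,
        "accuracy": accuracy_pct / 100,
    }
    return round(sum(WEIGHTS[k] * fractions[k] for k in WEIGHTS) * max_marks, 2)

print(final_score(100, 85, 90, 70))  # 85% keyword match falls in the 80-90 band
# -> 0.1*1.0 + 0.5*0.9 + 0.2*0.9 + 0.2*0.7 = 0.87, i.e. 8.7 out of 10
```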


The faculty member also provides the start time and end time of the test.

Student Interface
Students are directed to the login screen, which has username and password fields. After correct authentication, the student is directed to a page listing the subjects he or she is currently studying. Depending on the examination, the student selects the subject; the available tests are listed under that subject along with their dates. The student selects the particular test, keeping in mind the test timing set by the faculty. On clicking the test, the first question is displayed with a text box below it where the student can type the answer. At the bottom of the screen there are two options:

Next: go to the next question
Previous: go to the previous question.

On the last question page, a submit button allows the student to submit the test.

6 Technical Flow

• At the very beginning of the process, the system removes all the stop words listed in the stop-word corpus of the NLTK package in Python (e.g., a, the, an), so that only meaningful words are carried further in the process (Fig. 2).

• After removing the stop words, the system performs part-of-speech (POS) tagging, which helps lemmatize all words to their structural root form as defined in WordNet (e.g., running to run).

• Next, the length of the answer is calculated through a predefined function in Python, which gives an insight into how long the answer is and verifies that the student has not submitted only the keywords and phrases.

• The data is then forwarded from the previous module and stored in a document-term matrix (DTM), from which the answer is compared with the mandatory keywords and their synonyms as submitted by the examiner/faculty. The presence of these keywords can alter the marks significantly, and marks are allocated depending on which keywords are present in the student's response.

• Further, the system first creates a copy of the submitted answer with corrected grammar, compares both copies, and, based on the percentage similarity between them, assigns marks for grammar. Grammar plays a key role in understanding the language, and this evaluation parameter is therefore considered important.


Fig. 2 Technical flow

• Finally, the system matches the entire submitted answer against the submitted mandatory and supporting keywords; the system can also add synonyms of the supporting keywords to the supporting-keywords list. Comparing the answer with these keywords gives the percentage of the answer that matches the


keywords; this percentage is taken as the accuracy of the answer, and the system allots marks on that basis. This parameter ensures that students write the answer in the correct context and about relevant topics only. A consolidated sketch of this pipeline is given below.
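The following consolidated sketch of the pipeline assumes NLTK for preprocessing and scikit-learn for the document-term matrix and cosine similarity; it is our own illustrative code under those assumptions, not the authors' exact implementation.

```python
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

for res in ("punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(res, quiet=True)

_LEMMATIZER = WordNetLemmatizer()
_STOP = set(stopwords.words("english"))

def _wn_pos(tag):
    """Map Penn Treebank tags to the WordNet POS constants."""
    return {"J": wordnet.ADJ, "V": wordnet.VERB,
            "N": wordnet.NOUN, "R": wordnet.ADV}.get(tag[0], wordnet.NOUN)

def preprocess(text):
    """Stop-word removal followed by POS-aware lemmatization."""
    tokens = [t for t in word_tokenize(text.lower())
              if t.isalpha() and t not in _STOP]
    return " ".join(_LEMMATIZER.lemmatize(t, _wn_pos(p))
                    for t, p in pos_tag(tokens))

def answer_similarity(student_answer, model_answer):
    """Build a document-term matrix and compare with cosine similarity."""
    dtm = CountVectorizer().fit_transform(
        [preprocess(student_answer), preprocess(model_answer)])
    return cosine_similarity(dtm[0], dtm[1])[0, 0]

# Answer length (for the length factor) can be checked separately, e.g.:
# len(word_tokenize(student_answer))
```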

7 Conclusion

A lot of software is available for objective analysis in online examinations, but resources are limited when it comes to subjective or descriptive analysis. Manual assessment and evaluation processes have many issues: they are prolonged and expensive, demand massive resources and a huge amount of effort, and place enormous pressure on teachers. Since e-learning systems have gained huge traction in education, this paper proposes an approach to decrease manual work and effectively evaluate textual assignments by subject experts using a given marking scheme. We have proposed a natural language processing (NLP)-based method to evaluate descriptive answers automatically. The major factors considered in the evaluation of descriptive answers are keywords, length, grammar, and accuracy. The proposed system scores the student's answer by judging the student's response against the model answer: if a student has written all the keywords declared in the model response, the highest marks are earned [14]. The system also provides feedback to students so that they can improve their knowledge. For matching responses, we used Python with the NLTK toolkit [13], and we consulted the worldwide dictionary for synonymous paraphrasing, i.e., WordNet.

8 Future Work

One shortcoming of the current method is that, so far, only textual components can be accommodated in the answers and question papers. To be a full-fledged artificial intelligence tool, images and symbols must be allowed at each point. A system can be developed in the future to evaluate subjective responses containing mathematical expressions and diagrams. The proposed system analyzes responses written only in English; it can also be extended to assess responses written in other languages [14].


References
1. Nandini V, Uma Maheswari P (2018) Automatic assessment of descriptive answers in online examination systems using semantic relational features. J Supercomput
2. Patil SM, Sonal Patil MS. Evaluating the student descriptive answer using natural language processing. Int J Eng Res Technol 3(3)
3. Meena K, Raj L (2014) Evaluation of the descriptive type answers using hyperspace analog to language and self-organizing map. In: 2014 IEEE international conference on computational intelligence and computing research, Coimbatore, pp 1–5
4. Lakshmi V, Ramesh V (2017) Evaluating students' descriptive answers using natural language processing and artificial neural networks. Int J Creat Res Thoughts (IJCRT) 5(4):3168–3173
5. Xu Y, Reynolds N (2012) Using text mining techniques to analyze students' written response to a teacher leadership dilemma. Int J Comput Theory Eng 4
6. Kudi P, Manekar A (2014) Online examination with short text matching. In: IEEE global conference on wireless computing and networking
7. Ghosh S, Fatima SS (2010) Design of an automated essay grading (AEG) system in Indian context. Int J Comput Appl (0975-8887) 1(11)
8. Cutrone L, Chang M, Kinshuk (2011) Auto-assessor: computerized assessment system for marking student's short-answers automatically. In: IEEE international conference on technology for education
9. Ade-Ibijola AO, Wakama I, Amadi JC (2012) An expert system for automated essay scoring (AES) in computing using shallow NLP techniques for inferencing. Int J Comput Appl (0975-8887) 51(10)
10. Sukkarieh JZ, Blackmore J (2009) C-rater: automatic content scoring for short constructed responses. In: Proceedings of the 22nd international FLAIRS conference
11. Aziz MJA, Ahmad FD, Ghani AAA, Mahmod R (2009) Automated marking system for short answer examination (AMSSAE). In: IEEE symposium on industrial electronics & applications (ISIEA 2009), pp 47–51
12. Roy C, Chaudhuri C (2018) Case based modeling of answer points to expedite semi-automated evaluation of subjective papers. In: 2018 IEEE 8th international advance computing conference (IACC), pp 85–90
13. Rahman M, Hasan Siddiqui F (2018) NLP-based automatic answer script evaluation. DUET J 4(1):35–42
14. Tulaskar A, Thengal A, Koyande K (2017) Subjective answer evaluation system. Int J Eng Sci Comput 7(4)
15. Rokade A, Patil B, Rajani S, Revandkar S, Shedge R (2018) Automated grading system using natural language processing. In: 2018 second international conference on inventive communication and computational technologies (ICICCT)
16. Saipech P, Seresangtakul P (2018) Automatic Thai subjective examination using cosine similarity. In: 2018 5th international conference on advanced informatics: concept theory and applications (ICAICTA)
17. Nikam P, Shinde M, Mahajan R, Kadam S (2015) Automatic evaluation of descriptive answer using pattern matching algorithm. Int J Comput Sci Eng 3(1):69–70
18. Kashi A, Shastri S, Deshpande AR (2016) A score recommendation system towards automating assessment in professional courses. In: 2016 IEEE eighth international conference on technology for education, pp 140–143
19. Praveen S (2014) An approach to evaluate subjective questions for online examination system. Int J Innov Res Comput Commun Eng 2(11)
20. Patil P, Joshi S (2014) Kernel based process level authentication framework for secure computing and high-level system assurance. Int J Innov Res Comput Commun Eng 2(1)


21. Bhosale H, Joshi S (2014) Review on DRINA: a lightweight and reliable routing approach for in-network aggregation in wireless sensor networks. Int J Emerg Trends Technol Comput Sci 2(11)
22. Jog S, Joshi S (2014) Review on self-adaptive semantic focused crawler for mining services information discovery. Int J Eng Res Technol (IJERT) 1(1)

Author Index

A
Ahuja, Laxmi, 81
Anoop, Chiluveru, 35
Appadurai, M., 125
Ashok Kumar, Chiluveru, 35

B
Balaji, A., 85
Bansal, Atharva, 205
Banyal, Rohitash Kumar, 19
Basha, Shaik Johny, 71
Bhanja, Mrityunjay Abhijeet, 145
Bhardwaj, Sushil, 1
Bhat, Amjad Husain, 1
Bissa, Ankita, 115

C
Chaudhary, Sarika, 135, 145

D
Dalvi, Ashwini, 155
Dhanyasri, R., 95
Diksha, 19

F
Fantin Irudaya Raj, E., 125
Firdous, Naira, 1

G
Garg, Anurag, 29
Gopani, Jash, 155
Gupta, Eshita, 29

H
Hamid, M.A., 105
Harinee, M.P., 95

I
Ibrahim Mamun, Md., 105

J
Jagan Mohan Reddy, D., 71
Janani, P., 179
Jatain, Aman, 135, 145
Joshi, Shubham, 205

K
Kachaliya, Chetan, 155
Kannan, E., 47
Khan, Khalid Ali, 193
Kusma Kumari, Ch, 57

L
Loga, K., 95

M
Mahalakshmi, G.S., 85
Mahalakshmi, V., 179
Malik, Vikas, 19
Mayank, Sharma, 35
Mridha, M.F., 105

N
Naresh Kumar, S., 81


P
Panjwani, Shweta, 81
Patel, Mayank, 115
Pawade, Dipti, 155, 167

R
Rahman, Afroza, 105
Rahul, Kumar, 19
Rajkumar, N., 47
Rambola, Radha Krishna, 205
Ranjeet Singh, Tomar, 35

S
Sakhapara, Avani, 167
Salim, Mohammad, 11

Sankaracharyulu, Pedagadi V.S., 57
Sathyasri, B., 179
Savaliya, Parth, 205
Sendhilkumar, S., 85
Senthil Kumaran, R., 95
Shah, Hitansh, 155
Shah, Hitanshu, 155
Sharma, Vaishali, 205
Suresh Kumar, Munaka, 57

U
Upadhyaya, Vivek, 11

V
Venkataraman, Aravind Pitchai, 193