Information and Communication Technology for Intelligent Systems: Proceedings of ICTIS 2020, Volume 1 [1st ed.] 9789811570773, 9789811570780

This book gathers papers addressing state-of-the-art research in all areas of information and communication technologies


English Pages XVI, 780 [756] Year 2021


Table of contents :
Front Matter ....Pages i-xvi
Modeling and Fuzzy Availability Analysis of Computer Networks: A Case Study (Ashish Kumar, Ombir Dahiya, Monika Saini)....Pages 1-10
Stochastic Modeling and Profit Evaluation of a Redundant System with Priority Subject to Weibull Densities for Failure and Repair (Monika Saini, Kuntal Devi, Ashish Kumar)....Pages 11-20
Convolutional Neural Networks: An Overview and Its Applications in Pattern Recognition (Aseem Patil, Milind Rane)....Pages 21-30
SAMKL: Sample Adaptive Multiple Kernel Learning Framework for Lung Cancer Prediction (Ashima Singh, Arwinder Dhillon, Jasmine Kaur Thind)....Pages 31-44
Optimal Multiple Access Scheme for 5G and Beyond Communication Network ( Nira, Aasheesh Shukla)....Pages 45-54
Exergy and Energy Analyses of Half Effect–Vapor Compression Cascade Refrigeration System (Mihir H. Amin, Hetav M. Naik, Bidhin B. Patel, Prince K. Patel, Snehal N. Patel)....Pages 55-75
Effect of Servicescape and Nourishment Quality on Client’s Loyalty in Fine Dining Restaurants: A Statistical Investigation (Aravind Kumar Rai, Ashish Kumar, Pradeep Singh Chahar, C. Anirvinna)....Pages 77-89
Application of Data Mining for Analysis and Prediction of Crime (Vaibhavi Shinde, Yash Bhatt, Sanika Wawage, Vishal Kongre, Rashmi Sonar)....Pages 91-102
Malnutrition Identification in Geriatric Patients Using Data Mining (Vaishali P. Suryawanshi, Rashmi Phalnikar)....Pages 103-109
Blockchain: A Survey on Healthcare Perspective and Its Challenges (Deepa Kumari, B. S. A. S. Rajita, Subhrakanta Panda)....Pages 111-119
Open Data Readiness Assessment Framework for Government Projects: Indian Perspective (Amrutaunshu Nerurkar, Indrajit Das)....Pages 121-129
Extended LBP Based Secret Image Sharing with Steganography (Sujit Kumar Das, Bibhas Chandra Dhara)....Pages 131-140
Efficient Partitioning Algorithm for Parallel Multidimensional Matrix Operations by Linearization (Kazi Saeed Alam, Tanvir Ahmed Shishir, K. M. Azharul Hasan)....Pages 141-149
An Efficient Two-Phase Metaheuristic for the Multiple Minimum Back-Walk-Free Latency Problem (Ha-Bang Ban, Dang-Hai Pham, Tuan-Anh Do)....Pages 151-159
Unfolding Healthcare: Novel Method for Predicting Mortality of Patients Within Early Hours of ICU (Rajni Jindal, Sarthak Aggarwal, Saanidhi)....Pages 161-168
Classification of Disaster-Related Tweets Using Supervised Learning: A Case Study on Cyclonic Storm FANI (Pankaj Kumar Dalela, Sandeep Sharma, Niraj Kant Kushwaha, Saurabh Basu, Sabyasachi Majumdar, Arun Yadav et al.)....Pages 169-178
Detection of Cardio Vascular Disease Using Fuzzy Logic (Shital Chaudhary, Sachin Gajjar, Preeti Bhowmick)....Pages 179-189
Experimental Evaluation of Motor Skills in Using Jigsaw Tool for Carpentry Trade (Sasi Deepu, S. Vysakh, T. Harish Mohan, Shanker Ramesh, Rao R. Bhavani)....Pages 191-201
Patent Trends in Higher Education of India: A Study on Indian Central Universities (J. P. Singh Joorel, Abhishek Kumar, Sanjay Tiwari, Ashish Kumar Chauhan, Ramswaroop Ahirwar)....Pages 203-214
Spatial Rough k-Means Algorithm for Unsupervised Multi-spectral Classification (Aditya Raj, Sonajharia Minz)....Pages 215-226
Enhancement and Comparative Analysis of Environmental Sound Classification Using MFCC and Empirical Mode Decomposition (Ridhima Bansal, Namita Shukla, Maghav Goyal, Dhirendra Kumar)....Pages 227-235
Effective Use of Naïve Bayes, Decision Tree, and Random Forest Techniques for Analysis of Chronic Kidney Disease (Rajesh S. Walse, Gajanan D. Kurundkar, Santosh D. Khamitkar, Aniket A. Muley, Parag U. Bhalchandra, Sakharam N. Lokhande)....Pages 237-245
The Assessments of Local Manager on the Quality of Administrative Civil Servants—A Case Study in Hanoi City, Vietnam (Ngo Sy Trung, Do Huu Hai, Vu Thi Yen Nga, Tran Thi Hanh)....Pages 247-259
An Overview of Various Types of CAPTCHA (Shivani Deosatwar, Swarnima Deshmukh, Vaibhavi Deshmukh, Reva Sarda, Lalit Kulkarni)....Pages 261-269
Low Pass Filter-Based Enhancement of Arabic Handwritten Document Images (M. Ravikumar, Omar Ali Boraik)....Pages 271-277
Marine Motion Control Using Single Neuron Fuzzy Logic Controller (T. K. Sethuramalingam)....Pages 279-288
Statistical Evaluation of Effective Trust Models for Wireless Networks (Shahin Sirajuddin Shaikh, Dilip G. Khairnar)....Pages 289-295
Comparative Study of Open-Source NOSQL Document-Based Databases (Nidamanuri Amani, Yelchuri Rajesh)....Pages 297-303
Hybridization of K-means Clustering Using Different Distance Function to Find the Distance Among Dataset (Kusum Yadav, Sunil Gupta, Neetu Gupta, Sohan Lal Gupta, Girraj Khandelwal)....Pages 305-314
A Survey on Spoofing Detection Systems for Fake Fingerprint Presentation Attacks (Riley Kiefer, Jacob Stevens, Ashok Patel, Meghna Patel)....Pages 315-334
Crop Prediction Based on Environmental Conditions and Disease Prediction (Gresha Bhatia, Nikhil Joshi, Srivatsan Iyengar, Sahil Rajpal, Krish Mahadevan)....Pages 335-344
Application of Secret Sharing Scheme in Software Watermarking (K. K. Aiswarya, K. Praveen, P. P. Amritha, M. Sethumadhavan)....Pages 345-353
Hybrid Approach for Predicting Heart Disease Using Optimization Clustering and Image Processing (Nibir Kumar Paul, K. G. Harsha, Prateek Kumar, Shynu Philip, Jossy P. George)....Pages 355-362
FP-MMR: A Framework for the Preprocessing of Multimodal MR Images (Amrita Kaur, Lakhwinder Kaur, Ashima Singh)....Pages 363-375
Machine Learning in Medical Image Processing (Himanshu Kumar, Yasha Hasija)....Pages 377-383
Adaptive Educational Resources Framework for ELearning Using Rule-Based System (Leo Willyanto Santoso)....Pages 385-396
A Progressive Non-discriminatory Intensity Equalization Algorithm for Face Analysis (Khadijat T. Bamigbade, Olufade F. W. Onifade)....Pages 397-403
Big Data Analytics in Health Informatics for Precision Medicine (Pawan Singh Gangwar, Yasha Hasija)....Pages 405-412
Software Tools for Global Navigation Satellite System (Riddhi Soni, Sachin Gajjar, Manisha Upadhyay, Bhupendra Fataniya)....Pages 413-419
Encryption and Decryption: Unraveling the Intricacies of Data Reliability, Attributed by Incorporating the Usage of Color Code and Pixels (Bikrant Bikram Pratap Maurya, Aman Upadhyay, Aniket Saxena, Parag Sohani)....Pages 421-431
Edge Intelligence-Based Object Detection System Using Neural Compute Stick for Visually Impaired People (Aditi Khandewale, Vinaya Gohokar, Pooja Nawandar)....Pages 433-439
MH-DSCEP: Multi-hop Dynamic and Stable Cluster-Based Energy-Efficient Protocol for WSN (Kameshkumar R. Raval, Nilesh Modi)....Pages 441-449
Blockchain Framework for Social Media DRM Based on Secret Sharing (M. Kripa, A. Nidhin Mahesh, R. Ramaguru, P. P. Amritha)....Pages 451-458
An Overview of Blockchain Consensus and Vulnerability (Gajala Praveen, Mayank Anand, Piyush Kumar Singh, Prabhat Ranjan)....Pages 459-468
Statistical Analysis of Stress Levels in Students Pursuing Professional Courses (Harish H. Kenchannavar, Shrivatsa D. Perur, U. P. Kulkarni, Rajeshwari Hegde)....Pages 469-477
Smart Helmet using Advanced Technology (K. Muni Mohith Reddy, D. Venkata Krishna Rohith, C. Akash Reddy, I. Mamatha)....Pages 479-488
Analyze and Compare the Parameters of Microstrip Rectangular Patch Antenna Using Fr4, RT Duroid, and Taconic Substrate (Prakash Kuravatti)....Pages 489-495
Performance Analysis of Hand-Crafted Features and CNN Toward Real-Time Crop Disease Identification (Vivek Tiwari, Aditi Agrahari, Sriyuta Srivastava)....Pages 497-505
Performance Analysis of GNSS Utility by Multi-constellation Over the Indian Region (Madhu Ramarakula, Goparaju V. R. Sai Sukesh)....Pages 507-515
Digital Learning: A New Perception to Learn Beyond the Classroom Boundary (Dweepna Garg, Radhika Patel, Rima Patel, Binal Kaka, Parth Goel, Bhavika Patel)....Pages 517-527
Performance Analysis of Channel Capacity of MIMO System Without CSI (Divya Singh, Aasheesh Shukla)....Pages 529-535
Firmware Injection Detection on IoT Devices Using Deep Random Forest (E. Arul, A. Punidha, V. D. Ambeth Kumar, E. Yuvarani)....Pages 537-544
Adaptive Protection Scheme for Renewable Integrated Microgrid—A Case Study (S. G. Srivani, C. Suresha, K. N. S. V. Theertha, D. Chandan)....Pages 545-554
Relevant Feedback-Based User-Query Log Recommender System from Public Repository (V. Kakulapati, D. Vasumathi, G. Suryanarayana)....Pages 555-568
Intelligent Sentiments Information Systems Using Fuzzy Logic (Roop Ranjan, A. K. Daniel)....Pages 569-578
Possibility Study of PV-STATCOM with CHB Multilevel Inverter: A Review (K. M. Nathgosavi, P. M. Joshi)....Pages 579-589
Design and Development of Wireless Sensor for Variable Temperature and for Various Security Purposes (Prabhakar Singh, Minal Saxena)....Pages 591-599
Analysis of Cloud Forensics Challenges and Solutions (Ashish Revar, Abhishek Anand, Ishwarlal Rathod)....Pages 601-608
Cohesion Measure for Restructuring (Sarika Bobde, Rashmi Phalnikar)....Pages 609-614
Analysis of F-Shape Antenna with Different Dielectric Substrate and Thickness (Radhika Raina, Komal Jaiswal, Shekhar Yadav, Dheeraj Kumar, Ram Suchit Yadav)....Pages 615-626
Analyzing Forensic Anatomization of Windows Artefacts for Bot-Malware Detection (Vasundhra Gupta, Mohona Ghosh, Niyati Baliyan)....Pages 627-635
Low-Power Two-Stage OP-AMP in 16 nm (Gopal Agarwal, Vedvyas Dwivedi)....Pages 637-642
Computation of Hopf Bifurcation Points in the Magnetic Levitation System (Sudarshan K. Valluru, Anshul Gupta, Aditya Verma)....Pages 643-650
Face Sketch-Image Recognition for Criminal Detection Using a GAN Architecture (Sunil Karamchandani, Ganesh Shukla)....Pages 651-659
Studying Network Features in Systems Biology Using Machine Learning (Shubham Mittal, Yasha Hasija)....Pages 661-669
Smart Predictive Healthcare Framework for Remote Patient Monitoring and Recommendation Using Deep Learning with Novel Cost Optimization (Anand Motwani, Piyush Kumar Shukla, Mahesh Pawar)....Pages 671-682
Enhanced Question Answering System with Trustworthy Answers (C. Valliyammai, V. P. Siddharth Gupta, Puviarasi Gowrinathan, Kalli Poornima, S. Yaswanth)....Pages 683-692
Rumour Containment Using Monitor Placement and Truth Propagation (Amrah Maryam, Rashid Ali)....Pages 693-702
Classification of Satellite Images (N. Manohar, M. A. Pranav, S. Aksha, T. K. Mytravarun)....Pages 703-713
Sign Language Recognizer Using HMMs (Venkata Sai Rishita Middi, Middi Appala Raju)....Pages 715-724
Attendance Monitoring Using Computer Vision (Akanksha Krishna Singh, Mausami, Smita Kulkarni)....Pages 725-732
Heart Disease Prediction Using Ensemblers Learning (Meenu Bhatia, Dilip Motwani)....Pages 733-743
Impact of Influencer Credibility and Content on the Influencer–Follower Relationships in India (Adithya Suresh, Akhilraj Rajan, Deepak Gupta)....Pages 745-751
Smart Employment System: An HR Recruiter (Kajal Jewani, Anupreet Bhuyar, Anisha Kaul, Chinmay Mahale, Trupti Kamat)....Pages 753-763
Alzheimer’s Disease Prediction Using Fastai (Chiramel Riya Francis, Unik Lokhande, Prabhjyot Kaur Bamrah, Arlene D’costa)....Pages 765-775
Back Matter ....Pages 777-780

Smart Innovation, Systems and Technologies 195

Tomonobu Senjyu Parikshit N. Mahalle Thinagaran Perumal Amit Joshi   Editors

Information and Communication Technology for Intelligent Systems Proceedings of ICTIS 2020, Volume 1


Smart Innovation, Systems and Technologies Volume 195

Series Editors Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-sea, UK Lakhmi C. Jain, Faculty of Engineering and Information Technology, Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas is particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/8767


Editors

Tomonobu Senjyu, Department of Electrical and Electronics Engineering, University of the Ryukyus, Nishihara, Japan

Parikshit N. Mahalle, Sinhgad Technical Education Society, SKNCOE, Pune, India

Thinagaran Perumal, Universiti Putra Malaysia, Serdang, Malaysia

Amit Joshi, Global Knowledge Research Foundation, Ahmedabad, India

ISSN 2190-3018    ISSN 2190-3026 (electronic)
Smart Innovation, Systems and Technologies
ISBN 978-981-15-7077-3    ISBN 978-981-15-7078-0 (eBook)
https://doi.org/10.1007/978-981-15-7078-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This SIST volume contains the papers presented at ICTIS 2020: the Fourth International Conference on Information and Communication Technology for Intelligent Systems. The conference was held during May 15–16, 2020, and was organized on the digital platform Zoom due to the COVID-19 pandemic. The supporting partners were InterYIT IFIP and the Knowledge Chamber of Commerce and Industry (KCCI). The conference targeted state-of-the-art as well as emerging topics pertaining to ICT and effective strategies for its implementation in engineering and intelligent applications. The objective of this international conference was to provide opportunities for researchers, academicians, industry professionals, and students to interact and exchange ideas, experience, and expertise in current trends and strategies for information and communication technologies. Besides this, participants were made aware of the vast avenues and the current and emerging technological developments in the field of ICT, and its applications were thoroughly explored and discussed. The conference aimed to attract a large number of high-quality submissions and stimulate cutting-edge research discussions among pioneering academic researchers, scientists, industrial engineers, and students from all around the world; to provide a forum for researchers to propose new technologies, share their experiences, and discuss future solutions for ICT design infrastructure; to provide a common platform for researchers, scientists, engineers, and students to share their views and achievements; to enrich technocrats and academicians by presenting their innovative and constructive ideas; and to focus on innovative issues at the international level by bringing together experts from different countries. Research submissions in various advanced technology areas were received, and after a rigorous peer-review process with the help of the program committee members and external reviewers, 76 papers were accepted for this volume, with an acceptance rate of 0.19.

The conference featured many distinguished personalities: Mike Hinchey, PhD, University of Limerick, Ireland, President, International Federation for Information Processing; Bharat Patel, Honorary Secretary General, Knowledge Chamber of Commerce and Industry, India; Aninda Bose, Senior Editor, Springer, India; Mufti Mahmud, PhD, Nottingham Trent University, UK; Suresh Chandra Satapathy, PhD, Kalinga Institute of Industrial Technology, Bhubaneswar, India; Neeraj Gupta, PhD, School of Engineering and Computer Science, Oakland University, USA; and Nilanjan Dey, PhD, Techno India College of Technology, Kolkata, India. We are indebted to all our organizing partners for their immense support in making this virtual conference possible. A total of 23 sessions were organized as part of ICTIS 2020, including 22 technical sessions and 1 inaugural session. Approximately 154 papers were presented in the 22 technical sessions, with rich discussion. The total number of accepted submissions was 112, with a focal point on ICT and intelligent systems. Our sincere thanks go to our Organizing Secretary, ICTIS 2020, Mihir Chauhan; to the Conference Secretary, ICTIS 2020, Aman Barot; and to the entire team of the Global Knowledge Research Foundation and the conference committee for their hard work and support in shifting ICTIS 2020 from physical to digital mode in these new normal times.

Tomonobu Senjyu, Nishihara, Japan
Parikshit N. Mahalle, Pune, India
Thinagaran Perumal, Serdang, Malaysia
Amit Joshi, Ahmedabad, India


About the Editors

Dr. Tomonobu Senjyu received his B.S. and M.S. degrees in Electrical Engineering from the University of the Ryukyus in 1986 and 1988, respectively, and his Ph.D. degree in Electrical Engineering from Nagoya University in 1994. Since 1988, he has been with the Department of Electrical and Electronics Engineering, University of the Ryukyus, where he is currently a Professor. His research interests include stability of AC machines, power system optimization and operation, advanced control of electrical machines and power electronics. He is a member of the Institute of Electrical Engineers of Japan and IEEE.

Dr. Parikshit N. Mahalle holds a B.E. degree in CSE and an M.E. degree in Computer Engineering. He completed his Ph.D. at Aalborg University, Denmark. Currently, he is working as a Professor and Head of the Department of Computer Engineering at STES Smt. Kashibai Navale College of Engineering, Pune, India. He has over 18 years of teaching and research experience. Dr. Mahalle has published over 140 research articles and eight books, and has edited three books. He received the "Best Faculty Award" from STES and Cognizant Technologies Solutions.

Dr. Thinagaran Perumal received his B.Eng., M.Sc. and Ph.D. in Smart Technologies and Robotics from Universiti Putra Malaysia in 2003, 2006 and 2011, respectively. Currently, he is an Associate Professor at Universiti Putra Malaysia. He is also Chairman of the TC16 IoT and Application WG National Standard Committee and Chair of the IEEE Consumer Electronics Society Malaysia Chapter. Dr. Thinagaran Perumal is the recipient of the 2014 IEEE Early Career Award from the IEEE Consumer Electronics Society. His recent research activities include proactive architecture for IoT systems; development of cognitive IoT frameworks for smart homes; and wearable devices for rehabilitation purposes.

Dr. Amit Joshi is currently the Director of the Global Knowledge Research Foundation. An entrepreneur and researcher, he holds B.Tech., M.Tech. and Ph.D. degrees. His current research focuses on cloud computing and cryptography. He is an active member of ACM, IEEE, CSI, AMIE, IACSIT (Singapore), IDES, ACEEE, NPA and several other professional societies. He is also the International Chair of InterYIT at the International Federation for Information Processing (IFIP, Austria). He has published more than 50 research papers, edited 40 books and organized over 40 national and international conferences and workshops through ACM, Springer and IEEE across various countries, including India, Thailand and Egypt, and in Europe.

Modeling and Fuzzy Availability Analysis of Computer Networks: A Case Study Ashish Kumar, Ombir Dahiya, and Monika Saini

Abstract The traditional theory of reliability is based on Bernoulli trials, i.e., on a binary classification into success or failure. This seems unrealistic for large complex systems such as computer networks. To overcome this issue, in the present study an effort has been made to develop a mathematical model of a computer network system and to analyze its fuzzy availability. The concepts of constant failure rates, constant repair rates, and a coverage factor have been used in the development of the model. The impact of the coverage factor and of the repair and failure rates of the components on the fuzzy availability of the system has been analyzed. The Markov birth-death process has been used to derive the Chapman-Kolmogorov differential-difference equations. The governing differential equations have been solved by the Runge-Kutta method of order four using MATLAB (the ode45 function).

A. Kumar · O. Dahiya · M. Saini, Department of Mathematics and Statistics, Manipal University Jaipur, Jaipur, Rajasthan 303007, India. e-mail: [email protected]

1 Introduction

During the last few decades, computer networks have become an integral part of human society, as most infrastructural systems, such as power plants, communication systems, commercial systems, healthcare and academic systems, have become critically dependent on them. With the shifting of life-critical and essential facilities onto the Internet, it becomes important to ensure the reliability and availability of these networks, because failure of these networks significantly affects the performance of network services and can have disastrous results. Reliability and availability are key attributes of system performance. In the literature, the traditional reliability theory approach has been used extensively for the evaluation of performance measures of systems. The traditional theory of reliability is based on Bernoulli trials, i.e., either success or failure. However, this seems unrealistic for large complex systems like computer networks. The assumption of Bernoulli states has been replaced by the fuzzy reliability theory developed by Zadeh [29]. Fuzzy reliability theory provides the possibility of studying all states that fall between the fully operative and down states. This approach is known as profust reliability. Both the profust and the traditional reliability approaches have their own importance, and neither can replace or dominate the other. In the current age, however, the performance of industrial systems based on computer networks can be efficiently analyzed using the profust reliability approach.

2 Literature Review

Fratta and Montanari [11] developed a Boolean algebra methodology for computing the reliability of a communication network, investigating terminal reliability. The complications that occur in the modeling of a network design were discussed by Ball [4], who performed a reliability analysis of networks. The well-known Markovian modeling approach was applied to the availability analysis of systems with constant rates by Dhillon and Singh [10]. Performance evaluation of a sugar plant was carried out by Kumar et al. [20] using the Markovian approach. The paper industry is a very complex system in which many components are arranged in series; among these components the washing system is a prominent one, and its availability analysis was done by Kumar and Singh [18]. A new procedure for obtaining reliability using fault trees and fuzzy set theory was proposed by Singer [25]. Ghafoor et al. [14] performed the reliability evaluation of a fault-tolerant multi-bus multiprocessor system. The effect of a fluctuating environment on system reliability was studied by Dayal and Singh [9]. System availability analysis using fuzzy intervals of confidence was discussed by Cheng and Mon [8]. A new scheme for evaluating system reliability using fuzzy number arithmetic was proposed by Chen [6]. Utkin and Gurov [28] articulated a general formal method for fuzzy reliability analysis. Lin and Chen [22] discussed the computational complexity of the reliability problem on distributed network systems. Singh and Mahajan [26] obtained the steady-state availability of a utensils manufacturing plant. The concept of imperfect repair in maintained systems was discussed for availability evaluation by Biswas and Sarkar [5]. Selvam et al. [24] carried out the availability and performance assessment of distributed processing networks considering two failure modes. Knezevic and Odoom [15] proposed a Petri nets methodology as an alternative to the fault tree approach. Loman and Wang [23] developed a reliability model and evaluated reliability for large-scale systems that require high reliability for operation. The use of the Markovian approach for queuing systems was applied to computer networks and communication systems by Sztrik and Kim [27]. Chen [7] offered a different way of examining the fuzzy reliability of systems by utilizing vague set theory. The Petri net approach was utilized by Kumar and Aggarwal [16] for the reliability evaluation of distributed systems. The concept of time-dependent fuzzy random variables in fuzzy reliability evaluation was discussed by Aliev and Kara [3]. Kumar et al. [19] developed a mathematical model for a butter oil manufacturing plant in which the units operate as a serial process, and derived fuzzy reliability measures. Fuzzy set theoretical concepts for parameter estimation of the repair and failure rates of industrial systems were discussed by Garg and Sharma [13]. Kumar and Kumar [17] examined the fuzzy availability of a biscuit manufacturing plant. Intuitionistic fuzzy numbers in place of conventional reliability were used for reliability evaluation by Kumar and Yadav [21]. A fuzzy Lambda-Tau approach for the performance evaluation of complex systems was utilized by Garg and Rani [12], and confidence intervals for the performance measures were developed. Aggarwal et al. [1] studied the steady-state availability and performance of a butter oil production system by developing a mathematical model using fuzzy reliability theory. Mathematical modeling and performance evaluation of the feeding system in a sugar plant were presented by Aggarwal et al. [2], and the fuzzy availability of the plant was investigated. However, most of the approaches used in the literature are time consuming, and computer networks do not obey the rule of binary states. So, in the present study, we study the computer network as a whole system in a fuzzy environment by using an advanced numerical method, the Runge-Kutta method of order four. The needed data have been collected from the IT personnel of a private university situated at Jaipur, India.

3 System Description

A computer network is a framework in which numerous computers are connected with one another to share data and resources. A computer network is designed as a combination of various subsystems. The main subsystems of a computer network are network cables, distributors, routers, internal network cards and external network cards. The network cables and distributors can operate in a degraded state. The subsystems are connected in series configuration with the preceding units, and complete failure of any one of the units results in the complete failure of the whole system. The state transition diagram of the computer network is shown in Fig. 1.

Fig. 1 State transition diagram

Subsystem A (Network Cables): The network cables play a key role in the system configuration. They are used to connect computers.

Subsystem B (Distributors): A cable can connect one computer to another via a serial port, but when a number of computers must be connected to form a network, a central body is used to connect them. This central body is known as a distributor.

Subsystem C (Router): A router is a device that acts as the central point among the computers and other devices that are part of the network. It is equipped with slots called ports. Computers and other devices are connected to the router using network cables.

Subsystem D (Internal Network Card): A network card is a fundamental component of a computer, without which the computer cannot be connected over a network. The motherboard has a slot for the internal network card, where it is inserted. Internal network cards are of two types: the first type uses a Peripheral Component Interconnect connection, while the second type uses the Industry Standard Architecture.

Subsystem E (External Network Card): External network cards can be divided into two categories: wireless and USB based.

Notations
αs (s = 1, …, 5) and βt (t = 1, …, 5): failure and repair rates of subsystems A, B, C, D and E.
Pi(t): probability that the system remains in state i at time t.
C: coverage factor, which varies from 0 to 1.
◯: operative state of the system; ♦: partially failed (degraded) state of the system; the remaining marker in Fig. 1 denotes a failed state of the system.

Assumptions
• Two or more units cannot fail simultaneously.
• All random variables are statistically independent.
• Units after repair become as good as new.
• Failure and repair rates of original and partially failed units are identical.
• All random variables related to failure and repair follow the exponential distribution.

4 Formulation of Mathematical Model

A mathematical model for the computer network has been formulated using the Markov birth-death process. The following Chapman-Kolmogorov equations have been derived:

dP1(t)/dt + [α1C + α2C + α3(1 − C) + α4(1 − C) + α5(1 − C)]P1(t)
    = β1P2(t) + β2P3(t) + β3P9(t) + β4P10(t) + β5P11(t)    (1)

dP2(t)/dt + [β1 + α1(1 − C) + α2(1 − C) + α3(1 − C) + α4(1 − C) + α5(1 − C)]P2(t)
    = β1P12(t) + β2P13(t) + β3P14(t) + β4P15(t) + β5P16(t) + α1CP1(t)    (2)

dP3(t)/dt + [β2 + α1(1 − C) + α2(1 − C) + α3(1 − C) + α4(1 − C) + α5(1 − C)]P3(t)
    = β1P4(t) + β2P5(t) + β3P6(t) + β4P7(t) + β5P8(t) + α2CP1(t)    (3)

dPj(t)/dt + βkPj(t) = αk(1 − C)P3(t)    (j = 4, 5, 6, 7, 8; k = 1, 2, 3, 4, 5)    (4)

dPm(t)/dt + βnPm(t) = αn(1 − C)P1(t)    (m = 9, 10, 11; n = 3, 4, 5)    (5)

dPs(t)/dt + βtPs(t) = αt(1 − C)P2(t)    (s = 12, 13, 14, 15, 16; t = 1, 2, 3, 4, 5)    (6)

with initial conditions

Pi(0) = 1 if i = 1, and Pi(0) = 0 if i ≠ 1.    (7)
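A quick steady-state reading of Eq. (4), an observation added here rather than one spelled out by the authors, makes the role of the coverage factor explicit: setting dPj(t)/dt = 0 gives Pj = αk(1 − C)P3/βk, so the probability held in each complete-failure state grows with the corresponding failure rate, shrinks with the repair rate, and is scaled by the factor (1 − C); analogous relations follow from Eqs. (5) and (6). This is the mechanism behind the monotone trends reported in Sect. 5.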

The fourth-order Runge-Kutta method has been used to solve the system of linear differential equations (1)-(6) together with the initial condition given by Eq. (7). The numerical results have been obtained under the set of assumptions stated above. The system's fuzzy availability has been computed for a span of 360 days. To highlight the significance of the investigation, different choices of failure rate, repair rate and coverage factor have been taken. The expression for fuzzy availability is formed as follows:

Fuzzy Availability = P0(t) + (4/5)P1(t) + (4/5)P2(t)    (8)
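To make the solution procedure concrete, the following is a minimal sketch in Python using a classical fixed-step RK4 integrator (the authors report using MATLAB's ode45). The 16-state indexing, the parameter values quoted in Sect. 5, and the reading of Eq. (8) as full weight for the operative state plus 4/5 weight for the two degraded states are assumptions made here for illustration.

```python
import numpy as np

# Failure (alpha) and repair (beta) rates of subsystems A..E, as quoted in Sect. 5.
alpha = np.array([0.0025, 0.0002, 0.0021, 0.0001, 0.004])
beta  = np.array([0.5, 0.71, 0.4, 0.55, 0.65])

def derivatives(P, C, a=alpha, b=beta):
    """Right-hand side of the Chapman-Kolmogorov equations (1)-(6).

    P is the 16-vector (P1..P16); C is the coverage factor.
    """
    dP = np.zeros(16)
    # State 1: fully operative.
    dP[0] = -(a[0]*C + a[1]*C + (a[2] + a[3] + a[4])*(1 - C)) * P[0] \
            + b[0]*P[1] + b[1]*P[2] + b[2]*P[8] + b[3]*P[9] + b[4]*P[10]
    # State 2: degraded operation via the network cables (subsystem A), Eq. (2).
    dP[1] = -(b[0] + a.sum()*(1 - C)) * P[1] + a[0]*C*P[0] \
            + b[0]*P[11] + b[1]*P[12] + b[2]*P[13] + b[3]*P[14] + b[4]*P[15]
    # State 3: degraded operation via the distributors (subsystem B), Eq. (3).
    dP[2] = -(b[1] + a.sum()*(1 - C)) * P[2] + a[1]*C*P[0] \
            + b[0]*P[3] + b[1]*P[4] + b[2]*P[5] + b[3]*P[6] + b[4]*P[7]
    # States 4-8: complete failure reached from state 3, Eq. (4).
    for k in range(5):
        dP[3 + k] = a[k]*(1 - C)*P[2] - b[k]*P[3 + k]
    # States 9-11: complete failure reached from state 1, Eq. (5).
    for n in range(2, 5):
        dP[6 + n] = a[n]*(1 - C)*P[0] - b[n]*P[6 + n]
    # States 12-16: complete failure reached from state 2, Eq. (6).
    for t in range(5):
        dP[11 + t] = a[t]*(1 - C)*P[1] - b[t]*P[11 + t]
    return dP

def fuzzy_availability(C, t_end=300.0, h=0.01):
    """Integrate with classical fourth-order Runge-Kutta and return A_fuzzy(t_end)."""
    P = np.zeros(16)
    P[0] = 1.0                      # initial condition (7): P1(0) = 1
    for _ in range(int(t_end / h)):
        k1 = derivatives(P, C)
        k2 = derivatives(P + 0.5*h*k1, C)
        k3 = derivatives(P + 0.5*h*k2, C)
        k4 = derivatives(P + h*k3, C)
        P = P + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    # Eq. (8): full weight for the operative state, 4/5 for the two degraded states.
    return P[0] + 0.8*P[1] + 0.8*P[2]

if __name__ == "__main__":
    for C in (0.0, 0.5, 1.0):
        print(f"C = {C:.1f}:  A_fuzzy(300) = {fuzzy_availability(C):.5f}")
```

Sweeping the coverage factor C at fixed failure and repair rates with this sketch reproduces the kind of comparison reported in Table 1, while varying one αi or βi at a time corresponds to the comparisons in Tables 2-6; the numbers are illustrative and are not claimed to match the published tables exactly.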


5 Performance Analysis

In this section, numerical results have been obtained for the fuzzy availability of the computer network using Eq. (8) for a fixed set of values of the various parameters. The investigation has been carried out with respect to different values of the repair rates, the failure rates and the coverage factor.

Effect of the coverage factor (C) on the fuzzy availability of the computer network: for a specified set of stationary failure and repair rates, the numerical results for fuzzy availability have been obtained for various values of the coverage factor, i.e., for C = 0–1, with respect to time, as shown in Table 1. The specified values are: α1 = 0.0025, α2 = 0.0002, α3 = 0.0021, α4 = 0.0001, α5 = 0.004 and β1 = 0.5, β2 = 0.71, β3 = 0.4, β4 = 0.55, β5 = 0.65. The time duration has been taken up to 300 h. From Table 1, it is observed that the fuzzy availability decreases with respect to time. The coverage factor plays a key role in the fuzzy availability of the system: if faults are detected correctly, i.e., the coverage factor is high, then the system is highly available for use. There is around 1% variation in the value of fuzzy availability between C = 0 and C = 1. Accordingly, we conclude that for C = 1 the system is highly available, whereas for C = 0 it is the least available.

Table 1 Effect of coverage factor (C) on the fuzzy availability of the computer network w.r.t. time

Time | C = 0  | C = 0.2 | C = 0.4 | C = 0.5 | C = 0.6 | C = 0.8 | C = 0.9 | C = 1
60   | 0.9886 | 0.99078 | 0.99276 | 0.99376 | 0.99486 | 0.99686 | 0.99796 | 0.99904
120  | 0.9886 | 0.99078 | 0.99276 | 0.99376 | 0.99486 | 0.99686 | 0.99796 | 0.99904
180  | 0.9886 | 0.99058 | 0.99268 | 0.99368 | 0.99478 | 0.99686 | 0.99796 | 0.99904
240  | 0.9886 | 0.99058 | 0.99268 | 0.99368 | 0.99478 | 0.99686 | 0.99796 | 0.99904
300  | 0.9886 | 0.99058 | 0.99268 | 0.99368 | 0.99478 | 0.99686 | 0.99796 | 0.99904

The effect of the failure and repair rates of the network cables on the fuzzy availability of the computer network has been analyzed for different values of these rates. The failure rate (α1) and repair rate (β1) lie in the intervals [0.0025, 0.2] and [0.5, 1.4], respectively. From Table 2, it is identified that availability decreases as the failure rate increases and increases as the repair rate increases. For C = 1, variation in the repair and failure rates has no impact on the system's availability.

Table 2 Effect of failure rate (α1) and repair rate (β1) of the network cables on the fuzzy availability of the computer network w.r.t. time

C.F.    | Time | α1 = 0.0025 | α1 = 0.02 | α1 = 0.2 | β1 = 0.5 | β1 = 0.98 | β1 = 1.4
C = 1   | 60   | 0.99904 | 0.99234 | 0.943   | 0.99904 | 0.99944 | 0.99968
        | 120  | 0.99894 | 0.99234 | 0.94274 | 0.99894 | 0.99944 | 0.99968
C = 0.8 | 60   | 0.99686 | 0.99128 | 0.93132 | 0.99686 | 0.99734 | 0.99738
        | 120  | 0.99686 | 0.9912  | 0.93132 | 0.99686 | 0.99724 | 0.99738
C = 0.5 | 60   | 0.99376 | 0.98996 | 0.9302  | 0.99376 | 0.994   | 0.994
        | 120  | 0.99376 | 0.98988 | 0.9302  | 0.99376 | 0.99392 | 0.9939
C = 0   | 60   | 0.9886  | 0.9885  | 0.9885  | 0.9886  | 0.9886  | 0.9886
        | 120  | 0.9886  | 0.9885  | 0.9885  | 0.9886  | 0.9886  | 0.9886

The effect of the failure and repair rates of the distributors on the fuzzy availability of the computer network has been analyzed for different values of these rates. The failure rate (α2) and repair rate (β2) lie in the intervals [0.0002, 0.5] and [0.71, 1.9], respectively. From Table 3, it is identified that availability decreases as the failure rate increases and increases as the repair rate increases. For C = 0.5, 0.8 and 1, there is a steep variation in the system's availability with respect to the failure and repair rates.

Table 3 Effect of failure rate (α2) and repair rate (β2) of the distributors on the fuzzy availability of the computer network w.r.t. time

C.F.    | Time | α2 = 0.0002 | α2 = 0.03 | α2 = 0.5 | β2 = 0.71 | β2 = 1.2 | β2 = 1.9
C = 1   | 60   | 0.99904 | 0.99098 | 0.91702 | 0.99904 | 0.99906 | 0.99898
        | 120  | 0.99894 | 0.99098 | 0.91692 | 0.99894 | 0.99906 | 0.99898
C = 0.5 | 60   | 0.99376 | 0.98912 | 0.86256 | 0.99376 | 0.99378 | 0.9937
        | 120  | 0.99376 | 0.98912 | 0.86246 | 0.99376 | 0.99378 | 0.9937
C = 0   | 60   | 0.9886  | 0.9885  | 0.9885  | 0.9886  | 0.9886  | 0.9886
        | 120  | 0.9886  | 0.9885  | 0.9885  | 0.9886  | 0.9886  | 0.9886

The effect of the failure and repair rates of the router on the fuzzy availability of the computer network has been analyzed for different values of these rates. The failure rate (α3) and repair rate (β3) lie in the intervals [0.0021, 0.9] and [0.4, 1.6], respectively. From Table 4, it is identified that availability decreases as the failure rate increases and increases as the repair rate increases. For C = 0.5, 0.8 and 1, there is a steep variation in the system's availability with respect to the failure and repair rates.

Table 4 Effect of failure rate (α3) and repair rate (β3) of the router on the fuzzy availability of the computer network w.r.t. time

C.F.    | Time | α3 = 0.0021 | α3 = 0.3 | α3 = 0.9 | β3 = 0.4 | β3 = 0.9 | β3 = 1.6
C = 1   | 60   | 0.99904 | 0.99904 | 0.99904 | 0.99904 | 0.99904 | 0.99904
        | 120  | 0.99894 | 0.99894 | 0.99894 | 0.99894 | 0.99894 | 0.99894
C = 0.5 | 60   | 0.99376 | 0.72522 | 0.46976 | 0.99376 | 0.99528 | 0.99568
        | 120  | 0.99376 | 0.72522 | 0.46964 | 0.99376 | 0.99518 | 0.99568
C = 0   | 60   | 0.9886  | 0.5694  | 0.3071  | 0.9886  | 0.9914  | 0.9924
        | 120  | 0.9886  | 0.5694  | 0.3071  | 0.9886  | 0.9914  | 0.9924

The effect of the failure and repair rates of the external network cards (ENC) on the fuzzy availability of the computer network has been analyzed for different values of these rates. The failure rate (α4) and repair rate (β4) lie in the intervals [0.0001, 0.83] and [0.55, 1.3], respectively. From Table 5, it is identified that availability decreases as the failure rate increases and increases as the repair rate increases. For C = 0, 0.5 and 0.8, there is a very steep variation in the system's availability with respect to the failure and repair rates.

Table 5 Effect of external network cards (ENC) failure rate (α4) and repair rate (β4) on the fuzzy availability of the computer network w.r.t. time

C.F.    | Time | α4 = 0.0001 | α4 = 0.5 | α4 = 0.83 | β4 = 0.55 | β4 = 0.9 | β4 = 1.3
C = 1   | 60   | 0.99904 | 0.99904 | 0.99904 | 0.99904 | 0.99904 | 0.99904
        | 120  | 0.99894 | 0.99894 | 0.99894 | 0.99894 | 0.99894 | 0.99894
C = 0.5 | 60   | 0.99376 | 0.68444 | 0.5678  | 0.99376 | 0.99378 | 0.99378
        | 120  | 0.99376 | 0.68444 | 0.5678  | 0.99376 | 0.99378 | 0.99378
C = 0   | 60   | 0.9886  | 0.5207  | 0.3968  | 0.9886  | 0.9886  | 0.9887
        | 120  | 0.9886  | 0.5207  | 0.3968  | 0.9886  | 0.9886  | 0.9887

The effect of the failure and repair rates of the internal network cards (INC) on the fuzzy availability of the computer network has been analyzed for different values of these rates. The failure rate (α5) and repair rate (β5) lie in the intervals [0.004, 0.91] and [0.65, 1.5], respectively. From Table 6, it is identified that availability decreases as the failure rate increases and increases as the repair rate increases. For C = 0, 0.5 and 0.8, there is a very steep variation in the system's availability with respect to the failure and repair rates.

Table 6 Effect of internal network cards (INC) failure rate (α5) and repair rate (β5) on the fuzzy availability of the computer network w.r.t. time

C.F.    | Time | α5 = 0.004 | α5 = 0.4 | α5 = 0.91 | β5 = 0.65 | β5 = 0.92 | β5 = 1.5
C = 1   | 60   | 0.9990  | 0.99904 | 0.99904 | 0.99904 | 0.99904 | 0.99904
        | 120  | 0.9989  | 0.99894 | 0.99894 | 0.99894 | 0.99894 | 0.99894
C = 0.5 | 60   | 0.9937  | 0.7627  | 0.58708 | 0.99376 | 0.99458 | 0.99538
        | 120  | 0.9937  | 0.7627  | 0.58698 | 0.99376 | 0.99458 | 0.99538
C = 0   | 60   | 0.9886  | 0.617   | 0.4157  | 0.9886  | 0.9903  | 0.9919
        | 120  | 0.9886  | 0.6169  | 0.4157  | 0.9886  | 0.9903  | 0.9919


6 Conclusion

The availability analysis of the computer network carried out above helps in increasing the successful operation of any network, or of any industry based on it. From the above analysis, we observe that the coverage factor, together with an increased failure rate of the subsystems, plays a key role in the failure of the structure. A comparative analysis indicates that the router, the internal network cards (INC) and the external network cards (ENC) have a more prominent effect on the system than the other units. The network cables and distributors have the capability to work in a degraded state, but the router, the internal network cards (INC) and the external network cards (ENC) fail directly. So, we conclude that by extending the fault coverage procedure, providing standby units for the router, the internal network cards (INC) and the external network cards (ENC), and implementing appropriate maintenance policies, the fuzzy availability can be enhanced.


Stochastic Modeling and Profit Evaluation of a Redundant System with Priority Subject to Weibull Densities for Failure and Repair Monika Saini, Kuntal Devi, and Ashish Kumar

Abstract In the present investigation, a stochastic model has been proposed for a redundant system of non-identical units, and its results are compared with those of the model of Kumar et al. (Rev Invest Oper 37(3):247–257, [7]). The concepts of preventive maintenance, priority to preventive maintenance of the duplicate unit over repair of the original unit, a repaired unit being as good as new, and Weibull densities for failure and repair have been considered in developing the system model, in which the original unit is operational and the duplicate unit is kept in cold standby. A semi-Markov approach and the regenerative point technique have been used to derive the expressions for the reliability indices of the system model, following the formulation of Kumar et al. [7]. The behaviour of the system performance has been investigated by plotting graphs of the difference in availability and profit between the two system models.

1 Introduction

Despite advances in modern science and technology, most mechanical, computational and electrical systems are becoming increasingly complex structures built from many small components. The unreliability of a single component affects the reliability of the whole system, so the dependability of the system depends on the reliability of these components. Researchers have developed many reliability improvement techniques for such systems, and redundancy is the most widely used of them. Redundancy is employed in two senses: active redundancy and standby redundancy. Li [9] performed a comparative study of active and standby redundancy in reliability evaluation and concluded that both techniques are useful in increasing the operational time of the system, although the standby redundancy technique is somewhat better.

M. Saini (B) · K. Devi · A. Kumar
Department of Mathematics and Statistics, Manipal University Jaipur, Jaipur, Rajasthan 303007, India
e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_2


Further, standby redundancy is classified into three categories: cold standby, warm standby and hot standby. Most authors, including Malik and Deswal [11], Ram et al. [13], Ram and Singh [12], Deswal and Malik [2] and Zheng et al. [15], chose cold standby redundancy because of its superiority over hot and warm standby under various sets of assumptions. Priority in repair activities, including preventive maintenance, has also been used to improve the reliability of system models. Kumar and Malik [5], Kumar et al. [8], Chopra and Ram [1] and Lin and Pham [10] developed various reliability models for redundant systems. In most of these investigations, constant failure and repair rates are assumed. However, because of excessive use, mechanical stress and improper handling, most engineering systems cannot attain constant failure and repair rates; their failure data follow distributions such as the Weibull, logistic and log-logistic distributions. Gupta et al. [3], Kishan and Jain [4], Kumar et al. [7], Saini and Kumar [14] and Kumar and Saini [6] studied the behaviour of single-unit and redundant systems whose failure and repair rates follow the Weibull distribution. From the above discussion, it is seen that many redundant systems have been developed under different sets of assumptions, but little attention has so far been paid to redundant systems of non-identical units under the concepts of preventive maintenance, priority and Weibull-distributed random variables. So, in the present study, an effort has been made to propose a stochastic model for a redundant system of non-identical units, in which one unit is operative and the other is kept in cold standby. Priority is given to preventive maintenance of the duplicate unit over repair of the original unit. The measures of system effectiveness, including state transition probabilities, mean sojourn times, steady-state availability, busy period and profit function, useful for manufacturers and designers, are obtained by using the semi-Markov process and the regenerative point technique.

2 Performance Evaluation of the System Model

The present section focuses on the development of a stochastic model of a two non-identical unit redundant system. The model is developed using the concepts of priority and the Weibull distribution. The possible transition states are shown in Table 1. Here, states S0, S1, S2, S3 and S4 are operative as well as regenerative states, S7 is the only failed regenerative state, and the remaining states are non-regenerative failed states.


Table 1  Possible states of the system model

S0 (O, DCs)      S4 (O, DPm)       S8 (DPM, WPm)
S1 (Pm, Do)      S5 (DPM, Fwr)     S9 (PM, DWPm)
S2 (Fur, Do)     S6 (FUR, DFwr)    S10 (PM, DFur)
S3 (O, DFur)     S7 (Fwr, DPm)     S11 (DFUR, Fwr)

3 Transition Probability Matrix and Mean Sojourn Time

An arrangement of transition probabilities, i.e., the probabilities of moving from one state to another, in an m × n array with each row sum equal to unity is known as a transition probability matrix. In such a matrix, a zero probability is assigned when there is no transition between two states.

$$S = \begin{bmatrix} p_{00} & \cdots & \cdots & p_{0j} \\ \vdots & & & \vdots \\ \vdots & & & \vdots \\ p_{k0} & \cdots & \cdots & p_{kj} \end{bmatrix}_{k \times j}, \qquad k, j = 0, 1, 2, \ldots, 12$$

As an example, the mathematical derivation of a transition probability and of a mean sojourn time is explained in Eqs. (1) and (2):

$$p_{01} = \int_0^{\infty} [\text{probability that the operating unit in state } S_0 \text{ does not face any failure until time } t, \text{ but its operation time is completed during } (t, t+\Delta t)] = \int_0^{\infty} \alpha \eta t^{\eta-1} e^{-(\alpha+\beta)t^{\eta}}\, dt \qquad (1)$$

Here, $\mu_i$ represents the mean sojourn time in the ith state, defined for state $S_i$ as $\mu_i = \int_0^{\infty} P[U_i > t]\, dt$. As an illustration,

$$\mu_0 = \int_0^{\infty} [\text{probability that the working unit in state } S_0 \text{ does not face any failure and the maximum operation time is not completed up to time } t]\, dt = \int_0^{\infty} e^{-(\alpha+\beta)t^{\eta}}\, dt = \frac{\Gamma(1+1/\eta)}{(\alpha+\beta)^{1/\eta}} \qquad (2)$$

The remaining transition probabilities for all states and the mean sojourn times have been derived on the pattern of Eqs. (1) and (2).
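As a quick check, the closed forms of Eqs. (1) and (2) can be verified numerically. The following sketch is illustrative only: the values of α, β and η are hypothetical and are not taken from the paper.

```python
# Illustrative numerical check of Eqs. (1) and (2); parameter values are hypothetical.
from math import exp, gamma
from scipy.integrate import quad

alpha, beta, eta = 2.0, 1.5, 1.5   # assumed rates and Weibull shape parameter

# Eq. (1): p01 = ∫ alpha*eta*t^(eta-1) * exp(-(alpha+beta)*t^eta) dt = alpha/(alpha+beta)
p01_num, _ = quad(lambda t: alpha * eta * t**(eta - 1) * exp(-(alpha + beta) * t**eta),
                  0, float("inf"))
print(p01_num, alpha / (alpha + beta))   # the two values agree

# Eq. (2): mu0 = ∫ exp(-(alpha+beta)*t^eta) dt = Gamma(1 + 1/eta) / (alpha+beta)^(1/eta)
mu0_num, _ = quad(lambda t: exp(-(alpha + beta) * t**eta), 0, float("inf"))
print(mu0_num, gamma(1 + 1/eta) / (alpha + beta)**(1/eta))   # the two values agree
```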


4 Reliability Measures

4.1 Availability

Let $A_i(t)$ denote the probability that the system is operative at time instant $t$ given that it entered the regenerative state $S_i$. By simple probabilistic arguments, the recursive relations for $A_i(t)$ are as follows:

$$A_i(t) = M_i(t) + \sum_j q_{i,j}^{(n)}(t)\, ©\, A_j(t) \qquad (3)$$

with

$$M_0(t) = e^{-(\alpha+\beta)t^{\eta}}, \quad M_1(t) = e^{-(\alpha+h+\gamma)t^{\eta}}, \quad M_3(t) = e^{-(\alpha+k+h)t^{\eta}}, \quad M_4(t) = e^{-(\alpha+\beta+\gamma)t^{\eta}} \qquad (4)$$

Taking the Laplace transform of relations (3)–(4) and solving for $A_0^{*}(s)$, the steady-state availability is given by

$$A_0(\infty) = \lim_{s \to 0} s A_0^{*}(s) = \frac{N_2}{D_2} \qquad (5)$$

4.2 Occupied Time Analysis

Let $B_i^R(t)$ and $B_i^{Pm}(t)$ denote the probabilities that the repairman is busy in repair and in preventive maintenance of the system, respectively, at time instant $t$ given that the system entered the regenerative state $S_i$. The recursive relations for $B_i^R(t)$ and $B_i^{Pm}(t)$ are as follows:

$$B_i^R(t) = W_i(t) + \sum_j q_{i,j}^{(n)}(t)\, ©\, B_j^R(t) \qquad (6)$$

$$B_i^{Pm}(t) = W_i(t) + \sum_j q_{i,j}^{(n)}(t)\, ©\, B_j^{Pm}(t) \qquad (7)$$

with $W_1(t) = e^{-(\alpha+h+\gamma)t^{\eta}}$, $W_2(t) = e^{-(\alpha+k+h)t^{\eta}}$, $W_3(t) = e^{-(\alpha+\beta+l)t^{\eta}}$ and $W_4(t) = e^{-(\alpha+\beta+\gamma)t^{\eta}}$. Taking the Laplace transform of (6)–(7) and solving for $B_0^{*R}(s)$ and $B_0^{*Pm}(s)$, the occupied time of the repairman due to repair and preventive maintenance, respectively, is given by

$$B_0^R = \lim_{s \to 0} s B_0^{*R}(s) = \frac{N_3^R}{D_2} \quad \text{and} \quad B_0^{Pm} = \lim_{s \to 0} s B_0^{*Pm}(s) = \frac{N_4^{Pm}}{D_2} \qquad (8)$$

4.3 Expected Number of Repairs

Let $E_i^R(t)$ denote the expected number of repairs carried out by the repairman in $(0, t]$ given that the system entered the regenerative state $S_i$. The recursive relations for $E_i^R(t)$ are given as

$$E_i^R(t) = \sum_j Q_{i,j}^{(n)}(t) \circledS \left[\delta_j + E_j^R(t)\right] \qquad (9)$$

In relation (9), the states $i$ and $j$ are regenerative states, the transitions between states hold according to the characteristic function, and the term $\delta_j$ denotes the characteristic function. Taking the Laplace–Stieltjes transform of relation (9) and solving for $\tilde{E}_0^R(s)$, the expected number of repairs per unit time is given by

$$E_0^R(\infty) = \lim_{s \to 0} s \tilde{E}_0^R(s) = \frac{N_5^R}{D_2} \qquad (10)$$

4.4 Expected Number of Preventive Maintenances (PM)

Let $E_i^{Pm}(t)$ denote the expected number of preventive maintenances carried out by the repairman in $(0, t]$ given that the system entered the regenerative state $S_i$. The recursive relations for $E_i^{Pm}(t)$ are given as

$$E_i^{Pm}(t) = \sum_j Q_{i,j}^{(n)}(t) \circledS \left[\delta_j + E_j^{Pm}(t)\right] \qquad (11)$$

In relation (11), the states $i$ and $j$ are regenerative states, the transitions between states hold according to the characteristic function, and the term $\delta_j$ denotes the characteristic function. Taking the Laplace–Stieltjes transform of relation (11) and solving for $\tilde{E}_0^{Pm}(s)$, the expected number of preventive maintenances per unit time is given by

$$E_0^{Pm}(\infty) = \lim_{s \to 0} s \tilde{E}_0^{Pm}(s) = \frac{N_6^{Pm}}{D_2} \qquad (12)$$


4.5 Expected Number of Visits by the Server

Let $X_i(t)$ denote the expected number of visits by the repairman in $(0, t]$ given that the system entered the regenerative state $S_i$. The recursive relations for $X_i(t)$ are given as

$$X_i(t) = \sum_j Q_{i,j}^{(n)}(t) \circledS \left[\delta_j + X_j(t)\right] \qquad (13)$$

In relation (13), the states $i$ and $j$ are regenerative states, the transitions between states hold according to the characteristic function, and the term $\delta_j$ denotes the characteristic function. Taking the Laplace–Stieltjes transform of relation (13) and solving for $\tilde{X}_0(s)$, the expected number of visits per unit time by the server is given by

$$X_0(\infty) = \lim_{s \to 0} s \tilde{X}_0(s) = \frac{N_7}{D_2} \qquad (14)$$

5 Profit Analysis

The profit incurred by the system model in the steady state can be obtained as

Profit = K0 (expected up-time of the system) − K1 (occupied time of the server in PM) − K2 (occupied time of the server in repair) − K3 (expected number of PMs) − K4 (expected number of repairs) − K5 (expected number of repairman visits)   (15)

where
K0 = revenue per unit up-time of the system,
K1 = expenditure on the server when occupied in preventive maintenance,
K2 = expenditure on the server when occupied in repair,
K3 = expenditure per preventive maintenance performed,
K4 = expenditure per repair performed,
K5 = expenditure per visit by the server/repairman.
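Once the steady-state measures of Eqs. (5), (8), (10), (12) and (14) have been evaluated, Eq. (15) is a simple linear combination. The sketch below is illustrative only; the argument names are hypothetical, and the default cost values are the ones used later in Sect. 7.

```python
# Illustrative sketch of Eq. (15); the measure values are assumed to be computed elsewhere.
def steady_state_profit(uptime, busy_pm, busy_repair, n_pm, n_repairs, n_visits,
                        K0=5000, K1=200, K2=150, K3=100, K4=75, K5=80):
    """Profit = revenue from up-time minus the five cost terms of Eq. (15)."""
    return (K0 * uptime - K1 * busy_pm - K2 * busy_repair
            - K3 * n_pm - K4 * n_repairs - K5 * n_visits)

# Example call with made-up measure values:
print(steady_state_profit(0.98, 0.01, 0.02, 0.005, 0.004, 0.006))
```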

6 Graphical Analysis

After assigning baseline values to the shape parameter and the other variables as α = 2, η = 0.5, γ = 5, h = 0.009, k = 1.5, l = 1.4, the pdf of the Weibull distribution behaves as a Weibull, exponential or Rayleigh distribution for the shape parameter values considered. Now, for a fixed set of values of the various parameters used by Kumar et al. [7], the availability and profit of the proposed


Fig. 1 Availability difference (Kumar et al. [7]-proposed model) in case of shape parameter = 0.5 w.r.to β

model has been obtained using Eqs. (5) and (15). Then graphs for difference of availability and profit have been depicted in Figs. 1, 2, 3, 4, 5 and 6.

Fig. 2 Availability difference (Kumar et al. [7]-proposed model) in case of shape parameter = 1 w.r.to β

Fig. 3 Availability difference (Kumar et al. [7]-proposed model) in case of shape parameter = 2 w.r.to β


Fig. 4 Profit difference (Kumar et al. [7]-proposed model) in case of shape parameter = 0.5 w.r.to β

Fig. 5 Profit difference (Kumar et al. [7]-proposed model) in case of shape parameter = 1 w.r.to β

Fig. 6 Profit difference (Kumar et al. [7]-proposed model) in case of shape parameter = 2 w.r.to β

7 Conclusion

In the present paper, a redundant system of non-identical units has been analyzed under the concepts of priority to preventive maintenance of the duplicate unit over repair of the original unit, preventive maintenance, and Weibull densities for the failure and repair rates. For a particular case with values k = 1.5, α = 2, γ = 5, l = 1.4 and h = 0.009, the behaviour of various reliability measures, such as the availability and the net expected steady-state profit of the system, has been obtained for different values of the shape parameter η = 0.5, 1 and 2 with respect to the failure rate (β). The values of the cost parameters of the profit function are taken as K0 = 5000, K1 = 200, K2 = 150, K3 = 100, K4 = 75 and K5 = 80. From the graphical results, shown in Figs. 1, 2, 3, 4,


5 and 6, it is revealed that the availability and profit of the system decline with increases in the failure rate (β), the maximum operation time and the shape parameter (η), while these measures increase with increases in the repair rate and the preventive maintenance rate. A comparative analysis shows that the present proposed model is more efficient than the model presented in Kumar et al. [7]. Hence, the study reveals that the concept of priority to preventive maintenance of the duplicate unit over repair of the original unit is more beneficial than a system in which no such priority is given.

References

1. Chopra, G., Ram, M.: Reliability measures of two dissimilar units parallel system using Gumbel-Hougaard family copula. Int. J. Math. Eng. Manag. Sci. 4(1), 116–130 (2019)
2. Deswal, S., Malik, S.C.: Reliability measures of a system of two non-identical units with priority subject to whether conditions. J. Reliab. Stat. Stud. 8(1), 181–190 (2015)
3. Gupta, R., Kumar, P., Gupta, A.: Cost-benefit analysis of a two dissimilar unit cold standby system with Weibull failure and repair laws. Int. J. Syst. Assur. Eng. Manage. 4(4), 327–334 (2013)
4. Kishan, R., Jain, D.: Classical and Bayesian analysis of reliability characteristics of a two-unit parallel system with Weibull failure and repair laws. Int. J. Syst. Assur. Eng. Manage. 5(3), 252–261 (2014)
5. Kumar, A., Malik, S.C.: Reliability modelling of a computer system with priority to H/W repair over replacement of H/W and up-gradation of S/W subject to MOT and MRT. Jordan J. Mech. Ind. Eng. 8(4), 233–241 (2014)
6. Kumar, A., Saini, M.: Analysis of some reliability measures of single-unit systems subject to abnormal environmental conditions and arbitrary distribution for failure and repair activities. J. Inf. Optim. Sci. (2018). https://doi.org/10.1080/02522667.2017.1406626
7. Kumar, A., Saini, M., Devi, K.: Performance analysis of a redundant system with Weibull failure and repair laws. Rev. Investig. Oper. 37(3), 247–257 (2016)
8. Kumar, A., Saini, M., Devi, K.: Stochastic modeling of non-identical redundant systems with priority, preventive maintenance and Weibull failure and repair distributions. Life Cycle Reliab. Saf. Eng. (2018). https://doi.org/10.1007/s41872-018-0040-1
9. Li, J.: Reliability comparative evaluation of active redundancy vs. standby redundancy. Int. J. Math. Eng. Manag. Sci. 1(3), 122–129 (2016)
10. Lin, T., Pham, H.: Reliability and cost-benefit analysis for two-stage intervened decision-making systems with interdependent decision units. Int. J. Math. Eng. Manag. Sci. 4(3), 531–541 (2019)
11. Malik, S.C., Deswal, S.: Stochastic analysis of a repairable system of non-identical units with priority for operation and repair subject to weather conditions. Int. J. Comput. Appl. 49(14), 33–41 (2012)
12. Ram, M., Singh, S.B.: Availability and cost analysis of a parallel redundant complex system with two types of failure under preemptive-resume repair discipline using Gumbel-Hougaard family copula in repair. Int. J. Reliab. Qual. Saf. Eng. 15(04), 341–365 (2008)
13. Ram, M., Singh, S.B., Singh, V.V.: Stochastic analysis of a standby system with waiting repair strategy. IEEE Trans. Syst. Man Cybern. Syst. 43(3), 698–707 (2013)


14. Saini, M., Kumar, A.: Stochastic modeling of a single-unit system operating under different environmental conditions subject to inspection and degradation. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. (2018). https://doi.org/10.1007/s40010-018-0558-7
15. Zheng, J., Okamura, H., Dohi, T.: Reliability importance of components in a real-time computing system with standby redundancy schemes. Int. J. Math. Eng. Manag. Sci. 3(2), 64–89 (2018)

Convolutional Neural Networks: An Overview and Its Applications in Pattern Recognition Aseem Patil and Milind Rane

Abstract CNNs have rapidly become state-of-the-art frameworks for various applications in image classification. Large training data sets with ground-truth labels are typically needed in order to apply such techniques and make a model more complex without affecting its accuracy and precision. In this case study, we discuss the different approaches and measures used in pattern recognition with CNNs. We also discuss some of the major applications of pattern recognition in use today, along with their implementations (outputs only). We identify the key features used in building each application, as well as their roles in keeping the application free from obstacles such as over-fitting.

1 Introduction Neural networks can provide a unique solution to multiple possibilities and can detect and classify targets by developing networks capable of handling detection and classification. Nonetheless, problems remain, such as the need for very large ground-breaking and enormous sized data sets of CNNs or interpretation errors neither achievable by the human nor understood by the human brain’s neural network system. The model shifts in different points of view, object variance, process disruption, etc., which during training is considered to be essential. Therefore, if the data is reasonable enough it may need an abundant source of information to train the model in giving the required results. To make sure that such disruptions do not occur, the best alternative solution is to train the architecture and then test it on real data. Perception is an important perspective that scientists and engineers came naturally to recreate it [1–3]. The goal was to teach machines to understand and adjust accordingly while eliminating errors and interference from human beings. For perception, scientists found that a neural network was the solution for all our problems. The way human A. Patil (B) · M. Rane Department of Electronics Engineering, Vishwakarma Institute of Technology, Pune, India e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_3


beings visualize and comprehend information by processing it before acting on it suggests that a neural network system for machines could be more precise. In the last few years, deep learning has led to very strong performance on an abundance of problems, such as visual recognition, speech recognition and natural language processing. Among the different kinds of deep neural networks, convolutional neural networks have been the most widely studied. Leveraging the rapid growth in the amount of annotated data and the great advances in the capabilities of graphics processing units, research on convolutional neural networks has developed quickly and achieved state-of-the-art results on various tasks. This work focuses on the recognition and classification of applications, either in code or trained and tested to build a model.

2 Introduction to Convolutional Neural Networks

Neural networks consist of smaller units called neurons. Neurons are connected to each other to form layers, and data flows through these layers from the input to the output level. A simple mathematical calculation is carried out in each compute node, and the result is transmitted to all the nodes to which it is linked. Convolutional neural networks are inspired by the characteristics of the visual cortex. Image classification is among the most popular uses of such an architecture. The primary task in identifying images is to recognize an image and to determine its class. This is a capability that people acquire from birth: a person can reasonably deduce that the object in a picture is an apple. The machine, however, sees a set of pixels rather than the image. Consider a 200 × 200 image. The array size would then be 200 × 200 × 3, where the first 200 is the width, the next 200 is the height and 3 is the number of RGB channels. Each of these entries is assigned a value between 0 and 255, representing the pixel intensity at that point, as shown in Fig. 1.

Convolutional Neural Networks: An Overview and Its Applications …

23

Fig. 1 Illustration of what an individual sees and what the computer understands

Fig. 2 Identifying the maximum value of pixel intensity from the 4 × 4 matrix (output)

After every convolution operation, the nonlinear layer is added. It has a nonlinear property activation function. Without this property, there is no enough network and the response variable cannot be modeled. The method of pooling matches the nonlinear structure. This deals with image width and height and carries out a retrieval process. This reduces the volume of the image, as we can see in Fig. 2. This means that, if features (such as boundaries) were identified during the previous convolution, a detailed image for further processing would no longer be required and reduced to a less detailed image.

3 CNN-Based Architectures Used in Pattern Recognition Most CNN architectures are used to enhance image classification efficiency. Some of them are listed below.

24

A. Patil and M. Rane

Fig. 3 AlexNet architecture

3.1 AlexNet The laureate of the ILSVRC 2012 was AlexNet. It elucidated the issue of image classification, in which the input is a 1000 class image and the output is a 1000 numbers matrix. The output vector’s last variable is interpreted as the likelihood of the input image belonging to the last and final class. The total of all output vector components is thus 1. The AlexNet input considered is a 256 × 256 RGB file. It ensures that all pictures in the training set and all test pictures should be 256 × 256. If the picture of input is not 256 digits, it must be translated to 256 digits 256 before it is used for network processing. The smaller dimension will be reduced to 256, and the resulting image will now be cut to 256 (Fig. 3).

3.2 ResNet The product of the human image classification is deep convolutional neural networks [8]. Deep networks delete features and classifications at low, medium, and high level in end-to-end, multi-layer fashion and expand the “level” of features by the number of stacked layers. When the deeper network is converging, problems with deterioration have been revealed: with increasing network width, precision saturated (which could be unusual), and easily degraded. This is not due to over-fitting, or to additional layers into a deep network, resulting in a higher training error. The loss of training specific results in a failure to automate all processes. 1. ResNets are easy to optimize, but when the depth is the, the “simple” networks (with layers simply stacked by them) display more errors. 2. ResNets can easily reliably achieve results that are better than previous networks by significantly increasing their depths (Fig. 4).

Convolutional Neural Networks: An Overview and Its Applications …

25

Fig. 4 Layering the levels using weights in a mathematical form of representation

3.3 YOLO (You Only Look Once) All at once, a single convolutional network allows for multiple bounding boxes and class probabilities. YOLO trains on full pictures and enhances detection efficiency directly. In addition to traditional methods of object detection, this single model has several advantages. First of all, YOLO is very fast. We do not need a complex pipeline as we frame detection as a regression problem. Our neural network is just working on a new image during the test to predict detections [5]. 1. When making predictions, YOLO globally explains the picture. In comparison with proposal-based sliding windows and area strategies, YOLO presents the entire image in training and testing time, thus encoding the contextual information and its presentation implicitly. Fast R-CNN, the best method of detection, detects minute errors in the background of objects in an image since the larger context cannot be seen. Compared to Fast R-CNN, YOLO does less than half of the context mistake. 2. YOLO discovers common object representations. YOLO offers a wide variety of top detection methods such as DPM and R-CNN when educated on natural images and checked on artwork. Because YOLO is highly universal, when extended to new domains or unforeseen inputs, it is less likely to break down (Fig. 5).

Fig. 5 Architecture of YOLO

26

A. Patil and M. Rane

3.4 R-CNN R-CNN receives and needs to categorize parts of an image. We saw earlier, thanks to deep learning networks like AlexNet, that image classification is a quite easy work. That is R-CNN’s argument, with Deep Learning the algorithm generated by Object Proposal classifies each region of interest. For all regional proposals, R-CNN does not explicitly use AlexNet because the model should also be able to correct the position of an area proposal if the picture is not marked as right [6]. 1. First, R-CNN produces around 2 K area proposals by using selective search, i.e., bounding boxes for image classification. 2. The image classification is then performed using CNN for every bounding box. 3. Eventually, regression may be used to optimize any bounding box. In the context of R-CNN, CNN is required to concentrate on one region, because this minimizes conflict because only one single interest topic is supposed to dominate in one region. Selective search algorithms are used to identify the regions in the RCNN and to resize them in order to ensure that regions equal in size are fed to a CNN for classifying and board regression. However, R-CNN has many flaws. 1. This takes a lot of time to train the network, as 1000 regional ideas per picture have to be categorized. 2. It does not take about 47 s for each test picture to be implemented in real time. 3. A fixed algorithm is the restricted search algorithm. So at this point, there is no understanding. This could lead to bad candidate regions being suggested (Fig. 6).

Fig. 6 Architecture of R-CNN using a sample image

Convolutional Neural Networks: An Overview and Its Applications …

27

Fig. 7 Output after object detection using convolution layers

4 Some Applications of CNN in Pattern Recognition 4.1 Object Detection Using Neural Networks Object detection is the job of localizing images, but an image can contain nested loops which need to be located and identified [3]. This is more difficult than merely classifying images or localizing images, as there are often many artifacts in the image of different types. Methods developed for localization image categorization are often used and demonstrated for object detection. 1. Draw the bounding box in a cityscape and label each object. 2. Draw a bounding box and in an enclosed picture label each object. 3. Draw a bounding box and mark each object in a landscape. From a research paper, “Distinguishing and perceiving things in unstructured and ordered circumstances is one of the toughest companies in computer vision and manmade discovery,” which helps in understanding the influence neural networks play as a major role (Fig. 7).

4.2 Biometrics System Biometric systems are automated methods to verify or recognize the identity of a person on the basis of certain physical features, such as thumbprint or facial patterns or certain aspects of behavior, like writing or voice command patterns. A biometric

28

A. Patil and M. Rane

Fig. 8 Block diagram of the major roles a biometric system can configure

system based on physiological properties, even though it is easier to integrate it in certain specific applications, is more reliable than one that has behavioral features (Fig. 8). 1. The identity authentication (or check) requires that the person to claim his identity by a PIN for example; the device matches directly (1:1) to the current authentication feature of the person with a previously obtained one that is recovered by the PIN. 2. Identification requires the system to scan a number of candidates and decide whether one of them fits the identifying person. This task, of course, is harder since a (1: N) match is required which can be very costly on a computational basis on a large database.

4.3 Style Transfer A neural algorithm of artistic style which allows the content and type of natural images to be separated and reassembled. The algorithm enables new high-perception images to be produced that merge the contents of a photograph inconsequential with many famous works of art. The task of learning styles from one or more images is to move the style or neutralist styles into a new image. This task can be considered as a kind of photograph filter or as a process which cannot be assessed objectively. Data sets often include famous copyright protected artwork and modern computer view data sets photographs (Fig. 9).

4.4 Handwriting Character Recognition A machine’s ability to access and interpret intelligibly handwritten input from sources such as paper documents, photographs, touch-screens as well as other tools is called as handwritten recognition (HWR), also known as Handwritten Text Recognition (HTR).

Convolutional Neural Networks: An Overview and Its Applications …

29

Fig. 9 Style transferring an image

Fig. 10 Output of handwritten character recognition from a given input image

Neural network will provide exceptional performance to identify images that include the content of our requirements [7]. We may identify the results by merging server and source images. We have pictures from the database with a variety of types of writing and fonts. In addition to this technique, we also use CNN, gated recurrent units (GRU), and long-short term memory (LSTM) methods. This technique was used. These are the layers to analyze different kinds of recognition of handwriting (Fig. 10).

5 Conclusions We have now studied some of the main applications and CNN-based architectures used in pattern recognition. The neural system consists of five convolution layers, three adopting peak pooling layers, and two dynamically connected layers with softmax. We have studied the use of non-saturating neurons and a highly effective

30

A. Patil and M. Rane

GPU application of convolution networks to make training faster. We also studied and implemented a new regulation approach that has proven highly effective for reducing over-fitting in widely connected layers. This paper discusses how a convolution neural network (CNN) functions in pattern recognition from a mathematical point of view. This paper is independent and is aimed at making it understandable to beginners in the field of CNN. In many computer vision, machine training, and pattern recognition problems, the convolution neural network (CNN) has shown excellent performance.

References 1. Ioffe, S., Szegedy, C.: Batch normalization: speeding up deep network workouts by reducing internal covariate changes. ArXiv. ArXiv (2015) 2. Kang, G., Liu, K., Hou, B., Zhang, N.: 3D multi-view convolutional neural networks for lung nodule classification (2017) 3. Rane, M., Patil, A., Barse, B.: Real object detection using TensorFlow. In: Kumar, A., Mozar, S. (eds.) ICCCE 2019. Lecture Notes in Electrical Engineering, vol. 570. Springer, Singapore (2019) 4. Krizhevsky, N., Hinton, G.E., Srivastava, N., Sutskever, I., Salakhutdinov, R.R.: Improving the neural networking processes by preventing feature detectors from being coadapted. ArXiv. ArXiv (2012) 5. Lee, D.H.: Simple, efficient and semi-supervised method of learning for deep neural networks. In: ICML 2013 Workshop: Challenges in Representation Learning (2013) 6. Talele, A., Patil, A., Barse, B.: Detection of real time objects using TensorFlow and OpenCV. Asian J. Converg. Technol. (AJCT) (2019). Retrieved from http://www.asianssr.org/index.php/ ajct/article/view/783 7. Talele, A., Patil, A.: Detecting characters using OCR and Tesseract in Open CV by LSTM. Think India J. 22(14), 15112–15117 (2019). Retrieved from https://journals.eduindex.org/index.php/ think-india/article/view/16936 8. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

SAMKL: Sample Adaptive Multiple Kernel Learning Framework for Lung Cancer Prediction Ashima Singh, Arwinder Dhillon, and Jasmine Kaur Thind

Abstract In the last decade, cancer has emerged as one of the most widespread and leading deadly diseases across many countries. Lung cancer is caused by abnormal cell growth in the lung tissue. However, cancer can be cured, if detected at an early stage. Lung cancer prognosis and diagnosis hence play an important role in medical healthcare by increasing the chances of survival. Early detection and prediction may help patients in many ways. Machine learning has been vastly used in the area of cancer diagnosis and prognosis. Being the leading cause of death, its early prediction of lung cancer is vital. With the advancement in technology and medical sciences, the amount and type of data have increased rapidly which has helped researchers and medical practitioners with better predictions and accuracy than ever. High-dimensional data has become widely valuable in survival predictions along with the usage of machine learning. Researchers have achieved results using data of different dimensions and types but there is still a great scope of improvement required in obtaining better accuracies. In this paper, we introduce a framework for integration of genomic and pathological image data for lung cancer called sample adaptive multiple kernel learning (SAMKL) which introduces feature fusion which is incorporated in lung cancer prediction. The analysis of the SAMKL model indicated supremacy of this model over other existing models and brings into focus the fact that pathological image data along with genomic data gives remarkably better results than using either of them.

A. Singh (B) · A. Dhillon · J. K. Thind Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab 147001, India e-mail: [email protected] A. Dhillon e-mail: [email protected] J. K. Thind e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_4


1 Introduction Cancer is a general term for a group of diseases that may affect several or various parts of the body such as liver, lung, breast, blood, skin. There were about 17 million new cases of cancer worldwide in 2018 and estimates suggest that there will be 27.5 million new cases of cancer each year by 2040. According to the American lung association, lung cancer is the leading cancer death disease for both men and women in the USA and has surpassed the death rate in women more than breast cancer. Moreover, lung cancer has a bleak five-year survival rate of approximately 19% second only to pancreatic cancer [1]. Smoking is the predominant risk factor for lung cancer as studies show that smoking is directly linked to 90% of women and 79% of men [2]. Lung cancer is categorized as small cell lung cancer and nonsmall cell lung cancer. Lung cancer, like other cancers, is most curable in its earliest stages before it has spread to other areas of the body. Non-small cell lung cancer is most common and can break down into lung adenocarcinomas, squamous cell carcinomas, and large cell carcinomas. SCLC has a strong connection with cigarette smoking. The World Health Organization (WHO) carried out research worldwide in 2012 which shows that 8.2 million people died in one year from the cancer-related disease [3]. The diagnosis and treatment of lung cancer in its early stages will increase the patient’s survival rate from 14 to 49%, according to the American lung cancer society. This reason is a driving force in searching for efficient algorithms to predict survival rates for better treatment. Due to the high dimensionality of data, extracting information out of it is a tedious task. Researchers have used various algorithms to the best of their use in this area to utilize a vast amount of data in medical sciences. The capability of machine learning to employ various techniques such as probabilistic, optimization, statistical etc. makes this branch tailor-made suited for handling vast data for predictions. With the rapid development of proteomic, genomic, imaging, and transcriptomic technologies, the molecular level information about the patient or disease can be gained without any difficulty [4]. In machine learning, cox proportional hazard models have been widely used for cancer prediction [5]. Supervised principal components can be used for regression and regularized regression such as in survival analysis [6]. Furthermore, semi-supervised methods can be used to identifying subtypes of cancer using gene expression and clinical data [7]. With the advancement in medical technology, tumor data is easily available. For accurate results, one cannot handle thousands of samples manually. Therefore, following the previous state-of-the-art technologies, cell profiler is used which prepares thousands of image samples per day using automation, enabling chemical screens and functional genomics. In this paper, we have proposed a framework SAMKL to efficiently integrate image and genomics features in a prediction model which adds novelty to the present work. The results obtained demonstrate this to be an improved method to fuse features together in a high-dimensional dataset. Along with this, the data is preprocessed using various methods and algorithms in various stages to help further enhance the accuracy of prediction. We verified the effectiveness of this model by comparing


it with existing models that only use either genomic data or image data. Also, it is compared with preexisting survival models for comparative analysis.

1.1 Motivation

Most existing work incorporates genomic data or a combination of omics data, such as genomic/proteomic or genomic/transcriptomic data. In the case of lung cancer, pathological image features can also be taken into account, since digital images are used in clinical trials to detect illness. Thus, integrating genomic as well as pathological image data can offer a larger feature space on which prediction can be carried out [8]. This motivated us to integrate genomic and pathological image data and develop a framework for early prediction using a sample adaptive multiple kernel learning model.

1.2 Our Contribution • SAMKL helps to deal with high-dimensional genomic data and pathological images data efficiently. • It features efficient preprocessing techniques to work upon high-dimensional integrated genomic and pathological images dataset. • It also selects features from high-dimensional genomic and pathological images dataset using efficient feature selection techniques. • It employs sample adaptive multiple kernel learning with 85% accuracy selected after rigorous analysis on other types of machine learning algorithms.

2 Related Work Wienert et al. [9] proposed a minimal modal approach for detection of cell nucleus in virtual microscopy images. The author described a minimal model approach that uses a negligible amount 19 of former information. The dataset from the TCGA cancer genome portal was taken and the experiment was performed. The result indicated that the proposed approach works well with precision and recall values of 90.8 and 85.9, respectively. Dekker et al. [10] proposed an approach for Survival Prediction in Lung Cancer Treated patients with Radiotherapy using Bayesian Networks. Dataset of 322 lung cancer patients with two-year survival was taken and Bayesian network (BN) was applied after preprocessing and feature extraction techniques. The obtained results are compared with the results of SVM and it is evident that BN outperforms with the area under curve (AUC) value of 82%. Bovelstad et al. [11]


presented a cox regression model for survival prediction of lung cancer patients using clinical and genomic data. The authors combined clinical covariates with genomic data in a clinico-genomic prediction model with the help of the cox model. The experiment was performed on integrated genomic and clinical data, and on clinical and genomic data alone. The obtained results indicated that the integrated clinicalgenomic data performed better with an accuracy value of 85%. Toussi and Yazdi [12] developed a new eigenvector selection method based on the entropy of clusters for the prediction of lung cancer with the help of kernel PCA to extract nonlinear features. The individual vectors that are capable of revealing homogeneous clusters are selected based on their entropy and weighted according to their own importance and clustering power. The best vectors are subsequently chosen and weighted. They are used to render weighted labels. The results are obtained and it is proved that the proposed approach performed best with an accuracy value of 73%. Krishnaiah et al. [13] discussed classification-based data mining techniques such as Naïve Bayes, rule-based, ANN, and decision tree to massive volume of healthcare data for lung cancer prediction. The healthcare industry collects enormous amounts of healthcare data that are sadly not “mined” to uncover hidden information. Data preprocessing is performed using One Dependence Augmented Naïve Bayes classifier (ODANB) and naive creedal classifier 2 (NCC2) for efficient decision making. After preprocessing model training was performed and obtained results indicate that the proposed approach works well 85% accuracy. Luo et al. [14] proposed computational advances to examine the morphological features of digital images for NSCLC patients. Pathological images from 523 ADC patients and 21,511 SCC were analyzed and extracted features were used to predict survival outcomes in ADC and SCC patients. The dataset used was the cancer genome atlas which is publicly available. The experiment was performed and results are obtained which shows that the proposed approach works well with concordance index and hazard ratio value of 2.2 and 58% respectively. Tang et al. [15] used the random forest and relief F algorithm for prediction of survival in patients with non-small cell lung cancer. Feature selection algorithms along with various machine learning tools were used to predict five-year survival in NSCLC patients. The results show that the proposed framework has better accuracy than other feature selection manners. Oh et al. [16] discussed the application of machine learning for prediction of radiation pneumonitis in lung cancer patients. Several regular classification algorithms were used for classifying different groups according to their risk. The experiment was performed and results indicate that the presented approach works well with accuracy, sensitivity, and AUC value of 85%, 89%, and 83% respectively. Xie et al. [17] proposed a method for background correction by making use of existing RMA background correction to include other information like negative control beads for the prediction of lung cancer. The author also considered approaches for estimation of parameters such as non-parametric, maximum likelihood estimation, and Bayesian out of which maximum likelihood and Bayes methods seem to give the best results. The experiment was conducted on an illumine bead array dataset. Mouli et al. 
[18] discussed the use of deep learning algorithms for the diagnosis of lung cancer. The authors tested the feasibility of using deep learning algorithms in this field by implementing three deep learning algorithms, which are


Table 1  Comparison of related work with SAMKL
Rows (compared works): Wienert et al. [9], Dekker et al. [10], Bovelstad et al. [11], Toussi and Yazdi [12], Krishnaiah et al. [13], Luo et al. [14], Tang et al. [15], Oh et al. [16], Xie et al. [17], Mouli et al. [18], SAMKL
Columns (reported measures): Sensitivity, Precision, Accuracy, AUC, c-index, Hazard ratio
convolutional neural network, deep belief networks, stacked denoising autoencoder (SDAE) with deep belief having the maximum accuracy of 81%. Table 1 shows the comparison of related work with our proposed work.

3 Proposed Framework

The proposed framework consists of, firstly, integrating the selected features obtained from genomic data and pathological images using PGSAMKL (Pathological Genomic Sample Adaptive Multiple Kernel Learning), followed by training and testing using existing models, and finally obtaining the results using performance parameters. An overview of the proposed framework is shown in Fig. 1.

3.1 Data Preparation In our framework, we used genomic data and pathological image data. The DNA data and genome data of an organism collectively constitute genomic data. This data is enormously large and with the advancement in technologies has got more refined and easily available for researchers. For image data, we use hematoxylin and eosin stained pathological images. It is mostly used in big data processing and analysis in medical studies. The dataset was downloaded from publicly available lung cancer dataset from the cancer genome atlas website https://portal.gdc.cancer.gov/. The dataset has 585 cases for gene expression and 590 for pathological image data with various data types such as gene expression, CNA methylation, copy number alteration, protein expression, and pathological image. Gene expression as the name includes genes that


Fig. 1 Overview of the proposed framework

consist of proteins that regularize cell function. It is a process by which instructions in DNA are converted to a functional product which is generally a protein. Transcription and translation are included in the process of gene regulation. Several methods have emerged for rapid estimation of gene expression that enables thousands of genes from several hundred samples to be measured with high throughput. Cancer is the result of a gene that is not naturally expressed in a cell but is turned on and expressed at high levels due to mutations or differences in gene regulation. DNA methylation, on the other hand, is an epigenetic modification that can alter the function of the DNA molecule without altering its structure by adding methyl groups to the DNA molecule. CNV is a process in which genome parts are replicated and the number of genome repeats varies among individuals in the human population. It is a kind of structural variation that affects a significant number of base pairs. The gene copy number is the number of copies of a gene in a genotype. These changes persuade most traits including vulnerability to disease. Protein expression is a way in which proteins are synthesized, tailored, and regulated in a living being. The protein test is far more complex than gene research. That is because protein architectures and functions are more complex and diverse. The ability to express specific proteins makes it easier for researchers to perform in vitro studies. Cell-free protein production is a protein’s in vitro synthesis, utilizing translation-compatible extracts from whole


cells. Entire cell extracts in theory include all the macromolecules and components necessary for translation, transcription, and even post-translation alteration. Other elements comprise polymerase RNA, protein regulation factors, ribosomes, transcription factors, and tRNA. Such extracts will synthesize proteins of concern in a few hours when combined with cofactors, nucleotides, and the particular gene template. These proteins are used in our research. Moving ahead, pathological image is the microscopic image obtained by converting glass slides into a digital slide that can be viewed, analyzed, and modified under the computer. With the advancement of slide imaging, the quality of data collected from images has improved drastically to achieve a better and faster diagnosis. Sometimes staining is done to highlight structures. The digital slides are then numerically analyzed using computer algorithms.

3.2 Data Preprocessing Preprocessing of Genomic Data The data we need to analyze is required in a particular format; hence, the data collected needs to be processed before it can be used to bring out some useful incites. First, the collected gene data is cleaned by removing the missing rows from each of the data types which are gene expression, DNA methylation, copy number alteration, and protein expression. 9% of the total data is removed. The 5% of the irrelevant noisy data is removed by clustering (k means). Next, we normalize the data and further divide into three categories −1, 1, and 0 for under expression, overexpression, and baseline, respectively. CNA is normalized by min max normalization and gene expression along with methylation is normalized using a z score algorithm. Feature extraction is a vital step after normalizing data. The data we are dealing with is high dimensional and has thousands of features. It impacts the performance of the model. Too many features often cause overfitting hence degrading the performance of the model. To further extract the best features, we apply the information gain ratio measure (F selector package) and hence obtain a final set of features as shown in Table 2. Table 2 Features after and before preprocessing for each data type

Data type | Features before preprocessing | Features after cleaning the data | Features after F selector
Gene expression | 35,000 | 11,000 | 65
CNA | 46,000 | 19,888 | 42
DNA methylation | 55,000 | 12,010 | 36
Protein expression | 21,000 | 135 | 98
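As an illustration of the cleaning, normalization, discretization, and filter-based selection steps described above, a short sketch follows. It is only a sketch: the actual study was carried out in R with the FSelector information gain ratio, and the discretization cut-offs, the mutual-information stand-in, and the top_k parameter are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

def preprocess_genomic(df, labels, kind, top_k=50):
    """Illustrative cleaning/normalization/selection for one omics matrix.

    df     : samples x features DataFrame
    labels : class labels (pandas Series) used for the filter-based ranking
    kind   : 'cna' (min-max) or 'expr'/'meth' (z-score)
    """
    # 1. cleaning: drop samples (rows) with missing values
    df = df.dropna(axis=0)
    labels = labels.loc[df.index]

    # 2. normalization: min-max for CNA, z-score for expression/methylation
    if kind == "cna":
        df = (df - df.min()) / (df.max() - df.min() + 1e-12)
    else:
        df = (df - df.mean()) / (df.std(ddof=0) + 1e-12)

    # 3. discretize into under-expression (-1), baseline (0), over-expression (+1);
    #    quantile cut-offs are an assumption made for this sketch
    lo, hi = df.quantile(0.25), df.quantile(0.75)
    coded = np.select([df.lt(lo), df.gt(hi)], [-1, 1], default=0)

    # 4. filter-based selection (mutual information as a stand-in for the
    #    information gain ratio used via the FSelector package in R)
    scores = mutual_info_classif(coded, labels)
    keep = np.argsort(scores)[::-1][:top_k]
    return pd.DataFrame(coded[:, keep], index=df.index, columns=df.columns[keep])
```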


Pathological Image Data
The digital images obtained from the TCGA website are downloaded and processed to extract image features. The hematoxylin and eosin slides used are very high-resolution images, which need to be broken down to ease the process. We use bftools to cut down the resolution and then select the densest slides for further investigation. For pathological image data, whole-slide diagnostic images are taken, in which a single slide can be up to 500 MB; the total image size for all patients is of the order of terabytes, so the resolution needs to be decreased. We used bftools for decreasing the image resolution, and in order to extract important image features, CellProfiler is used. A single slide is divided into tiles, where a single tile contains numerous features to be extracted, which results in a large amount of data over all tiles; so we select only the densest tiles, which are of interest to us. After profiling, we obtain a reduced number of features, i.e., 154, consisting of cell size, cell radius, cell shape, cell perimeter, cell area, nucleus area, and so on. The features extracted from the pathological image data include geometric features, textural features, and holistic features.
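The tile-selection idea can be sketched as follows. This is not the bftools/CellProfiler pipeline actually used; the tile size, the intensity threshold used as a proxy for tissue density, and the number of tiles kept are assumptions for illustration only.

```python
import numpy as np

def densest_tiles(slide, tile=512, keep=20):
    """Split a (down-sampled) whole-slide image into tiles and keep the densest ones.

    slide : 2-D numpy array (grayscale, already reduced in resolution, e.g. with bftools).
    A tile's "density" is approximated here by the fraction of dark (stained) pixels;
    the study itself extracted cell-level features with CellProfiler afterwards.
    """
    h, w = slide.shape
    scored = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = slide[y:y + tile, x:x + tile]
            density = np.mean(patch < 200)   # assumed intensity threshold
            scored.append((density, (y, x)))
    scored.sort(reverse=True)                # densest tiles first
    return [pos for _, pos in scored[:keep]]
```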

4 Data Integration
4.1 Sample Adaptive Multiple Kernel Learning
The main challenge with high-dimensional data is how to integrate or merge features from heterogeneous data types. In this paper, we target the effective integration of features from genomic data and pathological image data. Motivated by the earlier use of kernel learning in data integration [19], we use kernel learning in a different way to merge multidimensional data types into a single type of feature. Multiple kernel learning consists of methods that work with a predefined set of kernels. The reason for using multiple kernel learning is that it allows integration of data from different sources (e.g., sound and video images) that have different notions of correlation and therefore require different kernels. Instead of creating a new kernel, multiple kernel algorithms can be used to merge already existing kernels for every single source of data. Standard MKL applies the same set of kernel combination weights over all samples along with optimization algorithms, which is not effective, because forcing the same set of kernels onto all samples can affect model performance. To improve the performance, sample adaptive multiple kernel learning is used to integrate the heterogeneous features gathered from genomic and pathological image data, because here the base kernels are allowed to be adaptively switched on/off with respect to each sample. We use a parametric model to predict the kernel combination weights for each sample and, in the end, define a latent binary vector for each sample to adaptively switch each base kernel on/off. The weights along with the latent variables are jointly optimized by the margin maximization


principle. We achieve high classification performance compared to existing MKL algorithms.

Algorithm 1 SAMKL
Input: {K_p}_{p=1}^{m}, y, C and m_0
Output: α, β, γ and {h_i}_{i=1}^{n}
Initialize h_0 and set t = 1 and g_0 = 1
repeat
    update (α_t, β_t, γ_t, ξ_t) with (h_{t−1}, g_{t−1})
    for i = 1 to n do
        update h_t with (α_t, β_t, γ_t, ξ_{t−1}, h_{t−1}, g_{t−1})
    end for
    update g_t with (α_t, γ_t, h_t, g_{t−1}, h_{t−1})
    t = t + 1
until (obj_{t−1} − obj_t)/obj_t ≤ 1e−4
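The alternating structure of Algorithm 1 can be illustrated with a short sketch. The code below is not the published SAMKL solver: the per-sample switch update and the weight update are crude stand-ins for the margin-maximization steps, the RBF kernels, gamma, C and the iteration limits are assumptions, and the study itself was implemented in R rather than Python.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def samkl_sketch(views, y, gamma=1.0, C=1.0, n_iter=20, tol=1e-4):
    """Very simplified stand-in for Algorithm 1 (not the actual SAMKL solver).

    views : list of (n_samples x d_p) arrays, one block per data type
    y     : labels in {-1, +1}
    """
    y = np.asarray(y)
    n = len(y)
    K = [rbf_kernel(v, gamma=gamma) for v in views]     # base kernels K_p
    h = np.ones((len(K), n), dtype=bool)                # latent per-sample on/off switches
    beta = np.ones(len(K)) / len(K)                     # kernel combination weights
    yyT = np.outer(y, y)
    prev_obj = np.inf
    for _ in range(n_iter):
        # kernel p contributes to entry (i, j) only if switched on for both samples
        Kc = sum(b * Kp * np.outer(hp, hp) for b, Kp, hp in zip(beta, K, h))
        svm = SVC(C=C, kernel="precomputed").fit(Kc, y)
        obj = np.mean(svm.decision_function(Kc) * y)    # crude margin objective
        # switch update: keep kernel p on for sample i only if locally label-aligned
        for p, Kp in enumerate(K):
            h[p] = (Kp @ y) * y > 0
        # weight update from global kernel-target alignment (stand-in for beta_t)
        align = np.array([np.sum(Kp * yyT) / np.linalg.norm(Kp) for Kp in K])
        beta = np.clip(align, 0, None)
        beta = beta / beta.sum() if beta.sum() > 0 else np.ones(len(K)) / len(K)
        if abs(prev_obj - obj) / (abs(obj) + 1e-12) < tol:   # stopping rule of Algorithm 1
            break
        prev_obj = obj
    return svm, beta, h
```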

5 Experimental Analysis
5.1 Experimental Setup
The collected dataset is randomly divided into a training set (80%) and a test set (20%) to analyze our proposed approach thoroughly. Further, 15% of the training data is used for the cross-validation process. The trained model and the selected set of features are then used to predict the outcome on the test data. There are a total of 528 ADC and SCC patients with both gene expression signatures and pathological image slides. We have worked in RStudio with R version 3.5 and the CellProfiler tool for image processing and segmentation. The packages used for the implementation of the proposed approach are earth (MARS), superpc, hmeasure, survival, mboost, and randomForestSRC.
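A minimal sketch of the data split described above (shown in Python rather than the R environment actually used; the random seed and the synthetic placeholder data are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(528, 60))     # 528 patients; the feature count is illustrative
y = rng.integers(0, 2, size=528)   # placeholder labels

# 80/20 train-test split, then 15% of the training data held out for validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.15, random_state=42)
```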

5.2 Performance Metrics Employed
The following parameters are considered in our research (Table 3).
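The four metrics of Table 3 can be computed directly from the confusion-matrix counts; a small sketch follows (the example counts are made up for illustration):

```python
def classification_metrics(tp, tn, fp, fn):
    """Equations (1)-(4): sensitivity, specificity, precision, accuracy."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, precision, accuracy

# example: 40 true positives, 35 true negatives, 10 false positives, 5 false negatives
print(classification_metrics(40, 35, 10, 5))   # (0.888..., 0.777..., 0.8, 0.833...)
```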

6 Results
After applying the proposed framework with the existing survival models, we are able to efficiently predict the survival of lung cancer patients. We also construct


Table 3 Performance parameters used

Parameter | Equation
Sensitivity | Sensitivity = TP / (TP + FN)   (1)
Specificity | Specificity = TN / (TN + FP)   (2)
Precision | Precision = TP / (TP + FP)   (3)
Accuracy | Accuracy = (TP + TN) / (TP + TN + FP + FN)   (4)

independent models for genomic data and pathological data, referred to as SAMKL(G) and SAMKL(P), respectively, and then compare them with the SAMKL(GP) model, which incorporates feature fusion. Training all the models given in Table 4 by repeating the procedure 15–20 times for performance stability, we predict the performance using the metrics discussed earlier in the paper for easy evaluation and visualization. We calculate sensitivity, precision, accuracy, specificity, ROC curves, and AUC values. In the graphs shown, the dark line shows the actual values. We use SAMKL(G) for single-dimensional data, which includes gene expression, CNA, methylation, and protein expression. ROC curves are plotted to compare the predictive performance. We also calculate AUC values for each model given in Table 4 and compare them with all other models. The AUC value for SAMKL trained with only genomic data is 0.814, and the AUC value for SAMKL trained with only the pathological images is 0.742, while after integration the AUC value increases to 0.891, which indicates that integration is of considerable importance in improving the performance. The results are shown in Table 4. We also compare the accuracy, sensitivity, precision, and concordance index values of the base-paper models and our approach, as shown in Table 5. The difference between the models is clearly visible: the values for boostCI, survreg, random forest, and superpc are far lower than those of the proposed framework SAMKL, also when compared to the framework from the previous paper.

Table 4 Comparison of AUC values obtained from all the models

Methods | Genomic | Pathological | Genomic + pathological
RSF | 0.621 ± 0.021 | 0.615 ± 0.043 | 0.624 ± 0.040
Superpc | 0.5960 ± 0.036 | 0.548 ± 0.029 | 0.621 ± 0.053
BoostCI | 0.720 ± 0.022 | 0.637 ± 0.034 | 0.719 ± 0.022
Survreg (cox) | 0.696 ± 0.078 | 0.540 ± 0.067 | 0.654 ± 0.051
SVM | 0.654 ± 0.020 | 0.603 ± 0.021 | 0.6 ± 0.028
BoostedMARS | 0.725 ± 0.031 | 0.712 ± 0.036 | 0.719 ± 0.056
SAMKL(G) | 0.742 ± 0.016 | 0.629 ± 0.072 | 0.768 ± 0.035
SAMKL(P) | 0.668 ± 0.052 | 0.651 ± 0.024 | 0.721 ± 0.042
SAMKL(GP) | 0.842 ± 0.037 | 0.732 ± 0.015 | 0.892 ± 0.077


Table 5 Comparison of accuracy, sensitivity, precision, and c-index obtained from all the models

Methods | Accuracy | Sensitivity | Precision | c-index | Hazard ratio
Base paper:
Pcr | 0.62 | 0.21 | 0.52 | 0.47 | 0.86
Superpc | 0.54 | 0.20 | 0.62 | 0.59 | 0.67
RSF | 0.58 | 0.15 | 0.52 | 0.60 | 0.81
Space | 0.60 | 0.29 | 0.61 | 0.59 | 0.72
Mkl | 0.73 | 0.28 | 0.64 | 0.61 | 0.45
Proposed approach:
Superpc | 0.81 | 0.31 | 0.59 | 0.62 | 0.45
Rsf | 0.79 | 0.0304 | 0.61 | 0.625 | 0.59
SVM | 0.76 | 0.29 | 0.60 | 0.600 | 0.39
SurReg(cox) | 0.75 | 0.30 | 0.61 | 0.631 | 0.41
BoostCI | 0.80 | 0.31 | 0.620 | 0.64 | 0.23
BoostedMARS | 0.82 | 0.32 | 0.625 | 0.620 | 0.20
SAMKL | 0.85 | 0.33 | 0.631 | 0.653 | 0.15

Compared to the other models, the proposed framework improved the results by 2%, 3%, 2%, 5%, and 3% for sensitivity, accuracy, AUC, hazard ratio, and concordance index, respectively, in comparison with the base paper. The ROC curves generated from our proposed framework SAMKL(GP) for each of the models are shown in Figs. 2, 3, 4 and 5, respectively. To show the comparison between the base-paper models and the proposed framework, bar plots for accuracy, precision, and c-index are plotted in Fig. 6, which shows the difference clearly. Along with that, a line plot for the c-index and hazard ratio is also plotted, which clearly shows the achieved performance (Fig. 7).

Fig. 2 ROC curves for boosted MARS


Fig. 3 ROC curves for SVM

Fig. 4 ROC curves for survreg

Fig. 5 ROC curves for boostCI
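For reference, ROC curves and AUC values such as those reported in Table 4 and Figs. 2–5 can be produced as in the following sketch (Python/scikit-learn is used here only for illustration; the study itself was carried out in R, and the placeholder labels and scores are synthetic):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# scores would be the decision values of a fitted model on the held-out test set
rng = np.random.default_rng(1)
y_test = rng.integers(0, 2, 100)
scores = y_test + rng.normal(scale=0.8, size=100)   # synthetic placeholder scores

fpr, tpr, _ = roc_curve(y_test, scores)
print("AUC =", roc_auc_score(y_test, scores))
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], "k--")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.show()
```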

Fig. 6 Bar graph for accuracy, precision, and c-index

Fig. 7 Line graph for hazard ratio and c-index

7 Conclusion
In this paper, we have proposed a new genetic–image data integration framework for lung cancer survival analysis, Sample Adaptive Multiple Kernel Learning (SAMKL), which is an improvement over general MKL. From our implementation of various models on the TCGA lung cancer dataset (ADC), we can conclude that image data along with omics data plays an important role in prediction efficiency. Lung cancer being a widespread and malignant disease, an efficient model that can improve the prediction of survival time is required; as already discussed, lung cancer, if detected at an early stage, can be cured effectively. SAMKL proved to be a success because of its excellent capability of dealing with heterogeneous data, and we were able to extract valuable information from image data that correlated with the occurrence of the disease. Further, we also analyzed the reliability of the framework on a breast cancer dataset as well as a pancreatic dataset, with the genomic dataset alone, with the image dataset alone, and with the fused dataset. The integrated framework proved to be a success throughout. The results will support further research on this topic, thereby benefitting patients and the field of medical science.

References 1. Cryer, A.M., Thorley, A.J.: Nanotechnology in the diagnosis and treatment of lung cancer. Pharmacol. Ther. 198, 189–205 (2019) 2. Lauren, G., Collins, M.D., Haines, C., Perkel, R., Rnck, R.E.: Lung cancer: diagnosis and management. Am. Fam. Phys. 75(1), 56–63 (2007) 3. Deshmukh, S., Shinde, S.: Diagnosis of lung cancer using pruned fuzzy min-max neural network. In: 2016 International Conference on Automatic Control and Dynamic Optimization Techniques (ICACDOT), pp. 398–402. IEEE (2016) 4. Cruz, J.A., Wishart, D.S.: Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2 (2006) 5. David, R.C.: Regression models and life-tables. J. R. Stat. Soc. Ser. B (Methodol.) 34(2), 187–220 (1972)


6. Bair, E., Hastie, T., Paul, D., Tibshirani, R.: Prediction by supervised principal components. J. Am. Stat. Assoc. 101(473), 119–137 (2006) 7. Bair, E., Tibshirani, R.: Semi-supervised methods to predict patient survival from gene expression data. PLoS Biol. 2(4), E108 (2004) 8. Zhu, X., Yao, J., Luo, X., Xiao, G., Xie, Y., Gazdar, A., Huang, J.: Lung cancer survival prediction from pathological images and genetic data—an integration study. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 1173–1176, Apr 2016. IEEE 9. Wienert, S., Heim, D., Saeger, K., Stenzinger, A., Beil, M., Hufnagl, P., Dietel, M., Denkert, C., Klauschen, F.: Detection and segmentation of cell nuclei in virtual microscopy images: a minimum-model approach. Sci. Rep. 2, 503 (2012) 10. Dekker, A., Dehing-Oberije, C., De Ruysscher, D., Lambin, P., Komati, K., Fung, G., Yu, S., Hope, A., De Neve, W., Lievens, Y.: Survival prediction in lung cancer treated with radiotherapy: Bayesian networks vs. support vector machines in handling missing data. In: 2009 International Conference on Machine Learning and Applications, pp. 494–497, Dec 2009. IEEE 11. Bovelstad, H.M., Nygård, S., Borgan, O.: Survival prediction from clinico-genomic models—a comparative study. BMC Bioinform. 10(1), 413 (2009) 12. Toussi, S.A., Yazdi, H.S.: Feature selection in spectral clustering. Int. J. Sig. Process. Image Process. Pattern Recogn. 4(3), 179–194 (2011) 13. Krishnaiah, V., Narsimha, D.G., Chandra, D.N.S.: Diagnosis of lung cancer prediction system using data mining classification techniques. Int. J. Comput. Sci. Inf. Technol. 4(1), 39–45 (2013) 14. Luo, X., Zang, X., Yang, L., Huang, J., Liang, F., Rodriguez-Canales, J., Wistuba, I.I., Gazdar, A., Xie, Y., Xiao, G.: Comprehensive computational pathological image analysis predicts lung cancer prognosis. J. Thorac. Oncol. 12(3), 501–509 (2017) 15. Tang, H., Xiao, G., Behrens, C., Schiller, J., Allen, J., Chow, C.W., Suraokar, M., Corvalan, A., Mao, J., White, M.A., Wistuba, I.I.: A 12-gene set predicts survival benefits from adjuvant chemotherapy in non–small cell lung cancer patients. Clin. Cancer Res. 19(6), 1577–1586 (2013) 16. Oh, J.H., Al-Lozi, R., El Naqa, I.: Application of machine learning techniques for prediction of radiation pneumonitis in lung cancer patients. In: 2009 International Conference on Machine Learning and Applications, pp. 478–483. IEEE (2009) 17. Xie, Y., Wang, X., Story, M.: Statistical methods of background correction for Illumina BeadArray data. Bioinformatics 25(6), 751–757 (2009) 18. Mouli, S.C., Naik, A., Ribeiro, B., Neville, J.: Identifying user survival types via clustering of censored social network data. arXiv preprint arXiv:1703.03401 (2017) 19. Sun, D., Li, A., Tang, B., Wang, M.: Integrating genomic data and pathological images to effectively predict breast cancer clinical outcome. Comput. Methods Programs Biomed. 161, 45–53 (2018)

Optimal Multiple Access Scheme for 5G and Beyond Communication Network Nira and Aasheesh Shukla

Abstract In today's era of communication, the number of mobile users is growing very fast, so efficient use of the channel is required to counter the problem of multiple access of users in the network. In the literature, many multiple access schemes have been suggested to increase the throughput of the wireless communication channel. Apart from traditional multiple access schemes such as frequency division multiple access (FDMA) and time division multiple access (TDMA), some advanced schemes like non-orthogonal multiple access (NOMA), space division multiple access (SDMA), and rate splitting multiple access (RSMA) have also been suggested for 5G and beyond communication networks. In this paper, the recent and advanced multiple access schemes are reviewed and studied for different parameters like network load, complexity, style of design, etc. On the basis of the analysis, it is concluded that RSMA gives better results as compared to other MA schemes for 5G and beyond networks.

1 Introduction
Nowadays, 5G and beyond wireless networks have attracted boundless research interest. Several emerging services demand very high speed and reliability, such as enhanced mobile broadband (eMBB) and ultra-reliable and low-latency communications (URLLC), which rely on 5G and beyond networks [1, 2]. These services require massive connectivity with high system performance, but there are many issues in the design of a general 5G network. Hence, to fulfill these demands, we have to study the different types of multiple access (MA) schemes in order to determine which MA technique resolves all types of demands.


Table 1 Differentiation of NOMA and OMA

Parameter | NOMA | OMA
Throughput | Greater | Lower
Consumption of energy | Greater | Lower
Complexity at receiver side | Greater | Lower

Multiple access means that one channel can be accessed by multiple users. Initially, the technique was orthogonal multiple access (OMA), which can be considered the root of past and present wireless networks; in 2G–3G, TDMA and FDMA were the popular schemes, and in later 3G, code division multiple access became popular [3–6]. Recently, for 4G, orthogonal frequency division multiple access (OFDMA) and interleave division multiple access (IDMA) were introduced. All these schemes rely on the orthogonality of the data among users to reduce interference; however, due to orthogonality, only a limited number of users can be served. The number of users is now increasing very fast, so these OMA techniques are not sufficient for high data rates and future communication. To overcome the limitations of OMA, NOMA is introduced for a large number of users in the system, in which orthogonality between the users is not required [7–10]. There are two domains of NOMA: (1) power-domain and (2) code-domain, in which unique powers and codes, respectively, are given to the individual users. With these techniques, the capacity of the 5G wireless network is extended, but more interference is present and the receiver-side complexity becomes bulky (Table 1). For the single input single output broadcast channel (SISO-BC), the performance of NOMA is good, but for the multiple input single output broadcast channel (MISO-BC), it does not give good performance. Therefore, another MA scheme, SDMA, is introduced; it is a type of MA scheme which uses multiple antennas and reuses the same set of cell frequencies in a given service region. When the number of users increases, it becomes difficult to group the users, and for optimal degrees of freedom the channel state information at the transmitter (CSIT) must be known. So, to overcome all the above limitations of NOMA and SDMA, there is another type of multiple access called RSMA, which is introduced in detail later. Our goal is to collect knowledge about the MA schemes, and we are confident that this collection will help researchers become aware of the MA techniques for 5G and beyond networks [9, 10]. In this paper, we analyze the different types of multiple access schemes for 5G and beyond networks. The rest of this paper is organized as follows: In Sect. 2, NOMA for the 5G wireless network is introduced. In Sect. 3, SDMA is introduced in detail. Section 4 discusses RSMA. In Sect. 5, we compare these three MA schemes. Section 6 concludes this article.


2 Non-orthogonal Multiple Access
The earlier OMA techniques are valid only for a limited number of users; hence, for a larger number of users, the NOMA technique is used. In this technique, superposition coding is used at the transmitter side and successive interference cancellation (SIC) is used at the receiver side [1]. NOMA recognizes two types of strategies, SC-SIC and SC-SIC per group, where SC-SIC considers only one group of users. Therefore, we briefly describe the WSR and SINR of the two main strategies of NOMA one by one, as follows:

2.1 SC-SIC
In this strategy, the precoder and the decoding order play a very important role. To obtain the rate of each user, all decoding orders of the users must be known. Here, the decoding order is denoted by π. The signal-to-interference-plus-noise ratio (SINR) is given by

y_{\pi(i)\to\pi(k)} = \frac{\left|h_{\pi(i)}^{H} P_{\pi(k)}\right|^{2}}{\sum_{j>k,\,j\in K}\left|h_{\pi(i)}^{H} P_{\pi(j)}\right|^{2} + 1}   (1)

The weighted sum-rate (WSR) obtained by SC-SIC is

R_{\text{SC-SIC}}(\pi) = \max_{P} \sum_{k\in K} u_{\pi(k)} R_{\pi(k)}, \quad \text{subject to } R_k \ge R_k^{th}, \; \forall k \in K   (2)

where R_{\pi(k)} = \min_{i\ge k,\, i\in K}\{\log_2(1 + y_{\pi(i)\to\pi(k)})\}.
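A small numerical sketch of Eqs. (1)–(2) for a fixed decoding order is given below; the channel realization, the identity precoders, and the power level are assumptions used only to illustrate the computation:

```python
import numpy as np

def sc_sic_rates(H, P, noise=1.0):
    """Achievable rates under SC-SIC for one decoding order (Eqs. 1-2).

    H : K x Nt channel matrix, row i = h_i^H (users already sorted in decoding order)
    P : Nt x K precoder matrix, column k = precoder of the k-th decoded message
    """
    K = H.shape[0]
    S = np.abs(H @ P) ** 2                 # S[i, k] = |h_i^H p_k|^2
    rates = np.zeros(K)
    for k in range(K):
        interf = S[:, k + 1:].sum(axis=1)  # messages decoded after k remain as interference
        sinr = S[:, k] / (interf + noise)  # SINR of message k at every user i
        rates[k] = np.log2(1.0 + sinr[k:].min())   # message k must be decodable by users i >= k
    return rates

# toy example: 2 users, 2 transmit antennas
rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
P = np.sqrt(5.0) * np.eye(2)               # equal-power identity precoding (assumption)
print(sc_sic_rates(H, P))
```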

2.2 SC-SIC Per Group
This strategy considers more than one group of users. The decoding order for user k is therefore denoted by π_g(k). The SINR is given by

y_{\pi_g(i)\to\pi_g(k)} = \frac{\left|h_{\pi_g(i)}^{H} P_{\pi_g(k)}\right|^{2}}{\sum_{j>k,\,j\in K_g}\left|h_{\pi_g(i)}^{H} P_{\pi_g(j)}\right|^{2} + I_{\pi_g(i)} + 1}   (3)

The WSR obtained by SC-SIC per group is given by

R^{group}_{\text{SC-SIC}}(\pi, G) = \max_{P} \sum_{g\in G}\sum_{k\in K_g} u_{\pi_g(k)} R_{\pi_g(k)}, \quad \text{subject to } R_k \ge R_k^{th}, \; \forall k \in K   (4)

Overall, to obtain the rate region of SC-SIC per group, knowledge of all the decoding orders is required.

3 Space Division Multiple Access
In SDMA, many antennas placed at different positions in space are used to transmit the information; the spatial dimension is exploited, and the signal quality depends on the position of the user relative to the antennas. Suppose there are two users, one near the antenna and the other far away; the near user will obviously receive better signal strength than the far user, which means that space is properly utilized. For the SISO-BC, SC-SIC achieves the channel capacity, but for the multi-antenna broadcast channel (MA-BC) this does not generalize [2]; this is the reason why SC-SIC does not obtain a good channel capacity for the MA-BC. Hence, to overcome the complexity at the transmitter side, a linear precoder is used, which keeps the design simple and is the best solution to reduce the complexity of the transmitter design. In this type of MA scheme, MU-LP is generally realized. The SINR at user k is given by

y_k = \frac{\left|h_k^{H} P_k\right|^{2}}{\sum_{j\neq k,\,j\in K}\left|h_k^{H} P_j\right|^{2} + 1}   (5)

where P_k is the precoding matrix of user k and h_k \in \mathbb{C}^{N_t \times 1} is the channel between the base station and user k. The WSR reached by MU-LP is

R_{\text{MU-LP}} = \max_{P} \sum_{k\in K} u_k R_k, \quad \text{subject to } R_k \ge R_k^{th}, \; \forall k \in K   (6)

where R_k = \log_2(1 + y_k) is the achievable rate of user k, u_k is a non-negative constant which allows resource allotment, and R_k^{th} accounts for any individual rate constraint of user k. It can be concluded that the SDMA scheme benefits from using LP, which reduces the complexity at the transmitter side; RSMA takes this benefit at the transmitter side and uses LP to split the messages in the RSMA scheme.
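Equation (5) translates directly into a few lines of code; the short sketch below (with assumed channels and precoders) only illustrates how MU-LP treats all other streams as noise:

```python
import numpy as np

def mu_lp_rates(H, P, noise=1.0):
    """Rates under multi-user linear precoding (Eq. 5): residual interference is noise."""
    S = np.abs(H @ P) ** 2          # S[k, j] = |h_k^H p_j|^2
    signal = np.diag(S)
    interference = S.sum(axis=1) - signal
    return np.log2(1.0 + signal / (interference + noise))

rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)   # assumed channels
P = np.sqrt(5.0) * np.eye(2)                                                # assumed precoders
print(mu_lp_rates(H, P))
```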


4 Rate Splitting Multiple Access
We have analyzed the advantages and limitations of NOMA and SDMA. NOMA requires massive connectivity but does not efficiently cope with high throughput, while in SDMA, when the number of users increases, the spatial complexity grows and it becomes difficult to manage efficiently, so the scheduler (which groups the users) is exhausted. To overcome these disadvantages of NOMA and SDMA, a new multiple access scheme called RSMA has been reported, which gives quite better results in terms of performance and complexity. It is a simpler and stronger multiple access scheme in which a linear precoder is used at the transmitter side and successive interference cancellation at the receiver side [2]. At the transmitter side, the linear precoder splits each message into two parts: suppose one message is divided into a message S1 and a message S2, in which S1 is called the private message and S2 the common message. Private message (S1): the important or secret part of the message goes into the private message. Common message (S2): the part that is common for all users goes into the common message. In this way, the rate of the message is split and then transmitted, so the common message can be received by all users, but the private message is received only by the corresponding user; the complete message is recovered by the intended user [2]. Hence, RSMA gives a much more attractive solution in terms of performance and complexity. We have seen that the RSMA scheme has a simple structure and low complexity; to show this, we analyze the mathematical model. We now explain RSMA with the help of a system model for two users, in which the message of user 1 is denoted by S1 and the message of user 2 by S2, and the messages of the users are divided into private and common parts. In RSMA, each user initially decodes the common stream D12 and thereby decodes part of the message of the other interfering user encoded in S12; that is why the interference is partially decoded at every user (Fig. 1).

Fig. 1 Block diagram of RSMA approach

The achievable SINR of the common stream is given by

y_k^{12} = \frac{\left|h_k^{H} P_{12}\right|^{2}}{\left|h_k^{H} P_1\right|^{2} + \left|h_k^{H} P_2\right|^{2} + 1}   (7)

The achievable SINR of the private stream D_k at user k is given by

y_k = \frac{\left|h_k^{H} P_k\right|^{2}}{\left|h_k^{H} P_j\right|^{2} + 1}   (8)

Therefore, for two users, the obtained WSR is given by

R_{\text{RS2}}(u) = \max_{P,\,c} \; u_1 R_1 + u_2 R_2, \quad \text{subject to } C_1^{12} + C_2^{12} \le R_{12}, \; R_k \ge R_k^{th}, \; k \in \{1, 2\}, \; C \ge 0   (9)

where C = [C_1^{12}, C_2^{12}] is the common rate. To obtain the maximized WSR, the problem should be properly optimized (Fig. 2). Therefore, with the help of the mathematical model, it is shown that the WSR and SINR of the RSMA scheme achieve good results compared with MU-LP and SC-SIC. The RS scheme has a more simple and flexible structure; we can also say that RSMA is a good MA scheme which smoothly links the two previous MA schemes. One important point is that the derived RS scheme utilizes more layers of SIC at the receiver, increasing the rate performance and the quality of service (QoS). If the number of SIC layers increases, the design becomes complex, so to reduce the complexity RSMA is divided into two parts: one-layer RS and hierarchical RS. In one-layer RS, only one SIC layer is used, and in hierarchical RS, more than one SIC layer is used.
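The two-user rate-splitting computation of Eqs. (7)–(8) can be sketched as follows; the precoders and channel realizations are assumptions, and the common rate is simply taken as the minimum of the two users' common-stream rates rather than obtained from the full WSR optimization of Eq. (9):

```python
import numpy as np

def rs_two_user_rates(h1, h2, p1, p2, p12, noise=1.0):
    """Two-user rate splitting (Eqs. 7-8): common-stream and private-stream rates."""
    def g(h, p):                         # |h^H p|^2
        return np.abs(np.vdot(h, p)) ** 2
    # common stream, decoded first while both private streams act as interference
    sinr_c = [g(h, p12) / (g(h, p1) + g(h, p2) + noise) for h in (h1, h2)]
    R12 = np.log2(1.0 + min(sinr_c))     # common rate limited by the weaker user
    # private streams, decoded after the common part has been removed by SIC
    sinr_1 = g(h1, p1) / (g(h1, p2) + noise)
    sinr_2 = g(h2, p2) / (g(h2, p1) + noise)
    return R12, np.log2(1.0 + sinr_1), np.log2(1.0 + sinr_2)

# toy example with 2 transmit antennas (assumed channels and precoders)
rng = np.random.default_rng(1)
h1 = rng.normal(size=2) + 1j * rng.normal(size=2)
h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
p12, p1, p2 = np.array([1.5, 1.5]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(rs_two_user_rates(h1, h2, p1, p2, p12))
```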

5 Comparison of NOMA, SDMA, and RSMA
In Sects. 2, 3, and 4, we gave the WSR and SINR expressions, and RSMA gives better results. In this section, we compare the three MA schemes on the parameters of network load, complexity, and some other parameters. The basic NOMA strategy studied was SC-SIC, which achieves a larger capacity region for the single input single output broadcast channel (SISO-BC) compared to OMA.


Fig. 2 WSR versus SNR comparison of different strategies [2]

However, the number of users is increasing day by day, so the scheme becomes complex even in the SISO-BC. To address this, another type of strategy, SC-SIC per group, is applied within NOMA [2] (Fig. 3). The benefit of this type of strategy is that it decreases the number of SIC layers at each user. Due to the presence of a single transmit antenna in the SISO-BC, it is more convenient for an overloaded network (the number of transmitting antennas is smaller than the number of registered devices). At present, however, systems work with multiple antennas at a time; that type of MA scheme is called SDMA, in which multi-user linear precoding (MU-LP) is used at the transmitter side. SDMA is more convenient for an underloaded network (the number of transmitting antennas is larger than the number of registered devices). RSMA, in contrast, is more general than SDMA and NOMA because it is convenient for any type of network load (Table 2). One more important parameter is the design principle. In NOMA, the superposition coding–successive interference cancellation (SC-SIC) principle fully decodes the interference, but it does not give good performance. The SC-SIC per group principle, in contrast,


Fig. 3 Achievable energy efficiency region comparison of different schemes [3]

Table 2 Comparison of NOMA, SDMA, and RSMA

Multiple access parameters | NOMA | SDMA | RSMA
Strategy | SC-SIC | MU-LP | One-layer RS and hierarchical layer of RS
Foundation of design | Entirely translate interference | Entirely consider interference as noise | Partly translate interference and partly consider interference as noise
Network load | Valid for overloaded network | Valid for underloaded network | Valid for any type of network load
Spectrally efficient | Lower | Lower | Greater
Energy efficient | Lesser than RSMA | Lesser than RSMA | Equal and more than NOMA and SDMA


fully decodes the interference within each group and treats the interference between groups as noise. We have seen that in NOMA, complexity is a big issue when using multiple antennas because the complexity obviously increases at both the transmitter and the receiver; in addition, SC-SIC is only suited for aligned users with large channel gain differences, for which the scheduling algorithm complexity increases, and so the scheduler complexity grows. In short, if the number of users increases, the scheduler and receiver complexity increase and the SIC incurs more errors.

6 Conclusion
In this article, we have compared three different multiple access schemes: NOMA, SDMA, and RSMA. It is concluded that RSMA is a simpler and stronger multiple access scheme than NOMA and SDMA. We have also observed that RSMA achieves quite good performance in terms of spectral and energy efficiency. Complexity-wise, RSMA is simple, and it is valid for all types of network load. Finally, with reference to this article, we can say that RSMA is a better multiple access scheme compared with the other trending multiple access schemes.

References 1. Dai, L., Wang, B., Yuan, Y., Han, S., Chih-Lin, I., Wang, Z.: Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun. Mag. 53(9), 74–81 (2015) 2. Mao, Y., Clerckx, B., Li, V.O.K.: Rate-splitting multiple access for downlink communication systems: bridging, generalizing, and outperforming SDMA and NOMA. EURASIP J. Wireless Commun. Netw. 2018(1), 133 (2018) 3. Mao, Y., Clerckx, B., Li, V.O.K.: Energy efficiency of rate-splitting multiple access, and performance benefits over SDMA and NOMA. In: 2018 15th International Symposium on Wireless Communication Systems (ISWCS), pp. 1–5. IEEE (2018) 4. Mao, Y., Clerckx, B., Li, V.O.K.: Rate-splitting for multi-antenna non-orthogonal unicast and multicast transmission. In: 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5. IEEE (2018) 5. Hao, C., Clerckx, B.: MISO networks with imperfect CSIT: a topological rate-splitting approach. IEEE Trans. Commun. 65(5), 2164–2179 (2017) 6. Shukla, A., Deolia, V.K.: Performance analysis of modified tent map interleaver in IDMA systems. J. Electr. Eng. 68(4), 318–321 (2017) 7. Joudeh, H., Clerckx, B.: Sum-rate maximization for linearly precoded downlink multiuser MISO systems with partial CSIT: a rate-splitting approach. IEEE Trans. Commun. 64(11), 4847–4861 (2016) 8. Shukla, A., et al.: Cooperative relay beamforming in IDMA communication networks. J. Electr. Eng. 69(4), 300–304 (2018) 9. Shukla, A., Deolia, V.K.: Performance analysis of chaos based interleaver in IDMA system. ICTACT J. Commun. Technol. 7(4) (2016) 10. Shukla, A., Deolia, V.K.: Performance improvement of IDMA scheme using chaotic map interleavers for future radio communication. ICTACT J. Commun. Technol. 8(2) (2017)


11. Dai, M., Clerckx, B., Gesbert, D., Caire, G.: A rate splitting strategy for massive MIMO with imperfect CSIT. IEEE Trans. Wireless Commun. 15(7), 4611–4624 (2016) 12. Shukla, A., Purwar, D., Kumar, D.: Multiple access scheme for future (4G) communication: a comparison survey. Int. J. Comput. Appl. (IJCA) (2011)

Exergy and Energy Analyses of Half Effect–Vapor Compression Cascade Refrigeration System Mihir H. Amin, Hetav M. Naik, Bidhin B. Patel, Prince K. Patel, and Snehal N. Patel

Abstract A novel cascade refrigeration system with a half effect absorption system as the topping cycle and a vapor compression system as the bottoming cycle has been proposed. A parametric analysis has been performed, and the effect of changing different input parameters on the performance of the system has been quantitatively studied. First law and second law analyses have been applied to each component of the proposed system. The total exergy destruction of the entire system has been evaluated using the second law of thermodynamics and exergy equations. The effect of changing the working fluid in the bottoming VCR cycle has been studied, and it is observed that R507A has the least exergy destruction among the chosen working fluids. In addition to this, an optimization using the principle of minimization of total exergy destruction has also been carried out, with the proposed system giving less exergy destruction compared to the conventional VCRS system.


Nomenclature
COP   Coefficient of performance (non-dimensional)
ED    Rate of exergy destruction (kW)
h     Specific enthalpy (kJ kg−1)
m     Mass (kg)
ṁ     Mass flow rate (kg s−1)
P     Pressure (kPa)
Q     Heat transfer rate (kW)
s     Specific entropy (kJ kg−1 K−1)
To    Ambient temperature (K)
W     Work transfer rate (kW)
X     Mass fraction of lithium bromide in solution
Temp  Temperature (°C)
gen   Generator
abs   Absorber
cond  Condenser
evap  Evaporator
p     Pump
V     Valve
HX    Heat exchanger
Eff   Effectiveness
MFR   Mass flow rate
VCR   Vapor compression refrigeration

Greek Word
η     Second law efficiency

Subscripts
1, 2, 3   Represent state points in Fig. 1


1 Introduction
Conventional cooling systems, which operate on vapor compression, have a negative impact on sustainable development because the substances used as refrigerants generate environmental issues. The rapid depletion of conventional fuels has drawn scientists to research new, effective cooling systems which consume less high-grade energy for a given operation. In the past several years, interest in absorption refrigeration systems has continuously increased because these systems use environment-friendly refrigerant–absorbent pairs which do not deplete the ozone layer, do not harm the environment, and support the design of energy-efficient as well as innovative integrated refrigeration systems [1]. Energy conservation analysis is applied using the first law of thermodynamics, but this analysis does not take the losses of the system into consideration. This limitation can be eradicated by a second law analysis applied to the system components, which takes the losses into consideration because the second law accounts for irreversibility during the analysis [2–4]. In 1998, Horuz closely compared NH3–water and water–LiBr mixtures in terms of COP, cooling capacity, and maximum and minimum operating pressure, and summarized that the water–lithium bromide mixture gives better performance and more efficient output. Domínguez-Inzunza et al. compared the performance of different systems and reported that the half effect arrangement works better than the others in the context of a lower generator operating temperature [5, 6]. Gebreslassie et al. [7] also studied different cycles and measured the exergy destruction, without addressing the most advantageous value of the intermediate pressure in the half effect cycle. Arora, Dixit, and Kaushik observed that the maximum COP and maximum second law efficiency reduce as a result of a gain in absorber temperature [8]. Cimsit and Ozturk [9] worked on absorption–compression cascade refrigeration cycles with different combinations of working fluids and analyzed the performance of the cascade system as well as the effect of different temperatures on various equipment; in total, six cascade cycles were analyzed by them. Keeping this viewpoint in mind, a novel system is proposed that has a half effect cycle as the topping cycle and a vapor compression system as the bottoming cycle. The analysis is carried out to find the exergy destruction of each component and then the overall exergy destruction of the entire cycle. Additionally, an optimization is performed with total exergy destruction as the objective function, applying the minimization rule while keeping the desired inputs as independent variables. A parametric analysis is also applied to observe the effects of varying the input variables on the first law and second law efficiencies, and the effect of the working fluid in the VCR system on the efficiencies is also observed. The proposed system has a wide range of applications: large cold storage, warehouses, cryogenics, and many more.


Motivation: Research in refrigeration has been carried out for years, and modification of existing systems is the current trend of research. Novelty: Two completely different cycles working on completely different principles are clubbed together in the present analysis. Contribution: A complete analysis has been carried out to quantitatively obtain results for the proposed system.

2 Methodology
The topping cycle is a half effect absorption refrigeration system which has two desorbers, absorbers, pumps, throttle valves, and heat exchangers. A LiBr–water mixture is used as the operating fluid in the half effect cycle. In each stage, a pump is used to increase the pressure of the mixture, which is preheated by the weak solution coming from the desorber. This high-pressure mixture is heated in the desorber, which separates the water from the LiBr solution. The weak solution is then returned to the absorber after passing through a throttle valve, which reduces its pressure to the absorber pressure. In the absorber, heat is taken out so that the water is absorbed by the LiBr solution. Thus, lithium bromide is the absorbent and water is the refrigerant in the topping cycle. The topping cycle cools the working fluid flowing in the condenser unit of the bottoming vapor compression cooling system. In the vapor compression refrigeration system, the refrigerant used is R507A, which is compressed in the compressor, cooled in the condenser, and then passed through a throttle valve before entering the evaporator. Since the absorption system evaporator temperature cannot go below 278 K (below this temperature water crystallizes), the vapor compression system is used as the bottoming cycle.

Assumptions
1. Steady flow analysis has been applied to each element of the system.
2. Pressure losses during fluid flow in all components are neglected.
3. Changes in kinetic and potential energy are neglected.
4. Saturation conditions are suitably assumed to simplify the analysis.
5. Heat losses and gains due to internal friction, etc., are neglected.

Thermodynamic Analysis: By applying the first and second laws of thermodynamics, the energy and exergy analysis of the absorption system can be done [10]. On this basis, the general fundamental equations are specified below for the individual components (Fig. 1).

Fig. 1 Half effect and VCRS cascade system

Lower Solution Heat Exchanger
\dot{m}_5 = \dot{m}_4   (Mass Balance)   (1)
\dot{m}_3 = \dot{m}_2   (Mass Balance)   (2)
\dot{Q}_{hxl} = \dot{m}_2 (h_3 - h_2)   (Energy Balance)   (3)
\dot{Q}_{hxl} = \dot{m}_4 (h_4 - h_5)   (Energy Balance)   (4)
\dot{Q}_{hxl} = Eff_{HX} \, C_{min1} (T_4 - T_2)   (Heat Transfer Rate)   (5)

Upper Solution Heat Exchanger
\dot{m}_9 = \dot{m}_8   (Mass Balance)   (6)
\dot{m}_{11} = \dot{m}_{10}   (Mass Balance)   (7)
\dot{Q}_{hxu} = \dot{m}_8 (h_9 - h_8)   (Energy Balance)   (8)
\dot{Q}_{hxu} = \dot{m}_{10} (h_{10} - h_{11})   (Energy Balance)   (9)
\dot{Q}_{hxu} = Eff_{HX} \, C_{min2} (T_{10} - T_8)   (Heat Transfer Rate)   (10)

Lower Generator
\dot{m}_3 = \dot{m}_4 + \dot{m}_{17}   (Mass Balance)   (11)
h_3 \dot{m}_3 - h_4 \dot{m}_4 - h_{17} \dot{m}_{17} + \dot{Q}_{ld} = 0   (Energy Balance)   (12)

Upper Generator
\dot{m}_9 = \dot{m}_{13} + \dot{m}_{10}   (Mass Balance)   (13)
h_9 \dot{m}_9 - h_{10} \dot{m}_{10} - h_{13} \dot{m}_{13} + \dot{Q}_{hd} = 0   (Energy Balance)   (14)

Condenser
\dot{m}_{13} = \dot{m}_{14}   (Mass Balance)   (15)
\dot{Q}_c = \dot{m}_{13} (h_{13} - h_{14})   (Rate of Heat Removal by Condenser)   (16)

Valve
\dot{m}_{14} = \dot{m}_{15}   (Mass Balance)   (17)
h_{14} = h_{15}   (Energy Balance)   (18)

Evaporator
\dot{m}_{16} = \dot{m}_{15}   (Mass Balance)   (19)
\dot{Q}_e = \dot{m}_{15} (h_{16} - h_{15})   (Energy Balance)   (20)

Lower Absorber
\dot{m}_{16} + \dot{m}_6 - \dot{m}_1 = checkm1   (Mass Balance is Redundant, Check for Consistency)   (21)
h_6 \dot{m}_6 - h_1 \dot{m}_1 - h_{16} \dot{m}_{16} + \dot{Q}_{la} = 0   (Energy Balance)   (22)

Upper Absorber
\dot{m}_{12} + \dot{m}_{17} - \dot{m}_7 = checkm2   (Mass Balance is Redundant, Check for Consistency)   (23)
h_{12} \dot{m}_{12} - h_7 \dot{m}_7 + h_{17} \dot{m}_{17} - \dot{Q}_{ha} = 0   (Energy Balance)   (24)

Lower Solution Valve
\dot{m}_6 = \dot{m}_5   (Mass Balance)   (25)
h_6 = h_5   (Energy Balance)   (26)

Upper Solution Valve
\dot{m}_{12} = \dot{m}_{11}   (Mass Balance)   (27)
h_{12} = h_{11}   (Energy Balance)   (28)

Lower Pump Calculation
\dot{m}_2 = \dot{m}_1   (Mass Balance)   (29)
h_2 = h_1 + \dot{W}_{p1}/\dot{m}_1   (Energy Balance)   (30)

Upper Pump Calculation
\dot{m}_7 = \dot{m}_8   (Mass Balance)   (31)
h_8 = h_7 + \dot{W}_{p2}/\dot{m}_7   (Energy Balance)   (32)

Cycle Efficiency
COP = \dot{Q}_e / (\dot{Q}_{ld} + \dot{Q}_{hd})   (Coefficient of Performance)   (33)
\dot{Q}_{ld} + \dot{Q}_{hd} + \dot{Q}_e - (\dot{Q}_c + \dot{Q}_{la} + \dot{Q}_{ha}) + \dot{W}_{p1} + \dot{W}_{p2} = CheckEB   (Overall Energy Balance)   (34)

VCRS System as Bottom Cycle in Cascade Cycle
Q_{evcr} = m_{rvcr} (h_{vcra} - h_{vcrd})   (35)
W_{evcr} = m_{rvcr} (h_{vcrb} - h_{vcra})   (36)
COP_{vcr} = Q_{evcr} / W_{evcr}   (37)
COP_{overall} = Q_{evcr} / (W_{evcr} + \dot{Q}_{ld} + \dot{Q}_{hd} + \dot{W}_{p1} + \dot{W}_{p2})   (38)

Exergy Destruction Equations
ED_{TV1} = -\dot{m}_2 T_0 (S_5 - S_6)   (39)
ED_{TV2} = -\dot{m}_{11} T_0 (S_{11} - S_{12})   (40)
ED_{TV3} = -\dot{m}_{15} T_0 (S_{14} - S_{15})   (41)
ED_{TVvcr} = -\dot{m}_3 T_0 (S_{vcra} - S_{vcrb})   (42)
ED_{hxu} = \dot{m}_2 (h_8 - h_9 - T_0 (S_5 - S_6)) + \dot{m}_{10} (h_{10} - h_{11} - T_0 (S_{10} - S_{11}))   (43)
ED_{hxl} = \dot{m}_2 (h_2 - h_3 - T_0 (S_2 - S_3)) + \dot{m}_4 (h_4 - h_5 - T_0 (S_4 - S_5))   (44)
ED_{hxl} = \dot{m}_2 (h_2 - h_3 - T_0 (S_2 - S_3)) + \dot{m}_4 (h_4 - h_5 - T_0 (S_4 - S_5))   (45)
ED_{la} = \dot{m}_{16} (h_{16} - T_0 S_{16}) + \dot{m}_6 (h_6 - T_0 S_6) - \dot{m}_1 (h_1 - T_0 S_1) - \dot{Q}_e (1 - T_0/T_{absSI})   (46)
ED_{ld} = \dot{m}_3 (h_3 - T_0 S_3) + \dot{m}_{17} (h_{17} - T_0 S_{17}) - \dot{m}_4 (h_4 - T_0 S_4) - \dot{Q}_{ld} (1 - T_0/T_{genSI})   (47)
ED_{hd} = \dot{m}_9 (h_9 - T_0 S_9) + \dot{m}_{13} (h_{13} - T_0 S_{13}) - \dot{m}_{10} (h_{10} - T_0 S_{10}) - \dot{Q}_{hd} (1 - T_0/T_{genSI})   (48)
ED_{ha} = \dot{m}_{17} (h_{17} - T_0 S_{17}) + \dot{m}_{12} (h_{12} - T_0 S_{12}) - \dot{m}_7 (h_7 - T_0 S_7) - \dot{Q}_{ha} (1 - T_0/T_{absSI})   (49)
ED_{cond} = \dot{m}_{13} ((h_{13} - h_{14}) - T_0 (S_{13} - S_{14})) - \dot{Q}_c (1 - T_0/T_{condSI})   (50)
ED_{vcrcond} = \dot{m}_{rvcr} (h_{vcr2} - h_{vcr3} - T_0 (S_{vcr2} - S_{vcr3})) + \dot{m}_{15} (h_{15} - h_{16} - T_0 (S_{15} - S_{16}))   (51)
ED_{vcrevap} = \dot{m}_{rvcr} (h_{vcr4} - h_{vcr1} - T_0 (S_{vcr4} - S_{vcr1})) + \dot{Q}_{evcr} (1 - T_0/T_{evcrSI})   (52)
ED_{total} = ED_{pump1} + ED_{pump2} + ED_{vcr} + ED_{hxu} + ED_{hxl} + ED_{la} + ED_{ld} + ED_{hd} + ED_{ha} + ED_{condensor} + ED_{vcrcond} + ED_{vcrevap}   (53)
\eta_{exergy} = \dot{Q}_{evcr} (1 - T_0/T_{evcrSI}) / (\dot{Q}_{ld} (1 - T_0/T_{genSI}) + \dot{Q}_{hd} (1 - T_0/T_{genSI}) + \dot{W}_{p1} + \dot{W}_{p2} + W_{evcr})   (54)

3 Results and Discussion
To perform the calculations of the equations mentioned above, a code has been written in the EES software. In order to verify the present analysis, the first law results of the half effect absorption system are validated in Fig. 2, which shows that the error is within the acceptable range [11].

Fig. 2 Validation of present work (high pressure generator 58.2 °C, condenser 33 °C, evaporator 10 °C; COP of the present work 0.437 against 0.438 in [12], an error of 0.22%)

Table 1 conveys the effect of generator temperature on the coefficient of performance, exergy destruction, and second law efficiency of the half effect–vapor compression cascade refrigeration system. Raising the generator temperature is not fruitful for our system, as it leads to a decrement of the COP and exergetic efficiency and an increment in exergy destruction. The half effect system works better than the single effect, double effect in series, and inverse absorption cooling systems in the context of a lower generator operating temperature, equal to 58 °C. At this temperature, we obtained an overall COP of 0.2578, an exergetic efficiency of 0.5991, and an exergy destruction of 48.16 kW, which gives a better and more efficient output for our proposed system. The effect of variation in the absorber temperature on the COP, second law efficiency, and exergy destruction is given in Table 2. Looking attentively at the various absorber temperatures, a reduction in temperature affects the system negatively by depleting its COP and exergetic efficiency, and it makes the exergy destruction rise, which is hazardous for the system. Considering the results, the most suitable absorber temperature on which one can rely for the proposed system is 33 °C, which gives an overall COP of 0.2526 and an exergy destruction of 49.03 kW.
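As a minimal illustration of how the cycle-level figures reported in the following tables follow from Eqs. (33), (37), (38), and (54), a short sketch is given below. It is not the EES code of the study: the split of the total work input into compressor and pump work and the reference temperatures are assumptions, and the magnitude of the evaporator exergy term is taken because the VCR evaporator operates below the ambient temperature.

```python
def cascade_performance(Q_e, Q_ld, Q_hd, Q_evcr, W_evcr, W_p1, W_p2, T0, T_gen, T_evcr):
    """First- and second-law performance of the cascade (Eqs. 33, 37, 38 and 54).

    Heat duties and work in kW, temperatures in K. The component-level exergy
    destruction terms of Eqs. 39-53 need the individual state-point enthalpies
    and entropies and are therefore not repeated here.
    """
    cop_absorption = Q_e / (Q_ld + Q_hd)                    # Eq. 33
    cop_vcr = Q_evcr / W_evcr                               # Eq. 37
    W_total = W_evcr + W_p1 + W_p2
    cop_overall = Q_evcr / (W_total + Q_ld + Q_hd)          # Eq. 38
    # Eq. 54; the magnitude of (1 - T0/T_evcr) is used since T_evcr < T0
    eta_exergy = (Q_evcr * abs(1 - T0 / T_evcr)
                  / ((Q_ld + Q_hd) * (1 - T0 / T_gen) + W_total))
    return cop_absorption, cop_vcr, cop_overall, eta_exergy

# illustrative duties taken from Table 1 at a generator temperature of 58 degC;
# the split of the 66.7 kW total work into W_evcr, W_p1, W_p2 is an assumption
print(cascade_performance(Q_e=248.5, Q_ld=303.8, Q_hd=286.5, Q_evcr=169.4,
                          W_evcr=65.0, W_p1=0.9, W_p2=0.8,
                          T0=298.15, T_gen=331.15, T_evcr=233.15))
```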

Table 1 Result of variation in generator temperature

Generator temp (°C) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
58 | 0.423 | 0.2578 | 2.538 | 0.5991 | 48.16 | 66.73368 | 256.6 | 248.5 | 285.7 | 286.5 | 21.55 | 20.55 | 296.5 | 303.8 | 169.4
59 | 0.423 | 0.2341 | 1.966 | 0.4882 | 52.87 | 88.16368 | 277.8 | 269 | 308.3 | 309.3 | 22.02 | 20.9 | 320.2 | 328.1 | 169.4
60 | 0.423 | 0.2152 | 1.619 | 0.4142 | 58.41 | 104.6037 | 297.9 | 288.4 | 329.9 | 331 | 22.49 | 21.25 | 342.9 | 351.3 | 169.4
61 | 0.422 | 0.1998 | 1.385 | 0.3621 | 64.65 | 122.3037 | 317.1 | 307 | 350.6 | 351.8 | 22.94 | 21.58 | 364.7 | 373.6 | 169.4
62 | 0.421 | 0.187 | 1.217 | 0.3213 | 71.46 | 139.1037 | 335.4 | 324.7 | 370.5 | 371.8 | 23.39 | 21.91 | 385.6 | 395 | 169.4


Table 2 Effect of variation in absorber temperature

Absorber temp (°C) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
29 | 0.432 | 0.1527 | 0.835 | 0.2424 | 94.66 | 202.9038 | 403.7 | 391.8 | 438.6 | 440.8 | 23.09 | 21.87 | 455.8 | 465.4 | 169.4
30 | 0.43 | 0.1702 | 1.006 | 0.2853 | 79.32 | 168.4038 | 366.5 | 355.5 | 400.1 | 401.9 | 22.8 | 21.63 | 415.6 | 424.8 | 169.4
31 | 0.428 | 0.1916 | 1.256 | 0.3445 | 66.58 | 134.9037 | 330.4 | 320.3 | 362.6 | 364 | 22.46 | 21.34 | 376.6 | 385.2 | 169.4
32 | 0.425 | 0.2183 | 1.655 | 0.4317 | 56.47 | 102.4037 | 295.2 | 286 | 326 | 327.1 | 22.07 | 21.01 | 338.5 | 346.5 | 169.4
33 | 0.421 | 0.2526 | 2.396 | 0.5727 | 49.03 | 70.70368 | 260.9 | 252.8 | 290.3 | 291.1 | 21.64 | 20.62 | 301.3 | 308.7 | 169.4


Table 3 conveys the effect of the condenser temperature on the COP, exergetic efficiency, and total exergy destruction. The outcome of the analysis for the condenser is that a reduction in temperature lowers the coefficient of performance and the total second law efficiency of the system and raises the exergy destruction. By analyzing these parameters for the condenser, it can be concluded that the higher temperature (40 °C) is prominent for our system; from the parametric analysis, we noticed a COP of 0.421, an exergetic efficiency of 0.5727, and a total exergy destruction of 49.03 kW. Table 4 conveys the effect of the evaporator temperature on the COP, exergetic efficiency, and total exergy destruction. Analyzing the parameters for the evaporator, a rise in temperature makes the COP and exergetic efficiency lower and the exergy destruction higher, so it can be concluded that a high evaporator temperature weakens our system; according to the parametric analysis, the overall COP is 0.2674, the exergetic efficiency 0.6491, and the total exergy destruction 47.11 kW. Table 5 shows the effect of the solution heat exchangers' effectiveness: to reduce the exergy destruction for the betterment of the system, the effectiveness should be increased, which leads to a rise in the COP and second law efficiency of the system; 0.54 is the appropriate value of solution heat exchanger effectiveness, at which we obtained an overall COP of 0.2539, an exergetic efficiency of 0.5739, and a total exergy destruction of 48.77 kW. As shown in Table 6, it is observed that gentle rises in the mass flow rate provided at state 1 by pump 1 (ṁ1) make a minor decrement in the COP and exergetic efficiency of the system and lead to higher exergy destruction, which is not good for the system; here an overall COP of 0.259, an exergetic efficiency of 0.6049, and a total exergy destruction of 48.05 kW can be noticed. As shown in Table 7, the result for the mass flow rate provided at state 7 by pump 2 is the same as for the mass flow rate at state 1: as it rises, the COP and efficiency of the system decrease and the exergy destruction increases; from the parametric analysis, we obtained an overall COP of 0.2584, an exergetic efficiency of 0.6019, and a total exergy destruction of 48.1 kW. Table 8 shows the optimized values suggested for the given parameters for the betterment of our system. As shown in Table 9, the detailed exergy analysis of different working fluids in the VCR system makes the comparison of the fluids for the improvement of the system easier through parameters like the COP, exergetic efficiency, and exergy destruction. A similar exergy and energy analysis was applied to a simple VCR cycle with R507A as the working fluid. It is observed that the simple VCRS has 121.7 kW of total exergy destruction and a second law efficiency of 0.3209 for the same range of evaporator and condenser temperatures, whereas the present proposed system has 49.03 kW of total exergy

Table 3 Effect of variation in condenser temperature

Condenser temp (°C) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
36 | 0.427 | 0.2179 | 1.632 | 0.4227 | 58.08 | 103.8028 | 296.3 | 287.5 | 326.7 | 327.5 | 21.03 | 19.37 | 338.1 | 346 | 169.4
37 | 0.426 | 0.2253 | 1.767 | 0.4553 | 55.58 | 95.58299 | 287.9 | 279.2 | 317.9 | 318.8 | 21.17 | 19.67 | 329.3 | 337.1 | 169.4
38 | 0.424 | 0.2335 | 1.931 | 0.4877 | 53.23 | 87.70321 | 279.2 | 270.6 | 308.9 | 309.8 | 21.32 | 19.97 | 320.3 | 328 | 169.4
39 | 0.423 | 0.2425 | 2.135 | 0.5262 | 51.04 | 79.33344 | 270.2 | 261.8 | 299.8 | 300.6 | 21.48 | 20.29 | 310.9 | 318.5 | 169.4
40 | 0.421 | 0.2526 | 2.396 | 0.5727 | 49.03 | 70.70368 | 260.9 | 252.7 | 290.3 | 291.1 | 21.64 | 20.62 | 301.3 | 308.7 | 169.4


Table 4 Effect of variation in evaporator temperature

Evaporator temp (°C) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
9.5 | 0.419 | 0.2674 | 2.859 | 0.6491 | 47.11 | 59.62369 | 250.2 | 242.2 | 279.5 | 280.3 | 21.72 | 20.76 | 290.2 | 297.5 | 170.5
10 | 0.421 | 0.2523 | 2.396 | 0.5727 | 49.03 | 70.70368 | 260.9 | 252.8 | 290.3 | 291.1 | 21.64 | 20.62 | 301.3 | 308.7 | 169.4
10.5 | 0.423 | 0.2389 | 2.053 | 0.5108 | 51.26 | 81.98366 | 271.9 | 263.5 | 301 | 302.2 | 21.56 | 20.49 | 312.6 | 320.1 | 168.3
11 | 0.425 | 0.2264 | 1.789 | 0.4596 | 53.8 | 93.48365 | 283 | 274.4 | 312.6 | 313.5 | 21.48 | 20.36 | 324 | 331.7 | 167.2
11.5 | 0.427 | 0.2147 | 1.579 | 0.4165 | 56.66 | 105.2037 | 294.4 | 285.6 | 324 | 325.1 | 21.4 | 20.22 | 335.7 | 343.5 | 166.1


Table 5 Effect of variation in solution heat exchangers effectiveness

Eff of HX | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
0.5 | 0.421 | 0.2526 | 2.396 | 0.5727 | 49.03 | 70.70368 | 260.9 | 252.7 | 290.3 | 291.1 | 21.64 | 20.62 | 301.3 | 308.7 | 169.4
0.51 | 0.422 | 0.2529 | 2.396 | 0.573 | 48.96 | 70.70368 | 260.9 | 252.7 | 289.9 | 290.7 | 22.07 | 21.04 | 300.9 | 308.3 | 169.4
0.52 | 0.423 | 0.2635 | 2.396 | 0.5736 | 48.84 | 70.70368 | 260.9 | 252.7 | 289.1 | 289.9 | 22.94 | 21.86 | 300 | 307.4 | 169.4
0.53 | 0.423 | 0.2635 | 2.396 | 0.5736 | 48.84 | 70.70368 | 260.9 | 252.7 | 289.1 | 289.9 | 22.94 | 21.86 | 300 | 307.4 | 169.4
0.54 | 0.424 | 0.2539 | 2.396 | 0.5739 | 48.77 | 70.70368 | 260.9 | 252.7 | 288.7 | 289.5 | 23.37 | 22.27 | 299.6 | 307 | 169.4


Table 6 Effect of variation in mass flow rate provided at state 1 with pump 1

MFR ṁ1 (kg/s) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
0.963 | 0.421 | 0.259 | 2.572 | 0.6049 | 48.05 | 65.86368 | 255.7 | 247.6 | 284.7 | 285.5 | 21.43 | 20.57 | 295.3 | 302.6 | 169.4
0.964 | 0.421 | 0.2587 | 2.565 | 0.6037 | 48.07 | 66.03368 | 255.9 | 247.8 | 284.9 | 285.7 | 21.45 | 20.56 | 295.6 | 302.8 | 169.4
0.965 | 0.421 | 0.2585 | 2.558 | 0.6026 | 48.1 | 66.21368 | 256.1 | 248 | 285.1 | 285.9 | 21.48 | 20.56 | 295.8 | 303.1 | 169.4
0.966 | 0.421 | 0.2583 | 2.551 | 0.6041 | 48.12 | 66.38368 | 256.2 | 248.2 | 285.3 | 286.1 | 21.5 | 20.56 | 296 | 303.3 | 169.4
0.967 | 0.421 | 0.258 | 2.545 | 0.6003 | 48.14 | 66.56368 | 256.4 | 248.4 | 285.5 | 286.3 | 21.52 | 20.56 | 296.2 | 303.5 | 169.4


Table 7 Effect of variation in mass flow rate provided at state 7 with pump 2

MFR ṁ7 (kg/s) | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Wtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW)
0.802 | 0.421 | 0.2584 | 2.554 | 0.6019 | 48.1 | 66.32366 | 256.2 | 248.1 | 285.1 | 285.9 | 21.56 | 20.41 | 296 | 303.3 | 169.4
0.803 | 0.421 | 0.2583 | 2.551 | 0.6013 | 48.11 | 66.41367 | 256.3 | 248.2 | 285.2 | 286 | 21.55 | 20.44 | 296.1 | 303.4 | 169.4
0.804 | 0.421 | 0.2582 | 2.547 | 0.6008 | 48.13 | 66.49367 | 256.4 | 248.3 | 285.3 | 286.1 | 21.55 | 20.47 | 296.2 | 303.5 | 169.4
0.805 | 0.421 | 0.258 | 2.544 | 0.6002 | 48.14 | 66.57367 | 256.4 | 248.4 | 285.5 | 286.3 | 21.55 | 20.5 | 296.3 | 303.6 | 169.4
0.806 | 0.421 | 0.2579 | 2.541 | 0.5997 | 48.15 | 66.65368 | 256.5 | 248.5 | 285.6 | 286.4 | 21.55 | 20.53 | 296.4 | 303.7 | 169.4


Table 8 Parameters of optimized value

Parameter | Lower | Upper | Optimum value
Eff_HX | 0.5 | 0.8 | 0.8
m_dot [1] | 0.95 | 0.98 | 0.95
m_dot [7] | 0.8 | 0.85 | 0.8
m_rvcr | 1.2 | 1.5 | 1.2
T_abs | 30 | 33 | 33
T_cond | 35 | 41 | 41
T_evap | 8 | 15 | 8
T_evcr | −50 | −40 | −40
T_gen | 58 | 63 | 58

destruction and an exergetic efficiency of 0.5727. This shows that the proposed system has lower losses compared to the simple VCR system. After all the analyses, it is concluded that R507A is better than R502 and R404A in terms of the coefficient of performance, second law efficiency, and total exergy destruction [12].

4 Conclusion
First and second law analyses of thermodynamics have been applied to the proposed cascade of a half effect system and a vapor compression system. A quantitative analysis has been performed, and the effect of varying the various input parameters supplied to the system on the outputs has been observed. The proposed system has been compared theoretically with a simple vapor compression refrigeration system operating between the same evaporator and condenser working limits. We changed the working fluid in the bottoming VCRS cycle and observed that R507A had the least exergy destruction among the chosen working fluids, and we also carried out an optimization using the principle of minimization of total exergy destruction. It was observed that the present system has lower total exergy destruction compared to a simple vapor compression refrigeration system, which gives merit to the analysis performed. Also, suitable validation has been done to ascertain the correctness of the present analysis.

Table 9 Analysis of different working fluid

Working fluid | COP | COP (overall) | COP (VCR) | η (exergy) | EDtotal (kW) | Qc (kW) | Qe (kW) | Qha (kW) | Qhd (kW) | Qhxl (kW) | Qhxu (kW) | Qla (kW) | Qld (kW) | Qevcr (kW) | Wtotal (kW)
R507A | 0.433 | 0.2912 | 3.423 | 0.6765 | 28.75 | 205.8 | 198.8 | 221.5 | 222 | 35.62 | 33.86 | 230.7 | 237.2 | 146.2 | 42.70398
R404A | 0.421 | 0.2671 | 2.814 | 0.6452 | 49.3 | 256.6 | 248.5 | 285.7 | 286.5 | 21.55 | 20.55 | 296.5 | 303.8 | 174.2 | 61.90368
R502 | 0.421 | 0.2377 | 2.051 | 0.5109 | 45.07 | 256.6 | 248.5 | 285.7 | 286.5 | 21.55 | 20.55 | 296.5 | 303.8 | 158.7 | 77.39368


References 1. Kaynakli, O., Kilic, M.: Theoretical study on the effect of operating conditions on performance of absorption refrigeration system. Energy Convers. Manag. 48(2), 599–607 (2007) 2. Kaushik, S.C., Arora, A.: Energy and exergy analysis of single effect and series flow double effect water–lithium bromide absorption refrigeration systems. Int. J. Refrig. 32(6), 1247–1258 (2009) 3. Kilic, M., Kaynakli, O.: Second law-based thermodynamic analysis of water-lithium bromide absorption refrigeration system. Energy 32(8), 1505–1512 (2007) 4. Horuz, I.: A comparison between ammonia-water and water-lithium bromide solutions in vapor absorption refrigeration systems. Int. Commun. Heat Mass Transfer 25(5), 711–721 (1998) 5. Domínguez-Inzunza, L.A., Sandoval-Reyes, M., Hernández-Magallanes, J.A., Rivera, W.: Comparison of the performance of single effect, half effect, double effect in series and inverse absorption cooling systems operating with the mixture H2 O-LiBr. Energy Procedia 57, 2534–2543 (2014) 6. Domínguez-Inzunza, L.A., Hernández-Magallanes, J.A., Sandoval-Reyes, M., Rivera, W.: Comparison of the performance of single-effect, half-effect, double-effect in series and inverse and triple-effect absorption cooling systems operating with the NH3 –LiNO3 mixture. Appl. Therm. Eng. 66(1–2), 612–620 (2014) 7. Gebreslassie, B.H., Medrano, M., Boer, D.: Exergy analysis of multi-effect water–LiBr absorption systems: from half to triple effect. Renew. Energy 35(8), 1773–1782 (2010) 8. Arora, A., Dixit, M., Kaushik, S.C.: Energy and exergy analysis of a double effect parallel flow LiBr/H2 O absorption refrigeration system. J. Therm. Eng. 2(1), 541–549 (2016) 9. Cimsit, C., Ozturk, I.T.: Analysis of compression–absorption cascade refrigeration cycles. Appl. Therm. Eng. 40, 311–317 (2012) 10. Jain, V., Sachdeva, G., Kachhwaha, S.S.: Energy, exergy, economic and environmental (4E) analyses based comparative performance study and optimization of vapor compressionabsorption integrated refrigeration system. Energy 91, 816–832 (2015) 11. Maryami, R., Dehghan, A.A.: An exergy based comparative study between LiBr/water absorption refrigeration systems from half effect to triple effect. Appl. Therm. Eng. 124, 103–123 (2017) 12. Arora, A., Kaushik, S.C.: Theoretical analysis of a vapour compression refrigeration system with R502, R404A and R507A. Int. J. Refrig. 31(6), 998–1005 (2008)

Effect of Servicescape and Nourishment Quality on Client’s Loyalty in Fine Dining Restaurants: A Statistical Investigation Aravind Kumar Rai, Ashish Kumar, Pradeep Singh Chahar, and C. Anirvinna Abstract The determination of this study was to find out the impact of servicescape and nourishment quality on customer’s loyalty in fine dine restaurants. A total of 435 paying customers having age 18–60 years from fine dining restaurants of Jaipur were selected by using convenience sampling for this cross-sectional study. The data related to customer loyalty, servicescape, and nourishment quality was collected by using a self-structured questionnaire. A regression model is developed using generalized linear model technique due to the presence of heteroscedasticity in the data. All the parameters used in this study have been tested using appropriate statistical test. In continuation with analysis, t-test results revealed that there is no significant difference between the service, loyalty, and nourishment quality in context to gender, there is a significant difference among the customer’s marital status in context to nourishment quality while insignificant differences found in context to loyalty and service given to customers, an insignificant difference among the customer’s occupation in context to nourishment quality, service, and loyalty, significant difference among the various house hold size group in context to nourishment quality while insignificant differences were found in case of service and loyalty, on the other side, ANOVA results revealed an insignificant difference among the various education group in context to nourishment quality, service, and loyalty, significant difference among A. K. Rai (B) Department of Hotel Management, Manipal University Jaipur, Jaipur, Rajasthan 303007, India e-mail: [email protected] A. Kumar Department of Mathematics and Statistics, Manipal University Jaipur, Jaipur, Rajasthan 303007, India e-mail: [email protected] P. S. Chahar Department of Physical Education, Banaras Hindu University, Varanasi 221005, India e-mail: [email protected] C. Anirvinna TAPMI School of Business, Manipal University Jaipur, Jaipur, India e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_7


the various house hold size group in context to nourishment quality, insignificant differences were found in case of service and loyalty, significant difference among the various income group in context to nourishment quality and service, insignificant differences were found in case of loyalty, significant difference in the service and nourishment quality in context to international exposure while insignificant differences were found in case of loyalty, significant difference in the loyalty in context to customer’s frequency while insignificant differences were found in case of service and nourishment quality. Hence, it is concluded that servicescape and nourishment quality have a great impact on customer’s loyalty specially in fine dining restaurants.

1 Introduction In today’s focused commercial center, eatery clients have a plenty of café decisions. So as to make due in this condition, restaurateurs need to rehearse a solid client-driven direction and fulfill customer’s needs more viably than the challenge [1]. Eating out is viewed as additional as an encounter today, a family amusement, a method for associating in the network where individuals get together in a predetermined climate, while appreciating tasty treats. As it were, purchasers look for an “eating background” which incorporates delicious nourishment, great feeling, excitement, and brisk help [2]. A Fine Eat eatery is either strength or a multi-cooking café with a solid spotlight on quality fixings, introduction, and faultless assistance. The market lays to a great extent on the princely buyer who has the readiness to enjoy and encounter intriguing nourishments. Fine Eat is a specialty portion of the nourishment administrations advertise, with few contending players, as these outlets are constrained to featured lodgings/boutique cafés. In general, the Indian Fine Eat market is in its earliest stages and is developing with expanded buyer experimentation and rising ability to spend on quality nourishment and experience. The section’s player is moving to versatile plans of action with solid back-end activities concentrating on high-caliber, crisp, and natural fixings and on growing new kitchen procedures to improve productivity [3]. Servicescape speaks to the totality of the mood and physical condition at the administration conveyance locales, these improvements pull in client’s eyeballs in any case preceding the genuine purchasing, and subsequently nowadays, servicescape is drawing the consideration of administration, advertisers, and analysts. As indicated by Bitner [4], the style and presence of physical surroundings joined with substantial items at administration conveyance destinations encourage execution or correspondence of the administration. Levitt [5] noticed that when clients assess immaterial items (administrations), they generally depend somewhat on both appearance and outer impression to make decisions and assessments with respect to support utilization circumstances. Administrations being immaterial, the imminent clients face high vulnerability with respect to administration highlights and results, constraining regularly to depend on substantial signs. Servicescape which is


wealthy in such unmistakable signals thusly is persuasive in conveying the association’s picture and accordingly forming the administration desires and encounters [6]. Shashikala and Suresh [7] led an investigation to examine the effect of servicescape on client saw an incentive in fancy cafés. They presumed that servicescape basically assumes a basic job in making and upgrading client esteem. Further, surrounding elements, cleanliness, and tasteful elements saw as the significant worth drivers in the feasting situations. Adzoyi and Klutse [8] have done a cross-sectional examination to research client discernments toward lodging administration condition in setting to client’s degree of fulfillment and dependability practices. Their investigation uncovered high scores for gear, smell, and lighting components of the administration condition, however, scored low on music, outfitting, temperature, tidiness, and client assistance. A solid connection between consumer loyalty and client unwaveringness was likewise settled. Ellen and Zhang [9] considered that how organization café servicescape impacted supporters’ passionate states and social goals. The examination was led on 149 visitors of organization café of a Dutch legislative association, The Hague, and found that the visitors’ view of the organization eatery servicescape affected their enthusiastic states (delight and excitement) and through these feelings, their social expectations. They inferred that delight significantly affected conduct goals. In the nourishment administration industry, there are high in rivalry among fancy eatery and other nourishment administration classifications. To accomplish steadfast client and rehash buys, consumer loyalty ought to be the significant goal to be achievers in business [10]. As per concentrate done by Sulek and Hensley [10], rather than physical setting and administration quality, nourishment quality is the one of significant huge indicators of consumer loyalty despite the fact, that recurrent goal shows just 17%. This is expected that nourishment winds up one of the fundamental components of the café experience, and there is no waver that the nourishment in any event majorly affects consumer loyalty just as return support [11]. With that, café ventures today were confronting a basic test to give quality nourishment which is not just enrapturing the clients yet in addition can be more prominent to business contenders. Client discernment about the café must know about the administration measurement of nourishment quality which has a causal relationship to consumer loyalty. Rozekhi et al. [12] directed an examination to explore the impact of nourishment quality toward consumer loyalty in top-notch cafés. Their investigation additionally endeavors to investigate the connection between nourishment quality and consumer loyalty. The aftereffect of the examination uncovered that general nourishment quality properties influence fundamentally toward consumer loyalty, and relapse investigations exhibited that freshness and assortment of nourishments are the two most affecting traits that impact consumer loyalty in high end eatery which be strikingly significant for restaurateurs. The primary elements of the nourishment administration quality have been considered by Ko and Su [13] who recognized two classes of measurements as related with clients and items. The merchandise classification contributes in wellbeing, cleanliness, culinary expressions, and item character. The customer classification contained


help quality, showcasing, and advancement and condition. The impact of nourishment quality on purchaser purchasing conduct has been explored by Ryu et al. [14], in which they have discovered that client seen qualities are demonstrated by nourishment quality and that these apparent qualities rely upon nourishment execution. Subsequently, the extent of nourishment quality has been featured as a proportion of buyer fulfillment inside café advertise introduction. Therefore, this study is conducted to find out the effect of servicescape and nourishment quality on client’s loyalty in fine dining restaurants.

2 Methodology A total of 435 customers having age 18–60 years from two fine dining restaurants of Jaipur having similarity in the various aspects of operations were selected for the present study by using convenience sampling. The data related to customer loyalty, servicescape, and nourishment quality was collected by using a self-structured questionnaire. The questionnaire so prepared includes 61 items (with a seven-point scale, 1 = not important to 7 = very important) that were identified by the investigator as vital features based on the previous works review. On the location of fine dining restaurant, investigator’s representative has been contacted customers and collected the responses by giving them a self-prepared questionnaire and their opinion about servicescape of the restaurant and food nourishment were recorded. The pre-chosen information gathering days secured different days of the four-week study period to stay away from conceivable inclination emerging from tasks crosswise over various days of the week. Three days from every one-off a month were haphazardly chosen for survey appropriation, two of the days were arbitrarily chosen from Monday through Thursday and one day from Friday and Saturday in every week. This technique was accepted to give a similar example characteristic of a typical top-notch food business week while diminishing mediation to the example café’s business tasks. To fulfill the objectives of the present study descriptive statistics, generalized linear model, independent t-test, and analysis of variance as a statistical technique were employed.
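A minimal sketch of the day-selection scheme described above (two weekdays drawn from Monday–Thursday and one day from Friday–Saturday, chosen at random for each of the four study weeks); the seed and week labels are illustrative only:

```python
import random

random.seed(7)  # illustrative seed, only to make the example reproducible
weekdays = ["Mon", "Tue", "Wed", "Thu"]
weekend_days = ["Fri", "Sat"]

schedule = {}
for week in range(1, 5):  # four-week study period
    picked = random.sample(weekdays, 2) + random.sample(weekend_days, 1)
    schedule[f"Week {week}"] = picked

for week, days in schedule.items():
    print(week, days)
```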

3 Results After collecting the pre-chosen information, the data was undergone through cleaning process and proper remedial measures were applied for rectification of missing values. The reliability and KMO value of the questionnaire used in this study are 0.978 and 0.974, respectively, which shows the questionnaire is reliable and valid for this study. The data was found normal using appropriate normality test. No multicollinearity was present in the data, but the data is heteroscedastic. So, considering heteroscedasticity in the data, generalized linear model is applied. To find out the effect of various demographic variables like age, gender, marital status, income, etc.,


Table 1 Generalized linear model for predicting the customers' loyalty based on servicescape and nourishment quality

Source of variation      Sum of squares   df    Mean sum of square   F-value   p-value
Corrected model          91,958.432a      327   281.218              8.862     0.000
Intercept                4133.062         1     4133.062             130.242   0.000
Gender                   39.443           1     39.443               1.243     0.267
Age                      44.344           1     44.344               1.397     0.240
Marital                  8.607            1     8.607                0.271     0.604
Education                446.440          1     446.440              14.068    0.000
Occupation               61.655           1     61.655               1.943     0.166
Household size           86.268           1     86.268               2.718     0.102
Income                   57.297           1     57.297               1.806     0.182
Int_Exposure             300.197          1     300.197              9.460     0.003
Frequency                15.596           1     15.596               0.491     0.485
Selection                68.098           1     68.098               2.146     0.146
Service                  39,074.543       98    398.720              12.565    0.000
FoodQuality              4307.348         41    105.057              3.311     0.000
Service * FoodQuality    14,142.229       178   79.451               2.504     0.000
Error                    3363.773         106   31.734
Total                    1,438,819.000    434
Corrected total          95,322.205       433

a R squared = 0.965 (adjusted R squared = 0.856)

on selected factors, independent t-test and analysis of variance were used. The results are appended below. Table 1 revealed the generalized linear model for customer’s loyalty based on servicescape and nourishment quality of fine dining restaurants. The model so developed is significant as the p-value is less than 0.05. The developed model is Loyalty = 4133.062 + 446.440 ∗ Education + 300.197 ∗ International Exposure + 39,074.543 ∗ Service + 4307.348 ∗ Nourishment Quality + 14,142.23 ∗ Service ∗ Nourishment Quality The R squared value of the model is 0.965 along with significantly contributing factors education, international exposure, servicescape, nourishment quality, and contrast of service and nourishment quality. Table 2 revealed that there is no momentous difference between the servicescape, loyalty, and nourishment quality in context to gender as the p-value is more than 0.05 level. It means that the gender is equally affected by the nourishment quality, service, and loyalty.
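A model of this shape can be sketched with statsmodels; the file name and the column names below (loyalty, education, int_exposure, service, food_quality) are hypothetical placeholders for the survey variables, and the paper's exact estimation settings, including how heteroscedasticity was handled, may differ from this outline:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey file; one row per respondent.
df = pd.read_csv("fine_dining_survey.csv")

# Loyalty regressed on education, international exposure, servicescape (service),
# nourishment (food) quality, and the service x food-quality interaction,
# mirroring the significant terms reported in Table 1.
model = smf.ols("loyalty ~ education + int_exposure + service * food_quality", data=df)

# Heteroscedasticity-robust (HC3) standard errors are one common remedy when the
# residual variance is not constant, as reported for these data.
result = model.fit(cov_type="HC3")
print(result.summary())
```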

Table 2 Servicescape, loyalty, and nourishment quality in context to gender (Levene test for homogeneity of variances and independent-samples t-test for equality of means, equal variances assumed, df = 434; the two-tailed p-values of 0.431, 0.356, and 0.324 indicate no significant gender difference on any of the three measures).

Table 3 Servicescape, loyalty, and nourishment quality in context to marital status (test for homogeneity of variances and independent-samples t-test, equal variances assumed, df = 429; nourishment quality differs significantly between marital-status groups, t = 2.581, p = 0.010, whereas the p-values for the other two measures, 0.604 and 0.734, are not significant).


Table 4 Servicescape, loyalty, and nourishment quality in context to various education group

Measure              Source            Sum of squares   df    Mean square   F       Sig.
Service              Due to treatment  1008.783         4     252.196       0.310   0.871
                     Due to error      350,825.279      431   813.980
                     TSS               351,834.062      435
Loyalty              Due to treatment  258.459          4     64.615        0.289   0.885
                     Due to error      96,405.107       431   223.678
                     TSS               96,663.567       435
Nourishment quality  Due to treatment  239.072          4     59.768        0.463   0.763
                     Due to error      55,640.350       431   129.096
                     TSS               55,879.422       435

Table 3 shows that there is a significant difference among the customers' marital status groups in context to nourishment quality, as the p-value is less than the 0.05 level, while insignificant differences were found in context to loyalty and service given to customers, as the p-values are more than the 0.05 level. This means that the nourishment quality perception might vary with marital status. Table 4 reveals an insignificant difference among the various education groups in context to nourishment quality, service, and loyalty, as the p-values are more than the 0.05 level; these parameters are therefore independent of the customer's education profile. Table 5 reveals an insignificant difference among the customers' occupations in context to nourishment quality, service, and loyalty, as the p-values are more than the 0.05 level; the customer's occupation plays no role in nourishment quality, service, or loyalty. Table 6 shows a significant difference among the various household size groups in context to nourishment quality, as the p-value is less than the 0.05 level, while insignificant differences were found for service and loyalty, as the p-values are more than the 0.05 level. Table 7 shows a significant difference among the various income groups in context to nourishment quality and service, as the p-values are less than the 0.05 level, while an insignificant difference was found for loyalty, as the p-value is more than the 0.05 level. Table 8 reveals a significant difference in service and nourishment quality in context to international exposure, as the p-values are less than the 0.05 level, while an insignificant difference was found for loyalty, as the p-value is more than the 0.05 level. Table 9 shows a significant difference in loyalty in context to customers' visit frequency, as the p-value is less than the 0.05 level, while insignificant differences were found for service and nourishment quality, as the p-values are more than the 0.05 level (Figs. 1 and 2).
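The group comparisons summarised above can be reproduced in outline with scipy; again the file and column names (food_quality, marital, household_size) are hypothetical placeholders, and the exact grouping used by the authors may differ:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("fine_dining_survey.csv")  # hypothetical file, one row per respondent

# Independent-samples t-test: nourishment quality across the two marital-status groups.
married = df.loc[df["marital"] == "married", "food_quality"]
unmarried = df.loc[df["marital"] == "unmarried", "food_quality"]
t_stat, p_val = stats.ttest_ind(married, unmarried)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")

# One-way ANOVA: nourishment quality across household-size groups.
groups = [g["food_quality"].to_numpy() for _, g in df.groupby("household_size")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_anova:.3f}")
```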

Table 5 Servicescape, loyalty, and nourishment quality in context to occupation (test for homogeneity of variances and independent-samples t-test, equal variances assumed; the two-tailed p-values of 0.556, 0.886, and 0.920 indicate no significant occupation difference on any of the three measures).


Table 6 Servicescape, loyalty, and nourishment quality in context to various household size group

Measure              Source            Sum of squares   df    Mean square   F       Sig.
Service              Due to treatment  3421.729         3     1140.576      1.438   0.231
                     Due to error      340,969.091      430   792.951
                     TSS               344,390.820      433
Loyalty              Due to treatment  1161.696         3     387.232       1.768   0.152
                     Due to error      94,160.509       430   218.978
                     TSS               95,322.205       433
Nourishment quality  Due to treatment  2759.791         3     919.930       7.541   0.000
                     Due to error      52,452.780       430   121.983
                     TSS               55,212.571       433

Table 7 Servicescape, loyalty, and nourishment quality in context to income group

Measure              Source            Sum of squares   df    Mean square   F        Sig.
Service              Due to treatment  9873.052         3     3291.017      4.158    0.006
                     Due to error      341,961.010      432   791.576
                     TSS               351,834.062      435
Loyalty              Due to treatment  1313.047         3     437.682       1.983    0.116
                     Due to error      95,350.519       432   220.719
                     TSS               96,663.567       435
Nourishment quality  Due to treatment  5955.675         3     1985.225      17.179   0.000
                     Due to error      49,923.747       432   115.564
                     TSS               55,879.422       435

Table 8 Servicescape, loyalty, and nourishment quality in context to international exposure

Measure              Source            Sum of squares   df    Mean square   F       Sig.
Service              Due to treatment  14,919.738       3     4973.246      6.377   0.000
                     Due to error      336,914.324      432   779.894
                     TSS               351,834.062      435
Loyalty              Due to treatment  1545.567         3     515.189       2.340   0.073
                     Due to error      95,117.999       432   220.181
                     TSS               96,663.567       435
Nourishment quality  Due to treatment  1386.471         3     462.157       3.664   0.012
                     Due to error      54,492.951       432   126.141
                     TSS               55,879.422       435


Table 9 Servicescape, loyalty, and nourishment quality in context to customer's frequency

Measure              Source            Sum of squares   df    Mean square   F       Sig.
Service              Due to treatment  3074.716         5     614.943       0.758   0.580
                     Due to error      348,759.345      430   811.068
                     TSS               351,834.062      435
Loyalty              Due to treatment  2786.114         5     557.223       2.552   0.027
                     Due to error      93,877.452       430   218.320
                     TSS               96,663.567       435
Nourishment quality  Due to treatment  396.639          5     79.328        0.615   0.689
                     Due to error      55,482.783       430   129.030
                     TSS               55,879.422       435

Fig. 1 Relationship between quality, loyalty, and servicescape

4 Conclusion The present study was conducted to find out the impact of servicescape and nourishment quality on customer’s loyalty in fine dine restaurants. The study assumed that nourishment quality and servicescape both would have a significant impact on


Fig. 2 Relationship between observed, predicted, and error of loyalty

customer loyalty, which in turn would positively affect the retention of customers. In order to test the assumption, a generalized linear model was used, and its result shows that servicescape and nourishment quality (food quality) contributed 95% in predicting the customers' loyalty. However, t-test results revealed that there is no significant difference between service, loyalty, and nourishment quality in context to gender, which shows that these parameters are independent of gender; in other words, servicescape, customers' loyalty, and nourishment quality are the same across genders. On the other side, a significant difference was found among the customers' marital status groups in context to nourishment quality, while insignificant differences were found in context to loyalty and service given to customers; an insignificant difference among the customers' occupations in context to nourishment quality, service, and loyalty; and a significant difference among the various household size groups in context to nourishment quality, while insignificant differences were found in case of service and loyalty. ANOVA results revealed an insignificant difference among the various education groups in context to nourishment quality, service, and loyalty; a significant difference among the various household size groups in context to nourishment quality, while insignificant differences were found in case of service and loyalty; a significant difference among the various income groups in context to nourishment quality and service, while insignificant differences were found in case


of loyalty; a significant difference in service and nourishment quality in context to international exposure, while insignificant differences were found in case of loyalty; and a significant difference in loyalty in context to customers' visit frequency, while insignificant differences were found in case of service and nourishment quality. Hence, one can conclude that servicescape and nourishment quality have a great impact on customers' loyalty, especially in fine dining restaurants.

References 1. Parsa, H.G., Self, J.T., Njite, D., King, T.: Why restaurants fail. Cornell Hotel Restaur. Adm. Q. 46(3), 304–322 (2005) 2. Datamonitor: The future decoded: deciphering the sensory mega-trend in global consumer trends-sensory food experiences. http://www.ats-sea.agr.gc.ca/inter/pdf/5977-eng.pdf. Last accessed 2013/12/09 3. Nusra: All about a fine dining restaurant. Retrieved on July 2018 from https://www.restauran tindia.in/article/All-about-a-Fine-Dine-Restaurant.6139. Last accessed 2018/07/12 4. Bitner, M.J.: Servicescapes: the impact of physical surroundings on customers and employees. J. Mark. 56(2), 57–71 (1992) 5. Levitt, T.: Marketing intangible products and product intangibles. Cornell Hotel Restaur. Adm. Q. 22(2), 37–44 (1981) 6. Rapoport, A.: The Meaning of Built Environment. Sage, Beverly Hills, CA (1982) 7. Shashikala, R., Suresh, A.M.: Impact of servicescape on customer perceived value in fine dining restaurants. Amity Bus. Rev. 19(1), 33–46 (2018) 8. Adzoyi, P.N., Klutse, C.M.: Servicescape, customer satisfaction and loyalty in Ghanaian hotels. J. Tour. Hosp. Sports 10, 30–36 (2015) 9. Ellen, T., Zhang, R.: Measuring the effect of company restaurant servicescape on patrons’ emotional states and behavioral intentions. J. Foodserv. Bus. Res. 17, 85–102 (2014) 10. Sulek, J.M., Hensley, R.L.: The relative importance of food, atmosphere, and fairness of wait. Cornell Hotel Restaur. Adm. Q. 45(3), 235–247 (2004) 11. Namkung, Y., Jang, S.: Does food quality really matter in restaurant: its impact of customer satisfaction and behavioral intentions? J. Hosp. Tour. Res. 31(3), 387–410 (2007) 12. Rozekhi, N.A., Hussin, S., Siddiqe, A.S.K.A.R., Rashid, P.D.A., Salmi, N.S.: The influence of food quality on customer satisfaction in fine dining restaurant: case in Penang. Int. Acad. Res. J. Bus. Technol. 2(2), 45–50 (2016) 13. Ko, W.H., Su, L.J.: Foodservice quality: identifying perception indicators of foodservice quality for hospitality students. Food Nutr. Sci. 5(2), 132–137 (2014) 14. Ryu, K., Lee, H.K., Woo, G.: The influence of the quality of the physical environment, food, and service on restaurant image, customer perceived value, customer satisfaction, and behavioral intentions. Int. J. Contemp. Hosp. Manag. 24(2), 200–223 (2012)

Application of Data Mining for Analysis and Prediction of Crime Vaibhavi Shinde, Yash Bhatt, Sanika Wawage, Vishal Kongre, and Rashmi Sonar

Abstract Crime is a significant component of every society. Its costs and consequences touch just about everyone to a remarkable extent. About 10% of the culprits commit about 50% of the crimes (Nath in Crime Pattern Detection Using Data Mining. IEEE, 2006, [4]). Explorations that aid in resolving violations quicker will compensate for itself. But, due to the massive increase in the number of crimes, it becomes challenging to analyze crime manually and predict future crimes based on location, pattern, and time. Also today, criminals are becoming technologically advanced, so there is a need to use advanced technologies to keep police ahead of them. Information mining can be employed to demonstrate wrongdoing apprehension issues. Considerable research work turned out to be published earlier upon this topic. In the proposed work, we thoroughly review some of them. The main focus is on the techniques and algorithms used in those papers for examination and expectation of violation.

V. Shinde · Y. Bhatt (B) · S. Wawage · V. Kongre · R. Sonar Computer Science and Engineering, Prof Ram Meghe College of Engineering and Management, Amravati, Maharashtra, India e-mail: [email protected] V. Shinde e-mail: [email protected] S. Wawage e-mail: [email protected] V. Kongre e-mail: [email protected] R. Sonar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_8


1 Introduction A crime rate, which is increasing day by day, has become a topic of major concern assuredly to limit the improvement of healthy governance. Crimes are neither precise nor irregular, and generally, crimes cannot be dissected. Violations similar to homicide, sex abuse, assault, and so on seem to be raised, whereas offenses like housebreaking, firebombing, and so forth seem to be diminished [16]. Crime is a critical part of each general public. Its expenses and outcomes contact pretty much everybody to a surprising degree. About 10% of the guilty parties carry out about half of the violations. In any case, because of the enormous increment in the number of violations, it becomes difficult to analyze and foresee future crimes dependent on area, pattern, and time. Likewise today, criminals are getting innovatively progressed, so there is a need to utilize trend-setting innovations to keep police in front of them. Data mining can be utilized to model crime issues. Any exploration that can help in settling crimes quicker will make up for itself. An extensive number of research papers have been distributed before on this theme. In this paper, we completely survey some of them. The primary spotlight is on the methods and algorithms utilized in those papers for examination and forecast of crime.

2 Literature Analysis Figure 1 shows the general strategy used by most of the researchers for crime prediction and analysis. Table 1 presents the detailed classification and analysis of literature. Table consists of attributes including: Title of the research paper, focus of the research, dataset used by them, algorithms or tools, and the future scope of the research.

3 Review Analysis The review is thoroughly based on infringement examination and forecast employing information mining methods. We have extensively studied research papers from the former few years containing maximum papers from 2017 and 2018. In Figure 2, pie chart presents the most frequently used algorithms for crime analysis and prediction. 1. K-means Clustering The undertaking of clubbing an assortment of items in such a way, that articles in a similar gathering, which are otherwise called clusters, are significantly related in some understanding to each other than to those in different gatherings, is known as clustering. K-means clustering expects to separate several assessments say n, in k clusters containing every assessment, that relates with the group including the most neighboring mean, obeying the requirement for clustering. K-means clustering algorithm plays a vital role in analyzing and predicting crime and is used extensively.

Fig. 1 Common approaches for crime analysis and prediction (stages: Data Collection, Preprocessing, Attribute Selection, Clustering, Classification, Crime Clusters, Crime Prediction, Visualization)

Yadav et al. [1] have utilized k-means to make various clusters as indicated by rates that can be maximum or minimum. A pair cluster is built: Group 1: A huge count of individuals associated with wrongdoing. Group 2: A tiny number of individuals correlated with wrongdoing. Firstly, for preprocessing, the information is introduced from specific document saved as book.csv within Weka tool, succeeded by employing kmeans clustering to that particular data collection, utilizing a similar realistic Weka’s GUI. Joshi et al. [3] have used the K-means clustering information mining technique on the respective dataset, to identify cities with a huge violation rate by detecting violation rates of each sort of violence. The methodology used is dataset collection supported by preprocessing of data and followed by the analysis concerning k-means employing a clustering tool which includes, (a) recognition concerning k, by applying silhouette measure and (b) adding that information within the K-means clustering tool. Later, using k-means, cluster 0 to cluster 4 are gained supported by examination regarding clusters gained applying K-means along with the case study concerning violation in different areas. Nath [4] used k-means clustering to detect the patterns in crime. Offenses extensively differ in nature. Violation information sets usually contain numerous unsolved crimes. The nature of violations changes over time. For instance, cybercrimes or infringements by utilizing cell-phones were unique before few years. Essentially, the classification system relies on the current and known comprehended violations, and it will not give great prescient quality for future wrongdoings. Paying attention to the above details, the author proposed that the clustering technique is superior to other supervised methods like classification. Hazarika et al. [7] have presented the analysis of the lost kids’ information index, based on the previously observed place of the kids before they were claimed to be


Table 1 Classification and analysis of studied literature

[1] Focus: To give a survey of research related to the avoidance of the offenses and to execute various information analysis algorithms for connecting crime and its pattern. Dataset: National Crime Records Bureau Web site. Algorithms/tools: apriori, k-means, naïve Bayes, correlation, and regression; Weka tool and R tool. Future scope: To create the violation problem areas and to apply these methods on the comprehensive information set which comprises 42 violation heads possessing 14 characteristics.

[2] Focus: A model that recognizes violation designs from deductions gathered from the wrongdoing scene and foretells the depiction of the culprit who is likely doubted to carry out the violation. Dataset: San Francisco Homicide dataset. Algorithms/tools: multilinear regression, K-neighbors classifier, and neural networks. Future scope: Employing complex neural networks similar to CNN and RNN to improve the accuracy of the structure.

[3] Focus: Crimes including robbery, murder, and different drug offenses, which additionally incorporate doubtful actions, commotion grievances, and robber alerts, are investigated by utilizing subjective and quantitative methodology. Dataset: Web site of Bureau of Crime Statistics and Research of New South Wales Government's Justice Department. Algorithms/tools: K-means clustering, RapidMiner tool. Future scope: –

[4] Focus: Transgression patterns are recognized by using clustering algorithms and consequently, the process of resolving crime becomes more accelerated. Dataset: Genuine wrongdoing information from a sheriff's office. Algorithms/tools: K-means clustering. Future scope: Generate models for foreseeing the crime problem areas at prominently expected places of wrongdoing for some given span of time; developing social link networks to interconnect criminals.

[5] Focus: A method of crime forecast is recommended, in the light of the naïve Bayes classifier. Dataset: City police department. Algorithms/tools: KNN classifier and naïve Bayes classifier. Future scope: –

[6] Focus: Identifying potential violation designs implementing earlier underutilized characteristics from police registered offense information. Dataset: Computer Aid Dispatch System of Beijing Shijingshan Police Sub-bureau. Algorithms/tools: Apriori algorithm. Future scope: –

[7] Focus: To locate the missing children in Delhi. Dataset: Missing children dataset of Delhi in year 2016. Algorithms/tools: K-means clustering, distance matrices (Haversine and Euclidean). Future scope: A forecast model for approximate mapping of the point wherever a kid is expected to be seen, applying the lost and found kids' information set.

[8] Focus: Making an expectation model toward foretelling the frequency concerning numerous sorts of violations by LSOA principle, also the recurrence of social conduct violation. Dataset: UK police. Algorithms/tools: Regression, instance-based learning, and decision trees. Future scope: –

[9] Focus: Contrasts a pair of classification algorithms, i.e., naïve Bayes plus BP, concerning the foretelling of the category of offense for unique states in the USA. Dataset: Socio-economic data from the 1990 US census. Algorithms/tools: Naïve Bayes and BP classification algorithms. Future scope: To assess the forecast performance of separate classification algorithms upon the information set.

[10] Focus: Examine an assortment of classification techniques to figure out which is most suitable for anticipating violation areas; explore characterization on increment or development. Dataset: US police department (real time). Algorithms/tools: Classification, spatial data mining. Future scope: –

[11] Focus: Highlight existing systems used by Indian police; an intelligent question-based interface as a wrongdoing examination instrument is proposed. Dataset: CCIS database, NCRB, India. Algorithms/tools: Clustering. Future scope: –

[12] Focus: Violations related to credit card utilization are recognized. Dataset: No mention of dataset. Algorithms/tools: Communal detection and spike detection algorithms. Future scope: To demonstrate the concept of adaptivity properly.

[13] Focus: Clustering methods are adopted to prognosticate violation within 6 cities from Tamil Nadu; crooks are recognized via applying classification approaches. Dataset: NCRB dataset. Algorithms/tools: Classification: K-nearest neighborhood; clustering: k-means, DBSCAN, agglomerative-hierarchical algorithms. Future scope: To improve classification algorithms and enhance privacy and security measures.

[14] Focus: Regression is adopted to forecast violations, and an integer linear programming formulation is employed for optimizing the distribution of police officers. Dataset: The US's FBI violations (2013–2014). Algorithms/tools: Utility-based regression, SVM, RF, and MARS. Future scope: The implementation of the proposed framework in separate countries or zones.

[15] Focus: Establishment of criminal profiles; recommend a novel two-level clustering algorithm. Dataset: Nationwide police extracted data. Algorithms/tools: Affinity propagation (AP) clustering algorithm. Future scope: Making the algorithm more versatile to assess its influence on the cluster quality and durability.


Fig. 2 Frequently used algorithms for crime analysis and prediction: K-means clustering 5 papers (25%); other algorithms (SVM, RF, distance matrix, AP) 5 papers (25%); regression 3 papers (15%); naïve Bayes classification 3 papers (15%); KNN classification 3 papers (15%); apriori algorithm 1 paper (5%)

missing. The research makes use of clustering, to club the regions wherever the degree of lost kids is more eminent. In addition, it also is used to distinguish the patterns, to foretell later possible violation areas. The K-means clustering algorithm is implemented on the respective information set employing Euclidean distance and the haversine distance. Sivaranjani et al. [13], to obtain inner patterns and connections in the offense information set, implemented the K-means clustering algorithm. The approach presents a boundary of comprehensive violation information and clarifies in administration, exploring followed by reclaiming of the favored offense information. Other clustering algorithms include agglomerative-hierarchical clustering [13], DBSCAN clustering algorithm [13], and affinity propagation (AP) clustering algorithm [15]. 2. Naïve Bayes Classification An information function that designates objects within an assortment, to intended sections/classes is termed as classification. The purpose of classification is to precisely foretell the aimed group for individual cases within particular information. A classification algorithm based on the Bayesian principle for computing probabilities and conditional probabilities, thus, used for prediction is known as naive Bayes. Yadav et al. [1] have used naive Bayes classification to understand the existing dataset and to prognosticate in what way unique personal information sets will function dependent upon specific classification standards. Babakura et al. [9] have compared naive Bayes and back propagation for foretelling crime categories in which the average accuracy of naive Bayes comes to be 92%, whereas for back propagation is 65%.
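As an illustration of the two families of methods most often cited in the surveyed papers, the sketch below clusters synthetic incident records with k-means and fits a naive Bayes classifier to a synthetic crime-type label; the coordinates, labels, and parameters are invented and do not come from any of the reviewed datasets:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic incident records: latitude, longitude, and hour of day.
X = np.column_stack([
    rng.normal(28.6, 0.05, 600),   # latitude (illustrative city centre)
    rng.normal(77.2, 0.05, 600),   # longitude
    rng.integers(0, 24, 600),      # hour of day
])
# Synthetic crime-type label (1 = night-time offence), purely for demonstration.
y = (X[:, 2] >= 20).astype(int)

# K-means: group incidents into k spatial-temporal clusters (candidate hotspots).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))

# Naive Bayes: predict the crime-type label from the same attributes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
nb = GaussianNB().fit(X_tr, y_tr)
print("naive Bayes accuracy:", round(nb.score(X_te, y_te), 3))
```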


3. KNN Classification K-nearest neighbors is one of several supervised learning algorithms employed in data mining and machine learning, and it is a classifier algorithm wherever the training is based on the extent of similarity of data from others. Shermila et al. [2] has used the KNN classifier whenever the goal variable includes multiple classes to perform classification. Within the respective information set, the specific destination variable relationship holds twenty-seven unique classes like friend, husband, wife, etc. Furthermore, the objective variable perpetrator gender possesses three classes viz male, female, and not known. Henceforth, KNN classifier is employed to classify those objective variables that are accused’s gender and association. Kiran and Kaishveen [5] have compared KNN classification and naive Bayes for crime prediction and concluded that naive Bayes has higher precision and more inferior execution time as contrasted to KNN. Sivaranjani et al. [13] has applied the KNN classification technique that quests within the information set to obtain the greatest related occurrence if the input is provided over that. Significant input over the KNN algorithm comprises the characteristic values of the offense information set. In the view of that query, the KNN algorithm provides the output, which serves to examine the massive violation information set moreover it supports foretelling the fate of violation in several cities, illustrating the offense patterns concerning numerous cities. 4. Regression For a particular dataset, regression is a data mining technique employed to predict a range of continuous values. Yadav et al. [1], has applied linear regression in order to create a constant variable called “Y,” like a mathematical function concerning at least one variable called “X,” so that regression model could be employed to foretell Y while just the X is identified. Therefore by regression, they have predicted the number of personalities that conducted the crime versus the estimate of experiments performed during the year. Shermila et al. [2], uses multilinear regression for finding the relationship between a dependent variable that is the culprit’s age, with input evidence, which is a provided group of independent variables including, obtained from the wrongdoing scene. This method foretells the most likely culprit’s age based on the input features since their dataset had simple traits that are nonbinary, along with the prognostication involved further than two consequent predictors. Cavadas et al. [14] used regression for foretelling furious crimes followed by resource optimization, analyzing the past predictions. This design employs the concept of utility-based regression and depends over the interpretation concerning a relevance function. 5. Apriori Algorithm Apriori algorithm is created to discover frequently much of the time happening things and affiliation rules from value-based information sets. Chen and Kurland [6], applied the apriori algorithm concerning violation pattern apprehension. The approach operates by begin creating candidate item collections having length called as K, of item collections of length which is K − 1 by employing a breadth-first

Fig. 3 Data mining techniques and algorithms. Clustering: k-means, DBSCAN, agglomerative hierarchical clustering, affinity propagation (AP) clustering; Classification: KNN, naïve Bayes, back propagation, decision tree, SVM, neural networks, random forest; Association mining: Apriori algorithm; Prediction: regression

search moreover a hash-tree construction for computing applicant item collections, later that tailors the applicants possessing rare sub-items till the applicant collection comprises complete frequent k-length object collections, from that point forward, the transaction information set is browsed in order to discover mostly occurring item collections among the applicants. 6. Other Algorithms Other algorithms and techniques used for examination and prognostication of violation include: (a) support vector machine [14] which is a supervised machine learning algorithm, including correlated training algorithms that examine information which utilized concerning classification and regression analysis. (b) Random forest (RF) [14] comprises of a huge number of individual decision trees that serve as an whole learning approach concerning classification, regression, and separate businesses that work through creating several decision trees at training time followed by producing as an output the class which is the mode of those classes that are classification or mean forecast that is regression of specific trees. (c) Distance matrix [7] is a twodimensional array or a square matrix that shows the distance between pairs of objects. (d) Back propagation (BP) [9] is an influential algorithm for enhancing the precision of predictions in data mining and machine learning. To compute a gradient descent with respect to weights, back propagation is employed by artificial neural networks. Figure 3 describes the information mining methods and algorithms employed for crime analysis and prediction used by the research work that we have examined.
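The candidate-generation-and-pruning loop described for the apriori algorithm can be written compactly in pure Python (the hash-tree optimisation is omitted, and the toy transactions are invented for illustration):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets whose support is at least min_support."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # Frequent 1-itemsets.
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in transactions) / n >= min_support}
    result, k = set(freq), 2
    while freq:
        # Candidate k-itemsets built from unions of frequent (k-1)-itemsets.
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        # Prune candidates that contain an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        # Keep candidates that meet the minimum support in the data.
        freq = {c for c in candidates
                if sum(c <= t for t in transactions) / n >= min_support}
        result |= freq
        k += 1
    return result

# Toy transactions of incident attributes (invented, not from any surveyed dataset).
tx = [{"night", "theft", "downtown"}, {"night", "assault"},
      {"night", "theft"}, {"day", "theft", "downtown"}]
print(apriori(tx, min_support=0.5))
```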

4 Conclusion The violation analysis is a sensitive domain which is expanding day by day and has a severe impact on society. How to efficiently and precisely analyze the expanding volumes of crime data manually is the most prominent challenge faced by various law enforcement agencies today. This research work focuses on reviewing different methods used for analyzing and predicting crime that can prove to be useful for the


police forces to handle crimes efficiently. Thus, a criminal investigation ought to have the option to distinguish the crime patterns as quick as could be expected under the circumstances and in a viable way for future crime recognition. The review analysis includes a detailed description of the algorithms that are utilized by the studied literature along with a pie chart that describes the most frequently used algorithms concerning analysis and foretelling of violation using information mining. This work is limited to social crime and can be further expanded by considering cybercrime.

5 Future Scope For future work, we intend to expand this research to enhance and implement crime analysis and prediction techniques to resolve the present limitations of the current approaches to obtain more precise results and better performance.

References 1. Yadav, S., Timbadia, M., Yadav, A., Vishwakarma, R., Yadav, N.: Crime pattern detection, analysis and prediction. In: 2017 International Conference on Electronics, Communication and Aerospace Technology ICECA 2017. IEEE (2017) 2. Shermila, M.A., Bellarmine, A.B., Santiago, N.: Crime data analysis and prediction of perpetrator identity using machine learning approach. In: 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI2018). IEEE (2018) 3. Joshi, A., Sabitha, A.S., Choudhury, T.: Crime analysis using k-means clustering. In: 2017 International Conference on Computational Intelligence and Networks. IEEE (2017) 4. Nath, S.: Crime pattern detection using data mining. In: 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT 2006 Workshops). IEEE (2006) 5. Kiran, J., Kaishveen, K.: Prediction analysis of crime in India using a hybrid clustering approach. In: The Second International conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC 2018). IEEE (2018) 6. Chen, P., Kurland, J.: Time, place, and modus operandi: a simple apriori algorithm experiment for crime pattern detection. IEEE (2018) 7. Hazarika, A.V., Sai Raghu Ram, G.J., Jain, E.: Cluster analysis of Delhi crimes using different distance metrics. In: International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS-2017). IEEE (2017) 8. Saltos, G., Cocea, M.: An exploration of crime prediction using data mining on open data. Int. J. Inf. Technol. Decis. Mak. (2017) 9. Babakura, A., Sulaiman, M.N., Yusuf, M.A.: Improved method of classification algorithms for crime prediction. In: 2014 International Symposium on Biometrics and Security Technologies (ISBAST). IEEE (2014) 10. Yu, C.-H., Ward, M.W., Morabito, M., Ding, W.: Crime forecasting using data mining techniques. In: 2011 11th IEEE International Conference on Data Mining Workshops (2011) 11. Gupta, M., Chandra, B., Gupta, M.P.: Crime Data Mining for Indian Police information System. IIT Delhi, India (2006) 12. Dutta, S., Gupta, A.K., Narayan, N.: Identity crime detection using data mining. In: 2017 International Conference on Computational Intelligence and Networks. IEEE (2017)


13. Sivaranjani, S., Sivakumari, S., Aasha, M.: Crime prediction & forecasting in Tamil Nadu using clustering approaches. In: 2016 International Conference on Emerging Technological Trends [ICETT]. IEEE (2016) 14. Cavadas, B., Branco, P., Pereira, S.: Crime Prediction Using Regression & Resources Optimization. Springer International Publishing, Switzerland (2015) 15. Alphonse Inbaraj, X., Rao, A.S.: Hybrid clustering algorithms for crime pattern analysis. In: 2018 IEEE International Conference on Current Trends toward Converging Technologies, Coimbatore, India 16. Chauhan, C., Sehgal, S.: A review: crime analysis using data mining techniques and algorithms. In: International Conference on Computing, Communication and Automation (ICCCA2017). IEEE (2017)

Malnutrition Identification in Geriatric Patients Using Data Mining Vaishali P. Suryawanshi

and Rashmi Phalnikar

Abstract It is a big challenge to assess and monitor the well-being of geriatric people, due to the rise in the average population, and to keep track of their health status on a daily basis. Nutrition screening is a method used to find the nutrition risk in patients, a risk that can be reduced by nutritional therapy. There are different types of screening forms identified based on age group and hospital settings. The primary objective of the paper is to study the screening forms MUST, NRS-2002, and MNA to identify the correlation between the form parameters using statistical methods. The purpose of this study is to design a system to understand the nutritional status of the patient and identify patients that will benefit from nutrition therapy. A classification algorithm of data mining is proposed to identify patients with comorbidities who are therefore suffering from malnutrition. It will help physicians in their daily decision-making to prevent adverse events before they occur. The patients' health care can be monitored through devices such as mobile applications by observing the patients' nutrition intake. The developed system helps to keep track of the geriatric patient's health and improves how a dietician or practitioner delivers care when the patient is monitored remotely.

1 Introduction Individuals above age of 60 and are suffering from acute and chronic diseases are geriatric patients. Very few hospitals take initiatives to follow standard practices to identify malnutrition in hospitalized patients in different regions [1]. Malnutrition V. P. Suryawanshi (B) · R. Phalnikar School of Computer Engineering and Technology, MIT-WPU, Pune, India e-mail: [email protected] R. Phalnikar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_9


in geriatric patients is associated with chronic and acute diseases. Nutrition medication often goes neglected due to lack of attentiveness, knowledge, and the protocol followed in the hospitals. Malnutrition can be thought of as a type of disease and should be treated carefully. It may affect the patients' physical health and increases patients' length of stay at the hospital, which may lead to an increase in the cost of stay [1]. The screening forms Malnutrition Universal Screening Tool (MUST), Mini Nutrition Assessment (MNA), and Nutritional Risk Screening (NRS-2002) were studied, which give better results in identifying the level of nutrition of hospitalized patients. This study helps to promote the standard practices to be followed while identifying malnutrition in geriatric patients. Many hospitals measure the nutritional risk of hospitalized patients by considering physical and physiological conditions at the time of admission [1]. Elderly patients are at risk of deteriorating health; care should be taken not only at the hospital but also in the home setting [2]. Machine learning helps to improve the system, resulting in increased efficiency and effectiveness. Data mining algorithms help to process, analyze, classify, and predict the large data, which supports decision-making. Data preprocessing is used to set the data in the desired format to get the expected output. Statistical methods help to identify the correlation among the form parameters and to identify the most significant parameter from each form. Classification is a supervised learning method that classifies patients suffering from nutritional risk. Geriatric patients suffering from severe nutrition risk will be recommended for aggressive nutrition therapy. In this paper, a study of correlations between the MUST, NRS-2002, and MNA form parameters is done using a machine learning and statistical approach. This study uses both methods of research, collecting, analyzing, and interpreting both quantitative and qualitative data to investigate the same. The paper is structured as follows: Section 2 discusses the literature survey, Section 3 is about the proposed architecture, and Section 4 states the result of the study, followed by a conclusion and future work.

2 Literature Survey Marian et al. [3] identified almost twenty screening tools based on age group, hospitalized patients, and home nursing for nutrition care. In the community setting, the frequently used malnutrition identifying tool in older people is the MNA-SF, DETERMINE, and SCREEN-II. In hospitalized settings, the commonly used nutritional screening tools are MNA [4], MUST, Malnutrition Screening Tool (MST), NRS-2002 and the Geriatric Nutrition Risk Index (GNRI). In the homecare setting, the systematic malnutrition identification tool is Short Nutritional Assessment Questionnaire for Residential Care (SNAQ-RC) but, currently, there are no screening tools available for a specific group of people for a specific purpose. Malnutrition Universal Screening Tool (MUST) is a nutritional screening tool for patients who are underweight or suffering from malnutrition [5]. It is used in patients by taking into consideration the height and weight of patients. MUST is


used to observe different nutritional requirements such as food intake, weight loss, height, obesity, and risk of undernutrition. This tool calculates the score based on five steps. The authors of [6] examined the correlation between the nutritional statuses evaluated by four nutritional screening tools (MNA-SF, MUST, NRS-2002, and GNRI) at hospital admission for hip fracture patients. The authors of [7] state that the nutrition risk rate is high in senior citizen patients with hip fractures. The MNA-SF correlates the most with clinical malnutrition parameters and also predicts lower readmissions in well-nourished patients. The authors of [8, 9] studied the screening tools NRS-2002, MUST, and MNA among nursing home residents and carried out an evaluation on every aspect of the body. The authors of [10] performed analysis and prediction to associate the nutritional categories obtained from the different nutritional screening tools.

3 Proposed Architecture
As per the literature survey, geriatric care is not standardized in India, so there is a large need to collect data on every standard of care practice. The validation of nutrition screening tools in elderly populations needs further evaluation [2]. Nutrition screening should be performed regularly in elderly patients to find patients who may need nutritional support to reduce mortality [11]. Therefore, an observational survey should be carried out on how nutrition is delivered across every setting, to reduce the rate of mortality and improve the health of patients in every setting using aggressive nutrition therapy. As shown in Fig. 1, hospitalized patients' personal information will be collected for monitoring the patients' health, along with height, weight, food intake status, and comorbidity. The nutrition screening forms MUST, NRS-2002 and MNA identify the patient's overall risk of malnutrition based on scores.

Fig. 1 System architecture


The score is calculated from the parameters of the form. For the MUST screening form, if the total score is zero, there is a low risk of malnutrition; for a score of 1, there is medium risk; and with a score of two or greater, the patient is identified as high risk. Patients identified with medium risk need to follow the documented diet suggested by the physician. Patients identified as high risk need to follow the diet suggested by a dietitian. The system automatically calculates the overall risk of malnutrition following the process stated above. The system will also identify the required nutrients such as protein and calories. The classification algorithm, a decision tree, is used to identify patients suffering from malnutrition as low, medium, or high risk. The system suggests the required calories and protein considering the comorbidity from the patient's health details. Thus, the aim is to validate the nutrition score and understand the interrelation between the parameters of the three forms MUST, MNA and NRS-2002, to find the nutrient requirement for hospitalized patients from the nutritional score. The study includes geriatric patients (age > 60) admitted to the hospital, including both male and female categories. The personal information of the patients is kept confidential.

4 Result
The experiment is performed in Python. The patients' data is stored in a .csv file. The linear regression model is used to estimate the logarithmic value for each category of the variable, estimating the relationships among parameters. The data preprocessing technique is used to check for missing values, handle noisy data, and transform the data set for analysis [12]. For a few parameters from the screening form, data transformation (encoding) is applied to convert the text data into numerical data. To choose the important parameters, a threshold was introduced. The threshold value is set to 0.5. If the correlation is greater than or equal to the threshold value, the parameter is considered an important or significant screening form parameter. As shown in Fig. 2, it has been observed that there is a strong correlation of the nutrition risk score with BMI, i.e., BMI plays an important role in the MUST nutrition screening form. The MUST correlation score matrix reveals the correlation with every parameter. Similarly, correlation is identified for the NRS-2002 and MNA screening forms. For both the NRS-2002 and MNA screening forms, BMI is the significant parameter, along with a few more parameters. Figure 3 displays the graph of patient counts against nutrition screening forms, showing the nutrition status as normal, at risk and malnourished. The MUST score identifies the patients with malnutrition in three categories: low risk (Score = 0), medium risk (Score = 1), and high risk (Score ≥ 2). The MNA score identifies the patients with normal nutrition status if the total score is high (24–30 points), at risk of malnutrition (17–23 points), or malnourished (less than 17 points).
6: while |L| > 0 do
7: Select an arbitrary route Rl (Rl ∈ R);
8: rd = rnd(2); {Choose an insertion scheme randomly. Two schemes are used to control the balance between randomness and greediness}
9: if (rd == 1) then
10: Randomly pick a new vertex v and an insertion position j < |Rl| so that the cost of R′l after inserting is minimal; {Cheapest scheme}
11: else
12: Randomly pick a new vertex v and an insertion position j < |Rl| so that its cost is minimal and the cost of R′l after inserting is maximal; {Farthest scheme}
13: Update Rl by R′l
14: end if
15: Remove v from L;
16: end while
17: for (i = 1; i ≤ k; i++) do
18: T = T ∪ Ri; {update all routes in the tour}
19: end for
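To make the insertion steps listed above concrete, the following is a minimal Python sketch of the construction loop with the two insertion schemes; the route representation, the latency-style cost function, and the helper names are illustrative assumptions rather than the authors' implementation, and the farthest scheme is simplified to choosing the insertion with the maximal resulting route cost.

```python
import random

def route_cost(route, dist):
    # Latency-style cost: sum of arrival times at the vertices of the route.
    cost, t = 0, 0
    for i in range(1, len(route)):
        t += dist[route[i - 1]][route[i]]
        cost += t
    return cost

def insert_one_vertex(routes, unrouted, dist):
    # One pass of steps 6-15: pick a route, choose a scheme at random,
    # then insert one unrouted vertex at the selected position.
    r = random.randrange(len(routes))
    scheme = random.randint(1, 2)          # rd = rnd(2)
    candidates = []
    for v in unrouted:
        for j in range(1, len(routes[r]) + 1):
            new_route = routes[r][:j] + [v] + routes[r][j:]
            candidates.append((route_cost(new_route, dist), v, new_route))
    if scheme == 1:                        # cheapest scheme: minimal resulting cost
        _, v, best = min(candidates, key=lambda c: c[0])
    else:                                  # farthest scheme (simplified): maximal resulting cost
        _, v, best = max(candidates, key=lambda c: c[0])
    routes[r] = best
    unrouted.remove(v)
```

Calling this function repeatedly while the unrouted list is non-empty corresponds to the while-loop of steps 6-16.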

2.3 Perturbation
Perturbation techniques prevent the algorithm from getting trapped in local optima by driving the search to unexplored solution space. Two perturbations are selected as follows: (1) in perturbation within a route, a route is picked at random and its vertices are exchanged in a random manner; (2) in perturbation between two routes, two routes are selected at random and some vertices are then randomly swapped between them. The current solution is then replaced by the new, perturbed one.
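The two perturbation moves can be illustrated with the following hedged Python sketch; the route representation and the choice of swapping a single pair of vertices are assumptions made for brevity.

```python
import random

def perturb_within_route(routes):
    # Perturbation in a route: pick one route at random and swap two of its vertices.
    r = random.choice(routes)
    if len(r) >= 2:
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]

def perturb_between_routes(routes):
    # Perturbation in two routes: pick two routes at random and swap one vertex of each.
    if len(routes) >= 2:
        r1, r2 = random.sample(routes, 2)
        if r1 and r2:
            i, j = random.randrange(len(r1)), random.randrange(len(r2))
            r1[i], r2[j] = r2[j], r1[i]
```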

2.4 AM
The AM [10] stores the solutions obtained in each iteration. For each solution T, a BF calculates its diversity and cost in comparison with the others in the AM as follows:

BF(T) = (|AM| − rank_fit(T) + 1) + α × (|AM| − rank_div(T) + 1)    (1)

where |AM| is the size of the AM and α is a parameter (α ∈ [0, 1]).


rank_fit(T) is its rank based on the cost, and rank_div(T) is its rank based on the diversity value. If the size of the AM exceeds n_max, only the solution with the best BF is kept, and it becomes a starting solution.
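The following small sketch shows one way the ranking-based BF of Eq. (1) could be computed for all solutions in the AM; the cost and diversity functions are placeholders for whatever measures are used, and the function name is hypothetical.

```python
def bf_scores(am, cost, diversity, alpha=0.5):
    # Eq. (1): BF(T) = (|AM| - rank_fit(T) + 1) + alpha * (|AM| - rank_div(T) + 1).
    # Rank 1 = lowest cost / highest diversity; a larger BF therefore means a better solution.
    n = len(am)
    rank_fit = {id(t): i + 1 for i, t in enumerate(sorted(am, key=cost))}
    rank_div = {id(t): i + 1 for i, t in enumerate(sorted(am, key=diversity, reverse=True))}
    return {id(t): (n - rank_fit[id(t)] + 1) + alpha * (n - rank_div[id(t)] + 1) for t in am}
```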

3 Computational Results
The IH-RVNS is implemented on a CPU Core i7, 2.10 GHz, with 8 GB of RAM. The parameters of the algorithm are determined through pilot experiments: |AM| = 100, pos = 5, NL = 100, n_max = 50. This configuration is fixed in all experiments. The IH-RVNS is run on benchmark instances for the VRP [11]. These are: (1) CMT-instances; (2) Tai-instances; (3) P-instances (for this dataset, the lower bounds of the optimal solutions can be found in [11]); (4) E-instances; and (5) the instances of Z. Luo et al. [12]. On Z. Luo et al.'s dataset, we run our algorithm in two cases: in the mMBLP case, the distance constraint is removed, while in the mMLP case a constraint that each route length is at most a predefined distance MD (MD = 2 × dmax) is added. Note that dmax is the largest distance from the depot to a vertex. We evaluate the IH-RVNS in percent as follows: Gap1[%] = ((Best.Sol − OPT)/OPT) × 100% and Improv[%] = ((Best.Sol − Init.Sol)/Init.Sol) × 100%, where Init.Sol, Best.Sol, and OPT correspond to the initial, best, and optimal solution, respectively. In the tables, Init.Sol, Aver.Sol, Best.Sol, and T correspond to the initial, average, and best solution, and the average time in seconds of 10 executions, respectively. The IH-RVNS algorithm with a fixed random vertex is named "fix" while the other is named "full." Tables 1–7 [13] indicate the results of the algorithm for the mMBLP on several datasets. Note that in the LQL-instances the distance constraint is removed; the values in Table 10 of [13] are the average values for the case with the distance constraint. The results in Table 8 are the averages of those reported in the tables of [13]. In Table 8, for all instances, the IH-RVNS has an impressive improvement for most of the

Table 1 Average results on 190 instances without distance constraints

Instances | Full Improv. | Full T | Fix Improv. | Fix T
LQL-instances without distance constraints | −58.19 | 5.47 | −50.04 | 3.25
E-instances | −51.13 | 7.68 | −85.04 | 4.42
P-instances | −50.71 | 7.75 | −75.66 | 4.53
Tai-instances | −52.54 | 12.35 | −61.35 | 7.52
CMT-instances | −53.80 | 26.27 | −47.21 | 16.15


Table 2 Evolution of average improvement

Iterations | Fix Improv. | Fix T | Full Improv. | Full T
1 | −45.94 | 0.44 | −54.66 | 3.12
10 | −51.12 | 1.22 | −61.22 | 8.72
30 | −52.39 | 2.94 | −62.88 | 21.09
50 | −52.74 | 5.99 | −63.32 | 42.96
100 | −53.27 | 11.91 | −63.86 | 7.17
200 | −53.27 | 26.43 | −63.86 | 189.54

Table 3 Average experimental results on 300 LQL-instances with distance constraints

Instances | Gap1 (MD = 2 × dmax) | T | Gap1 (MD = 2 × dmax) | T
LQL_30_x | 0.00 | 0.24 | 0.00 | 0.23
LQL_40_x | 0.00 | 0.44 | 0.00 | 0.43
LQL_50_x | 0.00 | 0.83 | 0.00 | 0.83
LQL_60_x | 0.58 | 1.12 | 0.49 | 1.45
LQL_70_x | 0.68 | 1.48 | 0.69 | 1.91
LQL_80_x | 0.68 | 3.57 | 0.68 | 4.65

Table 4 Average experimental results on P-instances and E-instances

Instances | IOE Best.Sol | IOE T | SNG Best.Sol | SNG T | Our Best.Sol | Our T
P-n40-k5 | – | – | 1537.79* | 0.25 | 1580.21 | 0.58
P-n45-k5 | – | – | 1912.31* | 0.39 | 1912.31 | 0.62
E-n51-k5 | 3320 | 2.25 | 2209.64* | 0.70 | 2247.83 | 1.32
P-n50-k7 | – | – | 1547.89* | 0.70 | 1590.41 | 1.26
P-n51-k8 | – | – | 1448.92* | 0.67 | 1448.92 | 1.21
E-n76-k10 | 4094 | 1.48 | 2310.09* | 4.20 | 2419.89 | 6.62
E-n76-k14 | 3762 | 0.50 | 2005.4* | 3.40 | 2005.40 | 2.77
E-n101-k8 | 6383 | 89.4 | – | – | 4051.47 | 6.40
E-n101-k14 | 5048 | 5.43 | – | – | 3288.53 | 6.74
* Is the optimal value

Table 5 Comparison of the best-found mMBLP solution with the best-found mMLP solution using the mMBLP objective function on 190 instances

Instances | % difference
LQL-instances | 12.99
E-instances | 13.37
P-instances | 9.8
Tai-instances | 16.18
CMT-instances | 9.81
Aver | 12.43


instances, while it requires less running time. The mean improvement of the IH-RVNS with the two schemes is about −53.27 and −63.86%, respectively. Both schemes seem to work well because each is useful in its own case. For large instance sizes, the "fix" scheme uses significantly less running time, but its solution quality is a bit worse. Otherwise, the full neighborhood implementation raises the average quality of the solution by 19.8% compared with the other implementation; it is therefore suitable for small and medium sizes, because the full neighborhood search is too time consuming on large instances. For the two schemes, Table 2 indicates the improvement of the average deviation in terms of Improv. The deviations of the two schemes are −45.94% (−54.66%), −61.22% (−62.09%), −52.39% (−62.88%), −52.74% (−63.32%), −53.27% (−63.86%), and −53.27% (−63.86%) obtained by 1, 10, 30, 50, 100, and 200 iterations, respectively. Additional iterations only bring a small improvement but consume much time; therefore, to reduce the running time, the algorithm uses fewer than 100 iterations. The fastest way is to run the algorithm with a single iteration; in that case, the average deviation is −45.94% (−54.66%) for the two schemes. Moreover, the IH-RVNS applies well to close variants of the problem such as the mMLP [4, 14] and mTRPD [12] (note that the mMLP and mTRPD are other variants of the mMBLP in our work). In Table 3, our algorithm's solutions are near-optimal, since the average Gap1 is only 0.32%. Also, some of our solutions are optimal for problems with up to 76 vertices. In comparison with the other algorithms for the mMLP in [4, 14], our solutions are better. Specifically, in Table 4, the algorithm gives solutions that are much better than I. O. Ezzine et al.'s results (IOE) in [4] and comparable with S. Nucamendi-Guillén et al.'s results (SNG) in [14], while their running times are equivalent. Moreover, the optimal solutions are reached for the mMLP instances with 76 vertices in several seconds. Table 5 shows that the best solution for the mMLP is often not a good solution for the mMBLP. On average, the best solution found by our algorithm is about 12.43% better than the best mMLP solution. Therefore, the methods designed for the mMLP may not be adapted easily to solve the mMBLP. Obviously, developing an efficient algorithm for the mMBLP is necessary.

4 Conclusions
In this work, the first metaheuristic algorithm that combines RVND and IH to solve the mMBLP is proposed. Experimental results on benchmarks indicate that, on average, the IH-RVNS finds good solutions for instances with up to 200 vertices in several seconds. For some close variants of the problem, the optimal solutions are reached for problems with 76 vertices in a short time. Moreover, our algorithm is comparable with the other metaheuristic algorithms in terms of solution quality.
Acknowledgements This research was supported by the Asahi Glass Foundation.


References
1. Archer, A., Levin, A., Williamson, D.: A faster, better approximation algorithm for the minimum latency problem. J. SIAM 37(1), 1472–1498 (2007)
2. Ausiello, G., Leonardi, S., Marchetti-Spaccamela, A.: On salesmen, repairmen, spiders and other traveling agents. In: Proceedings of CIAC, pp. 1–16 (2000)
3. Blum, A., Chalasani, P., Coppersmith, D., Pulleyblank, W., Raghavan, P., Sudan, M.: The minimum latency problem. In: Proceedings of STOC, pp. 163–171 (1994)
4. Ezzine, I.O., Elloumi, S.: Polynomial formulation and heuristic based approach for the k-travelling repairman problem. Int. J. Math. Oper. Res. 45, 503–514 (2012)
5. Jittat, F., Harrelson, C., Rao, S.: The k-traveling repairman problem. In: Proceedings of ACM-SIAM, pp. 655–664 (2003)
6. Jothi, R., Raghavachari, B.: Minimum latency tours and the k-traveling repairmen problem. In: Proceedings of LATIN, pp. 423–433 (2004)
7. Lin, Y.-L.: Minimum back-walk-free latency problem. In: Proceedings of Computing and Combinatorics, pp. 525–534 (2002)
8. Mladenovic, N., Hansen, P.: Variable neighborhood search. J. Oper. Res. 24(11), 1097–1100 (1997)
9. Johnson, D.S., Mcgeoch, L.A.: The traveling salesman problem: a case study in local optimization. In: Aarts, E., Lenstra, J.K. (eds.) Local Search in Combinatorial Optimization, pp. 215–310
10. Mathlouthi, I., Gendreau, M., Potvin, J.Y.: A metaheuristic based on Tabu search for solving a technician routing and scheduling problem (2018)
11. NEO: http://neo.lcc.uma.es/vrp/vrp-instances/capacitated-vrp-instances/ (2013)
12. Luo, Z., Qin, H., Lim, A.: Branch-and-price-and-cut for the multiple traveling repairman problem with distance constraints. J. Oper. Res. 234(1), 49–60 (2013)
13. https://sites.google.com/a/soict.hust.edu.vn/mmblp/
14. Nucamendi-Guillén, S., Martínez-Salazar, I., Angel-Bello, F., Moreno-Vega, J.M.: A mixed integer formulation and an efficient metaheuristic procedure for the k-travelling repairmen problem. J. JORS 67(8), 1121–1134 (2016)

Unfolding Healthcare: Novel Method for Predicting Mortality of Patients Within Early Hours of ICU Rajni Jindal, Sarthak Aggarwal, and Saanidhi

Abstract Patients who are taken to Intensive Care Units (ICUs) are severely ill or injured and require a high level of care. These patients are at much greater risk of dying than typical hospital patients. This paper's primary goal is to classify high-risk patients so they can receive more aggressive treatment and decrease their chance of dying. An early estimate of patient survival based on initial (first 24 h) laboratory tests, chart events, and patient demographic information was established to accomplish this. The secondary goal of the project and classifier is to identify patients at lower risk who do not need to be handled as vigorously, so that appropriate care can be given, thus lowering health costs. This paper employs eight classifiers together with feature selection and extraction methods involving PCA and the Chi2 test on novel preprocessed dataset files: linear support vector machine (SVM), K-nearest neighbors (KNN), decision tree, random forest, boosted trees, linear logistic regression, Naive Bayes, and regular SVM. Several studies were performed using the widely available, labeled medical dataset called Medical Information Mart for Intensive Care III (MIMIC-III) to gain an understanding of the quality of the proposed method.

1 Introduction
The capability to predict mortality in the ICU within the first 24 h of admission may contribute to higher survival rates, in that determination of illness severity may help to guide treatment decisions. Patients who are identified as more likely to die may receive more aggressive treatments, which may increase their likelihood of survival.
R. Jindal · S. Aggarwal (B) · Saanidhi (B), Department of Computer Science & Engineering, Delhi Technological University, Delhi, India
e-mail: [email protected]
Saanidhi e-mail: [email protected]
R. Jindal e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_15


Prediction of mortality may also act to reduce treatment costs: patients who are predicted more likely to survive, and who may currently be treated very aggressively, may receive less aggressive and less expensive care without affecting their survival rates. Current prediction tools are useful but were found to be inadequately calibrated and not robust over broad ranges of circumstances. Patient data were gathered from the Medical Information Mart for Intensive Care III, abbreviated as MIMIC-III [1], which is a massive, free and readily available dataset composed of de-identified health-related data for the patients who stayed in the Beth Israel Deaconess (BID) Medical Center's critical care units over the 11 years between 2001 and 2012. The data mining models applied different classification techniques such as decision tree, boosted trees, LSVM, linear logistic regression, random forest, Naive Bayes, K-nearest neighbors (KNN), and regular SVM. These procedures were structured on the basis of certain clinical reports obtained in the initial hours following admission to the ICU. Data were preprocessed, and features were selected and extracted. These features were used to train and test a number of candidate machine learning classifiers. Classifiers were optimized across their respective parameter spaces as well as across the input feature set and training/testing data set sizes. A classifier was selected from the group which provided the best performance with regard to predicting patient mortality.

2 Literature Review
There is an increasing interest in tackling the prediction of early mortality in hospitals. The existing approaches can be divided into three classes: score-based, method-based, and data mining models. Diverse score-based methods [2] such as SAPS [3], qSOFA [4], and APACHE [5] have been addressed; they use features that are not always available at ICU admission. The method-based models [6] form their decision on the basis of the features of particular health conditions, such as cardiorespiratory arrest, or of specific geographic places like Australia. Although these models have satisfactory outcomes, the majority of ICU patients are elderly people over the age of 65 who face multiple ailments. Additionally, the templates for specific geographical places cannot be generalized to other situations. The third class of approaches uses data mining techniques [7] to predict mortality. The primary objective of this research is to classify high-risk patients using our novel method so that admitted patients can receive more aggressive procedures, thereby lowering their probability of dying.


3 Proposed Work
3.1 Data Exploration and Description
In addition to data on juvenile patients, the MIMIC-III database has data associated with around 53,423 ICU admissions for non-juvenile patients, representing 38,597 distinct non-juvenile patients and 49,785 hospital admissions. Data included details such as patient profiles, measures of vital signs, laboratory results, treatments, drugs, medical observations, diagnostic details, and mortality. Data were queried in groups of patient demographics, chart data, and laboratory data. Patient demographic data were queried from the ADMISSIONS, ICU_STAY and PATIENT tables, with 25 patient diagnoses identified based on HCUP CCS 2015 diagnostic groups and ICD-9 codes. Demographic data consisted primarily of categorical data with some continuous and date/time variables. Laboratory data were queried from the LABEVENTS table and included laboratory results such as hematocrit, WBC count, oxygen saturation, and blood glucose measurements. Chart data were queried from the CHARTEVENTS table and included information captured on the patient's chart, including blood pressure, heart rate, temperature, GCS, and blood gas measurements.
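As an illustrative sketch only, grouping the queried MIMIC-III data with pandas could look like the following; the file names and join columns (SUBJECT_ID, HADM_ID, ITEMID, VALUENUM) follow the public MIMIC-III CSV layout, but the exact queries used by the authors are not given in the text.

```python
import pandas as pd

# File names follow the public MIMIC-III CSV release; paths are placeholders.
admissions = pd.read_csv("ADMISSIONS.csv")
patients = pd.read_csv("PATIENTS.csv")
labevents = pd.read_csv("LABEVENTS.csv", usecols=["HADM_ID", "ITEMID", "VALUENUM"])
chartevents = pd.read_csv("CHARTEVENTS.csv", usecols=["HADM_ID", "ITEMID", "VALUENUM"])

# Demographic group: one row per hospital admission joined with patient-level data.
demographics = admissions.merge(patients, on="SUBJECT_ID", how="left")

# Laboratory and chart groups: keep only events belonging to the selected admissions.
lab_data = labevents[labevents["HADM_ID"].isin(demographics["HADM_ID"])]
chart_data = chartevents[chartevents["HADM_ID"].isin(demographics["HADM_ID"])]
```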

3.2 Data Preprocessing
Laboratory and chart data typically required dimensionality reduction using summary statistics like mean, median, standard deviation, etc.

i_j = lab_data, chart_data, pat_demo_data    (1)

Features from the different sources (demographics, laboratory and chart data) were preprocessed and evaluated separately. For classification, all data were converted to categorical data in the form of 'dummy' variables. Date/time data, including ICU stay and age, were converted to hours or years. It was noted that there were extreme outliers in the data sets that were skewing the distributions significantly. Continuous data in the form of η data points were converted into categorical data by calculating the quartiles for each feature's distribution (2) and labeling each data point based on the quartile it fell in.

qt_low = (η + 1)/4, qt_mid = 2(η + 1)/4, qt_hi = 3(η + 1)/4, it_qt_ran = qt_hi − qt_low    (2)


if value(i_j) > 1.5 × it_qt_ran, then remove i_j    (3)

Outliers (3), data points that are outside one and a half times the interquartile range, were removed, and the data distributions were re-visualized and re-evaluated for skewness. The data preprocessing and classification steps, including feature selection, require that there be no missing data points for any feature in a sample. Because there was a relatively large number of missing data points, the decision was made to drop missing samples rather than impute values from what was, for some features, relatively sparse data. In our initial data set, there were relatively large numbers of missing data points.
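A minimal pandas sketch of the quartile-based encoding and outlier handling described above is given below; the function name is hypothetical, and the conventional 1.5 × IQR fences are used as one reading of Eq. (3).

```python
import pandas as pd

def quartile_encode(df, column):
    # Drop 1.5*IQR outliers for one feature, then label each remaining value by quartile.
    q1, q3 = df[column].quantile(0.25), df[column].quantile(0.75)
    iqr = q3 - q1
    kept = df[(df[column] >= q1 - 1.5 * iqr) & (df[column] <= q3 + 1.5 * iqr)].copy()
    kept[column + "_qt"] = pd.qcut(kept[column], 4, labels=False, duplicates="drop")
    return kept
```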

3.3 Features Extraction and Selection
It was observed that there were features that were more often collected together. We removed features that were present in only a small number of samples and then organized features that were likely to be collected together into groups. Each group could then be preprocessed and its features evaluated separately. To identify affinity groups, features that were commonly present together, we created 'missing' data frames, in which the value for a given measurement and ICU stay was zero if the measurement was present and 1 if it was missing. This allowed correlation coefficients to be calculated.

chi_scores[] = sort(calc_scores_chi(Δ, p = 0.001)); Φ_chi = chi_scores[i = 0, …, 12]    (4)

Two methods of feature selection and extraction were used: one was the Chi2 score (Φ_chi) (4) and the other was principal component analysis (PCA) (Φ_pca) (5). Chart features were grouped into four blocks, with a total of 103 features that met the criterion (p values ≤ 0.001) for being passed along to the recombination and classification phase. Laboratory data were divided into two blocks with a total of 20 features, and patient demographic data were processed in a single block of 46 features that were passed along.

Φ_pca = PCA(Δ)    (5)

In the classification phase, for the Chi2 method, the selected features from all sources were recombined and ranked according to the Chi2 score/p-value; from that group, the top 13 features were selected and recombined, resulting in a data set with 13 features. In PCA, the 199 features were reduced into two dimensions, resulting in a dataset with two extracted features and 2665 unique stays.
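The two reductions can be sketched with scikit-learn as follows; the k = 13 and two-component settings mirror the numbers quoted above, while the function name is an assumption.

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import PCA

def reduce_features(X, y):
    # (a) Chi2 route: keep the 13 best-scoring dummy features.
    X_chi = SelectKBest(chi2, k=13).fit_transform(X, y)
    # (b) PCA route: project all features onto two principal components.
    X_pca = PCA(n_components=2).fit_transform(X)
    return X_chi, X_pca
```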


Fig. 1 Flowchart detailing the process of predicting mortality using our novel method

3.4 Training of Classifier
The task presented itself as a supervised classification problem with two output classes, survivors and non-survivors, and categorical/dummy features as inputs. It was believed that, given the relatively high-dimensional, complex feature space, the optimum decision boundary would likely be nonlinear. The classifiers selected for evaluation were those that work well in higher dimensional space with categorical data and are capable of generating linear and nonlinear decision boundaries.
Benchmark Creation
We created benchmarks for all the classifiers. Each classifier was trained and tested using the full feature set as inputs and a train/test split of 80/20%. The algorithm was trained using default parameter settings. Averaged values of precision, recall, and F1 for the survival and non-survival groups were calculated (Fig. 1; Table 1).
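A hedged scikit-learn sketch of the benchmark procedure (80/20 split, default parameters, macro-averaged metrics) is shown below; only two of the eight classifiers are included for brevity, and the helper name is hypothetical.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def benchmark(X, y):
    # 80/20 split, default parameters, averaged precision/recall/F1 per classifier.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {"logistic_regression": LogisticRegression(max_iter=1000),
              "random_forest": RandomForestClassifier()}
    scores = {}
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="macro")
        scores[name] = {"precision": p, "recall": r, "f1": f1}
    return scores
```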

4 Results
For each type of classifier, two instances were created: one optimized for recall and one for the F1-score. The classifiers were optimized using these scores, and their respective optimization metrics were calculated to maximize the score values. It is clearly visible from Table 2 that the decision tree outperformed all other classifiers on the basis of precision.


Table 1 Benchmark metrics for feature selection strategy using Chi2 method and principal component analysis

Classifiers | Chi2 Avg. precision | Chi2 Avg. F1-score | Chi2 Avg. recall | PCA Avg. precision | PCA Avg. F1-score | PCA Avg. recall
Logistic regression | 0.80 | 0.76 | 0.72 | 0.80 | 0.78 | 0.76
Gaussian Naïve Bayes | 0.82 | 0.82 | 0.83 | 0.78 | 0.78 | 0.78
Linear SVM | 0.85 | 0.75 | 0.67 | 0.81 | 0.75 | 0.70
SVM | 0.79 | 0.74 | 0.71 | 0.83 | 0.80 | 0.78
Decision trees | 0.88 | 0.84 | 0.81 | 0.86 | 0.84 | 0.82
Random forest | 0.84 | 0.83 | 0.83 | 0.88 | 0.89 | 0.90
K-nearest neighbors | 0.76 | 0.78 | 0.80 | 0.79 | 0.81 | 0.85
Boosted trees | 0.87 | 0.87 | 0.88 | 0.85 | 0.84 | 0.84

Table 2 Metrics of the classifiers of the classification process providing clinical insights

Classifiers | Chi2 Avg. precision | Chi2 Avg. F1-score | Chi2 Avg. recall | PCA Avg. precision | PCA Avg. F1-score | PCA Avg. recall
Logistic regression | 0.86 | 0.82 | 0.78 | 0.88 | 0.85 | 0.82
Gaussian Naïve Bayes | 0.84 | 0.86 | 0.88 | 0.86 | 0.87 | 0.88
Linear SVM | 0.87 | 0.79 | 0.72 | 0.90 | 0.83 | 0.78
SVM | 0.86 | 0.81 | 0.77 | 0.88 | 0.83 | 0.79
Decision trees | 0.93 | 0.89 | 0.85 | 0.95 | 0.92 | 0.90
Random forest | 0.89 | 0.93 | 0.98 | 0.91 | 0.94 | 0.97
K-nearest neighbors | 0.83 | 0.85 | 0.87 | 0.86 | 0.88 | 0.91
Boosted trees | 0.91 | 0.92 | 0.93 | 0.93 | 0.93 | 0.93

The random forest classifier showed the highest values for recall and F1-score. The F1-scores of logistic regression and linear SVM were near 0.84, while the random forest peaked at 0.94. The data proved not to be linearly separable, which explains the lower output of linear SVM and logistic regression. Furthermore, the decision tree had both high precision and high recall, showing that most of the deceased patients were identified accurately and that many of the patients predicted to die were assigned to the correct category. Furthermore, although having


different thresholds, the linear SVM, linear discriminant, and logistic regression had similar performance. We can also see from Table 2 that the classification method involving PCA as the feature extraction strategy outperforms the Chi2 score selection method. The F1-score of the random forest increases slightly from 0.93 to 0.94, whereas the recall value is lowered by 0.01. Our proposed method improves on the results of previous attempts made by academics in this field.

5 Conclusion
Early hospital mortality risk estimation in critical care units (CCUs) is important because of the need for fast and precise medical decisions. The random forest achieved the highest accuracy among the classifiers, qualifying as both precise and explainable. In the future scope of this work, we plan to make the feature extraction process more disease-specific by including a disease prediction algorithm, thus improving the accuracy.

6 Discussion
In this paper, we have presented a set of exhaustive evaluation results for classifying high-risk patients admitted to ICUs using several machine learning models, feature extraction strategies, and ICU scoring systems on the MIMIC-III database. We demonstrated that the raw data, after undergoing extensive preprocessing and feature extraction, allow the machine learning models to consistently outperform the other existing approaches.

References
1. Johnson, A.E., Pollard, T.J., Shen, L., Lehman, L.-W.H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L.A., Mark, R.G.: MIMIC-III, a freely accessible critical care database. Sci. Data 3 (2016)
2. Calvert, J., Mao, Q., Hoffman, J.L., Jay, M., Desautels, T., Mohamadlou, H., Chettipally, U., Das, R.: Using electronic health record collected clinical variables to predict medical intensive care unit mortality. Ann. Med. Surg. 11, 52–57 (2016)
3. Le Gall, J.-R., Lemeshow, S., Saulnier, F.: A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. JAMA 270(24), 2957–2963 (1993)
4. Simpson, S.Q.: New sepsis criteria: a change we should not make. CHEST J. 149(5), 1117–1118 (2016)
5. Knaus, W.A., Draper, E.A., Wagner, D.P., Zimmerman, J.E.: APACHE II: a severity of disease classification system. Crit. Care Med. 13(10), 818–829 (1985)


6. Awad, A., Bader-El-Den, M., McNicholas, J., Briggs, J.: Early hospital mortality prediction of intensive care unit patients using an ensemble learning approach. Int. J. Med. Inf. (2017)
7. Kim, S., Kim, W., Park, R.W.: A comparison of intensive care unit mortality prediction models through the use of data mining techniques. Healthc. Inform. Res. 17(4), 232–243 (2011)

Classification of Disaster-Related Tweets Using Supervised Learning: A Case Study on Cyclonic Storm FANI Pankaj Kumar Dalela, Sandeep Sharma, Niraj Kant Kushwaha, Saurabh Basu, Sabyasachi Majumdar, Arun Yadav, and Vipin Tyagi

Abstract With unprecedented growth in the ICT sector in India, the use of social media platforms like Facebook and Twitter by government agencies has increased many-fold to connect millions of people in the shortest time span. Even common people use them as an effective medium to raise their concerns to the authorities. During disaster situations, people share ground conditions, ask for evacuation, and seek relief and medical help through platforms like Twitter. Those tweets can be an important source of information for disaster managers to plan relief and rescue operations. Distinguishing and classifying millions of tweets in real time is a tedious and time-consuming job when done manually. Thus, an automated solution is required which can classify the tweets dynamically into relevant classes. In this paper, different supervised machine learning models are compared, based upon the use case, to address the challenge of accurate classification of tweets into actionable classes. The paper presents a case study on the categorization of real-time streamed tweets on Cyclonic Storm 'FANI' into different actionable classes using the trained classifiers.
P. K. Dalela · S. Sharma (B) · N. K. Kushwaha · S. Basu · S. Majumdar · A. Yadav · V. Tyagi, Centre for Development of Telematics, Mehrauli, New Delhi, India
e-mail: [email protected]
P. K. Dalela e-mail: [email protected]
N. K. Kushwaha e-mail: [email protected]
S. Basu e-mail: [email protected]
S. Majumdar e-mail: [email protected]
A. Yadav e-mail: [email protected]
V. Tyagi e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_16


1 Introduction
The frequency of disasters is increasing everywhere in the world. People are connected to multiple channels through which vital life-saving information can be sent to them during disaster situations. In the present scenario, social media platforms like Facebook, Twitter, Instagram, etc., are extensively used for communication. There were nearly 326.1 million social media users in India as of 2018 [1]. Thus, social media can be used as a tool to handle disaster-related situations by spreading as well as receiving important warning information. Extraction of useful information is helpful for the disaster management authorities as well as other organizations in the preparedness, response, and recovery phases of comprehensive emergency management. Identification of useful messages out of the huge number of messages received can help the responders to act efficiently and effectively. The messages can be tweets, posts, or SMS. There must be an automated system that categorizes the messages received into multiple classes, because manual effort cannot be applied to the huge amount of data available. Automatic classification of messages is challenging work: the messages can be ambiguous, contain informal language or abbreviations, or have out-of-context information. We have used Twitter data to build the learning model. Twitter is a social networking platform with over 600 million users across the globe, and it plays an important role during crises by providing important information to the public as well as emergency responders [2]. The best-known solution for such problems is to use supervised machine learning classifiers to distinguish the tweets into separate actionable classes. We applied different machine learning models that can help in the tweet classification process and made a comparative analysis among them. We extracted real-time tweets related to Cyclonic Storm 'FANI' between April 30 and May 5, 2019, and applied the trained classifier to classify them into different categories. The trained classifier is to be used to classify messages received from different media so that the useful extracted information can be sent to the competent authority for quick and appropriate action. The desired workflow is shown in Fig. 1. The paper is organized as follows: Sect. 2 gives information regarding similar work done in this field, and Sect. 3 describes the implementation details including the dataset used, problem formulation, system architecture and methodology followed. Section 4 includes the results and observations, with comparative results of the different classifiers used. Section 5 contains the concluding remarks and the future plan.

2 Related Work
Social media has become a great tool for communication and sharing information. It can play a great role in disaster management by raising public awareness and by linking with early warning systems for improving preparedness and enhancing recovery management. Recent studies emphasize the importance of social media in


Fig. 1 Workflow of desired system

disaster-related events. Many NLP techniques are also used to process the text in such messages. Human-annotated Twitter corpora of different crises are presented in [3]. Analyzing sentiments or performing binary classification was done by Vaithyanathan et al. [4] on a movie review dataset. Twitter sentiment analysis was performed by Yu et al. [5]. Different approaches, including supervised, semi-supervised and deep learning solutions, have been suggested by different researchers [2, 6]. Supervised learning algorithms are the most widely used for classification problems. Disaster-related tweet classification is a challenging task. A comparative analysis of different machine learning classifiers to distinguish crisis-related tweets from non-crisis-related tweets is also presented in [7].

3 Implementation
Different machine learning models are trained on a dataset consisting of various tweets related to disaster events. The tweets are classified into different categories.

3.1 Dataset Used
The labeled dataset used is taken from the corpora given in [3]; it includes tweets collected during the disaster events of the 2014 India Floods, 2015 Nepal Earthquake, 2015 Cyclone PAM, and 2015 Pakistan Earthquake and Floods. Tweets related to the 2018 India Kerala Floods are also extracted, in addition to the available dataset, using the Tweepy library of Python by searching keywords like 'OpMadad', 'KeralaFloods2018', 'KeralaRains' and 'KeralaFloodRelief'. The extracted tweets are manually labeled

Table 1 Dataset categorization

Category | Count
Other important information | 2958
Donation and help | 2162
Casualty information | 2057
Irrelevant information | 1540
Emotional support | 896
Infrastructure damage | 533
Missing, trapped and found people | 346
Warning information | 307
Evacuation information | 283
Total count | 11,082

and combined with the available dataset. The combined dataset contains 11,082 tweets classified into 9 categories. The count for each category is given in Table 1. The dataset is skewed, since the counts of the categories are uneven. This dataset is used for training the classifiers. Tweets related to the 2019 Cyclone FANI are also extracted, monitored, and classified into different categories using the trained classifier. The tweets are extracted based on keywords like '#Fani', '#FaniCyclone', '#CycloneFani', '#Cyclonefanilatestupdate', 'Fani Cyclone', 'Cyclone Fani', '#faniupdate' and 'fani'. The tweets are extracted from April 30, 2019 till May 5, 2019. A total of 47,016 tweets are collected.
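Assuming a Tweepy 3.x-style API, the keyword-based extraction could be sketched as follows; the credentials are placeholders and the exact query, limits, and fields used by the authors are not specified in the text.

```python
import tweepy

# Placeholder credentials; the query mirrors some of the keywords listed above.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

query = "#Fani OR #CycloneFani OR #FaniCyclone OR \"Cyclone Fani\" OR fani"
tweets = [status.text
          for status in tweepy.Cursor(api.search, q=query, lang="en",
                                      until="2019-05-05").items(1000)]
```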

3.2 Problem Formulation
We have to classify the tweets posted during disaster events into different categories and notify the relevant information to the concerned departments. The problem is a single-label multi-class classification problem, since each tweet is to be classified into one of n possible classes. Thus, the task is formulated as the generation of a classifier h: T → C, where T = {t_1, t_2, …, t_m} is the domain of the tweet set and C = {c_1, c_2, …, c_n} represents the finite set of classes into which the tweet text will be classified. The categorization information can be used to reach the target audience in an effective manner.

3.3 System Architecture and Implementation Details
The dataset is collected and preprocessed by removing extra information that has no relevance. Features are extracted and the dataset is divided into separate training and testing sets. The model is trained on the training set and then evaluated on the


testing set. To classify unknown data, the data is preprocessed and the extracted features are passed to the classifier, which outputs the relevant category. The system architecture and methodology followed are shown in Fig. 2. The steps performed are described below.
Data Preprocessing
The collected tweets are preprocessed to remove irrelevant information such as URL patterns, special characters, additional white spaces, hashtags, mentions, emojis, stopwords and reserved words like 'RT' and 'FAV'. The tweets are preprocessed using the tweet-preprocessor library, as illustrated below. Table 2 shows an example of the preprocessing performed on the tweets.
Feature Extraction
Feature extraction plays a significant role in determining the results obtained by applying the classifiers. Each tweet is transformed into a feature vector, and the most informative features must be selected. The following techniques to convert text into vectors are used for feature extraction.
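As a sketch of the preprocessing step, the tweet-preprocessor package can strip URLs, mentions, hashtags, emojis and reserved words as shown below; the option list is an assumption about the configuration rather than the authors' exact settings, and stop-word removal and whitespace normalization would be applied separately.

```python
import preprocessor as p  # the tweet-preprocessor package

# Remove URLs, mentions, hashtags, emojis, smileys and reserved words such as 'RT'/'FAV'.
p.set_options(p.OPT.URL, p.OPT.MENTION, p.OPT.HASHTAG,
              p.OPT.EMOJI, p.OPT.SMILEY, p.OPT.RESERVED)

raw = ("RT @SgtLoau: Deadly monsoon hits India Nepal: Dozens of people have been "
       "killed in flooding in northern and eastern India… http://t.co/ZD6a96HdzL")
clean = p.clean(raw)  # the mention, URL and 'RT' are stripped from the text
```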

Fig. 2 System architecture


Table 2 Tweet preprocessing

Extracted tweet: RT @SgtLoau: Deadly monsoon hits India Nepal: Dozens of people have been killed in flooding in northern and eastern India… http://t.co/ZD6a96HdzL
Processed tweet: Deadly monsoon hits India Nepal Dozens people killed flooding northern eastern India

CountVectorizer
It uses the term frequency and transforms the collection of texts into a sparse matrix representation containing the count of each word present in all texts. For example, for the following two tweets: ['Deadly monsoon hits India Nepal Dozens people killed flooding northern eastern India', 'Hundreds dead monsoon hits India Nepal'], the transformed sparse matrix is: [[0 1 1 1 1 1 0 2 1 1 1 1 1] [1 0 0 0 0 1 1 1 0 1 1 0 0]]. Each row represents one tweet, each column corresponds to a term in the vocabulary, and each cell gives the frequency count of that term in the tweet. Thus, it includes only simple unigram words as features. The vocabulary is: {'deadly': 1, 'monsoon': 9, 'hits': 5, 'india': 7, 'nepal': 10, 'dozens': 2, 'people': 12, 'killed': 8, 'flooding': 4, 'northern': 11, 'eastern': 3, 'hundreds': 6, 'dead': 0}.
TF-IDF Vectorizer
It is a combination of term frequency and inverse document frequency. Term frequency takes into account the number of times a particular term occurs in a document. Inverse document frequency assigns weights to the terms: the weight is lowest if the word is present in many documents, because no informative information can be deduced from that word as a feature. The explanation of TF-IDF is given in [8].

TF(term) = (Number of times the term appears in the document) / (Total terms present in the document)    (1)

IDF(term) = log_e((Total number of documents) / (Number of documents having the term in them))    (2)

The TF-IDF score is the multiplication of TF and IDF. The TF-IDF vectors are generated at different levels, such as word level, considering each term as a unigram; n-gram level, which uses combinations of n terms as features; and character level, which represents the scores of character-level n-grams.
Training and Modeling
The dataset is divided into two parts, one for training and the other for testing. Out of the 11,082 total tweets, 8311 tweets are used as the training set and the remaining 2771 are used as the validation set. Different classifiers, including Linear SVC, Logistic Regression, Multinomial Naïve Bayes, Random Forest, XGBoost, and K-Nearest Neighbors, were applied using multiple feature extraction techniques including CountVectorizer (bag of words), TF-IDF (word level), TF-IDF (n-gram range 1,3), and TF-IDF (character level). A comparative analysis of all classifiers is done.
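A minimal scikit-learn sketch of one of the evaluated combinations (word-level TF-IDF with an n-gram range of 1 to 3 and Linear SVC, with a roughly 75/25 split matching the 8311/2771 division above) is shown below; the function name and the weighted F1 averaging are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def evaluate_linear_svc(texts, labels):
    # Word-level TF-IDF with unigrams to trigrams, ~75/25 split, Linear SVC.
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                              random_state=0)
    vec = TfidfVectorizer(ngram_range=(1, 3))
    clf = LinearSVC().fit(vec.fit_transform(X_tr), y_tr)
    pred = clf.predict(vec.transform(X_te))
    return f1_score(y_te, pred, average="weighted")
```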


4 Results and Observations The comparison of accuracy score of various classifiers with different feature extraction techniques applied on the dataset is provided in Table 3. Linear SVC and Logistic Regression performs better with accuracy score of nearly 0.70. Since the classes are skewed and are not uniform, accuracy score alone cannot be the only error metric to determine the effectiveness of the classifier. The other important metrics include Precision, Recall and F 1 -score. The definitions of the error metrics are described in [9] and represented in Eqs. 3–5. Precision = Recall =

True Positive(TP) True Positive(TP) + False Positive(FP)

True Positive(TP) True Positive(TP) + False Negative(FN)

F1 - score = 2

Precision × Recall (Precision + Recall)

(3) (4) (5)

The comparison of F 1 -score of all classifiers are provided in Table 4. Linear SVC performs well on range of Natural Language Processing (NLP) based text classification tasks. Linear SVC model with TF-IDF n-gram technique for feature extraction is applied to the tweets collected during Fani Disaster from April 30, 2019 till May 5, 2019. The precision of the model is 0.71, recall is 0.71 and F 1 -score is 0.70. Table 5 shows the categorization of Cyclone FANI tweets done using the Linear SVC classifier. The stacked bar chart in Fig. 3 shows the graphical representation of the percentage of classified tweets of Cyclone FANI in each category. The categorized information is to be sent to the responsible authorities so that action can be taken in an effective manner. The extracted information related to Evacuation, Missing, Trapped and Found people from vast amount of data can be Table 3 Comparison of accuracy scores Classifier name

CountVectorizer (bag of words)

TF-IDF (word level)

TF-IDF (n-gram range 1,3)

TF-IDF (character level)

Linear SVC

0.672320

0.701552

0.707326

0.698304

Logistic regression

0.708769

0.696499

0.700108

0.682064

Multinomial Naive Bayes

0.670516

0.657163

0.665464

0.619632

Random forest 0.661855

0.666185

0.654280

0.610609

XGBoost

0.654637

0.655720

0.655720

0.668351

K-nearest neighbors

0.539516

0.648502

0.538073

0.627932

176

P. K. Dalela et al.

Table 4 Comparison of F 1 -score Classifier name

CountVectorizer (bag of words)

TF-IDF (word level)

TF-IDF (n-gram range 1,3)

TF-IDF (character level)

Linear SVC

0.67

0.70

0.70

0.69

Logistic regression

0.70

0.68

0.68

0.66

Multinomial Naive Bayes

0.64

0.61

0.63

0.58

Random forest 0.65

0.66

0.65

0.59

XGBoost

0.64

0.64

0.64

0.66

K-nearest neighbors

0.53

0.63

0.54

0.61

Table 5 Categorization of cyclone FANI tweets using Linear SVC Category

April 30

May 1

May 2

May 3

May 4

May 5

Total tweets

Other important information

1618

4174

3178

1059

2330

4169

16,528

Donation and help

502

2015

1140

456

482

1448

6043

Casualty information

622

847

2680

265

1112

768

6294

Irrelevant information 1740

3852

3105

791

2324

2426

14,238

109

628

402

98

209

232

1678

Infrastructure damage

67

180

314

138

121

214

1034

Missing, trapped and found people

8

182

117

40

27

110

484

Warning information

47

68

77

10

69

139

410

Evacuation information

17

90

85

13

32

60

307

Total day-wise tweets 4730

12036

11,108

2870

6706

9566

47,016

Emotional support

helpful for the authorities to act in timely manner. Similarly, the other details like Infrastructure damage, Casualty Information can be analyzed and the impact of disaster can be estimated. Geo-tagging information can provide the location of the area to be targeted.

5 Conclusion and Future Plan In this paper, different supervised learning models has been compared for distinguishing and categorization of disaster-related tweets to analyze its impact. Among different supervised learning models, the Linear SVC is the most suitable supervised learning model for the disaster-related datasets. The Tweets of Cyclone Storm

Classification of Disaster-Related Tweets …

177

Fig. 3 Bar chart for cyclone FANI tweets categorization

‘Fani’ has been also streamed and used for categorization in respect to disaster situation information extraction. The real-time categorization of tweets will help disaster managers in speedy rescue and organized relief operation. Geo-tagging of the tweets can also be done which will increase effectiveness of impact base crisis situation handling. The current model does not take into account the tweets in vernacular languages which will be handled in future. Model can be trained with large dataset from various other sources like blogs, news and Facebook posts to increase its accuracy.

References 1. Statista Page: https://www.statista.com/statistics/278407/number-of-social-network-users-inindia/. Last accessed 11 Jan 2020 2. Gupta, A., Kumaraguru, P., Castillo, C., Meier, P.: TweetCred: real-time credibility assessment of content on twitter. In: Aiello, L.M., McFarland, D. (eds.) Social Informatics. SocInfo 2014. Lecture Notes in Computer Science, vol 8851. Springer, Cham (2014) 3. Imran, M., Mitra, P., Castillo, C.: Twitter as a lifeline: human-annotated twitter corpora for NLP of crisis-related Messages. In: Proceedings of the 10th Language Resources and Evaluation Conference (LREC), May 2016, pp. 1638–1643. Portorož, Slovenia (2016) 4. Pang, B., Lee, L., Vaithyanathan, S.: Thumbs up? sentiment classification using machine learning techniques. In: Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language processing, vol. 10, pp. 79–86. Association for Computational Linguistics (2002)

178

P. K. Dalela et al.

5. Jiang, L., Yu, M., Zhou, M., Liu, X., Zhao, T.: Target-dependent twitter sentiment classification. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 151–160. Association for Computational Linguistics (2011) 6. Nguyen, D.T., Mannai, K.A.A., Joty, S., Sajjad, H., Imran, M., Mitra, P.: Rapid classification of crisis-related data on social networks using convolutional neural networks. arXiv:1608.03902 (2016) 7. Manna, S., Nakai, H.: Comparative analysis of different classifiers on crisis-related tweets: an elaborate study. In: Yang, X.S., He, X.S. (eds.) Nature-Inspired Computation in Data Mining and Machine Learning. Studies in Computational Intelligence, vol 855. Springer, Cham (2020) 8. Tf-idf: http://www.tfidf.com/. Last accessed 30 Jan 2020 9. Shung, K.P.: Model selection: accuracy, Precision, Recall or F1? https://koopingshung.com/ blog/machine-learning-model-selection-accuracy-precision-recall-f1/ (2020)

Detection of Cardio Vascular Disease Using Fuzzy Logic Shital Chaudhary, Sachin Gajjar, and Preeti Bhowmick

Abstract The aim of this work is to detect whether an individual has a Cardio Vascular Disease (CVD) or not by using Mamdani and Sugeno methods of Fuzzy Inference System (FIS). The data set used for this work consists of 1000 records from the pathology reports of Thyrocare, Suburban Diagnostics, Medall, SRL Diagnostics and Metropolis. The parameters considered for predicting whether the individual has CVD or not are blood pressure, blood sugar, heart rate and oxygen level in the blood (SPO2 ). The FIS outputs indicate (1) whether the individual has a CVD or not, (2) the risk level of CVD and (3) some primary level precautions depending on the risk level. Mamdani and Sugeno methods were evaluated by comparing them with the results of the pathology reports. The results show that the Sugeno method gives 2% more accuracy in predicting a CVD as compared to the Mamdani method. Sugeno FIS gives more dynamical values as compared to Mamdani FIS for different values of input which leads to higher accuracy of Sugeno FIS.

1 Introduction As per the World Health Organization (WHO) report, 17.1 million people died each year due to Cardio Vascular Disease (CVD) [1]. Taking suitable precautions at an early stage can help prevent the occurrence of the CVD or control it with the use of medications. Recently, the use of soft computing techniques like artificial neural networks, fuzzy logic and ANFIS in the fields of medical diagnosis, treatment of illnesses and patient pursuit has highly increased [2]. Among these techniques, Fuzzy S. Chaudhary (B) · S. Gajjar · P. Bhowmick Nirma University, Ahmedabad, Gujarat, India e-mail: [email protected] S. Gajjar e-mail: [email protected] P. Bhowmick e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_17

179

180

S. Chaudhary et al.

logic is a method that can handle the uncertainty of the input data very well. This paper uses Mamdani and Sugeno methods of Fuzzy Inference System (FIS) for detecting CVD. The data from the pathology reports are given to the FIS which identifies (1) whether the individual has a CVD or not, (2) the risk level of CVD and (iii) some primary level precautions depending on the risk level. Blood pressure, blood sugar, heart rate and oxygen level in blood (SPO2 ) are considered as input parameters to FIS for detecting CVD.

2 Literature Survey Ali and Mehdi [3] developed fuzzy expert system with 44 rules using 13 inputs. Simulation of the system with inputs from data set of Cleveland Clinics, V.A. Medical Centre gave an accuracy of 94%. In [4], Uguz et al. have implemented a Mamdani based fuzzy expert system. The results showed that the Surgeon integral based method performed better than the Artificial Neural Network and Hidden Markov Model based expert systems. De Zhi et al. [5] developed a fuzzy expert system for the detection of CVD using 6 inputs. Anuradha [6] proposed a fuzzy expert system for the diagnosis of CVD by using 11 inputs and achieved an accuracy of 94%. Kumar [7] designed an adaptive neuro-fuzzy inference system consisting of 13 inputs and achieved 92% accuracy. Neeru et al. [8] implemented a fuzzy controller for identifying the risk level of developing a CVD. Smita and Sushil [9] have proposed a model to predict the risk level of a CVD. None of the above-mentioned work focuses on the comparison of Mamdani and Sugeno methods. With this perspective, the following work focuses on designing a FIS using Mamdani and Sugeno method that identifies whether the individual is having a risk of CVD or not. The work also compares the results of Mamdani and Sugeno methods. The rest of the paper is organized as follows. The Fuzzy Inference System developed is discussed in Sect. 3. Simulation of the system and the related results are presented in Sect. 4. Section 5 concludes the paper.

3 Fuzzy Inference System Mamdani and Sugeno methods are used for developing the FIS. 1. Fuzzification of inputs and outputs: The four inputs heart rate, blood pressure, blood sugar and SPO2 are bifurcated into five levels: very low, low, medium, high and very high. FIS outputs indicate whether an individual has a CVD or not in terms of three levels: uncertain, not probable and probable. The results show the severity of developing a CVD in terms of five levels: healthy, low risk, moderate risk, risk and high risk. FIS also suggests some primary level precautions like exercise, balanced diet, use olive oil, low salt diet and/or consult a doctor on the basis of the risk level.

Detection of Cardio Vascular Disease Using Fuzzy Logic

181

2. Defining membership functions: The various membership functions available for FIS are: triangle, trapezoidal, sigmoidal, Gaussian, S-shape and Z-shape (Jang, Sun and Mizutani). However, since the degree of triangle and trapezoidal membership functions are easily determined they are widely used in the literature (MathWorks) [10]. For input variables, triangle membership functions represent low, medium, high fuzzy sets and trapezoid membership functions represent very low and very high fuzzy sets. For outputs: (1) to define risk level, triangle membership functions represent low risk, moderate risk, risk fuzzy sets and trapezoid membership functions represent healthy and high-risk fuzzy sets. (2) for primary precautions, triangle membership functions represent output sets as a balanced diet, use olive oil, low salt diet and trapezoid membership functions represent output set as to recommend exercise and consult a doctor. (3) for CVD prediction, triangle membership functions represent output sets as not probable and trapezoid membership functions represent output sets as uncertain and probable. 3. Defining fuzzy rule base: Table 1 shows the fuzzy rule base with four input variables and three output variables. 4. Aggregation and Defuzzification: Maximum area covered for output value is taken for aggregation of outputs. Centroid of the area method is used for defuzzification.

4 Results The pathology reports of Thyrocare [11], Suburban Diagnostics [12], Medall [13], SRL Diagnostics [14] and Metropolis [15] laboratories with 1000 different records are given as input to FIS. As shown in Table 2, out of 1000 records, Mamdani method predicted 950 records correctly and Sugeno method predicted 970 records correctly. Figure 1a, b shows the simulation results of the blood sugar and SPO2 for predicting whether the individual has a CVD or not. The results show that if the blood sugar is very low then the chance of developing a CVD is 81% as per Mamdani method and 78% as per Sugeno FIS. When the blood sugar level increases, Sugeno method predicts the occurrence of a CVD more accurately (3%). Figure 2a, b shows the simulation results of the SPO2 and heart rate for primary precautions to be suggested by Mamdani and Sugeno methods, respectively. If the SPO2 and the heart rate are normal then the chances of developing a CVD are less. In this case, the fuzzy logic should suggest the individual do exercise. But if the SPO2 and the heart rate is very high, then the chances of developing a CVD are high. In this case, the fuzzy logic should suggest that the individual needs medical treatment. Figure 3a, b shows the simulation results of blood sugar and blood pressure for predicting the severity of the CVD by Mamdani and Sugeno methods, respectively. From the two methods, Sugeno method more accurately shows that if the blood sugar and blood pressure are very high, then the severity of the CVD is maximum. In this case, the Sugeno method suggests the individual consult a doctor. Also, Sugeno


Table 1 Fuzzy rule base
Sr. No. | Heart rate | Blood pressure | Blood sugar | SPO2 | Result (Risk level) | Prediction (Primary precautions) | Heart disease
1 | V.L. | V.L. | V.L. | V.L. | H.R. | Medical treatment | Possible
2 | V.L. | V.L. | L | V.L. | H.R. | Medical treatment | Possible
3 | V.L. | V.L. | M | V.L. | H.R. | Medical treatment | Possible
4 | V.L. | L | H | V.L. | M.R. | Restrict salt | Uncertain
5 | V.L. | L | V.H. | V.L. | H.R. | Medical treatment | Possible
6 | V.L. | L | V.L. | V.L. | H.R. | Medical treatment | Possible
7 | V.L. | M | L | V.L. | M.R. | Restrict salt | Uncertain
8 | V.L. | M | M | V.L. | M.R. | Exercise | Not possible
9 | V.L. | M | H | V.L. | R | Medical treatment | Possible
10 | L | H | V.H. | V.L. | R | Restrict salt | Uncertain
11 | L | H | V.L. | V.L. | M.R. | Restrict salt | Uncertain
12 | L | H | L | V.L. | M.R. | Restrict salt | Uncertain
13 | L | V.H. | M | V.L. | H.R. | Medical treatment | Possible
14 | L | V.H. | H | V.L. | M.R. | Restrict salt | Uncertain
15 | L | V.H. | V.H. | V.L. | H.R. | Medical treatment | Possible
16 | L | V.L. | V.L. | V.L. | H.R. | Medical treatment | Possible
17 | L | V.L. | L | V.L. | H.R. | Medical treatment | Possible
18 | L | V.L. | M | V.L. | M.R. | Restrict salt | Uncertain
19 | M | L | H | V.L. | L.R. | Exercise | Not possible
20 | M | L | V.H. | V.L. | L.R. | Exercise | Not possible
21 | M | L | V.L. | V.L. | M.R. | Restrict salt | Uncertain
22 | M | M | L | V.L. | H | Exercise | Not possible
23 | M | M | M | V.L. | H | Exercise | Not possible
24 | M | M | H | V.L. | H | Exercise | Not possible
25 | M | H | V.H. | V.L. | M.R. | Restrict salt | Uncertain
26 | M | H | V.L. | V.L. | L.R. | Restrict salt | Uncertain
27 | M | H | L | V.L. | L.R. | Use olive oil | Uncertain
28 | H | V.H. | M | V.L. | M.R. | Restrict salt | Uncertain
29 | H | V.H. | H | V.L. | R | Medical treatment | Possible
30 | H | V.H. | V.H. | V.L. | R | Medical treatment | Possible
31 | H | V.L. | V.L. | V.L. | H.R. | Medical treatment | Possible
32 | H | V.L. | L | V.L. | M.R. | Exercise | Not possible
33 | H | V.L. | M | V.L. | M.R. | Restrict salt | Uncertain
34 | H | L | H | V.L. | M.R. | Restrict salt | Uncertain
35 | H | L | V.H. | V.L. | R | Medical treatment | Possible
36 | H | L | V.L. | V.L. | M.R. | Restrict salt | Uncertain
37 | V.H. | M | L | V.L. | L.R. | Exercise | Not possible
38 | V.H. | M | M | V.L. | L.R. | Exercise | Not possible
39 | V.H. | M | H | V.L. | R | Use olive oil | Not possible
40 | V.H. | H | V.H. | V.L. | R | Medical treatment | Possible
41 | V.H. | H | V.L. | V.L. | M.R. | Restrict salt | Uncertain
42 | V.H. | H | L | V.L. | M.R. | Restrict salt | Uncertain
43 | V.H. | V.H. | M | V.L. | H.R. | Medical treatment | Possible
44 | V.H. | V.H. | H | V.L. | H.R. | Medical treatment | Possible
45 | V.H. | V.H. | V.H. | V.L. | H.R. | Medical treatment | Possible
46 | V.L. | V.L. | V.L. | L | H.R. | Medical treatment | Possible
47 | V.L. | V.L. | L | L | H.R. | Medical treatment | Possible
48 | V.L. | V.L. | M | L | M.R. | Restrict salt | Uncertain
49 | V.L. | L | H | L | M.R. | Restrict salt | Uncertain
50 | V.L. | L | V.H. | L | H.R. | Medical treatment | Possible
51 | V.L. | L | V.L. | L | H.R. | Medical treatment | Possible
52 | V.L. | M | L | L | M.R. | Restrict salt | Uncertain
53 | V.L. | M | M | L | M.R. | Exercise | Not possible
54 | V.L. | M | H | L | R | Medical treatment | Possible
55 | L | H | V.H. | L | R | Restrict salt | Uncertain
56 | L | H | V.L. | L | M.R. | Restrict salt | Uncertain
57 | L | H | L | L | M.R. | Restrict salt | Uncertain
58 | L | V.H. | M | L | H.R. | Consult a doctor | Possible
59 | L | V.H. | H | L | M.R. | Restrict salt | Uncertain
60 | L | V.H. | V.H. | L | H.R. | Medical treatment | Possible
61 | L | V.L. | V.L. | L | H.R. | Medical treatment | Possible
62 | L | V.L. | L | L | H.R. | Medical treatment | Possible
63 | L | V.L. | M | L | M.R. | Restrict salt | Uncertain
64 | M | L | H | L | L.R. | Exercise | Not possible
65 | M | L | V.H. | L | L.R. | Exercise | Not possible
66 | M | L | V.L. | L | M.R. | Restrict salt | Uncertain
67 | M | M | L | L | H | Exercise | Not possible
68 | M | M | M | L | H | Exercise | Not possible
69 | M | M | H | L | H | Exercise | Not possible
70 | M | H | V.H. | L | L.R. | Restrict salt | Uncertain
71 | M | H | V.L. | L | L.R. | Restrict salt | Uncertain
72 | M | H | L | L | L.R. | Use olive oil | Uncertain
73 | H | V.H. | M | L | M.R. | Restrict salt | Uncertain
74 | H | V.H. | H | L | H.R. | Medical treatment | Possible
75 | H | V.H. | V.H. | L | R | Medical treatment | Possible
76 | H | V.L. | V.L. | L | R | Medical treatment | Possible
77 | H | V.L. | L | L | M.R. | Exercise | Not possible
78 | H | V.L. | M | L | M.R. | Restrict salt | Uncertain
79 | H | L | H | L | M.R. | Restrict salt | Uncertain
80 | H | L | V.H. | L | M.R. | Restrict salt | Uncertain
81 | H | L | V.L. | L | M.R. | Restrict salt | Uncertain
82 | V.H. | M | L | L | L.R. | Exercise | Not possible
83 | V.H. | M | M | L | L.R. | Exercise | Not possible
84 | V.H. | M | H | L | H.R. | Use olive oil | Not possible
85 | V.H. | H | V.H. | L | R | Medical treatment | Possible
86 | V.H. | H | V.L. | L | M.R. | Restrict salt | Uncertain
87 | V.H. | H | L | L | H.R. | Medical treatment | Possible
88 | V.H. | V.H. | M | L | H.R. | Medical treatment | Possible
89 | V.H. | V.H. | H | L | H.R. | Medical treatment | Possible
90 | V.H. | V.H. | V.H. | L | H.R. | Medical treatment | Possible
91 | V.L. | V.L. | V.L. | M | H.R. | Medical treatment | Possible
92 | V.L. | V.L. | L | M | H.R. | Medical treatment | Possible
93 | V.L. | V.L. | M | M | L.R. | Restrict salt | Uncertain
94 | V.L. | L | H | M | M.R. | Restrict salt | Uncertain
95 | V.L. | L | V.H. | M | H.R. | Medical treatment | Possible
96 | V.L. | L | V.L. | M | H.R. | Medical treatment | Possible
97 | V.L. | M | L | M | M.R. | Restrict salt | Uncertain
98 | V.L. | M | M | M | M.R. | Exercise | Not possible
99 | V.L. | M | H | M | M.R. | Exercise | Not possible
100 | L | H | V.H. | M | R | Restrict salt | Uncertain
101 | L | H | V.L. | M | M.R. | Restrict salt | Uncertain
102 | L | H | L | M | M.R. | Restrict salt | Uncertain
103 | L | V.H. | M | M | M.R. | Restrict salt | Uncertain
104 | L | V.H. | H | M | M.R. | Restrict salt | Uncertain
105 | L | V.H. | V.H. | M | H.R. | Medical treatment | Possible
106 | L | V.L. | V.L. | M | H.R. | Medical treatment | Possible
107 | L | V.L. | L | M | H.R. | Medical treatment | Possible
108 | L | V.L. | M | M | M.R. | Restrict salt | Uncertain
109 | M | L | H | M | L.R. | Exercise | Not possible
110 | M | L | V.H. | M | L.R. | Exercise | Not possible
111 | M | L | V.L. | M | L.R. | Low salt diet | Uncertain
112 | M | M | L | M | H | Exercise | Not possible
113 | M | M | M | M | H | Exercise | Not possible
114 | M | M | H | M | H | Exercise | Not possible
115 | M | H | V.H. | M | L.R. | Restrict salt | Uncertain
116 | M | H | V.L. | M | L.R. | Restrict salt | Uncertain
117 | M | H | L | M | L.R. | Use olive oil | Uncertain
118 | H | V.H. | M | M | L.R. | Restrict salt | Uncertain
119 | H | V.H. | H | M | H.R. | Medical treatment | Possible
120 | H | V.H. | V.H. | M | R | Medical treatment | Possible
121 | H | V.L. | V.L. | M | M.R. | Restrict salt | Uncertain
122 | H | V.L. | L | M | M.R. | Exercise | Not possible
123 | H | V.L. | M | M | M.R. | Restrict salt | Uncertain
124 | H | L | H | M | M.R. | Restrict salt | Uncertain
125 | H | L | V.H. | M | R | Medical treatment | Possible
126 | H | L | V.L. | M | R | Medical treatment | Possible
127 | V.H. | M | L | M | L.R. | Exercise | Not possible
128 | V.H. | M | M | M | L.R. | Exercise | Not possible
129 | V.H. | M | H | M | M.R. | Use olive oil | Not possible
130 | V.H. | H | V.H. | M | H.R. | Medical treatment | Possible
131 | V.H. | H | V.L. | M | M.R. | Restrict salt | Uncertain
132 | V.H. | H | L | M | M.R. | Restrict salt | Uncertain
133 | V.H. | V.H. | M | M | H.R. | Medical treatment | Possible
134 | V.H. | V.H. | H | M | H.R. | Medical treatment | Possible
135 | V.H. | V.H. | V.H. | M | H.R. | Medical treatment | Possible
V.L. Very low, L Low, M Medium, H High, V.H. Very high; H Healthy, L.R. Low risk, M.R. Moderate risk, R Risk, H.R. High risk


Table 2 Comparison of Mamdani and Sugeno FIS results with pathology reports
Fuzzy inference system | Total number of records in the data set | Total number of correct predictions | Accuracy (%)
Sugeno FIS | 1000 | 970 | 97
Mamdani FIS | 1000 | 950 | 95

Fig. 1 CVD surface view of SPO2 and blood sugar: (a) using the Mamdani method, (b) using the Sugeno method
Fig. 2 Prediction surface view of SPO2 and heart rate: (a) using the Mamdani method, (b) using the Sugeno method

Also, the Sugeno method shows a stepwise increase in the severity of the CVD with increasing levels of blood pressure.


Fig. 3 Result surface view of blood pressure and blood sugar: (a) using the Mamdani method, (b) using the Sugeno method

5 Conclusion
This work detects whether an individual has a CVD or not by using Mamdani and Sugeno FIS and compares their results. The results show that the Sugeno FIS gives 2% higher accuracy in predicting an individual with a CVD as compared to the Mamdani FIS. This is because the Sugeno FIS uses a weighted average for the defuzzification process. Currently, the accuracy of the proposed technique is evaluated using simulation. In the future, this technique will be implemented on hardware to investigate its real-world limitations.
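To make the defuzzification difference mentioned above concrete, the short sketch below contrasts centroid-of-area defuzzification (Mamdani) with a firing-strength-weighted average of crisp rule outputs (zero-order Sugeno). All numbers are toy values chosen only for illustration; they are not taken from the paper's FIS.

```python
# Toy comparison of Mamdani (centroid) vs. Sugeno (weighted average) defuzzification.
import numpy as np

x = np.linspace(0, 100, 1001)                     # output universe (risk, %)
agg = np.maximum(0.3 * (x < 40), 0.7 * (x > 60))  # assumed aggregated Mamdani output
mamdani_crisp = np.sum(x * agg) / np.sum(agg)     # centroid of the aggregated area

w = np.array([0.3, 0.7])                          # rule firing strengths (assumed)
z = np.array([20.0, 80.0])                        # crisp Sugeno rule outputs (assumed)
sugeno_crisp = np.sum(w * z) / np.sum(w)          # weighted average

print(mamdani_crisp, sugeno_crisp)
```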

References
1. World Health Organization, Global Health Observatory Data. https://www.who.int/en/newsroom/factsheets/. Last accessed 09 Feb 2020
2. Ordonez, C.: Association rule discovery with the train and test approach for heart disease prediction. IEEE Trans. Inf. Technol. Biomed. 10(2), 334–343 (2006)
3. Ali, A., Mehdi, N.: A fuzzy expert system for heart disease diagnosis. In: International Multi Conferences of Engineering and Computer Scientist, pp. 1–6, Hong Kong (2010)
4. Harun, U., Ahmet, A.: Detection of heart valve diseases by using fuzzy discrete hidden Markov model. Expert Syst. Appl. 34(4), 2799–2811 (2008)
5. Oad, K.K., DeZhi, X., Butt, P.K.: A fuzzy rule based approach to predict risk level of heart disease. Glob. J. Comput. Sci. Technol. 14(3-C), 16–22 (2014)
6. Anuradha, R.P.G.: Design of rule based fuzzy expert system for diagnosis of cardiac diseases. In: National Conference on Innovative Trends in Science and Engineering 2016, vol. 4, pp. 313–320 (2016)
7. Kumar, A.V.S.: Diagnosis of heart disease using advanced fuzzy resolution mechanism. Int. J. Sci. Appl. Inf. Technol. 2(2), 22–30 (2013)
8. Neeru, P.R.: Implementation of fuzzy controller for diagnose of patient heart disease. Int. J. Innov. Sci. 2(4), 694–698 (2015)
9. Smita, S., Sushil, S.: Generic medical fuzzy expert system for diagnosis of cardiac diseases. Int. J. Comput. Appl. 66(13), 35–44 (2013)
10. MathWorks Homepage. https://in.mathworks.com/matlab/.aspx. Last accessed 10 Mar 2020
11. Thyrocare Homepage. https://www.thyrocare.com/Test_Menu.aspx. Last accessed 09 Feb 2020
12. Suburban Diagnostics Homepage. https://www.suburbandiagnostics.com/pathology-tests/. Last accessed 09 Feb 2020
13. Medall Homepage. https://www.medall.in/health.html. Last accessed 09 Feb 2020
14. SRL Diagnostics Homepage. https://www.srlworld.com/health-packages/ahmedabad. Last accessed 09 Feb 2020
15. Metropolis Homepage. https://www.metropolisindia.com/patients/labs-home/. Last accessed 09 Feb 2020

Experimental Evaluation of Motor Skills in Using Jigsaw Tool for Carpentry Trade Sasi Deepu, S. Vysakh, T. Harish Mohan, Shanker Ramesh, and Rao R. Bhavani

Abstract Carpenters are key members of the construction industry, and the carpentry trade has a major role in it. Different hand and power tools are used to complete jobs in this area, and various skill parameters are required for the proper usage of these tools. Based on a survey, one of the important tools used in this field was chosen for experimentation. This paper describes the skill parameters involved in the proper use of the jigsaw tool and presents an experimental study that analyzes how those skills vary between novices and experts. Based on the experts' data, novices can more easily be taught the proper usage of the tool; the experts' data are set as reference data for skilling the novices. This paper also compares the experts' data with those of the novices in order to propose an assistive training system for the jigsaw tool, which will accelerate learning of its use.

1 Introduction
In India, construction is the second-largest sector and it is set to become the largest sector by 2022; it employs more than 75 million people [1]. To meet the demand, Industrial Training Institutes and Industrial Training Centres have been established to train people all over India. According to the Directorate General of Training, at present there are 14,000 Industrial Training Institutes working in the country [2]. The Government of India has initiated various schemes and missions to develop skills among youths [3]. Even after the training, there is no assurance that the people who received training are skilled or expert in the trade [4].
S. Deepu (B) · T. Harish Mohan · S. Ramesh · R. R. Bhavani, AMMACHI Labs, Amrita Vishwa Vidyapeetham, Amritapuri, India; e-mail: [email protected]. S. Vysakh, Department of Mechanical Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, India. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_18


The boom in the construction industry also increases the demand in the carpentry trade, as carpentry is a part of the construction sector. The need for carpenters in Indian industry is very high, while the availability of skilled people is very low [3]. According to the National Skill Development Corporation (NSDC), a skilled carpenter needs to know multiple skills with different tools [5]. With a conventional training approach, it would therefore take several years to become an expert, depending on the sector of vocational education. Technology-based training can reduce the training period and reduce the skill gap between expert and novice [6, 7]. In this paper, we describe the tool characterization of the carpentry trade and, based on the tools selected from the literature survey, we conducted a survey with experts. The studies converged on a single tool, on which experimental studies were carried out to capture the skills involved and to better understand the difficulty of using the tool. The experimental research was carried out with novices and experts, which gives an insight into the variation in the data and will later lead to the design of an assistive device system.

2 Related Work
In India, the number of expert skilled workers is limited. To overcome this barrier, the research community must take the necessary steps to bring new ideas and technology into the construction field. Research has been done on proposing and implementing assistive systems or devices [8] for skill training. To become an expert in rebar bending, an individual needs several years of practical experience, through which an understanding of the different motor skill parameters involved in the skill is gained [9]. Similar work has been done in the manual rebar bending trade, with a detailed study of the skill parameters involved in the trade through experimental methods. Comparing the novices' data with the experts' leads to the design of a real-time guidance system for effective skill transfer from expert to novice. The lever-positioning skill in the manual rebar bending process achieves high accuracy with the help of a feedback system [10, 11]. The authors proposed a feedback system for the hand drilling machine [12] that guides the novice and reduces orientation error while practicing with it.

3 Methodology
3.1 Tool Survey
A survey was conducted among five experts with more than ten years of experience in the carpentry trade. The purpose of the survey was to shortlist a tool that is regularly used, needs more skill, is complex to handle, and is time-consuming.


Table 1 Tool analysis
 | Circular saw | Jigsaw | Miter saw | Reciprocating saw
Usage type | Moderate | Complex | Moderate | Complex
Kickback | Low | High | Low | High
Cross cut | Limited | Yes | Yes | Yes
Curve cut | Limited | Yes | No | No
Bevel cut | Limited | Yes | Yes | No
Rip cut | Yes | Limited | No | Limited
Portable | Yes | Yes | No | Yes

It was learned through questionnaires and discussions. The survey converged on power tools such as the circular saw, jigsaw, miter saw, and reciprocating saw. The first part of the questionnaire dealt with the type of tool and its complexity. The second section was concerned with the skills and their importance in the industry. A comparison was made, as shown in Table 1. Different types of cutting processes are used in the carpentry industry, and the usage of each tool follows its own procedure [13]. In addition, the same cutting process can be performed with different tools, provided the user has enough skill to do so. The table lists the details of the different tools, which are also mentioned in the occupational standards by NSDC for becoming an expert carpenter [5, 14]. Based on the survey, the experts' opinion and the comparison in the table, the jigsaw tool, which is also called an all-rounder tool in the carpentry field, was recommended for the experimentation study.

3.2 Skill Parameters
Fig. 1 Jigsaw tool with all the orientation
Based on the discussion with and observation of the experts, four skill parameters are important for jigsaw users. Figure 1 illustrates the model of the jigsaw and the movement pattern while it is in use. J. P. Domblesky et al. discussed a model of reciprocating


sawing, in which they studied parameters including the applied force and the feed rate [15]. The major parameters required for using the jigsaw tool are:
Forward force feedback in the XY-plane: The jigsaw rests horizontally on the wooden specimen, and cutting is performed in the direction of the exerted force. For a perfect cutting process, the impact of the tool blade on the wood has to be taken into consideration. While performing a bevel cut, the base of the tool rests on the wood and the upper part of the tool is adjusted to the desired cutting angle.
Downward force in the Z-direction: The downward force is applied to give stability while performing the cut; it acts on the XY-plane in the Z-direction.
Rotational movement about the XYZ axes: The tendency of the tool blade to deviate from the direction of cutting is termed yaw. A mismatch between the applied forces and the speed of cut often leads to kickback, termed pitch.
Feed rate: The relative velocity at which the cutter is advanced along the workpiece.
An initial study was performed to capture the force-related skill parameters involved in the usage of the jigsaw tool. In similar studies, hand gloves were used to capture the grip and grasp forces on the handles of tools [16, 17], and this methodology was carried out for primary identification on the jigsaw. One of the barriers was that different individuals have different hand sizes. To overcome the hand-size differences, experiments with people having different hand sizes were carried out to finalize the design. The design gives the flexibility of measuring the forward and downward forces at the same time. The experiment was conducted by cutting a piece of the wood specimen with the jigsaw while wearing the gloves. Ink-test experiments by experts have shown high reproducibility and are cost-effective and user-friendly [18, 19]. During the experiments, the experts applied the force needed to cut a wood specimen. The left side of Fig. 2 shows a sample of the different hand patterns with different sizes, which led to two positions for attaching the force sensors, as shown on the right side of Fig. 2. A similar approach was used to find the normal power-grasp points or positions of human hands [20]. A similar pattern of applying the forward force using the thumb and the downward force using the palm was observed. These experiments led to a design placing two FSRs (FSR 1 related to the palm and FSR 2 related to the thumb).

4 System Design
The electromechanical system is attached to the handle of the jigsaw, which gives the user the flexibility to handle the tool as in normal usage. A rigid mechanical design is incorporated to distribute the force in proportion to the sensors.


Fig. 2 Sample set for the ink test

4.1 Electronics and Mechanical Design
An Arduino Uno board based on the ATmega328 microcontroller is used as the main controller, along with two force-sensitive resistors (FSRs) from Tekscan. The FSR is designed to measure forces up to 10 kg. The power tool produces high-frequency vibration, which drives instability in the FSR readings. To overcome this, a low-pass Sallen–Key filter with a cutoff frequency of 3 Hz was introduced into the system; the circuit was designed and placed between each FSR and an analog pin of the microcontroller. Figure 3 shows the electronic architecture of the system, in which the Arduino Uno board communicates with the computer via the serial port. The jigsaw handle has been designed to incorporate the sensors at the desired locations mentioned before. The bottom plate for sensor mounting is designed to follow the profile of the jigsaw handle. In the base, two connectors are made: one connector is directly connected to the top part where the force is applied, and the other to the base. The FSR is kept between two pucks, as shown on the left side of Fig. 4. This design gives better support to the FSR by keeping it steady while working with the tool. The top part of the connectors and the mating part on the force-applying piece are chamfered for a better fit. The right side of Fig. 4 shows the attachment of the sensors on the jigsaw.
Fig. 3 Hardware architecture (Power 5 V → FSR 1 and FSR 2 → Sallen–Key filters → Arduino Uno → Computer)
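On the host side, the serial stream from the Arduino Uno has to be logged before any analysis. The sketch below is one possible logger, written in Python with pyserial; it assumes the Arduino prints one "fsr1,fsr2" pair of raw 10-bit ADC readings per line at 9600 baud, and the port name is also an assumption, since the paper only states that the Uno communicates with the computer over the serial port.

```python
# Host-side logging sketch (assumed data format: "raw1,raw2" per line, 9600 baud).
import serial  # pyserial

PORT, BAUD, VREF, ADC_MAX = '/dev/ttyACM0', 9600, 5.0, 1023

with serial.Serial(PORT, BAUD, timeout=1) as ser, open('jigsaw_log.csv', 'w') as log:
    log.write('v_fsr1,v_fsr2\n')
    while True:
        line = ser.readline().decode(errors='ignore').strip()
        if not line:
            continue
        try:
            raw1, raw2 = (int(v) for v in line.split(','))
        except ValueError:
            continue                      # skip malformed lines
        v1 = raw1 * VREF / ADC_MAX        # convert ADC counts to volts
        v2 = raw2 * VREF / ADC_MAX
        log.write(f'{v1:.3f},{v2:.3f}\n')
```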


Fig. 4 Mechanical assembly (left side) and experimentation setup (right side)

4.2 Subjects and Procedure
For better accuracy and resolution, the force-sensitive resistors (FSRs) were calibrated in the system. The calibrated data can then be used for analyzing the skill parameters of experts and novices. In the experiments, three experts with more than 10 years of experience in the field of carpentry participated, along with three novices in the carpentry field. Plywood specimens of thickness 1.24 cm were used, on which a 16 cm straight cut with a width of 5 cm was performed. The novices were briefed on the know-how of the tools and the safety measures owing to their lack of prior experience; all the subjects were asked to perform five different trials.
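The calibration mentioned above maps filtered FSR voltages to forces in grams. A minimal way to do this, sketched below under assumptions, is to record the sensor voltage for a few known reference loads and fit a line; the calibration points used here are hypothetical placeholders, since the paper reports only the resulting conversions (for example, roughly 1.5 V corresponding to about 2500 g).

```python
# Calibration sketch: linear voltage-to-force map for one FSR channel (assumed points).
import numpy as np

volts = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # measured FSR output (V), assumed
grams = np.array([800, 1650, 2500, 3200, 3900])   # known applied loads (g), assumed

slope, intercept = np.polyfit(volts, grams, 1)    # least-squares line

def to_grams(v):
    """Convert a filtered FSR voltage reading to an approximate force in grams."""
    return slope * v + intercept

print(round(to_grams(1.5)))   # close to 2500 g with these assumed points
```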

5 Results
The experimental results describe the individual sensor data (FSR 1 and FSR 2) with respect to the time taken to complete the cutting process.

5.1 Experts' Data
Figure 5 (left plot) shows the sample graph of FSR 1 and FSR 2 with respect to time for expert 1.

Fig. 5 Expert 1, 2, 3 (from left side) voltage versus time graphs


The FSR 1 value reflects the force applied by the palm, and the FSR 2 value reflects the thumb force of expert 1. For cutting the 16 cm length of plywood, expert 1 required approximately 18 s, which is marked in the figure. When he started the cutting process, FSR 2 (blue: series 1) gave a voltage between 1.6 and 1.8 V, corresponding to 2800–3100 g of force. At the same time, he kept applying force on FSR 1 (orange: series 2) using his palm, and that force reached a maximum of 1.5 V, which is equal to 2500 g. In the sample of five trials of this expert, we could see that all the trials kept almost the same force pattern for FSR 1 and FSR 2. That means expert 1 applied an incrementally increasing force on FSR 1 that did not exceed 2800 g, while FSR 2 received a roughly constant force during the entire cutting time in the range of 2600–3800 g. In all five trials he kept the time between 18 and 20 s. That is the consistency we could measure for expert 1. The piece cut by expert 1 was perfectly straight, which was verified by another expert who participated in the experiments. The center plot of Fig. 5 shows the force parameters of expert 2. In the five trials of expert 2, we found another similarity in all of the cuts: FSR 1 (orange: series 2) and FSR 2 (blue: series 1) show constant forces in two ranges. The main difference from expert 1 is that the FSR 1 value is a constant force of around 2600 g, applied from the start of the cut and held consistently up to the end. In all of his trials he applied the same pattern, kept between 2600 and 3000 g. On FSR 2, he applied a force of more than 3800 g, with a maximum of 3900 g seen at the beginning of the cut. Across all cuts, expert 2 applied a high force at the initial cut, after which the force reduced by 200–500 g, while FSR 1 was kept constant or increased by 100–300 g. This expert took between 14 and 16 s to complete the cutting process. The right plot of Fig. 5 shows the voltage versus time graphs of both FSRs recorded for expert 3. In that figure, series 2 (orange) is the force value of FSR 1. That force is low when the cutting process starts and then increases gradually as the cut progresses; the voltage value of 2.2 V is approximately 3400 g, and the initial force is nearly 2700 g. The FSR 2 value (blue) holds a constant force of nearly 3000 g. The time taken for this cutting process is nearly 19 s. Expert 3 performs almost the same as expert 1 in force pattern, except that he applied more force on FSR 1 when starting the cutting process. The time taken for his five trials lies between 17 and 19 s.

5.2 Novice Data
The novices' data are very different from the experts' data, as shown in Figs. 6, 7 and 8. Figure 6 shows the force data of FSR 1 and FSR 2 for novice 1. There is almost no force on FSR 1 (below 10 g), and he applied a discontinuous force on FSR 2.


Fig. 6 Novice 1: voltage versus time value graph for FSR 1 and FSR 2

Out of five trials, all followed a similar pattern, with the FSR 1 force varying in the range of 100–500 g, while the force applied on FSR 2 varied between 3000 and 4100 g. Sometimes this much force is not required for the process. The time taken for this trial is 20 s, and the time taken by this novice varied between 20 and 27 s over the five trials. The data of novice 2 are given in Fig. 7; they show some level of similarity with the experts' pattern, but only for two trials. All the other trials are similar to those of novice 1. The force range was also different: the FSR 1 force appears only from about the halfway point of cutting the piece of wood, and the force applied at that time was around 2100 g, while FSR 2 shows a force of around 3800 g. The time taken for this cut is 27 s, and the average time taken by this novice is 25 s. There is no consistency in the force range across all cuts of this novice. Figure 8 shows the data of novice 3, in which FSR 1 gives a force while FSR 2 does not give any force during the initial cutting. After 12 s, FSR 1 gives a force that goes up to 600 g, while at the same time FSR 2 applies a force that goes up to a maximum of 3800 g.

Fig. 7 Novice 2: voltage versus time value graph for FSR 1 and FSR 2

Fig. 8 Novice 3: voltage versus time value graph for FSR 1 and FSR 2


Fig. 9 Experts’ data voltage versus time value graph for FSR 1 and FSR 2

goes a maximum of 3800 g. There we could find some discontinuity of the forces pattern which is not shown in the case of expert’s data. The time taken for this cut is nearly 29 s and their five trails varied in between from 25 to 32 s. Out of five trails, four trails were keeping the same pattern and fifth trail is giving the similar pattern of novice 1, except the applied force varied on FSR 2. The above-presented data are very much distinguishable between experts and novices. Of course, there is an individual force pattern between experts according to skillset of each expert. It is observed that a good repeatability in the cutting process completes the process within a constant time, which is shown in Fig. 9 (left graph shows the FSR 1 data and right graph shows the FSR 2 data). The graph displayed in orange color expert 1, green color expert 2, and blue color expert 3, respectively. In case of the novices have no repeatability, and no similar force patterns displayed. Novices’ quality of specimen after the trial was not up to the mark as the experts. The novice fails to perform linear straight cut and even in some cases angle deviation (approximately 5°) in the cut been observed. The data also project the time variation in trial performed by novice, along with discontinuity in force pattern. The cutting pieces after the cut process observed with the experts not participated in the study were observed that expert’s wood pieces were perfect and novices’ pieces cannot be used for the work.
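The repeatability comparison described above can be summarised numerically by reducing each trial to a few statistics and looking at their spread across the five trials of a subject. The sketch below assumes each trial was logged to a CSV file with columns t, v_fsr1, v_fsr2 (the file names are hypothetical); a low standard deviation across trials corresponds to the consistency seen in the experts' data.

```python
# Per-trial summary and across-trial spread; assumes CSV logs with columns t, v_fsr1, v_fsr2.
import numpy as np

def trial_summary(path):
    t, v1, v2 = np.loadtxt(path, delimiter=',', skiprows=1, unpack=True)
    return v1.mean(), v2.mean(), t[-1] - t[0]     # mean palm force, mean thumb force, duration

def repeatability(paths):
    stats = np.array([trial_summary(p) for p in paths])
    return stats.mean(axis=0), stats.std(axis=0)  # low std across trials = consistent subject

expert_mean, expert_std = repeatability([f'expert1_trial{i}.csv' for i in range(1, 6)])
novice_mean, novice_std = repeatability([f'novice1_trial{i}.csv' for i in range(1, 6)])
print(expert_std, novice_std)   # the expert is expected to show the smaller spread
```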

6 Conclusions
Based on the study, we can distinguish the experts' and novices' data for cutting performance using a jigsaw tool with respect to the different skill parameters that we experimented on. This can also lead to designing a cutting-performance model based on the experts' data. The relative force data obtained using the force-sensitive resistors need to be improved using different calibration techniques. In addition, the limitation of the system in capturing different force parameters has to be analyzed for the different positions at which force is applied by the user, which would allow us to process the data more precisely and deeply.


7 Future Works
More experts' data sets, with all the skill parameters, should be collected and analyzed. There should be an individual reference baseline for each skill parameter based on the experts' data, and an individual feedback system should be designed for each skill parameter based on that reference baseline, without limiting the actual working process with the jigsaw tool. The whole assistive system will narrow the gap between experts and novices in the future.

References
1. Srivastava, R., Jha, A.: Capital and labour standards in the organised construction industry in India. CDPR, SOAS, London. Accessed 16 Dec 2016
2. Sharma, S.: Employment (Vision 2025). Government of India, Delhi (2003)
3. Goel, D., Vijay, P.: Technical and vocational education and training (TVET) system in India for sustainable development (2017)
4. Soham, M., Rajiv, B.: Critical factors affecting labour productivity in construction projects: case study of south Gujarat region of India. Int. J. Eng. Adv. Technol. 2(4), 583–591 (2013)
5. Carpenter Wooden Furniture by NSDC. https://www.nsdcindia.org/carpenter-wooden-furniture
6. Salisbury, J.K., Srinivasan, M.A.: Phantom-based haptic interaction with virtual objects. IEEE Comput. Graph. Appl. 17(5), 6–10 (1997)
7. Mullins, J., Mawson, C., Nahavandi, S.: Haptic handwriting aid for training and rehabilitation. In: 2005 IEEE International Conference on Systems, Man and Cybernetics, vol. 3. IEEE (2005)
8. Durand, V.: Functional communication training using assistive devices: effects on challenging behavior and affect. Augment. Altern. Commun. 9(3), 168–176 (1993)
9. Menon, B.M., et al.: Virtual rebar bending training environment with haptics feedback. In: Proceedings of the Advances in Robotics. ACM (2017)
10. Deepu, S., et al.: An experimental study of force involved in manual rebar bending process. In: IOP Conference Series: Materials Science and Engineering, vol. 310, no. 1. IOP Publishing (2018)
11. Deepu, S., Bhavani, R.R.: Characterization of expertise to build an augmented skill training system for construction industry. In: 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT). IEEE (2018)
12. Akshay, N., Deepu, S., Bhavani, R.R.: Augmented vocational tools using real time audiovisual feedback for psychomotor skill training. In: 2012 IEEE International Conference on Technology Enhanced Education (ICTEE). IEEE (2012)
13. Nasir, V., Cool, J.: A review on wood machining: characterization, optimization, and monitoring of the sawing process. Wood Mater. Sci. Eng., 1–16 (2018)
14. Roza, G.: A Career as a Carpenter. The Rosen Publishing Group, Inc. (2010)
15. Domblesky, J.P., James, T.P., Otto Widera, G.E.: A cutting rate model for reciprocating sawing. J. Manufact. Sci. Eng. 130(5), 051015 (2008)
16. Yun, M.H., Kotani, K., Ellis, D.: Using force sensitive resistors to evaluate hand tool grip design. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 36, no. 10. SAGE Publications, Los Angeles, CA (1992)
17. Hammond, F.L., Mengüç, Y., Wood, R.J.: Toward a modular soft sensor-embedded glove for human hand motion and tactile pressure measurement. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE (2014)
18. Hsu, W.-C., et al.: The design and application of simplified insole-based prototypes with plantar pressure measurement for fast screening of flat-foot. Sensors 18(11), 3617 (2018)
19. Knudson, D.V.: Forces on the hand in the tennis one-handed backhand. J. Appl. Biomech. 7(3), 282–292 (1991)
20. Kargov, A., et al.: A comparison of the grip force distribution in natural hands and in prosthetic hands. Disabil. Rehabil. 26(12), 705–711 (2004)

Patent Trends in Higher Education of India: A Study on Indian Central Universities J. P. Singh Joorel, Abhishek Kumar, Sanjay Tiwari, Ashish Kumar Chauhan, and Ramswaroop Ahirwar

Abstract This study is based on the patents registered/filed/published/granted by the Indian Central Universities at the Indian Patent Office as well as in other countries, e.g. the USA, China and Japan. It covers 20 out of the 49 central universities, namely those that published/granted at least three patents in the last decade (2009–2018). Accordingly, the study is restricted to these 20 universities for analyzing the trends in terms of registration of patents in national and global databases, contribution towards different research areas, and the role of the patent as an indicator for ranking of institutes. Moreover, this study focuses on the growth rate of patents published and granted by the universities in the last decade and on the necessity of a single database for the academic community of India. Patent data are used by different ranking agencies of India, i.e. India Rankings (NIRF) and the Atal Ranking of Institutions on Innovation Achievements (ARIIA), and by accreditation bodies such as NAAC. Currently, there is no dedicated database of patents that covers only the patents of academic institutions.

J. P. S. Joorel (B) · A. Kumar · A. K. Chauhan · R. Ahirwar INFLIBNET Centre, Infocity, Gandhinagar 382007, India e-mail: [email protected] A. Kumar e-mail: [email protected] A. K. Chauhan e-mail: [email protected] R. Ahirwar e-mail: [email protected] S. Tiwari Magadh University, Bodhgaya Gaya, India e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_19


1 Introduction
A massive number of patent applications are being filed and granted across the globe, and the trend is continuously increasing among companies, agencies, colleges, institutions and universities. These trends are also applicable to, and correlated with, the number of inventions in India. There are several contributors in India: individuals, agencies, government bodies, institutes, colleges and universities are filing patents at the Indian patent office as well as at the patent offices of other countries. We looked into these trends and found that the central universities are also filing patents, which is very useful for Indian ranking agencies such as the National Institutional Ranking Framework (NIRF), the National Assessment and Accreditation Council (NAAC) and the Atal Ranking of Institutions on Innovation Achievements (ARIIA), as well as for other accreditation agencies and global rankings, apart from the individual benefits to the universities. The central universities of India have filed and been granted many patents. This study focuses on the trends of patenting among Indian central universities during the last decade, i.e. the years 2009–2018. It also covers contributions to the national patent office as well as international patent offices and a comparison of the subject areas covered. There are 49 Central Universities in India, established by Acts of Parliament under the Department of Higher Education, Government of India.

1.1 Background
Patents are Intellectual Property (IP) rights provided to inventors or individuals, which permit them to exclude all others from using, selling, or making their inventions for a fixed period. Patents are a form of intellectual property that gives legal rights to the owner. There are three types of patents:
• Utility patents: cover an original, new and useful method, article of manufacture, mechanism, or composition of material.
• Design patents: cover an innovative, new and ornamental design for a manufactured product.
• Plant patents: cover the invention or discovery of distinct varieties of plants capable of reproduction.
The Indian Government has taken tangible steps to create a supportive environment for creating and protecting IP rights and for establishing IP management in the nation. A strong IPR regime in a country enables growth of trading and commercialization, both at national and international levels, and gives a commercial edge over competitors. The office of the Controller General of Patents, Designs, Trade Marks and Geographical Indications (CGPDTM) is primarily responsible


for managing copyrights, design registrations, patents, trademarks and geographical indications. The office currently administers all the major IPR legislations in India, working towards useful interaction and streamlined procedures that result in good facilities for its patrons.

2 Objectives
• Contribution of patents (national/international) by Indian central universities
• To find out patent trends (subject-wise) among Indian central universities
• Role of patent data in Indian rankings and accreditations.

3 Methodology
This study has been carried out in the following three steps:
Step-1 Sample size and identification of data: The whole study is focused on the patents of central universities. Three patent databases were identified for data collection, namely InPASS (Indian Patent Advanced Search System), PATENTSCOPE (initiated by WIPO, the World Intellectual Property Organization) and the Derwent Innovation database (for international contributions), which is a product of Clarivate Analytics.
Step-2 Data collection: The period of data collection is one decade, from 2009 to 2018. The data were collected by creating queries (combinations of the affiliations of the universities) using the various forms and old names of each university, department names, university address, PIN code, etc., combined with Boolean operators, for effective and precise data collection (see the sketch below).
Step-3 Data analysis: Patent trends were analyzed on the basis of the bibliographical details of the data, using different graphs, tables, etc.
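As an illustration of Step-2, the snippet below assembles a Boolean affiliation query from name variants and a PIN code for one university. The specific variants and the PIN code shown are only example values, and the exact field syntax differs between InPASS, PATENTSCOPE and Derwent Innovation, so the output here is a generic OR-combined phrase list rather than a database-specific query.

```python
# Illustrative query builder for affiliation searches (example values only).
variants = [
    "University of Delhi",
    "Delhi University",
    "Univ. of Delhi",
]
extras = ["110007"]   # e.g. a campus PIN code used to narrow the affiliation

query = "(" + " OR ".join(f'"{v}"' for v in variants + extras) + ")"
print(query)   # ("University of Delhi" OR "Delhi University" OR "Univ. of Delhi" OR "110007")
```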

4 Literature Review
According to Sharma and Jain [1], Indian universities are substantially increasing their patents and publications. Their study shows that the publication and patenting rates of Indian universities were very low compared to the IITs (Indian Institutes of Technology) until 2010. Jana et al. [2] conducted a study analyzing the patenting behaviour of the SAARC countries in 2014. According to this study, India has been highly active in filing international patents compared to other SAARC countries and at the inter-country level as well. In a country-level comparison based on domestic patenting intensity and relative quality, the indicators show that Sri Lanka is the most active scientific country among the SAARC nations according to the


GCI (Global Competitiveness Index), KEI (Knowledge Economy Index) and Human Development Index. According to Bhattacharya et al. [3], a detailed assessment of Indian patenting activity over the period 1990–2002, covering entities (Indian organizations, foreign R&D centres in India, resident individuals), types of proprietary protection (utility, design, plant patents), organizations (industry, research organizations, specialized institutions and so forth), industrial sectors, and categories (process/product), reveals that very few patents result from collaboration between different organizations. Major scientific agencies like CSIR, DST, DBT, etc. have initiated several network programs for joint technological development involving research laboratories, universities and industries. The study of Kandpal et al. [4] has shown that patent grants increased significantly in all fields of agriculture after 2005, but international companies own most of these patents; they accounted for 75% of the total patents granted during 2007–2012. As a result of collaboration with these international companies, patenting has provided access to technologies that were earlier not available in India, especially in the fields of transgenics, agrochemicals and animal vaccines. The review explains the urgency of more collaboration and inter-field research initiatives and the development of advanced technology in order to procure more quality patents.

5 Patent as an Indicator in Ranking and Accreditations
The Government of India continues to focus on research productivity, novel innovation, good leadership and creativity in Higher Educational Institutions. There are two major government bodies, namely NAAC and NBA, set up for the accreditation of Indian institutions and courses. India now also has its own ranking methodology to benchmark institutions through various parameters; the ranking programmes are NIRF (India Rankings) and ARIIA (Atal Ranking of Institutions on Innovation Achievements). Both benchmarking programmes (NIRF and ARIIA) give weightage to research outcomes: NIRF gives 30% weightage to research outcomes and ranks institutes in different disciplines (Engineering, Management, Pharmacy, Architecture, Medical, Law and Overall), while ARIIA is entirely focused on innovations. The patent, as an indicator, plays a vital role in the following manner:
• NIRF: The patent indicator carries 15 marks out of 100 in NIRF, where the subfields are patents published and granted in the last three years. There is a strong correlation between patents and ranking.
• ARIIA: The subfields required in ARIIA are patents filed, published, granted and cited in the last six years.
• NBA: The National Board of Accreditation (NBA) also requires national and international patents, copyrights and designs awarded in the respective years to be reported, and gives some points under Faculty Intellectual Property Rights (FIPR).


• NAAC: Most colleges, universities and institutions file their full data, including the total patents published and granted in the last five years, and submit a report to the National Assessment and Accreditation Council (NAAC).

6 Contributions of Patents by Central Universities
There are several patents registered/filed/published/granted by the Central Universities at the Indian Patent Office as well as in other countries, e.g. the USA, China and Japan. There are 20 out of 49 central universities that produced at least three patents in the last decade. Accordingly, the study is restricted to analyzing the trends, contributions and comparisons for these 20 central universities that have published/granted patents.

6.1 Contribution of Patents with Respect to Patents Published and Granted
Graph 1 shows the sum of patents published and granted. It depicts that there are huge differences between the top five and the last five central universities in publishing patents during the last ten years. Table 1 shows the detailed bifurcation of the patents published and granted by the respective central universities.

Graph 1 Top 20 central universities in terms of total patents published and granted (2009–2018)


Table 1 List of top 20 Central Universities by patents published and granted (2009–2018)
Sr. No. | Name of the Central University | Location | State | Total Published | Total Granted
1 | University of Delhi | New Delhi | Delhi | 114 | 47
2 | Aligarh Muslim University | Aligarh | Uttar Pradesh | 102 | 7
3 | Banaras Hindu University | Varanasi | Uttar Pradesh | 66 | 18
4 | Jawaharlal Nehru University | New Delhi | Delhi | 68 | 15
5 | University of Hyderabad | Hyderabad | Telangana | 52 | 13
6 | Tezpur University | Tezpur | Assam | 42 | 3
7 | Jamia Millia Islamia | New Delhi | Delhi | 40 | 4
8 | Visva-Bharati University | Santiniketan | West Bengal | 22 | 5
9 | Pondicherry University | Pondicherry | Puducherry | 18 | 4
10 | University of Allahabad | Allahabad | Uttar Pradesh | 17 | 5
11 | Babasaheb Bhimrao Ambedkar University | Lucknow | Uttar Pradesh | 18 | 0
12 | Assam University | Silchar | Assam | 14 | 0
13 | North Eastern Hill University | Shillong | Meghalaya | 8 | 1
14 | Guru Ghasidas Vishwavidyalaya | Bilaspur | Chhattisgarh | 7 | 1
15 | Dr. Hari Singh Gour University | Sagar | Madhya Pradesh | 5 | 0
16 | Sikkim University | Gangtok | Sikkim | 5 | 0
17 | Central University of Gujarat | Gandhinagar | Gujarat | 4 | 1
18 | Central University of Punjab | Bathinda | Punjab | 3 | 0
19 | Manipur University | Imphal | Manipur | 2 | 1
20 | Hemwati Nandan Bahuguna Garhwal University | Srinagar | Uttarakhand | 3 | 0

The University of Delhi is top in terms of the combined total of patents published and granted, which includes both national and international patents.

6.2 Contribution of Patents with Respect to Subject Area
Graph 2 presents the subject fields of invention covered by the patents published and granted during the last ten years by the above central universities. It was found that the maximum number of patents is in the chemical field, with fewer in the civil and textile fields of invention. It shows that the bulk of the patents belong to five subject areas, namely Chemical, Biotechnology, Pharmaceuticals, Mechanical Engineering and Electrical.

Graph 2 Production of patents in different areas of invention fields (subject-wise), 2009–2018


6.3 Nationally and Internationally Registered Patents
The Central Universities in India have registered (filed/published/granted) patents with the national patent office as well as with international (foreign) patent offices. Table 2 presents the data on patents published both nationally and internationally. It shows that the University of Delhi has 101 national patents, including 23 granted, and 60 international patents, including 24 granted. Aligarh Muslim University is second in rank with 107 national patents, including 6 granted, and 2 international patents, including 1 granted, while Banaras Hindu University occupies the third position with 81 national patents, including 15 granted, and 3 internationally granted patents in its name. The present data show that only eight universities have internationally published/granted patents.

6.4 Role of Patents in India Rankings
Table 3 depicts that 50% of the central universities that have patents come in the top 100 of the university rankings done by NIRF (MHRD, Government of India). Moreover, in the list of the top 100 universities of the NIRF ranking, 10 universities, i.e. 50%, are ranked in the top 50. It is an indication that the patent plays a vital role in the India Rankings.

7 Outcomes and Findings
The outcomes of this study show the research productivity of the central universities in terms of patents published and granted. The following are the major outcomes and findings of this study:
• Participation in national/international offices: Only 12 out of the 20 universities have patents published and granted with both the national and international offices. The remaining eight participate only in the national office.
• Published versus granted: It was also observed (as shown in Table 2) that the chances of a published patent being converted to a granted patent are higher in the international scenario than in the national one. For example, among the top three universities that published more in international offices, the University of Delhi has 29.49% granted patents nationally (relative to published patents), whereas it has 66.67% granted patents internationally. Similarly, Jawaharlal Nehru University has 13.73% granted nationally and 47.06% granted internationally, and Visva-Bharati University has 20.00% granted nationally and 25.00% granted internationally.
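The percentages quoted in the bullet above can be re-derived directly from the counts in Table 2 as granted/published ratios for the national and international offices, as sketched below; the counts are taken from Table 2 of this paper.

```python
# Granted-to-published conversion ratios from the Table 2 counts.
counts = {   # (national published, national granted, intl published, intl granted)
    "University of Delhi":        (78, 23, 36, 24),
    "Jawaharlal Nehru University": (51, 7, 17, 8),
    "Visva-Bharati University":   (10, 2, 12, 3),
}

for name, (nat_pub, nat_gr, int_pub, int_gr) in counts.items():
    print(f"{name}: national {100 * nat_gr / nat_pub:.2f}%, "
          f"international {100 * int_gr / int_pub:.2f}%")
# University of Delhi: national 29.49%, international 66.67% (matches the text above)
```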


Table 2 List of top 20 Central Universities with national and international patents published and granted (2009–2018)
Sr. No. | Name of the Central University | Location | National Published | National Granted | International Published | International Granted | Total patents
1 | University of Delhi | New Delhi | 78 | 23 | 36 | 24 | 161
2 | Aligarh Muslim University | Aligarh | 101 | 6 | 1 | 1 | 109
3 | Banaras Hindu University | Varanasi | 66 | 15 | 0 | 3 | 84
4 | Jawaharlal Nehru University | New Delhi | 51 | 7 | 17 | 8 | 83
5 | University of Hyderabad | Hyderabad | 42 | 8 | 10 | 5 | 65
6 | Tezpur University | Tezpur | 40 | 3 | 2 | 0 | 45
7 | Jamia Millia Islamia | New Delhi | 38 | 4 | 2 | 0 | 44
8 | Visva-Bharati University | Santiniketan | 10 | 2 | 12 | 3 | 27
9 | Pondicherry University | Pondicherry | 18 | 4 | 0 | 0 | 22
10 | University of Allahabad | Allahabad | 17 | 5 | 0 | 0 | 22
11 | Babasaheb Bhimrao Ambedkar University | Lucknow | 18 | 0 | 0 | 0 | 18
12 | Assam University | Silchar | 7 | 0 | 7 | 0 | 14
13 | North Eastern Hill University | Shillong | 8 | 1 | 0 | 0 | 9
14 | Guru Ghasidas Vishwavidyalaya | Bilaspur | 7 | 1 | 0 | 0 | 8
15 | Dr. Hari Singh Gour University | Sagar | 5 | 0 | 0 | 0 | 5
16 | Sikkim University | Gangtok | 5 | 0 | 0 | 0 | 5
17 | Central University of Gujarat | Gandhinagar | 4 | 1 | 0 | 0 | 5
18 | Central University of Punjab | Bathinda | 3 | 0 | 0 | 0 | 3
19 | Manipur University | Imphal | 2 | 1 | 0 | 0 | 3
20 | Hemwati Nandan Bahuguna Garhwal University | Srinagar | 3 | 0 | 0 | 0 | 3

Note The source of data, i.e. considered database is InPass portal (for national patent details) and PATENTSCOPE (WIPO) and Derwent Innovation database (for International patents)

Table 3 Listing of Central Universities who ranked in India Rankings (NIRF)
University name | Total No. of patents | Ranking 2019 | Ranking 2018 | Ranking 2017
University of Delhi | 114 | 13 | 7 | 8
Aligarh Muslim University | 102 | 11 | 10 | 11
Banaras Hindu University | 66 | 3 | 3 | 3
Jawaharlal Nehru University | 68 | 2 | 2 | 2
University of Hyderabad | 52 | 4 | 5 | 7
Tezpur University | 42 | 29 | 29 | 30
Jamia Millia Islamia | 40 | 12 | 12 | 12
Visva-Bharati University | 22 | 37 | 31 | 19
Pondicherry University | 18 | 48 | 59 | 37
University of Allahabad | 17 | Not ranked | Not ranked | 95
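The association between patent counts and NIRF rank suggested in Sect. 5 can be checked on the Table 3 data, for example with a Spearman rank correlation. The sketch below uses the total patents and the 2019 ranks of the nine universities in Table 3 that were ranked (the University of Allahabad is excluded as "Not ranked"); it only shows how the statistic would be computed and does not assert a particular value.

```python
# Spearman rank correlation between total patents and NIRF 2019 rank (Table 3 data).
from scipy.stats import spearmanr

patents   = [114, 102, 66, 68, 52, 42, 40, 22, 18]
nirf_2019 = [13, 11, 3, 2, 4, 29, 12, 37, 48]

rho, p = spearmanr(patents, nirf_2019)
print(rho, p)   # a negative rho means more patents tend to go with a better (smaller) rank
```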

• Invention field coverage: As presented in Graph 2, 20 subject fields of invention are covered, and chemical science is the major field of invention, with the highest number of patents, i.e. 180 patents published/granted by the said universities in the last ten years. Similarly, 135 patents were published in the field of biotechnology and 129 patents on pharmaceutical inventions. All the remaining fields have below 100 patents each. This shows the substantial requirement for immediate and consistent emphasis on those fields of invention.
• Yearly growth rate: The analysis of the last 10 years of data is displayed in Graph 3, which clearly shows the year-wise growth rate. The red line shows granted patents and the blue line shows published patents. To obtain the average growth, the data of the previous year are compared with those of the current year.



Graph 3 Presents average growth rate with year-wise patent published and granted

Since 2009, the growth in published patents was 43%, which means that research, innovation and publication activity had been good in that particular year. In later years, however, it oscillates between growth and decline. In the case of granted patents, the graph shows low numbers among the central universities and a falling trend from 2009 to 2015, but it accelerates suddenly from 2016 to the present. The average growth rates of patents published and granted are increasing every year because the central universities are filing more patents, and the patent office is putting massive effort into examining new patents, with more patents expected in the future. This is evidence of success and qualitative productivity in the national research and innovation sector.
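The year-over-year growth used in Graph 3 is simply each year's count compared with the previous year's. The small sketch below makes that calculation explicit; the yearly totals shown are hypothetical placeholders, since the paper reports only the resulting percentages (for example, about +43% for publications after 2009).

```python
# Year-over-year growth rate, as used for Graph 3 (yearly totals below are hypothetical).
def yearly_growth(counts):
    """counts: list of (year, value); returns [(year, % change vs previous year)]."""
    return [
        (counts[i][0], 100.0 * (counts[i][1] - counts[i - 1][1]) / counts[i - 1][1])
        for i in range(1, len(counts))
    ]

published = [(2009, 35), (2010, 50), (2011, 45)]   # assumed yearly totals for illustration
print(yearly_growth(published))                     # e.g. [(2010, ~42.9), (2011, -10.0)]
```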

8 Conclusion
The patent plays a key role for any institution as an indicator of the research and innovation productivity of the work being carried out there. It is also reflected in the different ranking and accreditation systems of the Government of India, such as ARIIA, NAAC and NIRF. Patents help institutions to build innovation eco-systems, new inventions, start-ups, revenue income and research productivity. This is a significant impetus to boost the entrepreneurship potential of the central universities. The University Grants Commission (UGC) is now also promoting the establishment of Intellectual Property departments/centres in universities, which will look into fundamental issues in order to ensure more growth in IP. It was observed that the average growth of published patents is 10% and that of granted patents is 18%, despite many challenges and obstacles in this decade. It certainly


requires more attention and consistent support to ensure an increasing growth rate in the years ahead, especially for the newly established central universities. While retrieving the data, it was observed that retrieval of collective data is very challenging, especially for academic institutions, as there is no single database in India made for the academic community. Therefore, it is strongly recommended that there should be a dedicated platform compiling the bibliographical records of all patents contributed by academic institutions. It would be beneficial to researchers, innovators, faculty, students, information centres and government bodies (in making appropriate policies based on updated research data).


Spatial Rough k-Means Algorithm for Unsupervised Multi-spectral Classification Aditya Raj and Sonajharia Minz

Abstract Geospatial applications have invaded most web- and IT-based services, adding value to information-based solutions. However, there are many challenges associated with the analysis of raster data: labeled data is scarce, pixels containing multiple objects cause class uncertainty, and the huge size of the input data affects classification accuracy. The proposed Spatial Rough k-Means (SRKM) addresses the issue of mixed pixels in raster data by reducing the number of boundary (mixed) pixels based on the spatial neighborhood property. Clustering quality parameters are used to understand the impact of the approximation of boundary pixels on the quality of the clusters. The experimental results of analyzing two multi-spectral Landsat 5 TM datasets, of the Nagarjuna Sagar Dam and the Western Ghats region, indicate the potential of SRKM in addressing the mixed pixel issues of raster data.

1 Introduction Geospatial applications have invaded most web- and IT-based services, adding value to information-based solutions. The analysis of geospatial data therefore is of great importance in fields like remote sensing, weather prediction, climate change analysis, deforestation, global warning, etc., which influence national as well as global concerns. Any geographical phenomenon on the earth’s surface represented by a set of numerical values in geographical coordinate system is called geospatial data. Raster model of geospatial data represents the data in the form of matrices of numeric values of earth’s reflectance. The vector data are represented as points, lines, and polygons [1]. A. Raj (B) · S. Minz School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India e-mail: [email protected] S. Minz e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_20


The computational challenges associated with raster data and its analysis provide research opportunities. Some of the issues of remotely sensed data presented in raster format are discussed in this paper in order to offer computational solutions. The scarcity of labelled data for remote sensing (RS) images is the foremost of these issues, since preparing labeled data is a time-consuming and costly process. Thus clustering, an unsupervised form of classification, is considered in this paper. Clustering is the process of assigning similar objects to a group and dissimilar objects to different groups based on their features; k-means, rough k-means, fuzzy C-means and the ISODATA algorithm are a few clustering techniques. Cluster quality measures are used to compare the efficiency of the various clustering algorithms. The second issue pertains to the quality of the data. Mixed pixels are the result of spatial resolution: the reflectance values of the geo-objects lying within the area of a pixel are aggregated, resulting in spectral values that do not easily indicate the class the pixel should belong to. This degrades cluster quality, as the mixed pixels get classified into a (predicted) class other than the actual class. Rough set theory provides ways of dealing with uncertainties due to noise or absence of data. The Rough k-Means algorithm [2] yields two sets for each cluster: the lower and the upper approximation. Objects, pixels in our case, certainly belonging to the cluster are grouped into the lower approximation. The mixed pixels, whose belongingness to a cluster is uncertain, are grouped into the upper approximation of the cluster; hence, a pixel may belong to the upper approximations of two or more clusters. The challenge addressed in this paper with respect to mixed pixels is to assign pixels with class uncertainty to the cluster they belong to without much affecting the cluster quality. The proposed Spatial Rough k-Means algorithm uses the spatial neighborhood model to approximate the mixed pixels and yield crisp clusters. The remaining part of the paper is organized as follows: Sect. 2 presents an overview of the contributions of various researchers, Sect. 3 briefs the state-of-the-art algorithm, Sect. 4 presents the proposed work, the experiments and results are presented in Sect. 5, and finally the conclusions are discussed in Sect. 6.

2 Literature Survey Wang et al. [3] in their work presented an algorithm integrating fuzzy multi-classifiers in classification. Yin et al. [4] presented a semi-supervised method for learning informative image representations. They proposed to represent an image by projecting it onto an ensemble of prototype sets sampled from a Gaussian approximation of multiple feature spaces. Li et al. [5] applied ant colony optimization problem to Thematic Mapper (TM) image of Hubei Province and compared with unsupervised classification. Wang et al. [6] proposed knowledge-based method for road damage detection solely from post-disaster high-resolution remote sensing image. In [7], the


effectiveness of an unsupervised learning technique was investigated for change detection in the water, vegetation and built-up classes of a part of the Delhi region in India. Tabrej and Minz [8] used k-medoids to determine spatial clusters and rough k-medoids to determine the boundary regions of these clusters; clusters with more points than a threshold were called hotspots. Gu et al. [9] combined graph theory and the fractal net evolution approach (FNEA) to develop a parallel multi-scale segmentation method for RS imagery. FNEA uses the minimum heterogeneity rule (MHR) for merging objects, while the graph-theoretic part uses minimum spanning tree algorithms for the initial segmentation.

3 State-of-the-Art Algorithm: Rough k-Means (RKM) The algorithm proposed by Lingras [2] focuses not only on assigning each object to a cluster based on the similarity of the object to a cluster center, but also on handling the case where the similarity to a single cluster is not certain. A rough set is described by its lower and upper approximations. In the case of clustering, an object that satisfies the similarity criterion with certainty is assigned to the lower approximation of a cluster; otherwise it is assigned to the upper approximation. For an approximation threshold δ, if an object is found to be almost equidistant from two or more cluster centers, there is uncertainty about which cluster it belongs to, so the object is assigned to the upper approximations of both clusters. Then, as per Eq. (1), for two cluster centers C_i and C_j and an object x,

x ∈ Y_i and x ∈ Y_j if |distance(x, C_i) − distance(x, C_j)| ≤ δ, else x ∈ Z_i     (1)

where Z_i is the lower approximation and Y_i the upper approximation of cluster center C_i, and Y_j is the upper approximation of cluster center C_j. The upper approximation is a superset of the lower approximation, as in Eqs. (2)–(4); for a cluster C_i, Z_i ⊆ Y_i and Z_i = Y_i ∩ Z_i.
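As an illustration of the assignment step in Eq. (1), the sketch below implements only that thresholded assignment, not Lingras' full algorithm (which also re-computes the centroids with separate weights for the lower and upper approximations); Euclidean distance and NumPy are assumed.

```python
import numpy as np

def rough_assign(x, centers, delta):
    """Assign object x to lower/upper approximations following Eq. (1).

    Returns (lower, upper): `lower` is the index of the cluster whose lower
    approximation receives x (or None if uncertain), and `upper` is the set of
    clusters whose upper approximations receive x.
    """
    d = np.linalg.norm(centers - x, axis=1)   # distances to all cluster centers
    i = int(np.argmin(d))                     # closest center C_i
    # clusters whose distance differs from the closest one by at most delta
    upper = {j for j in range(len(centers)) if abs(d[j] - d[i]) <= delta}
    if len(upper) > 1:                        # uncertain: upper approximations only
        return None, upper
    return i, {i}                             # certain: lower approximation of C_i

centers = np.array([[0.0, 0.0], [4.0, 4.0]])
print(rough_assign(np.array([2.1, 2.0]), centers, delta=0.5))  # near-equidistant object
print(rough_assign(np.array([0.2, 0.1]), centers, delta=0.5))  # clearly closest to C_0
```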

From Eq. (12), it is concluded that an observation is regarded as an outlier if it lies outside the defined limits; equivalently, an observation is considered an outlier if the absolute value of its standardized residual is greater than 2. On the basis of this definition, winsorized equation II can be expressed as:

y*_i = y_i              if −2 ≤ r_i ≤ 2
y*_i = ŷ_i + n(â_1)     if r_i < −2
y*_i = ŷ_i + n(â_2)     if r_i > 2

Table 1 Performance parameters
1. Accuracy = (TN + TP) / (TN + TP + FP + FN)
2. Dice coefficient = 2 × TP / (2 × TP + FP + FN)
3. Sensitivity = TP / (TP + FN)
4. Specificity = TN / (TN + FP)

Further, this can also be expressed as winsorized equation III:

y*_i = y_i              if |e_i| ≤ c·s_i
y*_i = ŷ_i + n(â_1)     if e_i < −c·s_i
y*_i = ŷ_i + n(â_2)     if e_i > c·s_i

where c is the tuning constant and s_i is the scale given by the Winsor approach.
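The sketch below illustrates winsorized equation II in Python; the standardized residual is assumed to be the residual divided by the residual standard deviation, and n_a1/n_a2 stand in for n(â_1) and n(â_2), whose exact definitions come from the earlier equations of the paper.

```python
import numpy as np

def winsorize_residuals(y, y_hat, n_a1, n_a2):
    """Sketch of winsorized equation II (assumptions noted in the lead-in)."""
    e = y - y_hat
    r = e / np.std(e)                        # standardized residuals (assumed definition)
    y_star = y.copy()
    y_star[r < -2] = y_hat[r < -2] + n_a1    # low outliers replaced
    y_star[r > 2] = y_hat[r > 2] + n_a2      # high outliers replaced
    return y_star                            # observations with |r| <= 2 kept as-is

y = np.array([1.0, 2.0, 2.1, 1.9, 15.0])
y_hat = np.array([1.5, 1.9, 2.0, 2.0, 2.1])
print(winsorize_residuals(y, y_hat, n_a1=-0.5, n_a2=0.5))
```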

5 Experimental Analysis 5.1 Performance Measures The accuracy, sensitivity, dice coefficient, and specificity are used to validate the FP-MMR performance, where true negative (TN) is the number of correct predictions that an instance is negative, false positive (FP) is the number of incorrect predictions that an instance is positive, false negative (FN) is the number of incorrect predictions that an instance is negative, and true positive (TP) is the number of correct predictions that an instance is positive (Table 1).
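A small sketch of how the Table 1 parameters can be computed from binary prediction and ground-truth masks is given below; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def performance_parameters(pred, truth):
    """Compute the Table 1 parameters from binary prediction and ground-truth masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # correctly predicted positives
    tn = np.sum(~pred & ~truth)    # correctly predicted negatives
    fp = np.sum(pred & ~truth)     # incorrectly predicted positives
    fn = np.sum(~pred & truth)     # incorrectly predicted negatives
    return {
        "accuracy":    (tn + tp) / (tn + tp + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

pred  = np.array([[1, 0], [1, 1]])
truth = np.array([[1, 0], [0, 1]])
print(performance_parameters(pred, truth))
```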

5.2 Experimental Setting The experimental setup of FP-MMR has been implemented in Python using packages such as NumPy, Keras, TensorFlow, SciPy, Statsmodels, tqdm, nibabel, h5py, scikit-learn, and termcolor. The machine used has 8 GB of RAM and a 1024 GB hard disk, and runs the Ubuntu Linux OS.

5.3 Results and Discussion Four parameters have been used to determine the results of the FP-MMR framework, namely sensitivity, accuracy, specificity, and dice coefficient, as shown in Tables 2 and 3.


Table 2 Results of normalized approach for four modalities

Patients  Modality  Accuracy  Dice   Sensitivity  Specificity
Case 1    T1        77.0      55.4   89.9         75.2
Case 2    T1        79.5      65.9   88.0         77.5
Case 3    T1        85.7      80.6   88.4         85.0
Case 1    T1C       80.95     64.8   89.9         79.5
Case 2    T1C       80.0      63.6   89.9         78.4
Case 3    T1C       78.5      62.5   89.9         76.4
Case 1    FLAIR     87.01     79.7   89.3         86.7
Case 2    FLAIR     86.3      78.2   83.1         86.8
Case 3    FLAIR     86.7      80.3   88.0         86.5
Case 1    T2        87.0      80.70  88.6         86.8
Case 2    T2        87.4      82.3   89.4         89.1
Case 3    T2        85.7      76.0   89.9         85.0

Table 3 Results of normalized winsorized approach for four modalities

Patients  Modality  Accuracy  Dice   Sensitivity  Specificity
Case 1    T1        91.8      83.5   99.7         89.7
Case 2    T1        91.4      81.4   98.6         89.4
Case 3    T1        91.5      75.6   98.9         90.23
Case 1    T2        94.8      85.4   98.7         93.9
Case 2    T2        92.7      78.5   99.8         91.6
Case 3    T2        90.0      74.1   99.7         88.4
Case 1    FLAIR     90.0      67.6   98.9         85.0
Case 2    FLAIR     95.0      88.5   99.8         93.9
Case 3    FLAIR     87.0      67.6   97.6         85.0
Case 1    T1C       85.4      67.6   99.8         82.8
Case 2    T1C       87.1      69.8   99.0         84.8
Case 3    T1C       87.8      73.3   94.5         85.4

Fig. 4 Performance parameters for normalized approach with individual modalities (accuracy, dice coefficient, sensitivity and specificity for T1, T1C, FLAIR and T2)

Fig. 5 Performance parameters for normalized winsorized approach with individual modalities (accuracy, dice coefficient, sensitivity and specificity for T1, T1C, FLAIR and T2)

Table 2 summarizes the results of the normalized approach with four different modalities. Figure 5 shows higher bars for accuracy, sensitivity, and dice coefficient compared to Fig. 4. As shown in Table 3, the normalized winsorized approach gives the best accuracy, dice coefficient, and sensitivity results in all the modalities. A total of three cases have been taken for each modality for the evaluation of the performance. The proposed method produces good results, with an average accuracy of 90.37 (an improvement of 6.9%) and a dice coefficient of 76.0 (an improvement of 3.5) over the normalized approach. It also shows that the method is capable of achieving acceptable performance in terms of bias correction of medical data. The results are compared with the traditional method on each of the modalities taken from the BRATS 2013 challenge, as depicted in Fig. 6.


Fig. 6 T1, T2, FLAIR, and T1C modalities from the BRATS 2013 challenge computed with the proposed method along with the traditional method (columns: input, traditional output, proposed output)

6 Conclusion An image preprocessing framework for medical images has been proposed to preprocess the images before they are passed to the segmentation phase. The experiments are conducted using real patient data from the 2013 brain tumor segmentation challenge (BRATS 2013). The achieved results demonstrate that FP-MMR improves on state-of-the-art methods with high accuracy, sensitivity, and dice coefficient. The higher values of the dice coefficient and accuracy indicate the suitability of the ensemble method, which achieves an average accuracy of 90.73 and a dice coefficient of 76.0, improvements of 6.9% and 3.5%, respectively. Future improvements of the proposed FP-MMR could be based on more advanced preprocessing and enhancement techniques. In addition, the framework could be extended with a segmentation process to segment tumors in medical images.



Machine Learning in Medical Image Processing Himanshu Kumar and Yasha Hasija

Abstract Machine learning is the learning of rules or algorithms by a machine. It is the process of providing scientific algorithms and statistical models to a computer system, which utilizes them to perform specific tasks without using explicit instructions. Image processing is the method used for quantitative analysis of digital image data by certain algorithms. With the advancement of medical science, there are many sophisticated methods for medical imaging, and some of them involve the use of machine learning. Machine learning applied to the processing of disease images can help in the earlier detection and diagnosis of developing chronic diseases.

1 Introduction With the advancement in the digital age, we can take the image of almost every part of the body which can provide initial footstep in improving our medical science. Image processing is the process to enhance and improve the image to acquire various features from it. Medical imaging is now becoming very popular in the diagnosis of patients. Machine learning can now also be employed in the field of medical imaging. As the name suggests ‘Machine Learning’ is the learning of some rules or algorithms by the machine. This helps the machine to take the decision of new problems on its own. Through the aid of machine learning, we can improve the analysis of various diseases because of earlier detection of disease images.

H. Kumar · Y. Hasija (B) Department of Biotechnology, Delhi Technological University, Shahbad Daulatpur, Main Bawana Road, Delhi 110042, India e-mail: [email protected] H. Kumar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_35


2 Machine Learning Machine learning is the process of providing scientific algorithms and statistical models to a computer system, which uses them to perform specific tasks without explicit instructions. Machine learning involves three key steps: data preparation, feature extraction and prediction. All three serve the single purpose of training the data model, which uses a training dataset and a test dataset (Fig. 1). Machine learning models are divided into five types based on the instructions, input and output data provided to them:
• Supervised learning: This model is trained on inputs and outputs provided to the machine in order to predict the output of new cases. Labelled data is used for supervised learning. Depending on the output type, it is called regression (continuous output) or classification (discrete output).
• Unsupervised learning: This model is trained by clustering the data based on similarity and closeness. Unlabelled data is used for unsupervised learning.
• Semi-supervised learning: This model is trained on a small amount of labelled and a large amount of unlabelled data.
• Reinforcement learning: This model lacks both input and output data but aims at better online performance.
• Optimization: This involves choosing the model that fits the data properly and gives the best outcome.
Based on these learning approaches, there are various machine learning models. The most commonly used models include the support vector machine (SVM), artificial neural network (ANN), genetic algorithm and Bayesian network. For image processing purposes, the artificial neural network (ANN) and support vector machine (SVM) are broadly practised.

Fig. 1 Machine learning workflow


Fig. 2 Support vector machine (SVM)

2.1 Support Vector Machine (SVM) This is a supervised learning method that is employed for classification and regression. It trains a model that is capable of categorizing new entities. SVM is a linear classifier that is inherently non-probabilistic. The Platt scaling method is used when probabilistic classification is required, transforming the outputs into a probability distribution over the classes. For high-dimensional classification, SVM is used with the kernel trick, which allows an implicit mapping of the inputs (Fig. 2).
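As an illustration of these ideas, the scikit-learn sketch below trains an SVM with an RBF kernel (the kernel trick) and enables Platt scaling to obtain class probabilities; the toy data merely stands in for extracted image features.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data standing in for extracted image features (not a medical dataset).
X, y = datasets.make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF kernel = kernel trick; probability=True enables Platt scaling, which turns
# the margin scores into a probability distribution over the classes.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))        # classification accuracy on held-out data
print(clf.predict_proba(X_test[:3]))    # Platt-scaled class probabilities
```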

2.2 Artificial Neural Network (ANN) This algorithm is used for both classification and regression. It is based on the functioning of neurons in the human nervous system; each neuron is regarded as a 'node' and is assigned to one of three layers:
• Input layer: It consists of the raw information that will be fed to the network.
• Hidden layer: This is used for weight calculation and for creating the link between the input and output layers.
• Output layer: It consists of the output values, which correspond to the prediction of the response variable (Fig. 3).
Fig. 3 Artificial neural network
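A minimal Keras sketch of this three-layer structure is shown below; the input size, hidden-layer width and class count are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input layer: 64 raw features (assumed); hidden layer: learns the weights
# linking input and output; output layer: one probability per class (3 assumed).
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(64,)),  # hidden layer
    layers.Dense(3, activation="softmax"),                   # output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```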


Fig. 4 Image processing workflow

3 Image Processing Image processing is the method used for the quantitative analysis of digital image data by certain algorithms. With the tendency to automate as much as possible, many sophisticated image processing methods have become feasible. Medical image processing is used to isolate and extract the potential disease regions using different image processing algorithms depending on the type of disease. Various methods are used for image processing, such as clustering, watershed transformation, region growing, thresholding, and compression-based methods. Five main steps are essential for image processing: input image, pre-processing, segmentation, feature extraction and classification (Fig. 4).
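The sketch below walks through these five steps on a toy grayscale image using scikit-image; the specific filter, threshold and features are illustrative choices, not a prescribed medical pipeline.

```python
import numpy as np
from skimage import filters, measure

def process_image(image):
    """Toy illustration of the five steps: input, pre-processing,
    segmentation, feature extraction, classification."""
    # 1. Input: a 2-D grayscale array.
    # 2. Pre-processing: smooth to suppress noise.
    smoothed = filters.gaussian(image, sigma=1)
    # 3. Segmentation: threshold into foreground/background.
    mask = smoothed > filters.threshold_otsu(smoothed)
    # 4. Feature extraction: simple properties of the segmented regions.
    labels = measure.label(mask)
    features = [(r.area, r.eccentricity) for r in measure.regionprops(labels)]
    # 5. Classification: a trained model (SVM, ANN, ...) would consume `features`.
    return features

rng = np.random.default_rng(0)
print(process_image(rng.random((64, 64)))[:3])
```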

4 Recent Work Now, medical science is extensively dependent on medical imaging that helps in determining the internal structure of the human body and diagnose it. Dermatology is the part of medicine which deals with wounds, scars, skin cancer and skin sicknesses. Various skin disorders result in different and unique pattern of distortion on the skin which can be identified by image processing techniques. Image processing has become important in the earlier detection of dermatological disorders. There are several approaches proposed for the detection of skin and leaves disorder. Most of these approaches considers the pixel and colour of the image as the features for image processing. In the last decade, with advancement in medical science, robust research has been done to develop automated systems that are proficient of simplifying medical imaging task [1]. For diagnosis of skin disorders, majority of research is performed on human skin and plant leaves.


5 Discussion and Results 5.1 Human Skin • In 2011, Tushabe et al. [2] worked on developing the system that is fit for distinguishing skin diseases in sub-saharan Africa which arranges the picture as a bacterial or viral skin disease utilizing image processing techniques. They get the accuracy precision of as much as 100% with some of the training images. • In 2011, Asghar et al. [3] proposed the paper that gives an online framework to recognize some skin diseases by utilization of forward-chaining with depth-first search method. Their framework can analyse and identify in excess of 13 kinds of skin diseases. • In 2013, Damilola et al. [4] described a system that utilizes prototyping system in collecting pigmented skin lesions picture results and further examination that information by contrasting perception and ends. • In 2012, Arifin et al. [5] worked on a framework that utilizes an image processing system with a feed forward neural network and pre-processing algorithms. • In 2015, Amarathunga et al. [6] worked on making an another system that is used for skin detection by utilizing a data mining unit. • In 2019, ALEnezi [7] proposed an image processing-based strategy to distinguish skin diseases. Their system works on colour of the sample image that is classified using multiclass SVM which is giving the overall accuracy of 100% • In 2019, Hameed et al. [8] used deep learning and machine learning to do the classification of skin lesion. They also developed multiclass multi-level algorithm to enhance the accuracy up to 96.5%. • In 2019, Kadampur et al. [9] developed the skin cancer detection model using the deep learning algorithm which differentiates between dermal cell images. Their model gives the overall accuracy on 99.77%. • In 2018, George et al. [10] proposed the unsupervised machine learning method for five different erythema severity classes using visual recognition technique in psoriasis. Their model shows 9% and 12% improvement over BoVWs- and AlexNet-based features, respectively. • In 2018, Al-masni et al. [11] proposed novel FrCN method for subdivision of skin lesions utilizing deep learning algorithm. FrCN achieved accuracy of 90.78% in melanoma cases, 95.62% in some clinical benign cases and 91.29% in seborrheic keratosis cases.

5.2 Plant Leaves • In 2010, Rumpf et al. [12] suggested the support vector machine-based method for the classification of healthy sugar beet leaves and diseased leaves with the overall


accuracy of 97%. Further classification of leaves into three diseases achieved the accuracy of 86%. In 2010, Cui et al. [13] performed the image processing of plant leaf dependent on Hue Saturation Intensity colour model for division of contaminated zones on leaves. They used polar coordinate system as an alternate method to analysis the centroid of the leaf colour distribution. In 2018, Ferentinos [14] developed the deep learning model for detection and diagnosis of plant diseases. They used 87,848 images to define training and testing dataset and their model achieved the overall accuracy of 99.53%. In 2018, Fu et al. [15] worked on five algorithms to extract features from froth images. They considered two case studies, one for industrial floatation and other for batch floatation. In 2019, Parraga-Alava et al. [16] developed the dataset of leaf’s image which can be used for plant diseases recognition. Their dataset contains 1560 leaf images with infection and healthy cases. In 2019, Gu et al. [17] proposed a system for earlier recognition of tomato spotted wilt virus in tobacco with the overall accuracy of 85.2%. They used several machine learning algorithms in their system such as genetic algorithm, support vector machine and boosted regression tree.

6 Conclusion This article reviews the progress and development of various techniques used in the field of medical imaging. Many researches in the last decade show how image processing and machine learning facilitate the earlier detection of dermatological disorders. With the advancement in technology, deep learning is also used in this field which utilizes more hidden layers in their neural network. Researchers have improved the accuracy of their detection models by using different machine learning methods such as ANN (feedback or feed forward network) and SVM models. Machine learning models can be trained by utilizing the abundant data of sample images with the help of image processing. Depending on availability and abundances of these sample images, accuracy of prediction model varies. Different sample image types also influence the type of machine learning model suitable for them. Therefore, in the last decade, researchers have developed suitable prediction models which are useful for earlier detection of various dermatological and plant diseases.

References 1. Medical images classification for skin cancer diagnosis based on combined texture and fractal analysis—Semantic Scholar. https://www.semanticscholar.org/paper/Medical-images-classi fication-for-skin-cancer-diag-Dobrescu-Dobrescu/2c6035be57bfb028c81490160f115aac7 10ca7da. (n.d.). Accessed 13 Nov 2019


2. Tushabe, F., et al.: An image-based diagnosis of virus and bacterial skin infections (PDF Download Available). https://www.researchgate.net/publication/268241732_An_image-based_dia gnosis_of_virus_and_bacterial_skin_infections(n.d.). Accessed 13 Nov 2019 3. Asghar, M.Z. et al.: International journal of computer science and information security. IJCSIS Publication. https://www.researchgate.net/publication/215565885_Diagnosis_of_ Skin_Diseases_using_Online_Expert_System. (n.d.). Accessed 13 Nov 2019 4. Okuboyejo, D.A., Olugbara, O.O., Odunaike, S.A.: Automating skin disease diagnosis using image classification. In: proceedings of Lecture Notes in Computational Science and Engineering, pp. 850–854. http://www.scopus.com/inward/record.url?eid=2-s2.0-84903463250& partnerID=40&md5=8177470d8016552ecb409c25b4777618. (2013). Accessed 13 Nov 2019 5. Arifin, M.S., Kibria, M.G., Firoze, A., Amini, M.A., Yan, H.: Dermatological disease diagnosis using color-skin images. In: Proceedings of International Conference on Machine Learning and Cybernetics, pp. 1675–1680. https://doi.org/10.1109/icmlc.2012.6359626. (2012) 6. Amarathunga, A.A.L.C., Ellawala, E.P.W.C., Abeysekara, G.N., Amalraj, C.R.J.: Expert system for diagnosis of skin diseases. Int. J. Sci. Technol. Res. 4, 174–178. www.ijstr.org. (2015) 7. ALEnezi, N.S.A.: A method of skin disease detection using image processing and machine learning. https://www.sciencedirect.com/science/article/pii/S1877050919321295#!. Accessed 13 Dec 2019 8. Hameed, N., Shabut, A.M., Ghosh, M.K., Hossain, M.A.: Multi-class multi-level classification algorithm for skin lesions classification using machine learning techniques. https://www.sci encedirect.com/science/article/abs/pii/S0957417419306797. Accessed 13 Dec 2019 9. Kadampur, M.A., Al Riyaee, S.: Skin cancer detection: applying a deep learning based model driven architecture in the cloud for classifying dermal cell images. https://www.sciencedirect. com/science/article/pii/S2352914819302047. Accessed 13 Dec 2019 10. George, Y., Aldeen, M., Garnavi, R.: Psoriasis image representation using patch-based dictionary learning for erythema severity scoring. https://www.sciencedirect.com/science/article/abs/ pii/S0895611118301010. Accessed 13 Dec 2019 11. Al-Masni, M.A., Al-Antari, M.A., Choi, M.T., Han, S.M., Kim, T.S.: Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. https://www.sciencedi rect.com/science/article/abs/pii/S0169260718304267. Accessed 13 Dec 2019 12. Rumpf, T., Mahlein, A.K., Steiner, U., Oerke, E.C., Dehne, H.W., Plümer, L.: Early detection and classification of plant diseases with Support Vector Machines based on hyperspectral reflectance. https://www.sciencedirect.com/science/article/pii/S0168169910001262. Accessed 13 Nov 2019 13. Cui, D., Zhang, Q., Li, M., Hartman, G.L., Zhao, Y.: Image processing methods for quantitatively detecting soybean rust from multispectral images. https://www.sciencedirect.com/sci ence/article/pii/S1537511010001303. Accessed 13 Nov 2019 14. Ferentinos, K.P.: Deep learning models for plant disease detection and diagnosis. https://www. sciencedirect.com/science/article/pii/S0168169917311742. Accessed 13 Dec 2019 15. Fu, Y., Aldrich, C.: Froth image analysis by use of transfer learning and convolutional neural networks. https://www.sciencedirect.com/science/article/abs/pii/S0892687517302510. Accessed 13 Dec 2019 16. 
Parraga-Alava, J., Cusme, K., Loor, A., Santander, E.: A robusta coffee leaf images dataset for evaluation of machine learning based methods in plant diseases recognition. https://www.sci encedirect.com/science/article/pii/S2352340919307693. Accessed 13 Nov 2019 17. Gu, Q., Sheng, L., Zhang, T., Lu, Y., Zhang, Z., Zheng, K., Hu, H., Zhou, H.: Early detection of tomato spotted wilt virus infection in tobacco using the hyperspectral imaging technique and machine learning algorithms. https://www.sciencedirect.com/science/article/pii/S01 68169919304089. Accessed 13 Nov 2019

Adaptive Educational Resources Framework for ELearning Using Rule-Based System Leo Willyanto Santoso

Abstract The implementation of Information and Communication Technology (ICT) in the education sector carries great potential to provide students with environments suited to their needs and preferences. Currently, many educational institutions work with standard, traditional, non-adaptive e-learning; no single set of learning resources, processes and strategies fits all students, so an adaptive framework is really needed. In addition, educational content adapted for some students may not be appropriate for others. In this paper, an adaptive framework is proposed. The framework is developed using a rule-based system to orchestrate the interaction with the student and deliver customized resources that are available through e-learning repositories. Moreover, the proposed framework can be accessed by students with visual disabilities, as it is equipped with customized user interfaces.

1 Introduction The rapid advancement of Information and Communication Technology (ICT) has dramatically increased technology use in teaching and learning processes. This has led to educational institutions giving a twist to the formulation of their academic programs and curricula, including new tools, courses and pedagogical aids based on virtual education platforms. All this has generated the need to start studying the different ways of learning of students and develop strategies that allow adapting educational processes. Virtual education is an increasingly popular trend in the different educational institutions, because it allows reaching a greater number of people with the same resources, as well as serving the public that has limitations to attend a physical institution [1, 2]. Through the virtual education platforms, complementary courses L. W. Santoso (B) Informatics Department, Petra Christian University, 121-131 Siwalankerto, Surabaya, Indonesia e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_36


and even complete training actions are offered, such as professional careers [3]. This makes the number and type of interactions between these platforms and students increasingly growing and varied. Adaptive education platform is an important research topic. It works according to the needs and particular preferences of the students. In order to carry out this, several type of activities are required to identify some characteristics and to establish a specific profile for each student [4–6]. A student model is needed to know the specific characteristics of each user in order to perform processes related to personalization, such as the delivery of adapted content or the recommendation of those that cover a large part of their needs [7, 8]. The student information gathering is required prior the construction of a user profile. There are several susceptible components to adaptation in an education system: the interfaces with the student, the course plan, educational strategies, selection of educational resources, information filtering and the process evaluative, among others. In this paper, we propose a platform oriented to the adapted delivery of digital education resources according to the characteristics captured in the student profile, supporting the search process and recovery of this material. The platform also performs changes in some of the interface settings for offer greater accessibility to the user. The platform performs the adaptation process based on a system of rules that crosses the characteristics of the model of the student with the metadata of the educational resources that are stored in repositories and that result from a search made by the user. The rest of the paper is organized as follows: In Sect. 2 presents the theoretical framework and literature study. In Sect. 3 the design works related to the proposal, which is detailed in the Sect. 4. Finally, Sect. 5 presents conclusions and future work.

2 Literature Study 2.1 Adaptive Framework One of the great possibilities output of the collaboration between ICT and education is to offer a customized environments for students that appropriate to their needs and preferences. It’s known as adaptive, due to its ability to automatically respond to these conditions [7, 9]. In an adaptive system, a strategy of adaptation that consists of establishing “What to Adapt” specifying the components to be delivered in a personalized, “When to Adapt” that corresponds to the moment which will trigger the adaptation, “Why Adapt” that is relates to the objectives of the adaptation process and “How Adapt “that are recognized as rules of adaptability [10]. A particular case of adaptive systems are the recommendation systems that offer suggestions of items, objects, products or services that are useful for a user, making predictions of their tastes or needs [11]. This type of systems have main feature


like the ability to work with users of individual way, identifying their preferences and elements potentially relevant, for which profiles are required to structure this information [12]. One of the main applications of the recommendation systems is in search engines of different types, where the results of a search are filtered to select those that contain information close to the identified conditions for each user [13]. In the modeling of a recommendation system defines the elements that intervene, such as the characteristics that will be captured from the user, the recommendation strategies that will be used and the detail of the items that will be recommended.

2.2 Student Model The main objective of user in a system is to have access to services and contents that meet their needs. Therefore, the user profile is understood as the modeling with the required information to identify each user of independently and offer an experience more in line with its characteristics [14]. This modeling is fundamental and it requires an adequate structure for its analysis, recovery and use [15]. Student Models are geared towards capturing, storing and updating relevant information related to both the characteristics of the student and with some elements of the educational process. It seeks to define distinctive and most relevant characteristics of each user in the teaching and learning process for systems in which they want to make some kind of personalization or adaptation [16]. One of the main advantage of the student model is the ability to deliver search results different for each user according to their characteristics, needs and preferences. Specifically for the case of repositories of educational material, it is expected to deliver resources that present elements that support the requirements of the student and can enrich the educational process [17].

2.3 Digital Resources Digital Educational Resources are distinguished from other resources because of their predisposition to reuse in multiple contexts, in addition to their availability in different environments [1]. They are recognized as digital entities that their main characteristics are reusability, adaptability, accessibility and scalability, what it offers advantages over other types of educational resources. In addition they are accompanied by metadata that describe them and allow your identification, to facilitate your search, recovery and use [16] Educational resources are stored in repositories that allow their management and effectiveness in searches and recovery [3, 4]. Millions of these resources are stored and managed through repositories that must follow a series of standards in order to increase its effectiveness and interoperability, guaranteeing access by students and teachers around the world.


2.4 Recommendation Systems A recommendation system as a complement to a smart mentoring system, whose main objective is to increase interaction of the student and the teacher, through the recommendation of learning objects to the teacher according to the topics that he dictates and according to the profiles of the students who receive the course [17]. Although his contribution focuses on creation of efficient and adapted virtual courses, characteristics of the objective profile of this work as are special education needs. Klašnja-Mili´cevi´c in [18] developed a recommendation system for a programming tutoring module called PROTUS. Its main objective is to deliver and build programming courses that are tailored to the student’s learning. In this system, they are taken into account various factors such as: student’s educational level, learning style and navigation logs, with the purpose of identify individual characteristics of each student to deliver adapted content to it. To make this process of recommendation, students are first classified in different clusters according to their learning style, followed by the interactions that the student has analyzed. Finally, each student is presented with a list of recommendations ordered according to the qualifications frequent, provided by the Protus system and expected that the delivered results have a high level of acceptance by students. Salehi and others present a hybrid system of recommendation for educational materials using genetic algorithms, perform two recommendation processes, the first of them deals with the explicit characteristics represented in a preference matrix the interests of the student. The second recommendation is with implied pesos to educational resources that are considered as chromosomes in the genetic algorithm to optimize them according to the historical values. This recommendation is generated by the nearest neighbor [19]. Peissner and Edlin-White [20] propose a design of patterns based on the implementation approach of adaptive user interfaces for people with special needs. In this work, they are based on development of adaptive interfaces and not punctually in the delivery of adapted educational materials. They present a recommendation system based on roads for accessibility. They give resources to people with special needs. Uses the concepts of computing ubiquitous, it also focuses on finding similarities between paths, context information and user profiles for recommend accessible resources. A large number of systems work has been carried out adaptive in education, however they have not yet been filled expectations due to problems such as the lack of generic personalization schemes [10, 21, 22] and difficulties in the capture and update of the student profile [23], in addition when it comes to educational resources it is necessary to have metadata that allows you to clearly distinguish your characteristics, in order to make a personalized selection [24, 25]. In this research, we present a platform that delivers adapted educational resources to the needs and preferences of users.


3 System Design Within the repositories there is a great variety of educational programs resources, which have different characteristics indicated by their metadata. The metadata can be defined in different standards, for this proposal we use the IEEE-LOM standard extending some metadata to handle accessibility data, using information of the different categories. The searchers of these resources, usually perform searches only the keywords, obviating a large number of attributes. This leads to are not considered user characteristics as can be, his learning style that has a close relationship with the way in which the student prefers the educational contents sought, supporting their teaching - learning process. Other characteristics such as educational level, some cultural conditions and certain special needs of education are not commonly taken into account when deliver the resulting educational resources in a search. However, this could improve the experience of users when finding material that best suits their terms. Taking into account the previous approach, it is proposed a technological platform that allows us to adapt the search and recovery of digital educational resources in accordance with specific characteristics of the users, in addition to some features associated with the interface. One of the main elements in an adaptive system is the student model, where the characteristics that will allow to establish difference between each user and offer an answer according to these dissimilarities. For this proposal, we work with the student model presented in Fig. 1.

Fig. 1 Proposed student model (personal information; special education needs: visual, auditory, motor and cognitive; psycho-pedagogical characteristics: education level and learning style; ethnic community)


Based on the review carried out on some models of users in educational systems and previous work, Three main components are defined: data personal, psychopedagogical characteristics and Education Specials Needs (NEED). The capture of these characteristics will support the adaptation process, recognizing specific conditions of students and delivering educational resources according to them. The process of capturing the student’s profile is done through a registration system in which the user is made a series of questions divided into two tests. The first test is oriented to identify if the student presents some type of visual, auditory, motor or cognitive disability. The test also asks about related aspects to the form as the student interacts with the platform, what preferences for its visualization and control. If it requires some kind of support or if the contents must comply with some special conditions. It also allows to establish if belongs to an ethnic community, i.e. community indigenous, which has different culture, language and customs and to which we can deliver developed resources in our own language according to these unique aspects of their culture. In the second test, the learning style is identified predominant in the user, where the models are combined Visual Auditory Read/Write Kinesthetics (VARK) and FelderSilverman Learning Style Model (FSLSM) making a total of 24 questions [26]. FSLSM test takes only the sequential-global dichotomy, related to how to process and understand the information. During this registration process, the personal data of the student, language and educational level in which it is located, that according to the established in Indonesia it could be: preschool, basic primary, basic secondary, middle and higher. The category is also established General for cases where the student is not in a formal educational process. The platform allows searching and recovery of adapted educational resources to the special needs of education and psychopedagogical characteristics of the user. In the Fig. 2 presents the general scheme of the process of adaptation. recovery of educaonal resources

Adapve Plaorm

search for educaonal resources

delivery of adapted educaonal resources

recovery of user profile for adaptaon

repositories of educaonal resources STUDENT MODEL

Fig. 2 General scheme of adaptation process


With the definition and capture of the student model, and using the digital educational resources stored in distributed repositories, the platform delivers adapted resources in response to a search by the user. As previously commented, the platform also adapts some aspects of the interface, such as contrast level, font size and type, and line spacing. This is done especially when the user has a visual disability and requires these modifications for a better interaction. The adaptation of the educational contents is done through a series of rules, which evaluate characteristics of the student's profile against the metadata provided by the repositories. First, the educational resources selected must meet the Language and Level of Schooling criteria, by executing the following rule:

If (Language == General.Language) ∧ (SchoolLevel == Educational.Context)

Then it is verified whether the student answered yes to any of the special needs; if so, the rules for NEED are executed, otherwise the rules for learning style are executed. Below is an example of the rules for NEED:

If [NEED(Visual) ∧ Visual(NullVision)] then {
  For each OA do
    val = 0
    [If HasAuditoryAlternative(yes) then val += 0.7]
    [If (InteractivityLevel(very low) ∨ InteractivityLevel(low) ∨ InteractivityLevel(medium)) then val += 0.1]
    [If (Format(audio) ∨ Format(video)) then val += 0.1]
}

Below is an example of the rules for learning styles:

If [LearningStyle(Auditory-Global)] then {
  For each OA select
    [If Educational.LearningResourceType(audio) ∨ Educational.LearningResourceType(video)]
    [If Educational.InteractivityLevel(medium) ∨ Educational.InteractivityLevel(low)]
    ∨ [Educational.InteractivityType(Expositive) ∨ Educational.InteractivityType(Mixed)]
}

Once the rules corresponding to the profile of each user have been executed, a filtered list of educational resources adapted to their characteristics is obtained, which improves the student experience and facilitates the identification of the educational material that supports their learning process. The platform is developed in the PHP, JavaScript and HTML programming languages, which allow good performance both in the server responses when conducting searches and in generating fast interaction on the client side when executing the adaptation process in the interface. The PostgreSQL database manager is used for the persistence of user data.
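The platform itself is implemented in PHP and JavaScript; purely as an illustration of how such profile-versus-metadata rules could be evaluated, the Python sketch below uses hypothetical field names for the profile and the IEEE-LOM-style metadata.

```python
# Hypothetical sketch (not the platform's PHP code) of evaluating adaptation
# rules that cross a student profile with IEEE-LOM-style resource metadata.
def passes_base_rule(profile, resource):
    # Language and level-of-schooling rule.
    return (profile["language"] == resource["general.language"]
            and profile["school_level"] == resource["educational.context"])

def need_visual_score(profile, resource):
    # Example NEED rule for a user with a visual disability (null vision).
    val = 0.0
    if profile.get("need_visual") and profile.get("visual_level") == "null":
        if resource.get("has_auditory_alternative"):
            val += 0.7
        if resource.get("educational.interactivitylevel") in ("very low", "low", "medium"):
            val += 0.1
        if resource.get("technical.format") in ("audio", "video"):
            val += 0.1
    return val

def adapt(profile, resources):
    # Filter by the base rule, then rank by the NEED score.
    candidates = [r for r in resources if passes_base_rule(profile, r)]
    return sorted(candidates, key=lambda r: need_visual_score(profile, r), reverse=True)
```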


4 Discussion and Analysis A prototype of the adaptive platform was implemented using the PHP and JavaScript programming languages. In total, 25 adaptation rules were implemented that cross the elements of the student's profile with the metadata stored in the digital educational resource repositories. The tests to capture the characteristics of the students were also implemented; Fig. 3 shows one of the interfaces used for this process. As a particular case study, a simulated student was defined with the characteristics presented below:

Name: Student 1 Language: Indonesia Educational Level: Basic Primary Learning Style: Visual-Global Visual NEED: No Auditory NEED: No NEED Drive: No Cognitive NEED: No Ethnic NEEDs: Yes Ethnic Community: Java.

Fig. 3 Test to identify NEED

Adaptive Educational Resources Framework …

393

Fig. 4 Results for geenric user

A search was made with the keyword “culture”, first as a generic user who has not registered to the platform, that is, one that has not been captured profile. In Fig. 4 the delivered results list is presented. For the case of the student named as “Student 1 “whose profile indicates that it is part of an ethnic community, specifically the Indonesia-Java, the process of adaptation and in response to the search “culture” only one educational resource is delivered that meets the conditions required by this student. This can be Observe in Fig. 5. As can be seen in Figs. 4 and 5, the results delivered to each user are different, that is, they were adapted to the specific characteristics of your profile. This adaptation allows the student to be able to concentrate on consulting resources that are more in line with their conditions, avoiding waste of time and possible demotivation when faced with long lists of search results that contain material that does not support properly their educational process. To evaluate the proposed framework, the consistency reliability of the system was calculated using Cronbach’s. in this testing process, we used Cronbach’s α ≥ 0.70, because it is categorized to be high in internal consistency [27]. From Table 1, it can be seen that the learning satisfaction from proposed framework is 0.924 (Cronbach’s α = 0.924).

394

L. W. Santoso

Fig. 5 Results adapted to the user

Table 1 Evaluation framework

Parameters

Cronbach’s α

Learning satisfaction

0.924

Learning ınterface – Easy to use

0.871

– User-friendly Learning content – Up-to-date content

0.895

– Contents fits your needs – Provides useful content Personalization – Learn the needed content

0.923

– Choose what you want to learn – Control your learning progress

5 Conclusion The search for educational resources was implemented by considering the student profile, allows the individual characteristics are recognized, what is expected to be translated into a recovery more in line with the needs. It can come to be reflected in a greater effectiveness in the educational process. The presented application showed that it is possible to take advantage of the metadata of educational resources, to make the process of adaptation according to the data captured in the profile of the student. The IEEE-LOM standard was used, and we carried out the extension of some metadata to consider the characteristics of accessibility. This adaptation model can be used under other standards as long as modifications are made to the rules of adaptation for the proper selection of metadata.

Adaptive Educational Resources Framework …

395

It is expected to carry out a greater process of validation of the adaptation rules, considering more of users with different profiles. As future work, the inclusion of a functionality that allows to show texts and links in the platform in Indonesian Sign Language, in order to adapt these characteristics to students who they require it. Likewise, a functionality that allow audio playback under the same conditions previous.


A Progressive Non-discriminatory Intensity Equalization Algorithm for Face Analysis Khadijat T. Bamigbade and Olufade F. W. Onifade

Abstract Illumination plays a major role in determining image quality in an uncontrolled environment. Shadow casting, poor contrast and poor intensity are of particular interest in facial analysis, since these problems characterize face images captured in uncontrolled environments. Several attempts have been made to enhance image quality; however, existing enhancement methods are limited in their specificity for facial expression analysis, resulting in non-uniform pixel brightness. This paper presents a novel method, referred to as an intensity equalizer, that handles the objects of an image individually, pixelates each object (in this case a face image), employs the HSV color model to separate the intensity value of every pixel, computes a Gaussian probability density function, and transforms every pixel using the best local minimum difference between the mean intensity and the variance of saturation across the object. Experimental results show that the proposed model is invariant to shadow casting and gives moderate contrast on face images irrespective of the input's contrast class, while preserving the global intensity information and enhancing local pixel information.

1 Introduction Illumination effects contribute a large share of the limitations of images from uncontrolled environments. This is a result of infinitely varying illumination arising from environmental and lighting conditions, such as atmospheric change and room lighting depending on the time of day. The human visual system has the capability of inferring the remaining information from partly visible objects. K. T. Bamigbade (B) · O. F. W. Onifade University of Ibadan, Ibadan, Nigeria e-mail: [email protected] O. F. W. Onifade e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_37


However, this remains a challenging task for computer systems. To address this limitation, a number of image enhancement techniques have been developed to handle the illumination variation characteristic of the image being processed, and their usefulness in a wide range of systems, such as surveillance [1], medical image processing [2], texture analysis [3] and face analysis, to mention a few, cannot be overemphasized. Enhancement refers to different corrections such as denoising, deblurring, contrast correction and tonal correction, among others. Because of the broad spectrum of illumination and its effect on various object properties such as color and reflectivity/reflectance, no single enhancement technique can handle all illumination problems of every image. Thus, this work focuses on illumination variation in face images. A major reason for this focus is that, in a face recognition system, information from about 75% of the image may be sufficient for the system to make a decision, whereas in facial expression analysis all of the information required for successful decision making may reside in the unseen or poorly visible 25% of the image. Most work in facial expression analysis has been found to employ existing enhancement techniques such as histogram equalization (HE) and its variants [4], contrast enhancement (CE), and gamma intensity correction (GIC) and its variant [5]. All of these techniques can be categorized as global, local or hybrid enhancement methods [5]. The global enhancement method employs a single transformation function obtained from every pixel of the image. This method is deemed inappropriate for images with irregular illumination, because the enhancement leaves some parts of the image too light or too dark (over- or under-brightened) [6]. The local enhancement method, on the other hand, considers neighboring pixel information for its transformation function but neglects the overall brightness information, resulting in local artifacts [7]. Hybrid enhancement methods combine both local and global information in their transformation function [5] at the cost of high computational demand. The rest of this paper is structured as follows: Section 2 discusses existing enhancement methods popularly used in face analysis, including facial expression analysis; Section 3 presents the novel method for uniform intensity spread over a face image, referred to as the intensity equalizer; Section 4 discusses the results of the developed model while comparing its output with two state-of-the-art methods; Section 5 concludes the paper.

2 Literature Review Several image enhancement methods have been developed for different images and different illumination conditions. The methods most commonly used in the field of facial analysis are histogram equalization [8] and its variants (brightness-preserving bi-histogram equalization [4], adaptive histogram equalization [9], dualistic sub-image histogram equalization [10]), contrast enhancement, and gamma intensity correction and its variant, adaptive gamma correction [5]. HE is based on a simple and efficient redistribution of the image intensity histogram, while the variants of HE


partition the histogram based on one of the measures of central tendency, mainly the mean or the median. Measures of central tendency suffer from outliers, and this accounts for the major limitation of these algorithms. Contrast enhancement is similar to histogram equalization, but a major difference lies in how intensity is distributed over the image: HE enforces the flattening property of the image histogram, while CE stretches the intensity over the entire image. Huang et al. [11] combined gamma intensity correction and histogram equalization to produce adaptive gamma correction with weighting distribution, which binds every pixel within the maximum intensity value of the image. Their method achieved overall image brightness but resulted in poor image visuals for images that lack bright pixels. Adaptive gamma correction [5] is a global enhancement method with low computational demand that enhances images based on their characteristics. The authors adopted a six-class transformation function similar to [12]. Their results showed good performance on a variety of images, but the intra-image illumination variation was not reported. It seems impossible for a single algorithm to handle a variety of images across the whole illumination range of the light spectrum. This paper proposes an intensity equalizer for face analysis that takes into account the drawbacks of existing algorithms by enhancing a face image towards a good visual result with uniform intensity spread across the image, despite poor contrast and shadow casting in the input image.
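For orientation, the global HE baseline and its adaptive (local) variant discussed above can be reproduced in a few lines with OpenCV; this is only an illustrative sketch, and the file name face.png is a placeholder.

import cv2

# Load the input face image and work on its grayscale intensity only.
gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

# Global histogram equalization: a single transformation function derived
# from the histogram of the whole image.
he_result = cv2.equalizeHist(gray)

# Local (adaptive) variant: CLAHE equalizes small tiles with a clip limit,
# trading global brightness consistency for local detail.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_result = clahe.apply(gray)

cv2.imwrite("he_result.png", he_result)
cv2.imwrite("clahe_result.png", clahe_result)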

3 Proposed Method Figure 1 presents the holistic process of intensity equalization. The input image is a face in its RGB color mode, which is passed into the Hue-Saturation-Value (HSV) conversion module. This module separates the light information from the object's color information, in order to facilitate the manipulation of the light information while still preserving the local information of the object. The new image is then decomposed into its corresponding pixel values for intensity-class classification, as presented in Eq. (1). In this research, every pixel is classified as low, moderate or high contrast. In determining the intensity class, the Gaussian probability distribution function is used, as depicted by Eq. (2) (Fig. 1). To rigorously establish the research direction presented above, we give below the notational definitions and the adaptation of existing mathematical models as they pertain to this work. Suppose there exists a face image G defined by x and y coordinates. We define G(x, y) as a face image in its HSV color mode consisting of n pixels, represented as a vector space [\phi_1, \phi_2, \ldots, \phi_n], as in Eq. (1):

G(x, y) = \sum_{i=1}^{n} \phi_i \, G(x, y)   (1)


Fig. 1 Process flow of the developed intensity equalizer (intensity equalization process: input face image, HSV conversion, pixilation, RGB to grey, HSV to RGB, output face image)

The distribution of intensity and saturation of all pixels in G(x, y) is modeled using the Gaussian probability density function defined in Eq. (2). A transformation function T, defined in Eq. (3), determines the new contrast parameter for each pixel, where Z is the difference between the mean intensity and the deviation of saturation, and K is Heaviside(0.5 ± x) where x < 1. The value of K for a low-contrast pixel is 0 ≤ K ≤ 0.4 and for a high-contrast pixel 0.6 ≤ K ≤ 1:

\phi(G \mid K) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x - N_i)^2}{2\sigma^2}\right)   (2)

T = \begin{cases} -1, & Z < K \\ 0, & Z = K \\ 1, & Z > K \end{cases}   (3)

In Eq. (3) above, -1, 0 and 1 denote low, moderate and high intensity values, respectively. The transformation of every raw pixel \phi_i^{R} over T into an enhanced pixel \phi_i^{E} is defined by Eq. (4):

\phi_i^{R} \overset{T}{\Rightarrow} \phi_i^{E}   (4)

The transformation mapping of a raw pixel to an enhanced pixel simply proceeds using the following algorithm:


From the algorithm above, we return to our earlier description of the three intensity classes: low, medium and high. These classes were assigned different ranges of saturation between 0 and 1. The medium class is adjudged normal, hence no operation is performed on it. For the low and high classes, however, two distinct normalization procedures were developed to progressively adjust Z_v(\phi) based on the minimum Z_v(\phi), giving a clearer, fairly distributed intensity across the face image. Ultimately, the newly composed image GI is converted into its corresponding RGB values. In the next section, the results of the above method are presented.
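As a rough illustration of the procedure described in this section, the sketch below separates the V channel in HSV space, classifies every pixel as low, moderate or high intensity, and progressively pulls the low and high classes towards the global Gaussian mean. It is only an interpretation of the text: the Z/K comparison of Eqs. (2)-(3) is collapsed into a direct comparison of each pixel's normalized intensity against the 0.4/0.6 class boundaries, and the step size and file name are illustrative assumptions.

import cv2
import numpy as np

def equalize_intensity(bgr, k_low=0.4, k_high=0.6, step=0.5):
    # Separate the light information (V) from the colour information (H, S).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    v_norm = v / 255.0

    # Mean of the Gaussian intensity model (Eq. 2): the value towards which
    # the low and high classes are progressively adjusted.
    mu = float(v_norm.mean())

    low = v_norm <= k_low    # roughly the T = -1 (low intensity) class
    high = v_norm >= k_high  # roughly the T = +1 (high intensity) class
    # Moderate pixels (T = 0) are adjudged normal and left untouched.
    v_norm[low] += step * (mu - v_norm[low])
    v_norm[high] += step * (mu - v_norm[high])

    v_new = np.clip(v_norm * 255.0, 0, 255)
    out = cv2.merge([h, s, v_new]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)

cv2.imwrite("equalized.png", equalize_intensity(cv2.imread("face.png")))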

4 Result Discussion Recall that the crux of this research is to ameliorate the deficiency found in HE, specifically the over-enhancement of the brightness information of the input image. In this section, we discuss the results of this research with a view to comparing and establishing the efficacy of the developed method against histogram equalization and contrast enhancement. Employed for this experiment is a single input image whose characteristics satisfy the various contrast classes. This choice is premised on the fact that, as long as there is at least a single pixel with the desired intensity information, the algorithm should be able to progressively equalize the intensity of the input image.


Fig. 2 Result of various enhancement methods showing face image at top row and corresponding histogram distribution at bottom row: a original image, b histogram equalization, c contrast enhancement, d proposed method (intensity equalizer)

Figure 2 presents a typical input image (a) and the resultant images (b), (c) and (d) produced by HE, CE and the developed intensity equalizer, respectively. In (a) we can clearly see the non-uniform distribution of light intensity, which can impair the result of expression recognition for a facial action such as a wink (left-eye region). The corresponding results for HE and CE failed to produce any uniformity on the face; rather, enhancement occurred only in regions with pronounced light intensity. In (c), the result was a nearly blurred image with little or no difference from (b). Although the histogram distributions of (a), (b) and (c) show clear distinctions, there seems to be a similarity in the progression between (a) and (c). It would, however, be confusing if the histograms of (b) and (d) were viewed without the corresponding images: a quick glance at the two shows strong similarities in the spikes, yet the intensity of coloration would tell a casual observer that (d) possesses a more uniform surface than (b).

5 Conclusion In this paper, a simple but efficient contrast correction method for face images was presented. The algorithm, tagged the intensity equalizer, transforms local pixel information while preserving the overall image brightness. The method evaluates the light information of every pixel of the image using a Gaussian model. The mean intensity and the variance of saturation were used to dynamically compute the minimum difference between pixels and for contrast classification. The adjustment of the intensity information depends on the pixel with the minimum difference between the mean and the variance of intensity. The intensity information of neighboring pixels is recursively computed and adjusted using the threshold range for each contrast class. This method can be extended to images containing a variety of objects.


References 1. Renkis, M.A.: Video surveillance sharing system and method. Google Patents. Patent 8, US, 842, 179 (2014) 2. Zikos, M., Kaldoudi, E., Orphanoudakis, S.: Medical image processing. Stud. Health Technol. Inf. 43(Pt B), 465–469 (1997) 3. Efros, A.A., Freeman, W.T.: In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Image Quilting for Texture Synthesis and Transfer, pp. 341–346. ACM, Los Angeles (2001) 4. Kim, Y.-T.: Contrast enhancement using brightness preserving bi-histogram equalization. Consum. Electron. IEEE Trans. 43(1), 1–8 (1997) 5. Rahman, S., Rahman, M.M., Abdullah-Al-Wadud, M., et al.: An adaptive gamma correction for image enhancement. J. Image Video Proc. 2016, 35 (2016) 6. Cheng, H., Shi, X.: A simple and effective histogram equalization approach to image enhancement. Dig. Sig. Process. 14(2), 158–170 (2004) 7. Celik, T., Tjahjadi, T.: Contextual and variational contrast enhancement. Image Process. IEEE Trans. 20(12), 3431–3441 (2011) 8. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Pearson/Prentice Hall, Upper Saddle River (2008) 9. Pizer, S.M., Philip Amburn, E., Austin, J.D., Cromartie, R., Geselowitz, A., Greer, T., ter Haar Romeny, B., Zimmerman, J.B., Zuiderveld, K.: Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 39(3), 355–368 (1987) 10. Wang, Y., Chen, Q., Zhang, B.: Image enhancement based on equal area dualistic sub-image histogram equalization method. Consum. Electron IEEE Trans. 45(1), 68–75 (1999) 11. Huang, S.-C., Cheng, F.-C., Chiu, Y.-S.: Efficient contrast enhancement using adaptive gamma correction with weighting distribution. Image Process. IEEE Trans. 22(3), 1032–1041 (2013) 12. Tsai, C.-M., Yeh, Z.-M., Wang, Y.-F.: Decision tree-based contrast enhancement for various color images. Mach. Vis. Appl. 22(1), 21–37 (2011) 13. Jensen, J.R., Lulla, K.: Introductory digital image processing: a remote sensing perspective. Geocarto Int. 2(1), 65 (1987)

Big Data Analytics in Health Informatics for Precision Medicine Pawan Singh Gangwar and Yasha Hasija

Abstract In today's computerized era, every experimental instrument, clinical framework and laboratory assembly is embedded with digital gadgets and devices. Due to the digitization of research and experimental procedures, biological databases have expanded tremendously in volume. Big data, which is described by definite unique traits such as volume, variety and velocity, has revolutionized research in many disciplines, including medicine. Health care big data are characterized as large datasets that are gathered automatically or routinely and stored electronically. Employing new methods to extract insight from large volumes of data has the power to cause real changes in clinical practice, from precision medicine and smart drug design to population screening and the mining of electronic health records (EHRs). Swift advancements in high-throughput methods and the wide adoption of EHRs have enabled easy and quick aggregation of EHR and omics data. These large, complicated data contain plentiful information for personalized therapy, and patterns in these data can be detected by big data analytics to extract information that can improve health care quality. Features of big data include low cost and easy collection, utility for hypothesis generation as well as hypothesis testing, and ultimately the promise of precision medicine. Limitations of big data include the cost and difficulty of storing and processing data, the need for effective and better methods for formatting and analysis, and issues of accuracy, security and reliability.

P. S. Gangwar · Y. Hasija (B) Department of Biotechnology, Delhi Technological University, Delhi 110042, India e-mail: [email protected] P. S. Gangwar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_38


1 Introduction Recently, the term "big data" has become common and has developed into an exciting field that has attracted vast attention from researchers, analysts, industrialists and governments [1]. Special issues have already been published on big data by Science and Nature to uncover its opportunities and to deal with its challenges. Big data can be described as voluminous sets of data on which analytical approaches are used to reveal hidden underlying patterns, associations and trends. Big data has also been characterized by the 6 V's of volume (lots of data), variety (data obtained from different sources exists in different forms), velocity (data is accumulated speedily), veracity (uncertainty as to whether the data are correct), variability (consistency of data over time) and value (data relevance) (Fig. 1). It is also instructive to see big data as it relates to its sources, repositories and analysis (Fig. 2).

Fig. 1 Big data: 6 V's (volume, variety, velocity, veracity, variability, value)

Fig. 2 Big data flow from sources to storage, analytics, and visualization (EHRs, imaging and omics data feed into big data, which flows on to analytics and visualization)


Over the years, many models have been proposed to improve the healthcare system. The objective of the "precision medicine" model is to make the health care delivery approach customizable for every person and to amplify the effectiveness of every patient's therapy [2]. Hood et al. proposed the P4 medicine model (personalized, participatory, preventive, and predictive), which aims to change present reactive care into future proactive care and, eventually, to diminish health care costs and enhance patient health outcomes [3]. A more recently proposed model calls for accurately dividing patients into subgroups that share a common biological basis of disease [4]. Personalized medicine needs data utilities ranging from data collection and management (data storage, sharing, and privacy) to analytics (data mining, integration, and visualization). To make sense of these heterogeneous data, big data analytics is needed across the applied fields. Thus, this article reviews big biomedical data analytics for precision medicine and the close relationship of EHR data with precision medicine. Section 2 presents big biomedical data features, challenges, and big data analytics; Sect. 3 overviews data mining in EHRs; Sect. 4 describes recent studies in this field, including studies that show the influence of big data analytics on personalized medicine; and finally, Sect. 5 concludes this article.

2 Precision Medicine Big Data EHR and NGS (next-generation sequencing) data for the whole population give a foundation for analyzing healthcare efficacy and safety.

2.1 Big Biomedical Data Several famous big biomedical data initiatives include the 1000 Genomes Project, the 100,000 Genomes Project, The International Cancer Genome Consortium (ICGC), and The Cancer Genome Atlas (TCGA). These projects have generated large-scale sequencing data. Big EHR Data. The principal source of big data in human health has been the electronic health record (EHR) since the conversion from handwritten charts to EHRs started. An EHR stores each patient's information, such as laboratory tests and results, demographic data, diagnoses, medications, clinical notes, and radiological images [5]. To analyze such data, they need to be converted from text into a more structured form, with or without NLP (natural language processing). EHR data types are unstructured (clinical notes) or structured (clinical data, imaging data, administrative data, charts, and medications). Big OMIC Data. Omics data comprise catalogues of molecular profiles (e.g., genomics, proteomics, transcriptomics, metabolomics, and epigenomics), which provide the basis for personalized medicine. The genome, transcriptome, and epigenome are upstream of the metabolome and proteome, and these upstream processes lead towards precision medicine [6].

408

P. S. Gangwar and Y. Hasija

Genomics. The complete set of DNA of an organism. Genomic information is contained in frameshift mutations (insertions/deletions), SNPs (single nucleotide polymorphisms) and CNVs (copy number variations).
Transcriptomics. The RNA transcripts present in a cell. Transcriptomic knowledge is carried in gene expression, transcript expression and alternative splicing.
Epigenomics. The large number of chemical compounds that direct the genome.
Proteomics. The total set of proteins encoded by the genome.
Metabolomics. A well-rounded catalogue of the metabolites in the cells of an organism.

2.2 Associated Challenges with Omics and EHR Data Big data analytics of omics and EHRs is challenging because of: Frequency of Data Collection. Firstly, different modes of data have different collection frequencies. In an EHR, bed-side monitoring information is captured at a high frequency, while laboratory tests may be done only a couple of times each day. Secondly, the frequency of data collection can be irregular. In an EHR, many clinical variables have unpredictable sampling frequencies, which depend on the patient and on whether the measurement is difficult or easy. Issues in Data Quality. In omics data, quality issues occur due to a combination of biological, environmental and instrumental factors such as sample contamination, batch effects, and a low signal-to-noise ratio. Quality issues in EHR data include missing data, because the variables recorded vary each time a measurement is made and depend on the condition of the patient, as well as incorrect data. High Dimensionality. A major challenge in both EHR and omics data mining is associated with high-dimensional data. Omics data frequently have more features or dimensions than available samples, while EHR data may contain an enormous number of samples with high-dimensional data, but with every particular sample populated in a sparse fashion. Heterogeneous Data. In omics, using underlying molecular fingerprints to portray disease subtypes may require heterogeneous omics data. For instance, the integrative personal omics profile (iPOP) project reveals dynamic molecular changes between diseased and healthy states [7]. However, integrating multi-omics data poses challenges due to biological and technical noise, resolution and identification accuracy. EHR data are inherently heterogeneous, and it is necessary to make sense of them to achieve precision medicine.

2.3 Precision Medicine and Data Analytics Big Biomedical Data Analytics. Omics and EHRs are high-dimensional data, which require long computational times and affect the precision of analysis. Therefore, data dimensionality is reduced by identifying a subset of variables having the


qualities of the original data by two techniques: (i) Feature selection, which aims to select an optimal subset of the existing features; the strategies comprise filter, wrapper, or embedded techniques, for instance the minimum redundancy maximum relevance (mRMR) method and support vector machines (SVM); (ii) Feature extraction, which aims to transform the existing features into a lower-dimensional representation, for example PCA (principal component analysis), which identifies a small number of orthogonal linear components, and ANNs (artificial neural networks) such as auto-encoders. EHR Data Pre-processing. The information present in an EHR is vast but naturally disorganized, so EHR data require careful pre-processing. Missing-data imputation techniques are required, such as interpolation, multiple imputation, expectation maximization, and maximum likelihood [8].
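A minimal sketch of this pre-processing and feature-extraction route (mean imputation of missing values followed by PCA) using scikit-learn; the matrix dimensions and missing-value rate are synthetic placeholders for an omics or EHR feature matrix.

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA

# Toy stand-in for a high-dimensional biomedical matrix: 200 samples,
# 5000 features, with ~5% of the entries missing, as is common in EHR data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
X[rng.random(X.shape) < 0.05] = np.nan

# Missing-data imputation (per-feature mean here; multiple imputation or
# EM-based methods mentioned above would slot into the same place).
X_filled = SimpleImputer(strategy="mean").fit_transform(X)

# Feature extraction: project onto a small number of orthogonal components.
pca = PCA(n_components=20)
X_reduced = pca.fit_transform(X_filled)
print(X_reduced.shape, pca.explained_variance_ratio_[:5])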

3 Data Mining in EHRs Static Endpoint Prediction. Following dimensionality reduction, the relationship between the selected clinical features and the target outcomes can be modeled with three procedures: (i) Regression analysis, a statistical procedure that estimates the connection between independent variables (features) and a dependent variable (endpoint); when the dependent variable follows a distribution such as the normal, Poisson, or binomial, a generalized linear model can be used for the regression fit; (ii) Classification, which involves building statistical models that assign a new observation to a known class; classification methods such as k-nearest neighbors, decision trees, and SVM are effective; (iii) Association Rule Learning (ARL), which finds frequently occurring and strong associations among clinical factors. Temporal Data Mining. An EHR captures diagnoses, treatments, and outcomes sequentially; therefore, it is important to model the temporal relationships between events, requiring temporal data mining methods such as the HMM (Hidden Markov Model) and CRF (Conditional Random Field) [9].
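A small sketch of the static endpoint-prediction step with one of the classifiers named above (a decision tree); the feature matrix and the binary endpoint are synthetic stand-ins for selected clinical variables.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Rows are patients, columns are selected clinical features; y is a binary
# endpoint (for example 30-day readmission, purely illustrative here).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))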

3.1 Big Biomedical Data Analytics Enabling Tools The revolution in big data has prompted the advancement of enterprise tools and platforms for extraction, analysis, and prediction modeling as summed up in Table 1.


Table 1 Big data analytics platforms
Platform | Features | Limitations
Apache Hadoop (MapReduce Framework) [10] | Scalable horizontally; fault tolerant; designed to be deployed on commodity-grade hardware; free and open source | Efficient batch processing; hardly any real-time analytics
Apache Spark Streaming [11] | Integrates with the Hadoop stack; permits one code base for both streaming (real-time) and batch-mode analysis | Needs a large amount of RAM for efficient output
IBM InfoSphere [12] | Integrates with Hadoop; has tools to deal with streaming data | Commercial licensing
Tableau and others | Visualization of big, complicated data sets | Needs other tools
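As a flavour of the batch-processing style that Table 1 attributes to the Hadoop/Spark stack, a minimal PySpark sketch is given below; the file ehr_events.csv and its diagnosis_code column are hypothetical.

from pyspark.sql import SparkSession

# Start a Spark session and run a simple batch aggregation over an
# EHR-style event table; the input file and column names are illustrative.
spark = SparkSession.builder.appName("ehr-batch-demo").getOrCreate()
events = spark.read.csv("ehr_events.csv", header=True, inferSchema=True)

top_codes = (events.groupBy("diagnosis_code")
                   .count()
                   .orderBy("count", ascending=False))
top_codes.show(10)
spark.stop()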

4 Studies Related to Big Data Analytics in Health Informatics Big data has recently been a very active topic of analytics. Some recent studies on big data in precision medicine are summarized below. 2018 Dec 29. Prosperi et al. [13] analyzed the technological and cultural obstacles linked to the development of prediction models for health risks, diagnostics and outcomes from integrated biomedical databases. The methodological challenges identified included improving the semantics of study designs: clinical record information is intrinsically biased, and even auto-encoders (AE), the most advanced de-noising deep learning approach, cannot overcome this bias. 2019 Mar 01. Hulsen et al. [14] discussed the challenges and opportunities presented to biomedical research by the expanding power to handle huge datasets. Significant difficulties included the requirement for standardization of data format, content, and clinical definitions, for collaboration networks with sharing of expertise and data, and for re-evaluating how and when diagnostic procedures are reported to clinical analysts. Hypothesis-generating research on huge datasets complements traditional hypothesis-driven science rather than replacing it. 2019 Apr 03. Fahr et al. [15] identified challenges attached to data management, data quality and data analysis. The availability of huge volumes of data from different sources, the need to conduct data linkages in an environment of unclear data access and sharing procedures, and data management challenges are practical and can be settled if procedures for data sharing and access are improved. Nevertheless, missing data across linked datasets, accommodating dynamic data, and several other challenges may require advances in economic evaluation techniques. 2019 Jul 08. Qian et al. [16] overviewed recent advances in new big-data-driven approaches to therapeutic drug target discovery, candidate drug prioritization, clinical toxicity inference, and ML techniques in drug


Table 2 Studies exemplifying big data potential in health informatics
Fields | Data type | Methods and references
Bioinformatics | Gene expression data | Biostatistics [17]
Bioinformatics | Gene expression data | Genomics [18]
EHR | Patient records and laboratory results | NLP [19]
EHR | Patient record categorical database | Statistics [20]
Health informatics | Health assessment veterinary records | Statistics [21]
Health informatics | Preoperative patient records risk data | Machine learning [22]
Imaging | Resting state of MRI data | Network analysis [23]
Imaging EHR | PET scans and patient records | Machine learning [24]
Health informatics (social network and environment data) | Social network and air quality data | Machine learning [25]

discovery. The accumulation of large data generated for every individual, combined with the methods derived for big data analytics, could finally empower us to accomplish precision medicine. Several studies have been performed on big biomedical data analytics in health informatics; Table 2 illustrates some of these studies with their data types, methods and subfields of study.

5 Conclusion This article has reviewed the challenges of omics and EHR big biomedical data and the recent progress in addressing them. Recent research studies have been described to show how big data analytics has facilitated, and could further enhance, precision medicine. Since big biomedical data analytics is still in its early stages, more biomedical and biological data scientists and engineers are required to acquire the essential biological and medical knowledge, to use the vast datasets produced by the several big biomedical data initiatives, and to pool joint efforts in fields such as multi-omics data integration and patient similarity, in order to accelerate big biomedical data research for personalized therapy. It is now more important than ever to form collaborative networks for sharing samples, data, and methods and to build bridges among medical science, computer science, engineering and industry. By using each patient's precise subtyping information and delivering the most effective and suitable treatment to every patient, the healthcare system could therefore achieve better quality and care efficiency.


References 1. Jin, X., Wah, B.W., Cheng, X., Wang, Y.: Significance and challenges of big data research. Big Data Res. (2015) 2. Fernald, G.H., Capriotti, E., Daneshjou, R., Karczewski, K.J., Altman, R.B.: Bioinformatics challenges for personalized medicine. Bioinformatics (2011) 3. Hood, L., Friend, S.H.: Predictive, personalized, preventive, participatory (P4) cancer medicine. Nat. Rev. Clin. Oncol. (2011) 4. Katsnelson, A.: Momentum grows to make ‘personalized’ medicine more ‘precise’. Nat. Med. (2013) 5. Kelemen, A.: Deep Learning Techniques for Biomedical and Health Informatics (2020) 6. Collins, F.S., Varmus, H.: A new initiative on precision medicine. N. Engl. J. Med. (2015) 7. Chen, R., et al.: Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell (2012) 8. Schafer, J.L.: Multiple imputation: a primer. Stat. Methods Med. Res. (1999) 9. Andreão, R.V., Dorizzi, B., Boudy, J.: ECG signal analysis through hidden Markov models. IEEE Trans. Biomed. Eng. (2006) 10. Taylor, R.C.: An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics. BMC Bioinf (2010) 11. Zaharia, M., Chowdhury, M., Franklin, M.J., Shenker, S., Stoica, I.: Spark: cluster computing with working sets. In 2nd USENIX workshop on hot topics in cloud computing, HotCloud 2010 (2020) 12. Biem, A., et al.: IBM InfoSphere Streams for scalable, real-time, intelligent transportation services. In: Proceedings of the ACM SIGMOD International Conference on Management of Data (2010) 13. Prosperi, M., Min, J.S., Bian, J., Modave, F.: Big data hurdles in precision medicine and precision public health. BMC Med. Inform. Decis. Mak. (2018) 14. Hulsen, T., et al.: From big data to precision medicine. Front. Med. (2019) 15. Fahr, P., Buchanan, J., Wordsworth, S.: A review of the challenges of using biomedical big data for economic evaluations of precision medicine. Appl. Health Econ. Health Policy (2019) 16. Qian, T., Zhu, S., Hoshida, Y.: Use of big data in drug development for precision medicine: an update. Expert Rev. Precis. Med. Drug Dev. (2019) 17. Schramm, K., et al.: Mapping the genetic architecture of gene regulation in whole blood. PLoS One (2014) 18. Altshuler, D.L., et al.: A map of human genome variation from population-scale sequencing. Nature (2010) 19. Murff, H.J., et al.: Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA—J. Am. Med. Assoc. (2011) 20. Skow, Á., Douglas, I., Smeeth, L.: The association between Parkinson’s disease and antiepilepsy drug carbamazepine: a case-control study using the UK General Practice Research Database. Br. J. Clin. Pharmacol. (2013) 21. Nielson, J.L., et al.: Development of a database for translational spinal cord injury research. J. Neurotrauma (2014) 22. Anderson, J.E., Chang, D.C.: Using electronic health records for surgical quality improvement in the era of big data. JAMA Surg. (2015) 23. Biswal, B.B., et al.: Toward discovery science of human brain function. Proc. Natl. Acad. Sci. U. S. A. (2010) 24. Mikhno, A., et al.: Toward noninvasive quantification of brain radioligand binding by combining electronic health records and dynamic PET imaging data. IEEE J. Biomed. Heal. Inform. (2015) 25. Larsen, M.E., Boonstra, T.W., Batterham, P.J., O’Dea, B., Paris, C., Christensen, H.: We Feel: Mapping emotion on Twitter. IEEE J. Biomed. Heal. Inform. (2015)

Software Tools for Global Navigation Satellite System Riddhi Soni, Sachin Gajjar, Manisha Upadhyay, and Bhupendra Fataniya

Abstract The Global Navigation Satellite System (GNSS) is the collection of satellite navigation systems providing global coverage. GNSS includes the Global Positioning System (GPS) from the USA, the Global Navigation Satellite System (GLONASS) from Russia, Galileo from Europe, BeiDou from China, the Quasi-Zenith Satellite System (QZSS) from Japan, and the Indian Regional Navigation Satellite System (IRNSS) from India. A GNSS consists of a constellation of satellites orbiting the earth, continuously transmitting signals that enable users to determine their three-dimensional position with global coverage. Software tools for GNSS provide the ability to analyze navigation system performance; to evaluate the performance of any navigation system, a software tool is required. Popular GNSS software tools such as RTKLIB, GPSTk, gLAB and Bernese are discussed in this paper. The aim of this study is to compare software tools for GNSS. These tools are compared based on their features, support for the different GNSS systems, support for file formats, and support for operating system platforms. The comparison can help in selecting an appropriate tool as per the requirement.

R. Soni (B) · S. Gajjar · M. Upadhyay · B. Fataniya Nirma University, Ahmadabad, Gujarat, India e-mail: [email protected] S. Gajjar e-mail: [email protected] M. Upadhyay e-mail: [email protected] B. Fataniya e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_39


1 Introduction GNSS [1] comprises all the navigation systems around the world, including GPS (USA) [2], Galileo (Europe) [3], GLONASS (Russia) [4], BeiDou (China) [5], QZSS (Japan) [6] and IRNSS (India) [7, 8]. GNSS is used for disaster management, marine applications, military applications, vehicle tracking, and terrestrial and aerial navigation [7]. In satellite communication, atmospheric errors, receiver component delays, clock drift and clock bias affect the accuracy of the navigation system. To evaluate the positioning performance of any navigation system, certain software tools are required. The popular software tools available are RTKLIB, GPSTk, Bernese, and gLAB. Different organizations have developed these software tools for a specific system or for multiple GNSS systems. The purposes of these software tools are (i) to convert the raw data received from the receiver into RINEX format, (ii) to apply the necessary atmospheric, clock and antenna corrections, and (iii) to evaluate performance using different positioning modes in a post-processing module and analyze the results using graphs and plots. RTKLIB supports multiple GNSS systems (GPS, GLONASS, Galileo, BeiDou, and QZSS), GPSTk supports only GPS, Bernese supports GPS and GLONASS, and gLAB supports GPS, Galileo and GLONASS. The rest of the paper is organized as follows: Sect. 2 discusses the need for software tools, Sect. 3 describes the features of an ideal software tool for a GNSS system, Sect. 4 describes each software tool in detail and compares them, and Sect. 5 gives the conclusion.

2 Need for Software Tools
• Position performance: To analyze the positioning performance of the navigation system.
• File format conversion: To convert the raw data into the RINEX OBS and RINEX NAV formats used in post-processing.
• Processing: Pre-processing of the received data to apply required corrections, and post-processing of the data to find the receiver location using different positioning modes.
• Plotting: To understand the system positioning performance in the form of graphs and plots.
• Positioning modes: Positioning modes, i.e., single, dual, PPP, provide a better understanding for examining the change in results.
• Research work: Research and development purposes in the GNSS system domain.
• Performance evaluation: Comparing the performance of different navigation systems.


3 An Ideal Software Tool for GNSS System An ideal software tool should have the following features:
• It should support a large number of GNSS systems, including GPS, Galileo, GLONASS, BeiDou, IRNSS, and QZSS.
• It should support a large number of file formats, including RINEX (Receiver Independent Exchange), BINEX (Binary Exchange), NMEA (National Marine Electronics Association), RTCM (Radio Technical Commission for Maritime), NTRIP, IONEX (Ionosphere Maps Exchange), ANTEX (Antenna Exchange), SP3 and SINEX (Software Independent Exchange).
• It should support the proprietary messages of several receivers, including Novatel, Hemisphere, u-blox, SkyTraq, JAVAD and NVS receivers.
• Positioning, clock, antenna, ionospheric, tropospheric and earth tide models should be available for atmospheric, clock and antenna corrections and for positioning.
• Different positioning modes such as single frequency, dual frequency, differential GNSS (DGNSS), static, fixed, moving base, precise point positioning (PPP) and kinematic should be available in the software to calculate the receiver position.
• The software should provide support for external communication protocols such as TCP/IP, serial, and FTP/HTTP.
• The software should be Open Source Software (OSS).
• A graphical user interface (GUI) provides more visual clarity to the user.
• It should be compatible with the Windows, Linux, Unix, and MAC operating systems.

4 Software Tools 4.1 RTKLIB RTKLIB was developed by T. Takasu and Akio Yasuda at the Tokyo University of Marine Science and Technology [9, 10]. The RTKLIB software provides a convenient program library; users can link the libraries with their own programs and modify the source code according to the requirements of their applications. RTKLIB can perform standard positioning algorithms for civilian users and precise positioning algorithms for military purposes. RTKLIB supports the GPS, GLONASS, Galileo, QZSS, BeiDou and SBAS navigation systems. The processing modes supported by RTKLIB are (i) Single, (ii) Differential Global Positioning System (DGPS)/Differential Global Navigation Satellite System (DGNSS), (iii) Kinematic, (iv) Static, (v) Moving Base, (vi) Fixed, (vii) PPP-Kinematic, (viii) PPP-Static, and (ix) PPP-Fixed. Real-time processing is possible with all of the discussed modes in RTKLIB. It supports the RINEX [11], RTCM [12], BINEX [13], NTRIP [14], NMEA [15], EMS, SP3-c [16], ANTEX [17], NGS PCV and IONEX [18] file formats for GNSS. It supports Novatel [19], Hemisphere [20], u-blox [21], SkyTraq [22], JAVAD [23], Furuno [24] and NVS [25] receivers' messages


and received data. It supports serial, Transmission Control Protocol (TCP)/Internet Protocol (IP), Networked Transport of RTCM via Internet Protocol (NTRIP), and File Transfer Protocol (FTP)/Hyper Text Transfer Protocol (HTTP) based external communication [9, 10]. The library functions include: satellite navigation system, matrix, vector, time, string, coordinate transformation, debug trace, platform-dependent, datum transformation, RINEX, ephemeris and clock, precise ephemeris and clock, RTCM, solution, Google Earth Keyhole Markup Language (KML) converter, SBAS, options, data input and output, integer ambiguity resolution, standard positioning, precise positioning, post-processing positioning, stream server, Real-Time Kinematic (RTK) server, and downloader functions. The RTKLIB APIs for models include: positioning, ionospheric, tropospheric, antenna, earth tides, and geoid models [9, 10].

4.2 GPSTk GPSTk is an open source software tool with the libraries required for satellite navigation systems. GPSTk is a by-product of GPS research conducted by members of the Space and Geophysics Laboratory of the Applied Research Laboratories at the University of Texas (ARL:UT) [26]. It is the combined effort of the research staff at ARL:UT, consisting of many software engineers and scientists. It is programmed in C++ to make it platform independent, i.e., practically available for every computational architecture and operating system [26]. The GPSTk library includes core and auxiliary libraries. The core library provides a number of models and algorithms for GNSS, such as solving for the user position or estimating atmospheric refraction. There are several categories of functions, such as conversion among time representations (e.g., GPS week and seconds of week), position and clock interpolation for broadcast and precise ephemerides, and ionosphere and troposphere delay models [26]. The major functions solve processing problems associated with GNSS, such as reading and processing received data from RINEX files. Libraries are also available for more advanced applications. It provides support for different file formats such as RINEX OBS, RINEX NAV, SP3, FIC (Floating Integer Character), Novatel and NMEA [26, 27].

4.3 Bernese The Bernese GNSS software meets the high quality standards required for geodetic and satellite-navigation-based applications [28]. The team at the Astronomical Institute of the University of Bern, headed by Prof. (Dr.) Rolf Dach, has developed Bernese, a high-quality scientific software package for multi-GNSS data processing [28]. It supports GPS of the USA and GLONASS of Russia, and contains a high-performance, highly accurate and flexible GPS/GLONASS processing package. Bernese provides the following features to achieve high performance: (i) support for single- and dual-frequency processing, (ii) processing of fixed networks, (iii) post-processing and reprocessing


of GNSS data, (iv) the ability to handle raw data from a huge number of receivers, (v) processing of GPS and GLONASS data simultaneously, (vi) support for real kinematic GNSS systems, including those on airplanes, (vii) ionosphere and troposphere monitoring, (viii) clock correction and time transfer, and (ix) orbital information for GNSS and LEO satellites using SLR orbit validation [28].

4.4 gLAB The GNSS-LAB tool (gLAB) was developed by the Research group of Astronomy and Geomatics (gAGE) at the Technical University of Catalonia [29, 30]. gLAB is an informative, interactive, versatile package for processing and analyzing GNSS data. The idea behind gLAB's development is to support a hands-on GNSS course, where the essentials presented in the theory are tested by means of guided exercises [29]. gLAB contains three software units: (i) the Data Processing Core (DPC), (ii) the Graphical User Interface (GUI), and (iii) the Data Analysis Tool (DAT). The DPC performs the processing of the data received from the receiver. The DAT is used for data examination and for visualizing results in the form of graphs and plots. The GUI is a user-friendly package exposing the main capabilities of the DPC and DAT [29]. gLAB supports the Linux and Windows operating systems and can process the received raw data through the GUI or a Command-line User Interface (CUI). gLAB has the capability to fully process only GPS data; for Galileo and GLONASS, reading of RINEX files and data analysis with real or simulated measurements is possible [30].

5 Conclusion Software tools are popularly used by researchers and scientists for post-processing, which includes improving the positioning performance of the navigation system. To calculate the receiver position, standard point positioning, precise point positioning and differential GNSS techniques are used by the software. The algorithms used for this include (i) linear least squares estimation (LSE), (ii) nonlinear LSE, and (iii) the Kalman filter model. Apart from post-processing, the software provides a pre-processing option for applying corrections. RTKLIB, Bernese, GPSTk and gLAB perform both pre- and post-processing with different models and positioning techniques. The pre-processing correction models for RTKLIB, Bernese, gLAB and GPSTk are ionospheric, tropospheric, tidal, cycle slip and antenna models. Among all the software discussed in this paper, RTKLIB is an open source tool that is constantly evolving and supports almost all GNSS systems, including GPS, Galileo, GLONASS, BeiDou, and QZSS. The Indian Regional Navigation Satellite System (IRNSS) from India does not yet have its own software tool. The future work will be to develop a GUI-based software tool for IRNSS with all the pre- and post-processing capabilities.
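To make the positioning step concrete, the sketch below shows a toy version of the iterative linear least-squares estimate of receiver position and clock bias from pseudoranges, the LSE approach mentioned above; the satellite coordinates are synthetic and all atmospheric, antenna and relativistic corrections are ignored.

import numpy as np

def estimate_position(sat_pos, pseudoranges, iterations=10):
    # Receiver state: x, y, z (metres) and clock bias expressed in metres.
    x = np.zeros(4)
    for _ in range(iterations):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = rho + x[3]
        # Geometry matrix: unit vectors from satellites towards the receiver
        # plus a column of ones for the clock-bias term.
        H = np.hstack([(x[:3] - sat_pos) / rho[:, None], np.ones((len(rho), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x

# Synthetic example: four satellites and the exact pseudoranges they would
# produce for a receiver at the origin with a 10 m clock bias.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([0.0, 0.0, 0.0, 10.0])
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(estimate_position(sats, pr))   # converges to the synthetic truth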

Table 1 Comparison of software tools
Properties | RTKLIB | GPSTk | Bernese | gLAB
GNSS system supported | GPS, GLONASS, Galileo, BeiDou, SBAS, QZSS | GPS | GPS, GLONASS | GPS, GLONASS, Galileo
Developer | Tokyo University | Geophysics Laboratory, University of Texas at Austin | Astronomical Institute, University of Bern | Research group of Astronomy and Geomatics (gAGE), University of Catalonia
Platform | Windows/Linux/Unix/MAC | Windows/Linux | Windows/Linux/Unix/MAC | Windows/Linux
GUI/CUI | GUI and CUI | CUI | GUI | GUI and CUI
Open Source/Closed Source | Open Source | Open Source | Closed Source | Open Source
Language in which software is developed | C++ | C++ | QT, C++ and Perl | QT, Python and C
Processing mode | Pre- and post-processing | Pre- and post-processing | Pre- and post-processing | Pre- and post-processing
Strong points | Widely used, continuous support for updates, RTK processing | Strongly supports GPS, open source | Highly accurate, RTK processing | Open source, data filtering
Weak points | No support for IRNSS | No support for IRNSS, QZSS; GUI package not available | Closed source, no support for IRNSS, QZSS, Galileo | No support for IRNSS, QZSS; not full-fledged processing of Galileo and GLONASS
Website | http://www.rtklib.com/ | https://sourceforge.net/projects/gpstk/ | http://www.bernese.unibe.ch | https://gage.upc.edu/gLAB/


References 1. Subirana, J., Zornoza, J., Pajares, M.: GNSS Data Processing vol. 1: Fundamentals and Algorithms. European Space Agency Communications, Netherlands (2013) 2. GPS Homepage, https://www.gps.gov/systems/gps/. Last accessed 2019/11/28 3. Galileo Homepage, https://www.gsc-europa.eu/galileo/what-is-galileo. Last accessed 2019/10/19 4. ICD GLONASS.: Global Navigation Satellite System GLONASS-Interface Control Document Navigational radio signal in bands L1, L2, 5th edn. Moscow (2008) 5. BeiDou, I.C.D.: BeiDou Navigation Satellite System Signal in Space Interface Control Document. China Satellite Navigation Office, China (2012) 6. Qzss, I.C.D.: Quasi-Zenith Satellite System Navigation Service Interface Control Specification for QZSS. Japan Aerospace Exploration Agency, Japan (2012) 7. IRNSS Programme Homepage, https://www.isro.gov.in/irnss-programme. Last accessed 2019/12/16 8. Mruthyunjaya, L.: IRNSS Signal-in-Space ICD for SPS. ISRO, India (2017) 9. RTKLIB Homepage, http://www.rtklib.com/. Last accessed 2020/2/6 10. Takasu, T.: RTKLIB ver. 2.4.2 Manual. Tokyo University Marine Science and Technology, Japan (2007). Available. http://www.rtklib.com/rtklib_document.htm. Last accessed 2019/12/10 11. RINEX Working Group and Radio Technical Commission for Maritime Services Special Committee 104 (RTCM-SC104): RINEX The Receiver Independent Exchange Format Version 3.03, International GNSS Service (IGS), USA (2017) 12. RTCM: RTCM Recommended Standards for Differential GNSS (Global Navigation Satellite Systems) Service version 2.3, USA (2001) 13. UNAVCO Homepage, http://binex.unavco.org/binex.html. Last accessed 2019/5/8 14. Elmar LENZ: Networked Transport of RTCM via Internet Protocol (NTRIP)—Application and Benefit in Modern Surveying Systems, Germany (2004) 15. National Marine Electronics Association: NMEA0183-Standard for Interfacing Marine Electronic Devices version 4.10, USA (2012) 16. Hilla, S.: The Extended Standard Product 3 Orbit Format (SP3-c). USA (2010) 17. Rothacher, M., Schmid, R.: ANTEX: The Antenna Exchange Format Version 1.4, USA (2010) 18. Schear, S., Gurtner, W., Feltens, J.: IONEX: The IONosphere Map EXchange Format Version 1 (1998) 19. NovAtel Homepage, http://www.novatel.com. Last accessed 2019/5/5 20. Hemisphere GPS Homepage, http://www.hemispheregps.com. Last accessed 2019/5/5 21. u-blox Homepage, http://www.u-blox.com. Last accessed 2019/5/6 22. SkyTraq Homepage, http://www.skytraq.com.tw. Last accessed 2019/5/16 23. JAVAD GNSS Homepage, http://www.javad.com. Last accessed 2019/5/18 24. Furuno Homepage, http://www.furunocom. Last accessed 2019/5/5 25. NVS Technologies Homepage, http://www.nvs-gnss.com. Last accessed 2019/5/5 26. Conn, T., et al.: GPS Toolkit: User Guide for Scientist, Engineers and Students. Helsinki University of Technology, Austin (2012). Available. https://www.ngs.noaa.gov/gps-toolbox. Last accessed 2019/12/15 27. Post processing Homepage, https://www.unavco.org/software/data-processing/postpr-oce ssing/postprocessing.html. Last accessed 2020/2/8 28. Dach, R., et al.: Bernese GNSS Software Version 5.2. University of Bern (2015). Available. http://www.bernese.unibe.ch/. Last accessed 2019/12/8 29. Ramos, P.: GNSS-Lab tool Software User Manual. European space agency GNSS Education (2017). Available. https://gage.upc.edu/sites/default/files/gLAB. Last accessed 2019/12/14 30. Subirana, J., et al.: GNSS Data Processing vol. 2: Laboratory Exercise. European Space Agency Communications (2013)

Encryption and Decryption: Unraveling the Intricacies of Data Reliability, Attributed by Incorporating the Usage of Color Code and Pixels Bikrant Bikram Pratap Maurya, Aman Upadhyay, Aniket Saxena, and Parag Sohani Abstract In today's modernizing era, data security is becoming a major concern; data security refers to securing information in potentially hostile environments. Several methods have been proposed in this area of concern, i.e., securing data and assuring its safe and sound transmission from the sender to the receiver. The method focused on here is encryption and decryption. Decryption is the process of de-transforming encrypted information so that it becomes intelligible. Encryption translates the actual data into another form, or code, so that only people with access to a secret key (formally called a decryption key) or password can decrypt it; it is the process of transforming information so that it is unintelligible to anyone but the intended recipient. This method emphasizes the substitution of characters with RGB color values, later held in the form of image pixels, because of which it becomes difficult for an attacker to find out where the data is hidden; before cracking the data, they first need to find where the data is. This is useful at the organizational and national levels to maintain the integrity of data, and it can be used at a personal level too to avail the security benefits using the created platform.

B. B. P. Maurya (B) · A. Upadhyay · A. Saxena · P. Sohani LNCTS, Bhopal, India e-mail: [email protected] A. Upadhyay e-mail: [email protected] A. Saxena e-mail: [email protected] P. Sohani e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_40


1 Introduction Data, in general, is a collection of information in a raw or unorganized form that, when further subjected to analysis, representation and proper coding, results in some form suitable for better usage or processing in the future. When we think of data in a computer- and technology-driven arena, the vital importance of securing and rendering optimum protection to private data becomes clear. Given the crucial importance of data in every field, from individual persons to distinct organizations and companies, data everywhere plays a major part. Owing to this, what is called for is subjecting the data to safer and more secure environments, so as to maintain the intended purpose and authenticity of the data and restrain any sort of unsolicited intervention in our critical data. This is where 'Data Security' comes into play. Data security refers to the process of protecting data from unauthorized access and data corruption throughout its lifecycle. It includes data encryption and key management practices that protect data across all applications and platforms. To be succinct and precise, data security points to securing information in potentially hostile environments. It is a crucial factor in the growth of information-based processes in industry, business, and organizations [1]. Transactions, knowledge, communications, databases, infrastructure: an organization's information is arguably its most valuable asset. Regardless of legal or regulatory requirements, it is in a business's best interests to keep its information safe. Many people have the common misconception that only big organizations, governments, and businesses get targeted by cyber-perpetrators. This is just not true. Data security is not just important for businesses or governments; computers, tablets, and mobile devices can also be targets. Usually, common users are targeted by attackers for their sensitive information, such as their credit card details, banking details, and passwords. Data security should be thorough and seamless for everyone, whether an individual or a business. Since the emphasis is on transferring data safely from the sender to the desired receiver without any unsolicited intervention in between that disrupts the authenticity of the transferred data, among the various methods in the field of security and data integrity this paper focuses on the method of encryption and decryption. It is used to resolve the problems faced in accomplishing the goal of securing data at the professional as well as the personal level; the problem domain therefore revolves around data security through encryption and decryption. The major problem that prevails in the domain of data security and integrity is securing private information in textual format, as any kind of unauthorized access to sensitive and private data can lead to changes in the original structure of the data or information and can be used by someone to fulfil an immoral motive behind hacking the data. In an era dominated by the Internet, networks and other Web technologies, which have rendered many benefits to all, the door has, on the other hand, also been opened for some people to perform illegal and immoral practices using the technologies provided. So, there is a


major concern among all to protect and secure their data to the highest degree possible [2].

2 Methodology The described model contains the following 2 parts: • Encryption • Decryption.

2.1 Encryption The encryption part has the following 3 main attributes: • Actual data • Image • Password. Actual Data It is the original document that will be encrypted with the help of the RGB pixels of an image and later recovered in its original form through decryption (Fig. 1). Image It is the image chosen by the user inside which the actual data is encrypted. The quality of the image can vary from a binary image to High Definition (Fig. 2). According to the observed results, cracking the data becomes more difficult as the quality of the image increases. Password The password is the secret key chosen at the time of encrypting the actual data and later used at the time of decryption to recover the actual data from the encrypted data. It does not depend upon the actual data or the algorithm used. This specific key is used to decrypt the data and unlock the actual data; the encryption and decryption algorithm depends on the key. Fig. 1 Actual data

424

B. B. P. Maurya et al.

Fig. 2 Image

Encrypted Data Ciphertext is the scrambled message produced as output. It depends on the actual text and the password. For a given message, two different keys will produce two different ciphertexts. The ciphertext appears as a random stream of data and is impossible to understand (Fig. 3). Explanation 1. The actual data is taken (of any length) from the user so that it can be encrypted. 2. The data is processed character by character and encoded with respect to the RGB pixel values of the image, so the actual data can later be retrieved through the image. 3. Finally, a password is required, which will act as a key to unlock the actual data at the time of decryption (Fig. 4).

In this figure, inside each selected pixel: the RGB value and its respective coordinate


Fig. 3 Encryption


Fig. 4 Encryption


Algorithm of Encryption
1. Firstly, input the image taken by the user.
2. Next, insert the password.
3. Now insert the actual data.
4. Now find out the length of the actual data.
5. Now generate n random numbers in the range of the height of the image, where n is the length of the string.
6. Then extract the pixel value at each generated coordinate [x, y], where x and y are random numbers which the algorithm has generated.
7. Now extract the ASCII value of each character, taking the characters in groups where one set contains 3 characters.
8. Declare two empty strings d and cord.
9. Now extract the RGB value of the coordinate. // suppose R, B and G are those variables
   Then:
   r = ASCII value of the first character of a group
   b = ASCII value of the second character of the same group
   g = ASCII value of the third character of the same group
   r_i = R - r
   b_i = B - b
   g_i = G - g
10. Now, if any of these values is negative, then concatenate "N" at the beginning of the value, else "P", and finally concatenate into d:
    d = d + s_r + r_i + s_b + b_i + s_g + g_i
    // where s_r, s_b and s_g are the signs of r_i, b_i and g_i respectively, in the form of N or P
    // suppose the values have been extracted from the (x, y) coordinate
11. Now convert x and y into strings of length four.
    // suppose the values of x and y are x = 10 and y = 3244
    // then the string format will be x = "0010" and y = "3244"
12. Later, encrypt these coordinates with the help of the password as follows:
    v1 = p[0] - x[0], v2 = p[1] - x[1], v3 = p[2] - x[2], v4 = p[3] - x[3]
    v5 = p[4] - y[0], v6 = p[5] - y[1], v7 = p[6] - y[2], v8 = p[7] - y[3]
13. Concatenate these values with cord:
    cord = cord + v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8
14. Now store these values in the form of a string, and repeat the same process until all characters of the data have been processed.
15. Encrypted_data = cord + d
16. Encrypted_data is the encrypted data; store it in a file.
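To make the steps above concrete, the following is a minimal Python sketch of the encryption phase. It is only an illustration, not the authors' implementation: it assumes the Pillow library for pixel access, an 8-character password and 4-digit coordinates, and it uses fixed-width difference fields and a "|" separator (formatting choices not specified in the paper) so the output can be parsed back later.

```python
import random
from PIL import Image  # assumption: Pillow is used for pixel access


def encrypt(image_path, password, actual_data):
    """Sketch of the pixel-difference encryption described above.

    Assumes `password` is exactly 8 characters and coordinates fit in 4 digits.
    """
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    cord, d = "", ""

    # pad the data so its length is a multiple of 3 (one pixel hides 3 characters)
    data = actual_data + " " * ((3 - len(actual_data) % 3) % 3)

    for i in range(0, len(data), 3):
        x = random.randrange(min(width, 10000))    # random coordinate inside the image
        y = random.randrange(min(height, 10000))
        R, G, B = img.getpixel((x, y))

        # signed differences between the pixel values and the character codes
        for pixel_val, ch in zip((R, B, G), data[i:i + 3]):
            diff = pixel_val - ord(ch)
            sign = "N" if diff < 0 else "P"
            d += sign + f"{abs(diff):03d}"          # 3-digit magnitude (illustrative choice)

        # encrypt the 4-digit coordinates with the 8-character password
        xs, ys = f"{x:04d}", f"{y:04d}"
        for p_ch, c_ch in zip(password[:4], xs):
            cord += str(ord(p_ch) - ord(c_ch)) + ","
        for p_ch, c_ch in zip(password[4:8], ys):
            cord += str(ord(p_ch) - ord(c_ch)) + ","

    return cord + "|" + d   # "|" separator is an illustrative choice
```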


2.2 Decryption The decryption part has the following 3 main attributes: • Encrypted data • Image • Password. Encrypted Data Encrypted data is the unreadable form of the actual data. It puts the data into an unreadable format so that it cannot be cracked (Fig. 5). Explanation 1. At the time of decryption, first the image inside which all the encrypted data is stored is taken. 2. At this stage, we have the encrypted data and each character's coordinate. 3. According to the given algorithm, the coordinates and their respective RGB values are extracted. 4. With the help of the password, the data is decrypted.


Fig. 5 Decryption


Algorithm of Decryption
1. Firstly, input the image inside which the encryption was made.
2. Now input the password.
3. After inserting the password, insert the encrypted data.
4. Now split the partial data and the encrypted coordinates.
5. After splitting, decrypt the coordinates with the help of the password.
6. Initialize an empty string dec.
7. Now extract the pixel value of that coordinate:
   R = value of r in RGB
   B = value of b in RGB
   G = value of g in RGB
8. Split the partial data with the "P" and "N" characters.
9. If the first character is P, then simply convert the numeric characters into int format: val = int number.
10. Else, if the first character is N, then convert the numeric characters into int format and multiply by -1: val = int number.
11. r_a = R - v1, b_a = B - v2, g_a = G - v3, where v1, v2, v3 are the numeric form of the partial data.
12. r_d = chr(r_a), b_d = chr(b_a), g_d = chr(g_a)
13. dec = dec + r_d + b_d + g_d
14. Repeat steps 7-13 until all coordinates undergo the same procedure.
15. dec is the decrypted data; store it in a file.
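A matching sketch of the decryption of one block, under the same assumptions as the encryption sketch (Pillow for pixel access, 8-character password, 4-digit coordinates). How the encrypted string is split into coordinate and difference parts depends on the formatting chosen during encryption, so that parsing is left outside this sketch.

```python
from PIL import Image  # assumption: Pillow for pixel access


def decrypt_block(img, password, coord_part, diff_part):
    """Recover the 3 characters hidden at one coordinate.

    coord_part: list of the 8 password-encrypted coordinate values for this block
    diff_part : list of (sign, magnitude) pairs for the R, B, G differences
    """
    # undo the password encryption of the 4-digit x and y coordinates
    digits = [chr(ord(p) - v) for p, v in zip(password, coord_part)]
    x, y = int("".join(digits[:4])), int("".join(digits[4:]))

    R, G, B = img.getpixel((x, y))
    chars = []
    for pixel_val, (sign, mag) in zip((R, B, G), diff_part):
        diff = -mag if sign == "N" else mag
        chars.append(chr(pixel_val - diff))   # original character = pixel value minus difference
    return "".join(chars)
```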

See Fig. 6.

3 Experimental Results Table 1 shows that, for any given length of data, the algorithm encrypts it successfully. Figure 7 shows that the possibility of cracking the data depends upon its size. The data is hidden in pixel coordinates; even if a hacker were somehow able to find those coordinates, it would still be difficult to arrange them in the correct order. In short, the graph shows that the number of possible combinations needed to sort the data grows with its size, so reconstructing it without the key is nearly impossible.
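As a rough illustration of why reordering the hidden blocks by brute force is infeasible, the snippet below counts the possible orderings of n blocks as n!; this factorial model is our reading of the trend in Fig. 7, not an exact formula from the paper.

```python
import math

# Illustrative only: if an attacker recovered the n hidden character blocks but
# not their order, the number of possible arrangements is n!, which explodes
# with the data size.
for n in (10, 50, 100):
    print(f"{n} blocks -> about {float(math.factorial(n)):.3e} possible orderings")
```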

4 Future Scope This project aims to maximize the efficiency of maintaining the security and integrity of confidential and private data. It encompasses methods and concepts


Fig. 6 Decryption

such as cryptography, matrix multiplication and steganography, and also subsumes the usage of colour codes and pixels [3]. All these dynamic and robust elements, when used together, make it a suitable platform for use now and one that will keep gaining preference in the future. As the demand for more secure applications and platforms keeps proliferating, it will remain desirable to security-seekers. It can be used specifically for high-level purposes. As it deals with hiding and securing data at the highest level possible, it finds significant application


Fig. 7 Length of data versus possible combinations

Table 1 Test cases with results

Length of data (number of characters)   Test (result)
10,000 or less                          Encrypted (Pass)
50,000                                  Encrypted (Pass)
100,000                                 Encrypted (Pass)
200,000                                 Encrypted (Pass)
300,000                                 Encrypted (Pass)
400,000 and more                        Encrypted (Pass)

at the national level for security purposes, which must be handled with high diligence and accountability. It also has a role in accomplishing commercial and organizational purposes: as industrialization and commercialization continue to thrive, the proposed solution can serve as a credible platform for any task regarding the security of data. It can also be used in various other fields and stand up to the expectations of users.

5 Conclusion In times when data security and the preservation of its integrity and privacy are concerns in every known field, at both personal and general levels, methods that can provide appropriate security for data are of crucial importance. The method focused on here is cryptography, which involves creating generated codes that allow information to be kept hidden. Cryptography converts the actual data into a format that is unreadable for an unauthorized user. By this method, the data is transformed into partial data, and a special combination is needed to make it readable again. It also


incorporates concepts like steganography, which acts as an adjunct, and uses matrix multiplication in the algorithms for encrypting and decrypting data. The platform becomes quite secure when such robust and efficacious concepts are used to build a system that can be used by all and maintains the well-being and secrecy of data. It is easy to use, highly reliable, efficient and not complicated, all of which adds to the merits of this project.

References 1. Dusane, P., Patil, J., Jain, U., Pandya, R.: Security of data with RGB color and AES encryption techniques. Int. Res. J. Eng. Technol. (IRJET) 04(04) (2017) 2. Prasad, R.: Improving RGB data security with advance cryptography algorithm. Int. J. Comput. Sci. Mobile Comput. 4(6) (2015) 3. Sri, B.R., Madhu, S.: A novel method for encryption of images based on displacement of RGB pixels. Int. J. Trend Res. Dev. (IJTRD). ISSN: 2394-9333

Edge Intelligence-Based Object Detection System Using Neural Compute Stick for Visually Impaired People Aditi Khandewale, Vinaya Gohokar, and Pooja Nawandar

Abstract There are around 16 million people who are blind, and this system is intended to support them with object detection and, in future, person recognition. This is the future of artificial intelligence. The system uses the Intel Neural Compute Stick, which works with the OpenVINO API and supports various frameworks like TensorFlow, Caffe, MXNet, ONNX, Kaldi, etc. The system can detect objects from a frame correctly. For testing, the implementation is done on a Raspberry Pi 3B+ running the Raspbian operating system, with Intel's Neural Compute Stick and a Web camera. This can act as an artificial eye for blind people, and the beauty of the project is edge intelligence, without Internet and cloud. Edge intelligence means that data is analyzed and aggregated at the spot in the network where it is captured. There are many advantages of intelligence at the edge: it minimizes latency, helps to reduce bandwidth and cost, and reduces threats, while also minimizing duplication and improving reliability.

A. Khandewale (B) · V. Gohokar · P. Nawandar MIT-WPU, Pune, India e-mail: [email protected] V. Gohokar e-mail: [email protected] P. Nawandar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_41

1 Introduction The Internet of Things (IoT) is a next-generation, expanding field with a wide range of smart appliances making homes smart, such as lights, ovens, coffee makers and refrigerators. The use of IoT in security systems is one of its popular applications, making cameras, video door bells and locks smart, and IoT-based home entertainment systems include smart music systems, smart TVs, smart voice assistants, etc. In the current IoT culture, these connected devices are continuously gathering huge


Fig. 1 System processing

amounts of data and sending it to a centralized remote server. After processing, this data is stored in the remote server for analytics and remote access control. Centralized remote servers are capable of processing this data, but there are multiple limitations in doing so. The intelligent edge is the future of artificial intelligence. There are two modes of edge intelligence, depending on the processing requirement and on where the data collected from the device is processed: if it is processed at the device itself, it is known as edge analytics; if a local node is deployed in the home network, it is recognized as fog computing. The analysis and decisions made at the edge are fed back to the device for the next action in real time. Data or information can be categorized as sensitive and non-sensitive; sensitive information can be processed at the edge only, while non-sensitive information can be sent to the server. This is beneficial in many ways: it helps in real-time processing of data without latency, reduces privacy/security risks, and optimizes network resources. In this project, object detection is done using the Intel Neural Compute Stick, especially for visually impaired people, to help them identify nearby objects. Intel's Movidius Neural Compute Stick (NCS) is a tiny deep learning device that looks like a pen drive, connects through a USB port, and can be used to learn AI programming at the edge. The Movidius Neural Compute Stick, together with its AI inferencing toolkit, supports a wide range of platforms such as Raspberry Pi and Linux. The toolkit contains a model optimizer, which is used to convert a model into an intermediate representation (IR). The model optimizer is a cross-platform tool responsible for the transition between the training model and the deployment environment (Fig. 1). The Intel Neural Compute Stick allows the application to run without Ethernet and cloud, which makes it portable as well as easy to use from the user's point of view. The system is designed to give output in audio form, using a Python text-to-speech (PTTS) library, so that the detected object will be audible to visually impaired people.
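As an illustration of the audio output stage, the sketch below uses pyttsx3, a common offline Python text-to-speech package; the paper only refers to a generic PTTS library, so this specific package, the speaking rate and the message format are assumptions.

```python
import pyttsx3  # assumption: pyttsx3 as the offline Python text-to-speech library


def announce_detection(label, confidence):
    """Speak a detected object aloud so a visually impaired user can hear it."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)   # a slightly slower speaking rate
    engine.say(f"{label} detected with {int(confidence * 100)} percent confidence")
    engine.runAndWait()


announce_detection("chair", 0.87)
```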

1.1 Edge Computing There are several reasons why edge computing is required,


1. When connected devices need to send huge amounts of data to a remote server, issues like latency and delays in data transfer can arise. In real-time analytics and critical actions, this is a major issue. 2. There is a high security risk both for data in flight from the smart device to the remote server and for data stored on the remote server. 3. There are privacy issues when complete information is stored on a distant server. 4. Most importantly, a high-speed Internet connection is required just to send information to the cloud.

1.2 Advantages of Edge Computing There are several advantages of choosing edge computing over cloud computing: 1. Minimize latency: A number of applications require immediate insight and instant control, and lower latency can also help in quality enhancement. 2. Bandwidth reduction: Sending big data from things to the cloud consumes huge bandwidth. Edge computing is the easiest and best solution to this problem. 3. Cost reduction: Cost plays a very important role in an application; bandwidth may be available, but it can be costly, and efficiency is an important element of any corporate environment. 4. Reduce threats: If data is transferred over a large distance, there are possibilities of attacks and breaches; processing data at the edge can avoid this and improve security. 5. Avoid duplication: If all the data is collected and sent to the cloud, there will be duplication of storage, networking equipment, and software. If there is a need to avoid this, edge intelligence can be the option. 6. Improvement in reliability: Data can become corrupted on its own, even without a hacker's attack. Retries, drops, and missed connections will plague edge-to-data-center communications.

2 Block Diagram Figure 2 represents the basic block diagram of the edge intelligence-based object detection system using the Neural Compute Stick for visually impaired people. The main hardware includes a Raspberry Pi along with the Intel Neural Compute Stick, a camera and headphones; the software used is OpenVINO and the NCSDK API (Table 1).


Fig. 2 Block diagram

Table 1 Hardware and software description

Raspberry pi        RPI 3B+
RPI camera          REV1.3
Web camera          Logitech HD720P
SD card             32 GB
OpenVINO            2020.1.033 toolkit
Operating system    Raspbian stretch

2.1 Requirements
Hardware
1. Raspberry Pi* board with ARM* ARMv7-A CPU architecture
2. Intel® Neural Compute Stick
3. Web camera

1. Raspberry Pi: The Raspberry Pi is a low-cost, small-sized computer to which a monitor, keyboard, and mouse can be connected externally. It is capable of all the tasks that a PC is able to perform. This system uses the Raspberry Pi 3B+ model.
2. Web camera: This system uses the Logitech C270 HD Web cam, which has high quality, good resolution, and a clear picture.
3. Operating system: The operating system installed and used is Raspbian, specifically Raspbian Stretch, a free operating system based on Debian and optimized for the Raspberry Pi hardware.

Software
1. CMake* 3.7.2 or higher
2. Python* 3.5, 32-bit
3. OpenVINO


1. OpenVINO The OpenVINO™ toolkit is used to deploy applications and solutions for human-vision tasks. Based on convolutional neural networks (CNN), the toolkit extends computer vision (CV) workloads to maximize performance. The OpenVINO toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT). OpenVINO can be used on various platforms like Windows, Linux, Android, Raspberry Pi, etc. The OpenVINO™ toolkit for Raspbian OS, designed for the Raspberry Pi, includes two basic components, the inference engine and the MYRIAD plugin, which can be used with the Intel Movidius Neural Compute Stick (Intel® NCS).

3 Workflow Install the OpenVINO toolkit → Configure the Neural Compute Stick driver → Install prerequisites → NCSDK API → Test and run.

Implementation: step-wise procedure
• Install the OpenVINO™ toolkit
• Install external software dependencies
• Set the environment variables
• Add USB rules
• Run the object detection sample
• The application outputs an image (out_0.bmp) with the detected face enclosed in rectangles.
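The following is a hedged Python sketch of running a converted detection model on the Neural Compute Stick through OpenCV's DNN module with the Inference Engine (OpenVINO) backend. The model file names, the 300 × 300 input size and the confidence threshold are placeholders for whichever IR model is actually deployed.

```python
import cv2  # assumption: OpenCV built with the OpenVINO Inference Engine backend

# load an IR-format model (placeholder file names)
net = cv2.dnn.readNet("mobilenet-ssd.xml", "mobilenet-ssd.bin")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)   # run inference on the NCS

cap = cv2.VideoCapture(0)                            # USB web camera
ret, frame = cap.read()
if ret:
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), ddepth=cv2.CV_8U)
    net.setInput(blob)
    detections = net.forward()                       # SSD output shape: [1, 1, N, 7]
    h, w = frame.shape[:2]
    for det in detections[0, 0]:
        conf = float(det[2])
        if conf > 0.5:                               # keep confident detections only
            class_id = int(det[1])
            x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
            print(f"class {class_id} at ({x1},{y1})-({x2},{y2}), confidence {conf:.2f}")
cap.release()
```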

4 Analysis 4.1 Compilation Steps The object detection model installation was done, and after that, for experimentation, a few models were tested, for example on a table (Fig. 3). Objects are detected from the image correctly, and similarly many other objects can be detected. As shown in Figs. 4 and 5, the system can identify an object such as a remote. Future scope Using the Intel Neural Compute Stick, real-time object detection is planned for visually impaired people. The target is to use the MS-COCO dataset, which contains 80 categories of objects of daily use, or to create a customized dataset which will include things a blind person uses in day-to-day life.


Fig. 3 Implementation steps

Fig. 4 Sample image of table

Fig. 5 Object detection from the frame

Using a PTTS library, the output is converted into speech. On a similar basis, face detection and recognition are also possible, by building a dataset of faces of a group of people known to the visually impaired person and converting the output into audio form. Using the Neural Compute Stick, the system will be portable, easy to carry and accurate. Acknowledgements Authors are thankful to the Department of Science and Technology (DST), Government of India, for funding and support.



MH-DSCEP: Multi-hop Dynamic and Stable Cluster-Based Energy-Efficient Protocol for WSN Kameshkumar R. Raval and Nilesh Modi

Abstract Dividing the entire wireless sensor network geographic area into a number of equal-sized regions by applying a virtual grid, together with a strategy to appoint the head node of the cluster in each region, improves the stability, consistency, and lifetime of the wireless sensor network. Applying the same grid size to wireless sensor networks of different sizes will not give consistent performance. In this paper, we present a protocol which dynamically determines the size of the virtual grid so as to give optimal performance for a wireless sensor network of any geographic size. The protocol also allows the sink node to be placed at any location, not only at some fixed specific location, and it can support multiple sink nodes. The protocol dynamically decides whether to transmit the data directly to the sink node or by multi-hop transmission, by considering the distances between sensor nodes and the sink node(s); this optimizes the energy efficiency of the sensor nodes and increases the lifetime of the wireless sensor network.

K. R. Raval (B) Som-Lalit Institute of Computer Applications, SLICA, SLIMS Campus, University Road, Navarangpura, Opp. Xavier's College, Ahmedabad 380009, India e-mail: [email protected] N. Modi BAOU, Dr. Baba Saheb Ambedkar Open University, Near Nirma University, Chharodi, S.G. Highway, Ahmedabad, India e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_42

1 Introduction A wireless sensor network is a network of several inexpensive, battery-operated sensor nodes (possibly hundreds or thousands) deployed in a geographic location that is mostly unattended by humans but whose monitoring data is highly useful to them. A sensor node consists of a sensing unit which senses external environmental data, usually available in analog form. An ADC converter will


simply convert these sensed data into digital form. A wireless transceiver will transmit the data to the sink node, either directly or via some other sensor node(s). The sink node is simply a data collector, which collects the data from the various sensor nodes and sends it to the desired remote location using cable or satellite transmission. All the components of a sensor node are powered by a battery to perform their dedicated functionality. Once the battery gets drained, the sensor node becomes useless (as physical replacement of the battery is not possible). So, a routing protocol which efficiently uses the energy of a sensor node is important to prolong the network lifetime. Periodically, sensor nodes sense the data and transmit it to the sink node, either directly or by multi-hop transmission using intermediate nodes between them and the sink node.

2 Related Work In direct transmission, each sensor node transmits its data to the sink node directly, so the nodes placed far from the sink node die quicker, as they transmit the data over a longer distance. In MTE [1], sensor nodes transmit the data via some intermediate nodes. In this case, the sensor nodes placed nearer to the sink node die quicker, as they sense and transmit their own data and also carry other sensor nodes' data. In LEACH [2], the authors proposed a clustering-based protocol. The main advantage of a clustering-based protocol is that nearby sensors form a cluster and one node is appointed as head of the cluster. The cluster head collects the data from the various cluster members, and the aggregated data is sent to the sink node. After submitting the data, all cluster members turn off their transceivers for some specific time to save energy. Here, not all nodes transmit their individual data to the sink node, so a lot of energy can be saved. In LEACH-C [3], all processing work, like the formation of clusters and the appointment of head nodes, is given to the sink node, as it does not have the resource limitations that other sensor nodes have. The authors also focused on heterogeneous capability in the network by introducing two types of nodes, that is, normal nodes (having less energy, deployed earlier) and advanced nodes (having more energy, deployed recently). In SEP [4], the authors focused on stability (a greater number of live nodes in each round), because if a greater number of live nodes are present, then we can get exact information about the sensing area. HEED [5] is another clustering-based protocol. In MSECHP [6], the entire WSN is divided geographically into a number of regions by applying a virtual grid, and the appointment of a head node is done in every region. Because every region has a head node, the uniform distribution of head nodes significantly increases the performance. VGDRA [7] and VGDD [8] use a similar approach to divide the sensor network by a virtual grid. In RAM [9], the authors consider virtual circles in the WSN field and then create a binary hierarchy to appoint cluster heads from the inner circle to the outer circle.


Many researchers have made valuable contributions to implementing multi-hop versions of the LEACH protocol. IMHRP [10] and MR-LEACH [11] are examples of multi-hop transmission protocols.

3 Radio Transmission Model We have considered the simplest, basic radio transmission model for a sensor node to estimate energy consumption. Let us assume that E_TX and E_RX are the energies needed to transmit or receive one bit at a sensor node. Then,

$$E_{elec} = E_{TX} = E_{RX} = 50\ \text{nJ/bit} \qquad (1)$$

3.1 Transmission Energy Now, if we want to transmit L bits of data over a distance d, we first need the transmission (electronics) energy, which is E_elec * L. The transmitted signals are short-range signals and cannot propagate up to distance d on their own; therefore, each transmitted signal has to be amplified so that it can propagate up to distance d. Amplification needs E_amp(d) energy, as it depends on the distance d. To amplify L bits of data so that the signals can reach distance d, the sensor node has to spend E_amp(d) * L amplification energy. So, the total energy consumed by a sensor node to transmit L bits of data to a recipient situated at distance d can be estimated as:

$$E_{TX}(L, d) = E_{elec} \cdot L + E_{amp}(d) \cdot L \qquad (2)$$

Based on the free space model or the multipath model, the amplification energy E_amp can be calculated by the following formula:

$$E_{amp}(d) = \begin{cases} \varepsilon_{fs} \cdot d^2 & \text{if } d < d_0 \\ \varepsilon_{mp} \cdot d^4 & \text{if } d \ge d_0 \end{cases} \qquad (3)$$

Here, d is the distance between the transmitting and receiving nodes. To compute the value of d_0, the following formula is used:

$$d_0 = \sqrt{\varepsilon_{fs} / \varepsilon_{mp}} \qquad (4)$$

From the above equations, we can derive the generalized formula for the transmission energy of a sensor node to transmit L bits of data over distance d:

$$E_{TX}(L, d) = \begin{cases} E_{elec} \cdot L + \varepsilon_{fs} \cdot d^2 \cdot L & \text{if } d < d_0 \\ E_{elec} \cdot L + \varepsilon_{mp} \cdot d^4 \cdot L & \text{if } d \ge d_0 \end{cases} \qquad (5)$$

3.2 Receiving Energy Not only the transmitting node but also the recipient node has to spend energy. If the recipient node spends E_elec to receive one bit, then to receive L bits of data from the transmitting node it has to spend:

$$E_{RX}(L) = E_{elec} \cdot L \qquad (6)$$
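The radio model above maps directly to code. The following sketch simply implements Eqs. (4)–(6) with the parameter values quoted in Table 1; it is an illustration only, and the distances in the usage example are arbitrary.

```python
import math

# parameters of the radio model (values from Table 1)
E_ELEC = 50e-9          # J/bit, electronics energy to transmit or receive a bit
EPS_FS = 10e-12         # J/bit/m^2, free-space amplifier energy
EPS_MP = 0.0013e-12     # J/bit/m^4, multipath amplifier energy
D0 = math.sqrt(EPS_FS / EPS_MP)   # threshold distance, Eq. (4)


def tx_energy(bits, d):
    """Transmission energy for `bits` bits over distance d, Eq. (5)."""
    if d < D0:
        return E_ELEC * bits + EPS_FS * d**2 * bits
    return E_ELEC * bits + EPS_MP * d**4 * bits


def rx_energy(bits):
    """Receiving energy for `bits` bits, Eq. (6)."""
    return E_ELEC * bits


# example: direct vs. two-hop transmission of a 4000-bit packet over 120 m;
# since 120 m exceeds D0, the two-hop total comes out smaller, as Sect. 4.1 argues
direct = tx_energy(4000, 120)
two_hop = tx_energy(4000, 60) + rx_energy(4000) + tx_energy(4000, 60)
```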

4 Our Contribution 4.1 Calculating Virtual Grid Size From the discussion of the radio transmission model, it is clear that the energy consumed by the transmitting node is in direct proportion to the distance between the transmitting and receiving nodes. If we transmit the data via some intermediate nodes (multi-hop), then the per-hop distance can be minimized, and we can reduce the energy expense of the transmitting node. Consider a case where we have three nodes, as shown in Fig. 1. Let us assume that the distance between node A and node B is x, and between node B and node C is y. Further, assume that the distances x and y are less than d0/2. In this case, the total distance from node A to node C is x + y, which is less than d0. If we compute the total energy in multi-hop transmission (transmission energy of node A + receiving energy of B + transmission energy of B + receiving energy of C), it will be greater than that of direct transmission from node A to node C. But if we assume that the distances x and y are greater than d0/2, then the distance from node A to node C will be greater than d0. In this case, the total energy spent in multi-hop transmission will be much less than the energy spent in direct transmission. We have considered a number of cases and concluded that if the distance between the transmitting and receiving nodes is less than d0, then direct transmission is preferable, and if it is greater than d0, then multi-hop transmission is preferable. Fig. 1 Multi-hop transmission of sensor nodes


Based on the above discussion, we have developed an algorithm to divide the whole WSN into a number of regions, where the side length of each region should not be less than d0/2 and the diagonal length of each region should not be greater than d0.

i = 1;
Do
    hvDistance = Xm / i;
    DiagonalDistance = √2 * hvDistance;
    If (hvDistance > d0/2 && DiagonalDistance < d0) then
        ApplyGridOfSize(i);   // WSN is divided into i * i regions
        break;
    Else
        i = generateNextNum(i);   // i = i + 1
    End If
Loop

4.2 Appointment of Head Node of the Cluster To appoint a cluster head node for each region, we have considered nine optimal points in the region. In each round, one optimal point is selected based on mod(round_number, 9). The node nearest to the selected optimal point is elected as the head node of the cluster in every region.

4.3 Routing Every sensor node of a region senses the data and transmits it to the head node of the region. The head node can transmit the data directly to the sink node if its distance from the sink node is less than d0. If the distance to the sink node is greater than d0, then it will find the head node of another region (an intermediate node) at the longest distance from it, provided the distance between the two nodes is not greater than d0. The following algorithm is used to route the data to the sink node. In the algorithm, SN refers to the sink node, and CH to a cluster head. Step 1: SN collects the information about the various CHs from the different regions. Step 2: For each CH, calculate the distance from all other CHs in the list,


remove those CHs having distance > d0, find the CH at the maximum distance, set that node as the intermediate route node for that particular CH, and repeat the process for the next CH.
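The next-hop selection in Step 2 can be sketched as follows, assuming each cluster head is represented by its (x, y) position; the function and variable names are illustrative, and the case where a head is within d0 of the sink (direct transmission) is assumed to be handled before this selection.

```python
import math


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def select_next_hops(cluster_heads, d0):
    """For every cluster head, pick the farthest other head that is still within d0.

    cluster_heads: dict mapping head id -> (x, y) position
    Returns a dict mapping head id -> chosen intermediate head id (or None).
    """
    next_hop = {}
    for ch_id, pos in cluster_heads.items():
        candidates = []
        for other_id, other_pos in cluster_heads.items():
            if other_id == ch_id:
                continue
            dist = distance(pos, other_pos)
            if dist <= d0:                      # discard heads farther than d0
                candidates.append((other_id, dist))
        if candidates:
            # farthest head within range becomes the intermediate route node
            next_hop[ch_id] = max(candidates, key=lambda c: c[1])[0]
        else:
            next_hop[ch_id] = None              # no reachable head; transmit directly to the sink
    return next_hop
```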

5 Simulation and Results We have simulated our protocol in MATLAB for different sizes of WSNs, for example 100 m * 100 m, 200 m * 200 m, 300 m * 300 m, and so on. We have compared the results of the proposed protocol with many other multi-hop cluster-based protocols and found that the proposed algorithm gives better performance. For our simulation, we have used the following parameters (Table 1). The simulation for a 300 m * 300 m WSN in which 300 nodes are deployed is shown in Fig. 2. In this simulation, we have placed a sink node at the central location (150, 150) and compared our protocol with multi-hop LEACH; the result of our simulation (number of live nodes in each round) is shown in Fig. 3. From Fig. 3, we can say that in our proposed protocol a greater number of live nodes are present in each round, so it is more stable. Even after 1200 rounds, more than 125 live nodes remain, compared with fewer than 45 live nodes for multi-hop LEACH. We have also placed the sink node at the (0, 0) and (0, 150) locations and found that the result of the simulation improves over the previous simulation. We have also compared our protocol with LEACH-C, SEP, HEED, MSECHP, RAM, LPCH-UDLPCH [12], EE-LEACH [13], and MH-LEACH [14] for different sizes of WSNs with different densities of sensor nodes and found that the proposed protocol produced superior results.

Table 1 Simulation parameters

Type of operation                               Energy utilization
Transmitting / Receiving                        Transmission: E_TX = 50 nJ/bit; Receiving: E_REC = 50 nJ/bit; E_elec = E_TX = E_REC = 50 nJ/bit
Data processing                                 E_DA = 5 nJ/bit
Transmit amplifier electronics:
  using free space model, if d_toBS < d0        E_fs = 10 pJ/bit/m^2
  if d_toBS > d0                                E_mp = 0.0013 pJ/bit/m^4


Fig. 2 Simulation of 300 m * 300 m WSN with 300 sensor nodes

Fig. 3 Simulation result—number of live nodes and number of rounds


6 Conclusion Finally, we conclude that MH-DSCEP gives consistently better performance for different sizes of WSNs with different densities. The algorithm dynamically decides the size of the virtual grid and uses an energy-efficient routing algorithm. The protocol does not restrict the sink node to the central location of the WSN, and MH-DSCEP can be used with multiple sink nodes.

References 1. Shepard, T.: A channel access scheme for large dense packet radio networks. In: Proceedings of the ACM SIGGCOMM, pp. 219–230 2. Heinzelman, W.R., et al.: Energy-efficient communication protocol for wireless microsensor networks. In: Proceedings of 33rd Annual Hawaii International Conference System Science 00, pp. 3005–3014 (2000) 3. Heinzelman, W.B., Chandrakasan, A.P., Balakrishnan, H.: An application-specific protocol architecture for wireless microsensor networks. IEEE Trans. Wirel. Commun. 1, 660–670 (2002) 4. Smaragdakis, G., Matta, I., Bestavros, A.: SEP: a stable election protocol for clustered heterogeneous wireless sensor networks. In: Second International Workshop Sensors Actor Network Protocol Applied (SANPA 2004), pp. 1–11 (2004) 5. Younis, O., Fahmy, S.: HEED: a hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks. Mob. Comput. IEEE Trans. 4, 366–379 (2004) 6. Raval, K.R., Modi, N.: MSECHP: more stable election of cluster head protocol for heterogeneous wireless sensor network. Adv. Intell. Syst. Comput. 508 (2017) 7. Zhu, C., Long, X., Han, G., Jiang, J., Zhang, S.: A virtual grid-based real-time data collection algorithm for industrial wireless sensor networks. Eurasip J. Wirel. Commun. Network. 2018(1) (2018) 8. Khan, A.W., Abdullah, A.H., Razzaque, M.A., Bangash, J.I., Altameem, A.: VGDD: A virtual grid based data dissemination scheme for wireless sensor networks with mobile sink. Int. J. Distrib. Sens. Netw. 2015(2015). https://doi.org/10.1155/2015/890348 9. Raval, K.R., Modi, N.: RAM: rotating angle method of clustering for heterogeneous-aware wireless sensor networks. In: Zhang, Y.D., Mandal, J., So-In, C., Thakur, N. (eds.) Smart Trends in Computing and Communications. Smart Innovation, Systems and Technologies, vol. 165. Springer, Singapore (2020) 10. Liu, G., Zhang, Y.: IMHRP: Improved multi-hop routing protocol for wireless sensor networks IMHRP: improved multi-hop routing protocol for wireless sensor networks (2017) 11. Farooq, M.O., Dogar, A.B., Shah, G.A.: MR-LEACH: Multi-hop routing with low energy adaptive clustering hierarchy. In: Proceedings—4th International Conference on Sensor Technologies and Applications, SENSORCOMM, pp. 262–268 (2010). https://doi.org/10.1109/ SENSORCOMM.2010.48 12. Khan, Y., et al.: LPCH and UDLPCH: location-aware routing techniques in WSNs. In: Proceedings of 2013 8th International Conference Broadband, Wireless Computing Communication Application BWCCA 2013, pp. 100–105 (2013). https://doi.org/10.1109/bwcca.2013.25


13. Arumugam, G.S., Ponnuchamy, T.: EE-LEACH: development of energy-efficient LEACH protocol for data gathering in WSN. EURASIP J. Wirel. Commun. Network. 2015(1) (2015). https://doi.org/10.1186/s13638-015-0306-5 14. Brand, H., Rego, S., Cardoso, R., Jr, J.C., Networks, C.: MH-LEACH : A Distributed Algorithm for Multi-Hop Communication in Wireless Sensor Networks, pp. 55–61 (2014)

Blockchain Framework for Social Media DRM Based on Secret Sharing M. Kripa, A. Nidhin Mahesh, R. Ramaguru, and P. P. Amritha

Abstract The huge adoption of the Internet into our daily lives generates at least a few MBs of data. Social media in the modern age has enabled every single user with a smartphone and Internet access to be a marketer, journalist, publisher and content creator. Ensuring the protection and copyright of social media data like images, videos and audio is extremely demanding. Copyright infringement happens easily, and protecting content for its originality and authenticity should be simplified. In this paper, we propose a blockchain framework with smart contracts to protect social media contents using IPFS, a modern decentralized file storage system, and a secret sharing scheme. Through its decentralization and immutability, this framework offers limitless opportunities for managing the copyright of content on decentralized social media.

M. Kripa (B) · A. Nidhin Mahesh · R. Ramaguru · P. P. Amritha TIFAC-CORE in Cyber Security, Amrita School of Engineering, Coimbatore, India Amrita Vishwa Vidyapeetham, Coimbatore, India e-mail: [email protected] A. Nidhin Mahesh e-mail: [email protected] R. Ramaguru e-mail: [email protected] P. P. Amritha e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_43

1 Introduction Developments in digital technology and the reach of the Internet into the hands of the common man have resulted in the generation of multimedia content from various digital platforms. Social networking sites have made the world small, with information from any corner of the world spreading in a few seconds. The social media platform also acts as a marketplace for advertising and reviews. Reports say that as of Jan. 2016 the


number of social media users was recorded at 2.307 billion people; that number increased to 3.196 billion users globally by Jan. 2018. The social networking market has become the fastest-growing digital market, with a huge share of mobile phone users, and accounts for USD 220 billion [1]. In general, most users are not aware that when they share or re-upload content, it amounts to copyright infringement. There are laws that govern copyright and sets of guidelines we should follow when we reuse copyrighted content, but due to the lack of awareness of copyright protection such practices continue on a large scale, which needs to be addressed [2]. Digital rights management (DRM) technologies were designed to provide authentic rights holders with a means to control the distribution of their work and to evaluate the use of content. In 2018, Sony announced the use of blockchain technology to make digital rights management more efficient [3]. Blockchain is one of the newest and most promising technologies of the present century and is foreseen to have a large impact on current applications and the Internet. It is a technology that uses a distributed ledger on multiple nodes without the need for a centralized system: a continuous chain of blocks that contain information, hosted on an open, distributed network of computers called nodes. It is more secure than traditional centralized databases and enhances transparency, auditability and accountability. A blockchain is a decentralized, distributed database that is used to record transactions across many computers through a consensus [4] mechanism, so that any recorded entry cannot be reversed. A block is usually a collection of transactions, where each block is linked to the previous block using a cryptographic hash. This makes the blockchain immutable and suitable for storing and managing the copyrights of social media contents. Blockchain has also crossed its application boundary from being a mere cryptocurrency platform: non-financial applications include healthcare, logistics and supply chain management, provenance management, energy management [5], e-governance and even social media platforms. Akasha [6] is a social networking platform based on Ethereum [7] and IPFS [8]. All.me is a digital ecosystem based on a social network with a unique reward method, marketplace and payment solution [1]. The users of these social networking applications are rewarded with 50% of the advertising revenue. The paper is organized as follows: innovations and experimentation related to this area are briefed in Sect. 2, Sect. 3 discusses the proposed blockchain framework along with its architecture and workflow, and Sect. 4 concludes the work.

2 Related Works Mehta et al. [9] propose a decentralized platform for P2P image sharing, developed on the Ethereum test network. This decentralized application supports perceptual hashes, and the Ethereum smart contract identifies and rejects altered images which are perceptually similar to images already available on the forum. The Ethereum smart contracts used in their system are publicly available and hence can automatically detect and reject similar images on a decentralized image sharing platform to


secure the copyrights of genuine image authors. The images on the network are saved by the InterPlanetary File System (IPFS) in a decentralized fashion. In [9], only phash (DCT hash), difference hash, average hash and wavelet hash are used. These algorithms are included in the ImageHash library in Python, which provides detailed documentation about them. So if someone uploads a distorted version of an original image to the marketplace, the smart contract calculates its perceptual hash as well as the Hamming distance of that hash to the hashes stored in the decentralized PERC HASH database, and upon comparison with a threshold it rejects such an image. Thus, unauthorized uploads are prevented, although there are also transaction fees associated with processing the transaction on the Ethereum network. Paper [10] proposes a new digital watermarking copyright management system that combines blockchain with digital watermarking, a perceptual hash function, quick response (QR) codes and IPFS. In this scheme, watermark information is securely stored using the blockchain, and timestamp authentication is provided for multiple watermarks or copyrights to confirm the creation order. A hash value based on the structural information of the image is generated using a perceptual hash function, so the watermark information can be confirmed without the original image. To improve the robustness and capacity of the watermarking, the watermark images are QR codes that contain the image hash and copyright information. Using IPFS, watermarked images are stored and distributed without a centralized server. This scheme makes digital watermarking technology more successful in the field of copyright protection. A notable limitation of this research is that copyright validation is restricted to media like images only.

3 Proposed System In this section, we propose a permissioned blockchain framework for DRM in decentralized social media systems like All.ME [1]. We conceptualize a decentralized system made up of multiple user peers and smart contracts to detect and report copyright infringement and to manage copyrights.

3.1 System Architecture The architecture of the system comprises a decentralized network for identity management, data transfer through a REST API to IPFS for file storage, and a blockchain to record the metadata of the files, as shown in Fig. 1. The components are detailed below:


Fig. 1 System architecture

• DApp is the interface the end-user uses. The DApp could be a web-based or mobile-based DApp, such as a content management system or a decentralized social media application.
• REST API is an interface between the DApp and the underlying proposed system comprising IPFS and blockchain. Any third party could use the exposed REST API to build their own DApp and leverage the benefits of the underlying system.
• Secret Sharing is a method of sharing a secret among multiple parties by splitting it into unrecognizable shares [11]. The secret can be reconstructed only by combining a threshold number of shares from different parties. A secret sharing scheme is defined as (t, k, n), where n is the total number of shares, k is the threshold number of shares required to reconstruct the secret, and t is the number of mandatory shares required to create the secret. In this system, a (2, 3, 5) scheme is used to provide control to the owner and privacy.
• Robust Hashing is a method of hashing an image which is resistant to modification, rotation and colour alteration [12, 13].
• Smart Contracts are self-executing programs that directly and automatically regulate the transfer of digital assets between two exchanging parties under pre-agreed conditions. A smart contract makes up a contractual clause between two parties [14] in a blockchain system; it verifies the contract automatically and follows the agreed terms.
• IPFS is a peer-to-peer distributed storage system used to store and share digital contents [8]. IPFS uses the hash of the contents as their address, so each uploaded file is uniquely identified by its file hash. The content uploaded through the DApp is stored in IPFS, and the hash value is referenced in the blockchain, which helps in optimizing the size of each block.
• Blockchain provides decentralized computation and a distributed database to store the data immutably. The transactions are stored after going through a suitable consensus process [4]. In this proposed framework, we suggest having a public chain and a private chain.
• Network Layer is the bottom layer, which is responsible for peer-to-peer communication. We propose to use Libp2p, a modular and extensible networking stack for peer-to-peer applications [15]. The IPFS system is built on Libp2p.

3.2 Identity Management Each user and each multimedia content item in the application domain should be identified through a unique identity. Users are identified in the proposed system using a unique 256-bit hash user_id. Multimedia contents are uniquely identified using a robust hash and, when stored in IPFS, also receive an IPFS hash used for retrieval of the content. To prove and track the ownership of content added to the social media platform, an ownership identity (owner_id) is generated, defined as a function of the user_id, the IPFS hash and the robust hash. The function could be a simple hash function or a more complex function involving encoding schemes:

Function(user_id + hash_ipfs + hash_robust) → owner_id
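As an illustration of this identity derivation, the sketch below instantiates the function with SHA-256 over the simple concatenation; this is only one possible choice for the "simple hash function" mentioned above.

```python
import hashlib


def derive_owner_id(user_id: str, ipfs_hash: str, robust_hash: str) -> str:
    """owner_id = Function(user_id + hash_ipfs + hash_robust); SHA-256 is an illustrative choice."""
    material = (user_id + ipfs_hash + robust_hash).encode("utf-8")
    return hashlib.sha256(material).hexdigest()   # 256-bit identifier in hex
```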

3.3 Workflow Whenever a new multimedia content item is uploaded by a user in the decentralized social media application, a transaction is created with the information about hash_robust and user_id, which is publicly available. A private transaction is created with the


Fig. 2 Block contents of image addition in private and public chain

information on the hash_ipfs values, which are accessible only by the user's smart contract. Figure 2 shows the structure of the proposed public and private chains with their contents. Communication between the private blockchain and the public blockchain is enabled through the smart contract deployed on the blockchain.

3.3.1 Image Detection

Let us say an image is uploaded through the DApp. To store the image securely, we use the (2, 3, 5) secret sharing scheme, which splits the given image into five different unrecognizable shares. These shares are then stored in the IPFS system, from which they can be retrieved by their content hashes (hash_ipfs). In parallel, the system generates the hash_robust of the uploaded original image. With this hash_robust value as input, the image detection and reporting smart contract is invoked. This smart contract checks the blockchain transactions for the hash_robust. If the uploaded image is a genuine image (a new image not already present in any blockchain transaction), then a transaction is created in the blockchain. Details like user_id, hash_robust and owner_id are stored in the public chain, whereas details like hash_ipfs are stored in the private chain, enabling access only by the corresponding user through the smart contract. This hash_robust is then used to uniquely identify the media image: as the properties of each image are unique, the generated hash_robust will be unique. On the other hand, if the uploaded image infringes copyright or is a modified image (with rotation, a watermark or colour changes), then the calculated hash_robust will be the same as that of the original image already stored in the blockchain, and the owner_id is returned; the contract then checks the blockchain for the list of valid users who have obtained copyright permission.
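For illustration, the following sketch mimics the robust-hash check using the ImageHash library's perceptual hash (mentioned in Sect. 2) with a Hamming-distance threshold. The in-memory registry and the threshold value stand in for the on-chain lookup that the smart contract would perform; they are assumptions, not part of the proposed system.

```python
from PIL import Image
import imagehash   # assumption: the ImageHash library referenced in Sect. 2

registry = {}   # hash_robust (hex string) -> owner_id; stands in for the public chain


def check_and_register(image_path, owner_id, threshold=8):
    """Return the existing owner if a perceptually similar image is already registered,
    otherwise register this image under owner_id and return None."""
    h = imagehash.phash(Image.open(image_path))
    for stored_hex, existing_owner in registry.items():
        if h - imagehash.hex_to_hash(stored_hex) <= threshold:   # Hamming distance
            return existing_owner          # copyright permission must be checked for this owner
    registry[str(h)] = owner_id            # genuine new image: record it
    return None
```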


Fig. 3 Block contents of copyright transaction

3.3.2 Copyright Management

If a user wishes to reuse an image by obtaining copyright permission from the respective owner, the user can select the appropriate image and request permission through the smart contract. The DApp finds the owner of the image through the hash_robust and invokes the copyright management smart contract, which sends an alert notification to the owner of the image. If the owner approves the request, the smart contract refers to the blockchain, issues the hash_ipfs to the requester and records the transaction in the public blockchain for future reference. Figure 3 shows the block structure of the copyright transaction. If the request is rejected, the requester is informed and a transaction is stored in the public blockchain.

3.4 Applications The proposed framework can be extended to different application scenarios, such as copyright management for other media contents like audio, video, text and files. The inclusion of computer vision and deep learning models can help a great deal in applications like IoT and the Internet of Vehicles (IoV) [16].

4 Conclusion We have proposed an IPFS-based blockchain framework with smart contracts and a secret sharing scheme to protect social media copyright. Through its decentralization and immutability, this framework offers limitless opportunities for managing copyright on decentralized social media. When implemented on traditional social media platforms like Facebook, Twitter or Instagram, this system could suffer latency and a lower transaction rate due to the centralized storage nature of these platforms, in addition to the high-bandwidth connection needed for blockchain transactions. The proposed framework could provide better performance for decentralized social media platforms or blockchain-based social media platforms.


References 1. All.ME Whitepaper. Available: https://allmestatic.com/mepaytoken/all-me_whitepaper.pdf 2. Copyright and Social Media. Available: https://medium.com/swlh/copyright-and-socialmedia-67338c4c72f5 3. DRM and Blockchain: A Solution to Protect Copyrights in the Digital World? https://blog.jipel. law.nyu.edu/2019/01/drm-and-blockchain-a-solution-to-protect-copyrights-in-the-digitalworld/ 4. Sankar, L.S., Sindhu, M., Sethumadhavan, M.: Survey of consensus protocols on blockchain applications. In: 4th International Conference on Advanced Computing and Communication Systems (ICACCS), pp. 1–5. Coimbatore (2017) https://doi.org/10.1109/ICACCS.2017. 8014672 5. Mahesh, A.N., Shibu, N.S., Balamurugan, S.: Conceptualizing Blockchain based energy market for self sustainable community. In Proceedings of the 2nd Workshop on Blockchain-enabled Networked Sensor, pp. 1–7 (2019) 6. Akasha. Available https://akasha.world/ 7. Ethereum Yellow Paper. Available in: https://ethereum.github.io/yellowpaper/paper.pdf 8. Interplanetary File System (IPFS) White Paper. Available in: https://ipfs.io/ipfs/ QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf 9. Mehta, R., Kapoor, N., Sourav, S., Shorey, R.: Decentralised image sharing and copyright protection using blockchain and perceptual hashes. In: International Conference on Communication Systems & Networks, pp. 1–6. IEEE (2019) 10. Meng, Z., Morizumi, T., Miyata, S., Kinoshita, H.: Design scheme of copyright management system based on digital watermarking and blockchain. In: 42nd Annual Computer Software and Applications Conference (COMPSAC), vol. 2, pp. 359–364. IEEE (2018) 11. Chum, C.S., Fine, B., Rosenberger, G., Zhang, X.: A proposed alternative to the Shamir secret sharing scheme. Contemp. Math. 582, 47–50 (2012) 12. Fawad, A., Siyal, M.Y., Abbas, V.U.: A secure and robust hash-based scheme for image authentication. Sig. Process. 90(5), 1456–1470 (2010) 13. Ahmed, F., Siyal, M.Y.: A secure and robust hashing scheme for image authentication. In: 5th International Conference on Information Communications, pp. 705–709. Signal Processing, Bangkok (2005) 14. Smart Contracts. Available in: http://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/ CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html 15. Libp2p. Available: https://github.com/libp2p/libp2p 16. Ramaguru R., Sindhu M., Sethumadhavan M.: Blockchain for the internet of vehicles , advances in computing and data sciences. In: ICACDS 2019. Communications in Computer and Information Science, vol. 1045. Springer (2019)

An Overview of Blockchain Consensus and Vulnerability Gajala Praveen, Mayank Anand, Piyush Kumar Singh, and Prabhat Ranjan

Abstract Bitcoin, a secure and transparent peer-to-peer payment system, is the origin of blockchain technology. Blockchain has the capability to store data records in a secure and immutable way without centralized control. It achieves this goal through a novel decentralized consensus that provides a platform in a trustless environment. The consensus mechanism plays an important role in maintaining consistency and security in a blockchain platform. In this paper, we discuss the Byzantine Generals Problem and survey several popular consensus mechanisms in current blockchain networks. A comparison table of blockchain consensus mechanisms is presented on the basis of some parameters; this table can be helpful for understanding the advantages and disadvantages of each consensus mechanism and its usability in a blockchain platform. Vulnerabilities such as the 51% attack, which is common in permissionless blockchains, are discussed, and some other failures are also mentioned in this paper.

1 Introduction Blockchain is a popular term in the real world. It became a popular technology after its use in cryptocurrencies, and it is preferred in many applications because of its decentralized nature. Decentralization prevents the network from being corrupted by a single party, and it requires a reliable consensus protocol for decision-making.


Blockchain is a decentralized system; i.e., no single node has complete system information. Each node makes decisions about its own conduct, and the resulting system behaviour is the aggregate of these individual decisions [1]. This is much like human society, where groups of people make their own decisions (within constraints) while the decisions of others influence the group as a whole. A distributed system, in contrast, is concerned with computation shared across various nodes, but its decisions may still be centralized and may use complete domain information. For example, in recent times most leading companies use distributed processing to achieve fault tolerance and high efficiency.

2 Blockchain Classification Blockchains can be categorized into two types on the basis of the mode of participation [2]: permissionless (public) and permissioned (private).

2.1 Permissionless Blockchains A permissionless blockchain is decentralized, so there is no master–slave relationship. Bitcoin [3] and Ethereum [4] are good examples of permissionless blockchains, which are open as well as decentralized. Openness means that any peer can read and write content at any time and can join or leave the network freely.

2.2 Permissioned Blockchains In a permissioned blockchain, a central authority decides who can participate in read–write operations. It is well suited to exchanging data and records among the members of an organization. Although it is less decentralized, the permissioned blockchain is very popular in the business world: companies now recognize the advantages of using blockchains to extend business systems, especially to provide trust, transparency, and efficiency. The Hyperledger [5] foundation, hosted by the Linux Foundation, is the leading open-source initiative for permissioned blockchains.


3 Blockchain Structure A blockchain is a growing chain of blocks. Each block is connected to the previous block through a cryptographic hash. Each block contains data and metadata, and the capacity of a block is fixed. Before a block is added to the blockchain, it is validated and verified. The structural components of a blockchain and the block header of a block are discussed in this section.

3.1 Block A block is a fixed-size data structure that contains a block header and a set of transactions. The first block in the blockchain is called the genesis block.

3.2 Block Header The block header is made up of metadata (data about data). It contains mainly six fields [6]:

• Version (4 bytes)
• Previous Block Hash (32 bytes)
• Timestamp (4 bytes)
• Difficulty Target (4 bytes)
• Nonce (4 bytes)
• Merkle Root (32 bytes).

Version (4 bytes): Version indicates the set of block validation rules. Previous Block Hash (32 bytes): The previous block hash is the hash of the block that the current block succeeds in the chain. It preserves the integrity of the blockchain. Timestamp (4 bytes): The time at which the block is created. Difficulty Target (4 bytes): Difficulty is a measure of how hard the puzzle is to solve. The target is calculated from the difficulty and is inversely proportional to it. The difficulty is monitored by the developers of the network and defines how difficult it is to add a block of transactions to the blockchain. Nonce (4 bytes): The nonce is a pseudorandom number used to solve the bitcoin puzzle. The literal meaning of nonce is "used only once"; i.e., a nonce used once for block generation cannot be used again. It is an integer (32 or 64 bits) that is used in the mining process (explained in the section on consensus mechanisms). Miners use the nonce to obtain a block hash below the target value. If the first nonce (starting at 0) does not work, the miner keeps incrementing it and rehashing the block header. The nonce value at which the block hash falls below the target value is accepted, and that nonce is added to the block header.
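For concreteness, the snippet below packs six such fields into Bitcoin's 80-byte header layout and hashes it. The field values are placeholders; this is only an illustration of the structure, not code from any of the surveyed systems.

```python
import hashlib
import struct
import time

def pack_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # Bitcoin's 80-byte header layout: version (4) + previous block hash (32)
    # + Merkle root (32) + timestamp (4) + difficulty target "bits" (4) + nonce (4).
    return struct.pack("<L32s32sLLL", version, prev_hash, merkle_root,
                       timestamp, bits, nonce)

header = pack_header(version=2, prev_hash=b"\x00" * 32, merkle_root=b"\x11" * 32,
                     timestamp=int(time.time()), bits=0x1D00FFFF, nonce=0)
assert len(header) == 80
# A block is identified by the double SHA-256 hash of its header.
block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(block_hash.hex())
```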


Merkle Tree: A Merkle tree is a method to demonstrate that something is in a set without storing the whole set. A Merkle tree is a binary tree representation of the transactions [7]. Each leaf node holds the hash of an individual transaction, and each non-leaf node holds the hash of its two children. The tree is thus built from the leaves to the root in a bottom-up manner, so the root is a hash over all the transactions; it is also called the root digest. Keeping only the Merkle root in the block header means the information of all transactions is summarized in a small, fixed amount of space, which is essential for efficient validation.
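A minimal sketch of this bottom-up construction (illustrative only; when a level has an odd number of nodes, the last hash is duplicated, as Bitcoin does):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Hash every transaction (leaves), then hash pairs level by level up to the root."""
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: duplicate the last node
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"A pays B 1 coin", b"B pays C 2 coins", b"C pays D 3 coins"]
print(merkle_root(txs).hex())
```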

3.3 Transaction A transaction is an atomic task performed by network participants using a set of protocols. In bitcoin, transactions are individual money transfers. A blockchain is a chain of blocks of digital information about such transactions, and transactions are exchanged among the peers of the network.

4 Byzantine Generals Problem To work in a distributed trustless environment, a common problem called the Byzantine Generals Problem (BGP) [8] has to be solved so that the network can reach consensus. In 1982, Leslie Lamport et al. stated the Byzantine Generals Problem as a generalized form of the two generals problem. The authors presented it as a story: the Byzantine generals did not actually have a problem achieving decentralized consensus; the story is simply an effective way of helping people understand the problem. BGP describes a situation in which a group of generals wants to plan an attack on a city, but the generals are far away from each other and depend on messengers for communication. To achieve their goal, they must agree on a coordinated time of attack. At the same time, some messengers or even generals could be traitors. The problem is how the loyal generals can reach a common decision despite traitorous messengers supplying wrong information. The same issue arises in any trustless network such as a blockchain. If a network can handle such circumstances, it is said to be Byzantine fault tolerant (BFT). The consensus mechanism gives a trustless network a way of handling these circumstances.


5 Consensus Mechanisms Consensus is a mechanism or agreement used to establish trust among users. The consensus of a blockchain is realized as a codified set of rules that everyone agrees on, and these rules are entirely self-enforced. In a blockchain, the consensus algorithm ensures that the same copy of the ledger is distributed among all participants. Some properties of a good consensus mechanism are safety, inclusiveness, participation, and egalitarianism. There are also consequences when a consensus mechanism is not good, such as blockchain forks, poor performance, and undesired outcomes. Much work on consensus has already been done, but there is still scope for improvement in achieving these consensus properties. In this section, some consensus mechanisms are presented in brief.

5.1 Proof of Work (PoW) Proof of work was inspired by Hashcash [9], which was used to prevent email spam. PoW was used in bitcoin by Satoshi Nakamoto in 2008 [3]. PoW is based on solving a mathematical hash puzzle and comparing the hash value with a specified target. It is the most participatory scheme, but the probability of being selected as a miner is higher for nodes with more computing power. PoW is an iterative process in which each node tries to propose a block, repeatedly computing the hash of the block header while incrementing the nonce value each time. If a hash value is less than the target, the PoW process stops and the miner is rewarded. If [ Hash(Block) < Target ], then PoW is completed. When one node achieves the specified target, the block is broadcast to the other nodes, and its validity is checked by most of the nodes.
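A toy version of this mining loop is sketched below; the header bytes and target are assumptions chosen so the demo terminates quickly, and real miners double-hash a full 80-byte header against a far harder target.

```python
import hashlib

def mine(header_prefix: bytes, target: int):
    """Keep incrementing the nonce and hashing until Hash(Block) < Target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header_prefix + nonce.to_bytes(4, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
        nonce += 1

# An intentionally easy target so the demo finishes in a fraction of a second;
# real networks use a far smaller target (i.e., much higher difficulty).
target = 1 << 240
nonce, digest = mine(b"toy-block-header-without-nonce", target)
print(f"nonce = {nonce}, hash = {digest.hex()}")
```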

5.2 Proof of Stake (PoS) Proof of stake is based on the stake that a node holds rather than its computing power. In 2012, the Peercoin cryptocurrency was developed using this consensus [10]. Energy efficiency is its main advantage. In a proof-of-stake framework, the choice of the creator of the next block depends on how long the node has been holding the specific currency, i.e., on coinage. Coinage is the product of the amount of coin staked and the number of days it has been held. If the coinage is high, then the target is also high, so achieving the target is easy. If [ Hash(Block) < Target(Coinage) ], then PoS is completed. To prevent centralization, randomization is always added to the selection of the next block creator.
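The rule above can be read as "the target a staker must beat scales with coinage." The sketch below mirrors that reading with assumed names and an assumed base target; it is not Peercoin's actual kernel protocol.

```python
import hashlib

BASE_TARGET = 1 << 224          # assumed base target, for this illustration only

def stake_check(block_data: bytes, coins_staked: float, days_held: float) -> bool:
    """Coinage = coins * days; a larger coinage yields a larger (easier) target."""
    coinage = coins_staked * days_held
    target = int(BASE_TARGET * coinage)
    digest = hashlib.sha256(block_data).digest()
    return int.from_bytes(digest, "big") < target   # success chance grows with coinage

# A node that staked 1000 coins for 30 days has 10x the chance of one staking 100 coins.
print(stake_check(b"candidate-block", coins_staked=1000, days_held=30))
```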


5.3 Delegated Proof of Stake (DPoS) Delegated proof of stake was proposed by D. Larimer in 2014 to make consensus more efficient and more democratic or participatory [11]. It is used in Bitshares [12]. Here, every node that participates in voting is a candidate node, and the nodes that generate and verify blocks are delegate nodes. Every candidate node can vote for a desired number of delegates; it can vote in support or abstain, but it cannot vote against a delegate. If a node receives more than 50% of the votes, it can be chosen as a delegate. A delegate node has to generate a block in its turn, which is verified by the other delegates; if it fails to generate the block, another delegate gets the turn.
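A small sketch of this approval rule (node names and vote counts are assumed):

```python
def elect_delegates(approval_votes: dict, total_voters: int) -> list:
    """A candidate becomes a delegate only with support from more than half the voters.
    Votes are approvals or abstentions; there are no 'against' votes in plain DPoS."""
    return [node for node, votes in approval_votes.items() if votes > total_voters / 2]

votes = {"node-A": 7, "node-B": 4, "node-C": 9}
print(elect_delegates(votes, total_voters=12))   # ['node-A', 'node-C']
```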

5.4 Enhancement in Delegated Proof of Stake Delegated Proof of Stake with Downgrade (DDPoS): Fan Yang et al. proposed a consensus that combines the advantages of both PoW and DPoS. PoW is used in the first phase to select some nodes, and DPoS is then used in the second phase to choose delegates. It also provides a downgrading mechanism for when a malicious node is identified, based on three node states: normal, error, and good [13]. Improvement of the DPoS Consensus Mechanism in Blockchain Based on Vague Sets: Guangxia Xu et al. described a mechanism for improving delegated proof of stake using vague set theory [14]. Its voting model differs from DPoS in that it also gives nodes the power to vote against a delegate. A fuzzy membership rule [15] is used to select delegate nodes: the fuzzy membership value of every node is calculated separately, and the nodes with the greater values are chosen as delegates.

5.5 Some Other Consensus We now present some other consensus mechanisms in brief. These have been implemented in various blockchain platforms or cryptocurrencies. Proof of Capacity (PoC): In PoC, the miner who stores more plots gets the chance to add the next block; it is based on the storage capacity of the miner [16]. It is energy-efficient, but the block generation time is high. Cryptocurrencies like SpaceMint are implemented using it. Proof of Elapsed Time (PoET): In PoET, the miner is selected on the basis of a random wait time; the right to generate the next block goes to the node whose waiting time expires first. This is verified by Intel's Software Guard Extensions (SGX). It is implemented in Hyperledger Sawtooth [17].


Leased Proof of Stake (LPoS): This is an enhanced version of PoS that reduces the centralization problem. Nodes with lower balances can lease wealth from nodes with higher balances and thereby improve their chance of mining [18]. Proof of Activity (PoA): In PoA, a block without transactions is first created through a hash competition. Transactions are then added to this block, and finally some nodes are chosen on the basis of stake to sign and verify the block [16]. Proof of Burn (PoB): In PoB, miners have to burn their coins, i.e., send them to an irretrievable address; whoever burns more coins gets priority to add the next block [19]. Proof of Importance (PoI): In PoI, a miner gets the chance to add the next block on the basis of its importance, i.e., a reputation factor that makes a node more important [20]. It is implemented in the NEM cryptocurrency. Practical Byzantine Fault Tolerance (PBFT): PBFT uses two types of nodes. The leader node collects transactions, creates a block, and broadcasts it to the backup nodes; the backup nodes also create a block of valid transactions and calculate its hash. If more than two-thirds of the nodes reply with the same hash, the block is added to the chain [21]. Delegated Byzantine Fault Tolerance (DBFT): In DBFT, the leader node is known as the speaker and the backup nodes are known as delegates. The speaker is elected randomly from the delegates. It is used in the NEO cryptocurrency platform. Ripple: This is a decentralized variant of PBFT in which anyone can participate in consensus; it also reduces network overhead by using a unique node list (UNL) [22]. Tendermint: This consensus works like PBFT, but the backup nodes are selected on the basis of stake and the leader node is elected among them [23]. Raft: Raft works like PBFT but is crash tolerant rather than Byzantine fault tolerant, and leader election is an important part of it. It is used in the Quorum and Corda blockchain platforms [24]. Tangle: Here, each transaction itself is treated as a block and must verify two other transactions and solve a small hash problem in very little time before being approved and added to the ledger [25]. It is mostly used in IoT applications.

6 Comparison Among Consensus A brief comparison is shown in Table 1. Many papers present comparative analyses of consensus mechanisms in terms of selected properties; the table below summarizes several of them [6, 26, 27]. The comparison in Table 1 is made on the basis of the following parameters: fault tolerance, processing speed, energy cost, decentralization, accessibility, and application. PoW is permissionless in nature, but its very slow processing speed and energy inefficiency are its limitations. Hence, consensus research has moved in the direction of making consensus energy-efficient. The idea of PoS solved this problem, but its processing speed still needs improvement, and it also pushes toward centralization, the opposite of a core blockchain property. After that, research work shifted to permissioned consensus.


Table 1 Comparison based on some parameters of consensus mechanisms [6, 26, 27]

| Consensus  | Fault tolerance | Processing speed | Energy cost | Decentralization | Accessibility  | Application          |
|------------|-----------------|------------------|-------------|------------------|----------------|----------------------|
| PoW        | 50%             | Low              | High        | High             | Permissionless | Bitcoin              |
| PoS        | 50%             | Low              | Medium      | High             | Permissionless | Peercoin             |
| DPoS       | 50%             | High             | Medium      | Medium           | Permissionless | Bitshare             |
| PBFT       | 33%             | High             | Very low    | Medium           | Permissioned   | Hyperledger Fabric   |
| Ripple     | 20%             | High             | Very low    | Medium           | Permissioned   | RippleNet            |
| Tendermint | 33%             | High             | Very low    | High             | Permissioned   | BigchainDB           |
| Raft       | 50%             | High             | Very low    | Medium           | Permissioned   | Quorum, Corda        |
| PoC        | N/A             | Low              | Low         | High             | Permissionless | Spacemint            |
| PoB        | 25%             | High             | Very low    | Low              | Permissionless | Slimcoin             |
| PoET       | N/A             | High             | Very low    | Low              | Permissioned   | Hyperledger Sawtooth |
| PoI        | 50%             | High             | Very low    | Low              | Permissionless | NEM                  |
| PoA        | 50%             | High             | Very low    | Low              | Permissionless | NEM                  |
| Tangle     | 33%             | High             | Very low    | Medium           | Permissionless | IOTA                 |

Permissioned consensus is fast and energy-efficient, but the centralization problem is still involved. A democratic voting process can be performed to make consensus more decentralized and faster.

7 Vulnerability on Blockchain Blockchain consensus solves the BGP, but there are still different ways in which the network can fail or in which nodes can behave maliciously. Some vulnerabilities are listed here in terms of attacks. 51% Attack: If any node in a blockchain network controls more than 50% of the computing power, the original chain can be manipulated by that miner, and double spending becomes possible [7]. This attack is most likely in monetary networks. Permissionless blockchain consensus mostly faces this type of attack, and it is especially well known in the PoW consensus algorithm. To avoid this attack, the difficulty is adjusted according to the scenario [28]. DDOS Attack: A distributed denial-of-service (DDOS) attack on a blockchain network is an attack in which a malicious party tries to make a system asset inaccessible to its clients by flooding the system with an enormous number of requests, trying to overburden the framework. Due to network and bandwidth limitations, blockchains also suffer from this, and a DDOS attack is much harder to handle. With regard to blockchains, this can occur when a large number of transactions come from the same node or pool of nodes in order to


keep miners busy with this work, which takes time away from their other work. To avoid this attack, a time bound is set, and priority is given to transactions that come from different sources. Any network communication that depends on bandwidth is affected by this attack; it can be mitigated by increasing bandwidth, which provides limited additional scalability to the network. Sybil Attack: In a Sybil attack, a single entity creates multiple fake accounts that appear to others as genuine regular accounts but are all controlled by that entity, which tries to manipulate the network for profit. This attack is most common in online voting. Sybil attacks can mislead network participants, reduce the weightage of honest nodes, and even prevent new genuine nodes from joining the network because the number of network participants is limited; this situation is called an eclipse attack. Permissionless blockchains mostly face this attack, and a permissioned blockchain is a solution to it.

8 Conclusion In this paper, we have surveyed different consensus mechanisms on the basis of selected parameters. There is also the possibility of improving a consensus mechanism by considering its application scenario and the attacks it faces. In a blockchain network, the selection of the miner must be sound to avoid failures. In real-life applications of blockchain, knowledge of consensus mechanisms helps in selecting a suitable blockchain platform.

References 1. What is the difference between decentralized and distributed systems?: https://medium. com/distributed-economy/what-is-the-difference-between-decentralized-and-distributedsystems-f4190a5c6462. Accessed 28 Feb 2020 2. Wust, K., Gervais, A.: Do You Need a Blockchain (2017) 3. Nakamoto, S.: Bitcoin: A Peer-to-Peer Electronic Cash System (2008) 4. Wood, G.: Ethereum: a Secure Decentralised Generalised Transaction Ledger Ethereum, Project Yellow Paper, 151 (2014) 5. Hyperledger: https://www.hyperledger.org/. Accessed 28 Feb 2020 6. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An Overview of Blockchain Technology: Architecture, Consensus,and Future Trends. IEEE (2017) 7. Tschorsch, F., Scheuermann, B.: Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies. IEEE Communications Surveys Tutorials (2016) 8. Lamport, L., Shostak, R., Sri, P.M.: The Byzantine Generals Problem (1982) 9. Haber, S., Stornetta, W.S.: How to time-stamp a digital document. J. Cryptogr. 3(2), 99–111 (1991) 10. Sunny, K., Scott, N.: PPCoin: Peer-to-Peer Crypto-Currency with Proof of Stake (2012) 11. Larimer, D.: Delegated Proof of Stake (2014) 12. Bitshares-your share in the decentralized exchange [online]. Available: https://bitshares.org/ 13. Yang, F., Zhou, W., Wu, Q., Long, R., Xiong, N.N.: Delegated Proof of Stake With Downgrade: A Secure and Efficient Blockchain Consensus Algorithm With Downgrade Mechanism. IEEE Access (2017)


14. Xu, G., Liu, Y., Khan, P.W.: Improvement of the DPoS Consensus Mechanism in Blockchain Based on Vague Sets, vol. 14(8). IEEE (2019) 15. Liu, Y., Wang, G., Feng, L.: A general model for transforming vague sets into fuzzy sets. Trans. Comput. Sci. II, 133–144 (2008) 16. Ismail L., Materwala H.: A Review of Blockchain Architecture and Consensus Protocols: Use Cases, Challenges, and Solutions (2019). https://doi.org/10.3390/sym11101198, 17. Hyperledger Sawtooth. https://www.hyperledger.org/projects/sawtooth. Accessed 28 Feb 2020 18. Leased Proof of Stake. https://docs.wavesplatform.com/platform-features/leased-proof-ofstake-lpos.html. Accessed 28 Feb 2020 19. Debus, J.: Consensus Methods in Blockchain Systems. Frankfurt School of Finance and Management, Blockchain Center (2017) 20. Proof of Importance https://nem.io/technology/. Accessed 28 Feb 2020 21. Castro, M., Liskov, B.: Practical Byzantine Fault Tolerance (1999) 22. Schwartz, D., Youngs, N., Britto, A.: The Ripple Protocol Consensus Algorithm. Ripple Labs Inc. White Paper, vol. 5 (2014) 23. Kwon, J.: Tendermint: Consensus Without Mining (2014). http://tendermint.com/docs/ tendermint.pdf 24. Ongaro, D., Ousterhout, J.K.: In search of an understandable consensus algorithm. In: Proceedings of USENIX Annual, pp. 305–319 (2015) 25. Popov, S.: The Tangle (2016) 26. Zhang, S., Lee, J.H.: Analysis of the Main Consensus Protocols of Blockchain. ICT Express (2019). https://doi.org/10.1016/j.icte.2019.08.001 27. Salimitari, M., Chatterjee, M.: A Survey on Consensus Protocols in Blockchain for IoT Networks (2019) 28. Sayeed, S., Marco-Gisbert, H.: Assessing Blockchain Consensus and Security Mechanisms against the 51% Attack (2019). https://doi.org/10.3390/app9091788

Statistical Analysis of Stress Levels in Students Pursuing Professional Courses Harish H. Kenchannavar , Shrivatsa D. Perur , U. P. Kulkarni, and Rajeshwari Hegde

Abstract In this competitive era, students are undergoing a lot of stress due to societal influence, financial status, and the academic environment. This is leading to many psychological disorders such as depression and anxiety. One student commits suicide every hour in India, and in most of these cases, stress and pressure are the main reasons for the deadly attempt. High levels of stress hinder the academic performance of students. It is necessary to find out the stress level of students and guide them through different techniques that help them reduce it. There are many ways to investigate the level of stress among students: techniques like psychological questionnaires and physiological measures such as blood pressure, salivary alpha amylase, and vagal tone can be considered. The perceived stress scale (PSS) is one of the standard psychological questionnaire methods used to analyze the stress level of an individual. It measures the degree to which situations in one's life are appraised as stressful. The scale includes a series of general queries about the present level of stress experienced and is free of content specific to any subpopulation group. The PSS is used to categorize subjects as less stressed, moderately stressed, or highly stressed. Responses from 486 engineering students were collected to investigate their stress levels. The data thus obtained was validated using the statistical tool ANOVA; an ANOVA test is a way to find out whether experimental results are significant. The gathered data was validated for statistically significant groups and


used for further analysis. The results from the PSS analysis showed that the female students are less stressed compared to the male students. The results can further be incorporated into an intelligent system which could detect the level of stress and suggest a method or technique to the subject so as to reduce it.

1 Introduction Around 94% of people in the world feel that they are stressed. There are diverse reasons for this, such as financial status, societal status, work pressure, and health issues. In India, around 82% of the total population is noted to be stressed. The study also reveals that people aged 16–24 in India are much more stressed compared to other countries, as shown in Fig. 1. People are stressed when they face a scenario that makes them feel pressurized and hinders them from managing it effectively [2]. Exposure to situations that stress an individual usually relates to a higher risk of physical and psychological impairment [3]. It is an important point of reference in health studies related to each individual's general health status and to different illnesses, such as mental disorders, cancer, cardiovascular disease, drug misuse, and chronic diseases. From Fig. 1, it can be seen that people in the age group of 16–24 years are more stressed compared to other age groups. This shows that students who are studying in colleges are highly stressed. As per the research study, students are stressed due to a preconception that college life is all fun and no studies; as the days pass, students find it difficult to cope with their studies, which leads to depression.

Fig. 1 Statistical survey of the number of people suffering from stress in India [1]


Fig. 2 Causes of stress in students [2]

The stress level of students who are perusing professional courses are more compared to the students who are studying degree courses. This is due to continuos evaluation of their academic performance on a day-to-day basis, mandatory participation in extra and cocurricular activities, etc. The increased level of stress affects their academic performance. They are also influenced by external factors such as parental pressure, teacher’s pressure, physical environment, and relationship with others, and internal factors such as psychological changes, attitudes toward others, feeling of anger, fear, and worry. This in turn leads to stress and finally depression which provokes thought of suicidal attempts. Figure 2 shows the causes of negative emotions leading to stress and positive emotions such as happiness and self-confidence which help the students to build resilience. There is a need to find a way that helps the students to control their negative emotions and the stress level in order to excel in their academics. There is a need to have a stress analysis of the students of professional course to have a critical understanding of the cognitive behavior of the students. In explicit, the stress-level analysis is critical for understanding, preventing, and treating the students of the professional degree course. In that respect,different criteria can be applied to analyze stress. Stress levels can be analyzed through ways like 1. Psychological questionnaire: The psychological questionnaire method is one way to analyze stress with the help of predefined questionnaire. The questionnaire involves general queries rather than being specific to any personal subgroup. 2. Physiological measures: Another way through which stress can be measured is the physiological parameters like blood pressure, vagal tone, and salivary amylase. Studies have shown that there are drastic changes in the physiological behavior of an individual when in stress. One of the foremost wide disseminated strategies for measuring psychological stress is the perceived stress scale. This scale produces a worldwide score for stress as perceived that imbibes general queries instead of concentrating on particular incidents. The scale may well be helpful to match stress recognition in numerous countries. Using this scale, subjects are asked to gauge the earlier month for self-analysis. The scores obtained by the scale are considered to categorize the subjects as less stressed, moderately stressed, or highly stressed [3].


PSS is a reliable tool to analyze the stress level of an individual. There are different versions of the PSS, such as the PSS-4, PSS-10, and PSS-14. The Cronbach's alpha values observed for the PSS-4 were only marginally acceptable. This may be attributed to the PSS-4 including fewer items than the PSS-14 and PSS-10, since Cronbach's alpha tends to grow with the number of items in an instrument. Thus, selecting the PSS-10 would be more apt than the PSS-4. The test–retest reliability of the PSS has been assessed in different scenarios; however, in a few of these, Pearson's or Spearman's correlation coefficient was used in the test, and there is a need for further assessment of the test–retest reliability of the PSS. In terms of interval, the PSS showed acceptable test–retest reliability when its initial and subsequent administrations were separated by between two days and four weeks. The authors opined that a systematic, longitudinal study of changes in PSS scores is required to clarify this further [4]. The PSS-4 scale has a clear advantage in terms of the time required to complete it and its ease of use, as it can be completed quickly. The grounds for selecting the PSS-10 rather than the PSS-4 are debatable, as several studies have shown an acceptable level of reliability for the PSS-4 that is greater than that of the PSS-10 [5]. The perceived stress scale is an essential point of reference in health studies related to each individual's health situation and to different diseases, such as mental disorders, cancer, cardiovascular disease, drug abuse, and chronic illnesses. During the initial development of the measure, an item distinction related to statement directionality (negative vs. positive) was identified. The perceived stress scale has a predefined questionnaire to analyze the stress of an individual; based on the scores obtained from the responses, a student's stress level can be measured. Hypothesis tests of the PSS consistently established a satisfactory correlation with depression or anxiety. For the known-groups validity check of the PSS, demographic categorical variables (e.g., marital status, educational status, gender, and having children) were used, mostly without previously determined expectations or evidence. It was recommended that known-groups validity be assessed in future research using groups that have been well defined in advance [6]. With respect to gender, a few studies found that PSS scores were significantly higher in women than in men. The gender-related difference in PSS scores remains a matter of debate: some believed it was an artifact of measurement bias, given that women are more likely to score on the negatively worded items of the PSS, while others believed that there is a true gender difference arising from social, biological, or psychological influences [7, 8]. Therefore, gender should be considered carefully when evaluating known-groups validity in the PSS. The PSS measures general stress and was thus designed to be relatively free of content specific to any particular population. However, the PSS has been empirically validated mainly with populations of college students or workers. Furthermore, the authors recommended a multicultural psychometric evaluation of the PSS [9].


The paper is organized in the following manner: Sect. 2 deals with the methodology and design strategy employed in the analysis, and Sect. 3 covers the results and discussion, followed by the conclusion in Sect. 4.

2 Materials and Method The predefined questionnaire of the PSS was shared through Google Forms with students of different engineering colleges across Karnataka. 486 students (N Male = 273, N Female = 213) responded to the questionnaire. The median age of the students was 21 for both male and female respondents. The responses from the questionnaire were validated using a statistical tool called one-way ANOVA; an ANOVA test is a way to find out whether experimental results are significant [10]. The gathered data was validated for statistically significant groups and used for further analysis.

2.1 One-Way ANOVA In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a technique that can be used to compare the means of two or more samples. This technique can be used only for numerical response data, the "Y," usually one variable, and numerical or (usually) categorical input data, the "X," always one variable. One-way ANOVA therefore tests the null hypothesis, which states that the samples in all groups are drawn from populations with identical mean values. To do this, two estimates of the population variance are made. These estimates rely upon the following assumptions (an example of running such a test is sketched after this list). • Response variable residuals are normally distributed (or approximately normally distributed). • Variances of the populations are identical. • Responses for a given group are independent and identically distributed normal random variables.
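For instance, a single-factor ANOVA of the kind used later in Sect. 3 can be run in a few lines with SciPy; the score arrays below are made-up placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical PSS scores for three groups (placeholder values only).
group_a = [22, 25, 19, 27, 24]
group_b = [18, 16, 21, 17, 20]
group_c = [23, 26, 22, 28, 21]

f_value, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
# The p value is compared with the chosen significance level (alpha) to decide
# whether to reject the null hypothesis of equal group means.
```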

2.2 PSS Analysis The perceived stress scale (PSS) has a predefined questionnaire containing ten questions based on how an individual felt in the past month. Of these ten questions, questions 4, 5, 7, and 8 are considered positive questions, and the responses to these items are reverse-scored: a response of 0 is counted as 4, 1 as 3, 3 as 1, 4 as 0, and a response of 2 is left unaltered. The responses to all ten questions are then summed. Based on the score obtained on summation,


individuals are classified as less stressed (score less than 13), moderately stressed (score between 13 and 26), or highly stressed (score greater than 26) [11–13].
Algorithm for PSS
Input: Responses to the questionnaire
Output: Classification of individuals as less stressed, moderately stressed, or highly stressed
Start
  Read input as responses R for questions Q = {1, 2, …, 10}
  For each question k in {4, 5, 7, 8} do
    If Rk = 0 then replace Rk with 4
    Else if Rk = 1 then replace Rk with 3
    Else if Rk = 3 then replace Rk with 1
    Else if Rk = 4 then replace Rk with 0
    End if
  End For
  Sum = R1 + R2 + … + R10
  If (Sum < 13) then categorize as less stressed
  Else if (13 < Sum < 26) categorize as moderately stressed
  Else categorize as highly stressed
  End if
End
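A runnable sketch of this scoring procedure follows (question numbering is 1-based as in the algorithm; the example respondent and the exact handling of the band boundaries are assumptions):

```python
REVERSED_ITEMS = {4, 5, 7, 8}          # positively worded questions are reverse-scored

def pss_score(responses):
    """responses: ten answers (0-4) for questions 1..10 of the PSS-10."""
    total = 0
    for question, answer in enumerate(responses, start=1):
        total += (4 - answer) if question in REVERSED_ITEMS else answer
    return total

def stress_category(score):
    # Bands follow the paper: below 13 low, roughly 13-26 moderate, above 26 high.
    if score < 13:
        return "less stressed"
    elif score <= 26:
        return "moderately stressed"
    return "highly stressed"

answers = [3, 2, 4, 1, 0, 3, 2, 1, 4, 3]     # one hypothetical respondent
score = pss_score(answers)
print(score, stress_category(score))          # 31 highly stressed
```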

3 Results and Discussion The data gathered from the responses was tested using the single-factor ANOVA method with an α value of 0.5 to check whether the experimental results are statistically significant. The mean scores of the male students and female students were analyzed separately. As shown in Fig. 3, the F value for the attributes mean score and age for male candidates was found to be 0.50 and the F critical value was 4.49. Students of different engineering colleges were considered for collecting the data: a predefined questionnaire was shared among 486 engineering students for the analysis. Students of engineering courses in India were found to be more pressurized and stressed, which made them perform poorly in academics. The reasons were many, including pressure from parents, examination patterns, financial status, and societal status. To analyze this situation further, we considered our null hypothesis for the analysis to be "the students of the professional courses in India are more stressed." The p value of the analysis was observed to be 0.48, which is less than the α value. Since the p


Fig. 3 ANOVA analysis for male students

value is less than α value, the null hypothesis can be accepted, and this shows that the results are significant. On the same lines, the p value for the analysis of variance of mean and age of female students was observed to be 0.40 as shown in Fig. 4, which is less than α value, and thus, the results for the female students are also significant in nature. The scores were calculated for the analysis of stress of students of different engineering colleges, and it was observed that female students of the age group 17–19 and 24–25 are more stressed when compared to female students of the age group 20–24 who are less stressed as depicted in Fig. 5. The male students of the age group 17–20 and 22–24 are more stressed as shown in Fig. 6. It is also observed that of 486 students of engineering course, 7.81% students are less stressed, 63.58% students are moderately stressed, and 24.27% students are highly stressed as shown in Fig. 7.

Fig. 4 ANOVA analysis for female students


Fig. 5 Mean stress score of female students

Fig. 6 Mean stress score of male students

Fig. 7 Graph of stress level versus number of students

It is observed that the majority of the engineering students are stressed due to one reason or another, which hinders their performance in academics. Steps need to be taken toward helping students reduce their level of stress: proper counselling for students and parents, and a few modifications to university curricula, should be made to ensure that students are not stressed out.

4 Conclusion The study is aimed at analyzing the stress level of the students of professional course. The analysis showed that majority of the professional course students are stressed for which there are many factors responsible like financial status, societal status, tension, worries, and health issues. The stress level was analyzed using the PSS, and the data gathered was validated through a statistical tool called single-factor ANOVA. A course on meditation and yoga would help the students to reduce their


stress level and handle negative emotions. The results obtained from the study could be used as input for the intelligent system which could help an individual to control his/her stress in real time. Acknowledgements The study is conducted under the competitive research grant, TEQIP scheme funded by Visveswaraya Technological University, Belagavi. We sincerely thank university for supporting us in the course of work. We thank KLS Gogte Institute of Technology, Belagavi, and BMS College of Engineering, Bengaluru, for supporting us to work on this project.

References 1. https://whatsscience.in/stress-level-detection-at-our-fingertips/ 2. Heinen, I., et. al.: Perceived stress in first year medical students—associations with personal resources and emotional distress. BMC Med. Educ. 17, 4 (2017). https://doi.org/10.1186/s12 909-016-0841-8 3. Pedhazur, E.J., Schmelkin, L.P.: Measurement, Design, and Analysis. Lawrence Erlbaum Associates, Hillsdale (1991) 4. Fayers, P.M., Machin, D.: Quality of Life, 2nd edn. Wiley, West Sussex (2007) 5. Leung, D.Y., Lam, T., Chan, S.S.: Three versions of perceived stress scale: Validation in a sample of Chinese cardiac patients who smoke. BioMed Central Public Health 10, 513 (2010) 6. Cohen, S., Williamson, G.: Perceived stress in a probability sample of the United States. In: Spacapan, S., Oskamp, S. (eds.) The Social Psychology of Health: Laremont Symposium on Applied Social Psychology. Sage, Newbury Park (1988) 7. Lavoie, J.A.A., Douglas, K.S.: The perceived stress scale: evaluating configural, metric and scalar invariance across mental health status and gender. J. Psychopathol. Behav. Assess. 34, 48–57 (2012) 8. Gitchel, W.D., Roessler, R.T., Turner, R.: Gender effect according to item directionality on the perceived stress scale for adults with multiple sclerosis. Rehabil. Couns. Bull. 55, 20–28 (2012) 9. Cohen, S., Kamarch, T., Mermelstein, R.: A global measure of perceived stress. J. Health Soc. Behav. 24, 385–396 (1983) 10. Howell, D.: Statistical Methods for Psychology, pp. 324–325. Duxbury. ISBN 0-534-37770-X. (2002) 11. Mimura, C., Griffiths, P.: A Japanese version of the perceived stress scale: translation and preliminary test. Int. J. Nurs. Stud. 41, 379–385 (2004) 12. Mitchell, A.M., Crane, P., Kim, Y.: Perceived stress in survivors of suicide: Psychometric properties of the perceived stress scale. Res. Nurs. Health 31, 576–585 (2008) 13. Andreou, E., Alexopoulos, E.C., Lionis, C., Varvogli, L., Gnardellis, C., Chrousos, G.P., et al.: Perceived stress scale: Reliability and validity study in Greece. Int. J. Environ. Res. Public Health 8, 3287–3298 (2011)

Smart Helmet using Advanced Technology K. Muni Mohith Reddy, D. Venkata Krishna Rohith, C. Akash Reddy, and I. Mamatha

Abstract There has been an increasing number of motorbike accidents reported during the last few years, and the rider's negligence happens to be one major reason for this. In this work, a smart helmet with features for the safety of the rider is proposed. Three major features are embedded in the present work: prevention of driving without a helmet, prevention of drunk driving, and accident detection and intimation. A force-sensing resistor (FSR) is used for detecting the presence of the helmet, which controls access to the vehicle; this feature is also useful for preventing motorcycles from being stolen. An alcohol detector placed within the helmet senses the level of alcohol consumed by the rider and restricts vehicle access if the limit is exceeded. Accident intimation to the rescue department and to the family of the rider is the third feature of the proposed work. The hardware is built with all these features and is tested in a laboratory environment.

1 Introduction There have been many cases of road accidents occurring in India every day due to lack of driving knowledge and failure to follow road safety rules. Speedy driving has become one major reason for road accidents. There have been several works on speed monitoring and control of vehicles using wireless sensor networks which, if implemented, can reduce accidents to some extent [1, 2]. The risk associated with motorcycle riding is comparatively higher, as accidents can lead to loss of life. Even though the Government has made wearing a helmet compulsory for two-wheeler riders, there are many cases where people do not follow such rules. Improper condition of the motorcycle, rash driving, drunk driving, etc., are the other major reasons for accidents. The intimation of the concerned department for the timely treatment


of the injured is of utmost importance, which requires the fast arrival of the ambulance at the spot of the accident for immediate hospitalization. In this work, safety measures to address the above issues are incorporated in a smart helmet. The main objective of the proposed work is to design and implement a smart helmet with the following features: • Helmet authentication: to ensure that the bike rider is wearing the helmet. • Alcohol detection: to ensure that the bike rider has not consumed alcohol. • Fall detection: in case of an accident, to inform a nearby ambulance about the accident. As soon as the rider buckles up the helmet, he gets access to start the bike. A receiver module is placed under the rider's seat which communicates with the transmitter attached to the helmet, and only when the buckle is locked does the bike start. The rest of the paper is organized as follows: related work in smart helmet development is discussed in Sect. 2; Sect. 3 describes the block diagram of the proposed system; results are discussed in Sect. 4; and Sect. 5 concludes the work with a direction for future scope.

2 Related Work There have been a few works reported in the literature on building smart helmets with varied features. Rasli et al. [3] proposed a helmet which detects the speed of the vehicle and alerts the rider when it exceeds a limit, thereby preventing accidents due to over-speeding; the helmet also ensures that the rider wears it while riding the vehicle. Peter and Vasudevan [4] proposed a few algorithms for accident prediction based on the acceleration of the vehicle and location details; their system also alerts the driver when an accident is predicted and makes an audio/visual recording if an accident is detected. A smart helmet proposed by Basheer et al. [5] had the features of detecting and alerting during an accident, considering parameters such as acceleration/deceleration, tilt of the vehicle, and change in pressure on the body of the vehicle. Karthik et al. [6] proposed a system where a small delay is introduced before sending the message to the ambulance in case of an accident; this delay avoids sending messages when the driver meets with a minor mishap. Muthiah and Sathiendran [7] proposed a smart helmet with features such as alerting the driver when he is drowsy and adjusting the headlamp automatically for clear vision. A smart helmet with helmet detection, an automatic headlamp for better illumination, and a vehicle tracking facility was proposed by Patil et al. [8]. However, a smart helmet requires that the electrical devices placed in the helmet do not have an impact on brain activity. It has been demonstrated by Sasikala et al. [9] that the helmet transmitter module should be mounted over the helmet considering the lowest impact region, i.e., 0.4%, and the receiver module should be sheltered. There are many smart helmets with smart features which are


commercially available in the market, however, at a high cost. The purpose of this work is to build such a system and to customize the existing helmets at a cheaper price. Hence, a smart helmet system is built with the main features of helmet detection, accident intimation, and alcohol sensing which can be incorporated in the helmets at lower cost.

3 Block Diagram and Description The block diagram of the proposed work is shown in Fig. 1. The system has a transmitter and a receiver sections. The sensors detect the quantities and send the data to the Arduino board in the transmitter side. The Arduino generates the required control information and transmits it through an RF transmitter which is received by the Arduino board at the receiver side. These control signals are used to operate the relay used for igniting the spark plug in order to start the bike. In case of an accident, the accelerometer at the receiver side gives signal to the Arduino board based on which the information is sent through the GSM module. The two Arduino boards are powered by +5 V battery and a +12 V battery powers the GSM/GPS modules. The components used in this work are described below.

3.1 Arduino UNO The main controller used for generating the various control signals is the Arduino Uno microcontroller board, which has 14 digital input/output pins (of which six can be used as PWM outputs). There are six analog inputs, and the controller works at a clock frequency of 16 MHz. It is powered by either a battery or an AC-to-DC adapter at +5 V. Fig. 1 Block diagram of the smart helmet


Fig. 2 a MQ-5 alcohol sensor, b FSR, c circuit for FSR

3.2 Alcohol Sensor For detecting ethanol which is an alcohol found in wine, beer, and liquor, an MQ-5 sensor is used. By analyzing the person’s breath during exhalation, it checks the person’s blood alcohol level. Using the built-in formula, alcohol content in blood is evaluated from the sensor reading. The sensor is shown in Fig. 2a. The four leads of the sensor are +5 V, AOUT, DOUT, and GND. The sensor is powered through +5 V and GND terminals. It is capable of producing an analog output through AOUT and a digital output through DOUT leads. The voltage level at these leads is proportional to the amount of alcohol the sensor detects. Whenever the analog voltage exceeds a preset threshold level, the person is considered for alcohol consumption and a ‘high’ signal is sent at the digital pin DOUT. In this work, the threshold value is set at 2.44 V beyond which it is treated that the rider has consumed alcohol.

3.3 Force-Sensitive Resistor (FSR) A force-sensitive resistor detects the amount of physical pressure applied by squeezing or placing a weight on it and is hence used to detect the presence of the helmet. It is a lightweight, very thin strip which can be placed below the cushion in the helmet to sense the pressure between the person's head and the helmet. FSRs are simple to use and low cost but less accurate, so their response is always expected to lie within a range. The FSR is shown in Fig. 2b. The sensor has an infinite open-circuit resistance; its resistance varies from 100 kΩ to 200 Ω as the applied pressure varies from low to high. The sensor requires a 5 V (or 3.3 V) supply and gives an analog voltage between 0 V (ground) and 5 V (or 3.3 V) based on the pressure (or about the same as the power supply voltage). The circuit for connecting the FSR is shown in Fig. 2c. The voltage equation is

V0 = Vcc × [ R / (R + FSR) ]    (1)


That is, the output voltage decreases as the FSR resistance increases, and vice versa.
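To make Eq. (1) concrete, the short sketch below evaluates the divider output over the stated FSR resistance range; the fixed resistor value R = 10 kΩ is an assumption, since the paper does not specify it.

```python
VCC = 5.0        # supply voltage in volts
R = 10_000.0     # assumed fixed pull-down resistor in ohms (value not given in the paper)

def fsr_output_voltage(r_fsr: float) -> float:
    """Eq. (1): V0 = Vcc * R / (R + R_FSR)."""
    return VCC * R / (R + r_fsr)

# Light touch (~100 kohm) versus firm press (~200 ohm), per the stated FSR range.
for r_fsr in (100_000.0, 10_000.0, 200.0):
    print(f"R_FSR = {r_fsr:9.0f} ohm -> V0 = {fsr_output_voltage(r_fsr):.2f} V")
```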

3.4 Accelerometer An accelerometer measures the static and dynamic acceleration caused by motion, shock, or vibration. The sensor used in this work for accident detection is the ADXL335. This sensor is small in size and has 16 leads, with the signal-conditioning circuit embedded in the package. The full-scale measurement range is ±3 g. An appropriate bandwidth is selected using the three capacitors CX, CY, and CZ. The sensor can be operated in the range of 0.5–1600 Hz for the X and Y axes at the XOUT and YOUT pins; the Z-axis can be operated in the range of 0.5–550 Hz, giving its output at the ZOUT pin. The circuit of the ADXL335 is shown in Fig. 3. Outputs from the circuitry are analog in nature and proportional to the acceleration. A differential capacitor having a fixed and a movable plate (connected to a moving mass) measures the deflection. The fixed plates are driven by 180° out-of-phase square waves. Deflection of the moving mass due to acceleration unbalances the differential capacitor, thereby producing an output proportional to the acceleration. The magnitude and direction of the acceleration are determined by phase-sensitive demodulation techniques. The amplified demodulator output is taken across a 32 kΩ resistor. In this work, the thresholds at which the accelerometer output indicates an accident are set as below 1.41 V or above 2.001 V; that is, if the Arduino reads a voltage output from the accelerometer below 1.41 V or above 2.001 V, an accident is detected.

Fig. 3 Built-in circuitry of accelerometer IC ADXL335 from datasheet


3.5 RF Module The RF module has an RF transmitter and a receiver working at 433 MHz frequency at an operating voltage of 3–12 V. Serial transmission of data is carried between transmitter and receiver and is done through two microcontrollers. The data to be transmitted is displayed using a LCD display. The RF transmitter has VCC supply, GND, and data pin which is connected to the microcontroller.

3.6 GSM Module The GSM module establishes the communication between the mobile phones. In this work, GSM/GPRS TTL Modem is used. The device works at the frequencies 850, 900, 1800, and 1900 MHz. The Modem can be directly interfaced to 5 V microcontrollers such as PIC, Arduino, and AVR. The data transmission rate (baud rate) can be configured from 9600–115,200 through AT command.

3.7 GPS Module GPS module is used to get the locality details of the place of accident, and SIM28 is used for the purpose. The main advantage of SIM28 is its built-in LNA, and it does not require any external antenna. It consumes very less power (acquisition 24 mA, tracking 19 mA) and supports various location and navigation applications, including autonomous GPS, SBAS ranging (WAAS, EGNOS, GAGAN, and MSAS), and A-GPS.

3.8 Working The transmitter side circuit is placed in the helmet which consists of the pressure sensor, an alcohol sensor, +5 V battery, and an RF transmitter module. Pressure sensor is placed inside the helmet in the region which comes in contact with the head while wearing the helmet. It senses the pressure applied by the head while wearing the helmet and an analog value equivalent to the pressure is sent to the arduino. Similar way, the alcohol sensor is used as breath analyzer to check the level of the alcohol content the rider has consumed. When the rider exhales, the alcohol content is taken as an analog value and is sent to the Arduino. Both the sensors are given 5 V as input voltage. When the helmet is put on, FSR pressure sensor detects the rider’s head and next it checks the level of alcohol content the rider has consumed. If the alcohol content exceeds the limit, the motorcycle will not start. If the alcohol is not


sensed and the helmet is ON, then the motorcycle will start. This information is sent from transmission side Arduino to the receiver side Arduino through RF module. In the receiver part, the information from the transmission part is received. A relay is used which acts like a switch. If the conditions on transmission side are satisfying, then the relay is ON and the spark plug of motorcycle which is connected to relay is ignited and the motorcycle starts. When the motorcycle is on running condition and if any accident occurs unfortunately, it is sensed using an accelerometer which is placed under the rider’s seat. The accelerometer senses the vibrations or any tilt in the vehicle. The tilt in the vehicle is measured in terms of an analog voltage. If the limits of tilting are exceeded, it is recognized as accident. Immediately the coordinates of the location at which the accident has taken place are tracked by GPS module and the location information is sent to ambulance and fed contacts through a GSM module in which the SIM is inserted.

4 Results and Discussion All the sensors and modules are connected to the arduino, and the communication between two Arduino modules is done using 433 MHz RF module. The Arduino code is developed in such a way that the motorcycle starts only if the FSR detects the rider’s head and if no alcohol is sensed. When the above two conditions are satisfied, then the relay gets ON which in turn ignites spark plug to start motorcycle. In prototype, LED is used instead of spark plug to indicate the starting of the motorcycle. The control logic at the transmitter and receiver side is represented as a flowchart as shown in Fig. 4a, 4b, respectively. FSR reading, MQ reading, and controller are the variables which are used to indicate the values of the FSR and MQ-5 sensor. If the FSR reading is greater than 300 (1.464 V) and alcohol is not detected, then controller is equal to 3. If the FSR reading is less than 300 (1.464 V), it indicates no helmet on rider’s head and controller is equated to 2. If the FSR reading is greater than 300 (1.464 V) and MQ reading is greater than 500 (2.44 V), it indicates alcohol consumption and the controller becomes 4. In the receiver side flowchart, controller is indicated as ‘w’ for simplicity. The number received indicates the situations as below: w = 2 indicates helmet is not detected. w = 3 indicates helmet is detected and no alcohol is consumed by the rider. w = 4 indicates helmet is detected and alcohol is consumed by the rider. Accordingly, for w = 2 or 4, relay output is low and stops motorcycle from starting. If ‘w’ is equal to 3, then the relay becomes high and the motorcycle gets started. When the motorcycle is under running state and if there is a tilt, it is read by accelerometer. The accelerometer values are taken as analog values, and it depends on the angle the vehicle is tilted. If the tilt is more than 400 (1.95 V) in the positive Y-axis and if the tilt is less than 290 (1.416 V) in the negative Y-axis, then it is recognized as an accident


Fig. 4 a Flowchart for transmitter, b flowchart for receiver

The information of the tracked location is sent to the ambulance and the fed contacts through the GSM module. The hardware setup is built with all the features discussed and is shown in Fig. 5a. The hardware is tested for each case, and the corresponding results are obtained. The outputs of the sensors are shown in Fig. 5b, where the longitude, latitude, controller 'w', FSR output, accelerometer output, and MQ-5 sensor output are displayed. When the required amount of pressure is applied on the FSR, the sensor output value is 345 and an LED glows, indicating the presence of the helmet, as shown in Fig. 6a. On detecting the helmet, the system checks for any alcohol consumption. No alcohol consumption is assumed for this test case, which results in a sensor output of less than 500 (e.g., 408 or 409).

Fig. 5 a Hardware setup, b output values of the sensors on the screen


Fig. 6 a FSR output, b MQ-5 output

Fig. 7 a Message received in the mobile, b location details

When there is no consumption of alcohol, the LED near the alcohol sensor glows, as shown in Fig. 6b. When both LEDs are ON, it indicates ignition of the spark plug to start the vehicle. When an accident is recognized, the accelerometer output is 325, indicating an accident, and the latitude and longitude information is displayed as shown in Fig. 5b. Meanwhile, the location where the accident has happened is tracked using GPS, and the tracked location is sent to the ambulance and to the fed contacts through GSM. Figure 7a shows such a received message; using this longitude and latitude information, the exact location is traced in Google Maps, as shown in Fig. 7b.
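The decision logic described above can be summarized in a few lines of code. The following Python sketch mirrors the reported thresholds (FSR > 300 counts, MQ-5 > 500 counts, controller codes w = 2/3/4); the function names and the 10-bit, 5 V ADC scaling are illustrative assumptions, not the authors' Arduino source.

```python
# Sketch of the helmet controller logic (illustrative only).
# Thresholds follow the text: FSR > 300 counts (~1.464 V) means the helmet is worn,
# MQ-5 > 500 counts (~2.44 V) means alcohol is detected. A 10-bit ADC with a 5 V
# reference is assumed, so voltage = counts * 5.0 / 1023.

FSR_THRESHOLD = 300   # helmet detected above this ADC count
MQ_THRESHOLD = 500    # alcohol detected above this ADC count

def adc_to_volts(count: int, vref: float = 5.0, resolution: int = 1023) -> float:
    """Convert a raw ADC count to a voltage (assumed 10-bit ADC, 5 V reference)."""
    return count * vref / resolution

def controller_value(fsr_count: int, mq_count: int) -> int:
    """Return the controller code 'w' sent to the receiver:
    2 = no helmet, 3 = helmet and no alcohol (start allowed), 4 = helmet and alcohol."""
    if fsr_count <= FSR_THRESHOLD:
        return 2
    if mq_count > MQ_THRESHOLD:
        return 4
    return 3

def relay_state(w: int) -> bool:
    """Receiver-side rule: the relay (and hence the spark plug) is energised only for w == 3."""
    return w == 3

if __name__ == "__main__":
    # Example corresponding to the reported test case: FSR = 345, MQ-5 = 408.
    w = controller_value(345, 408)
    print(w, relay_state(w), round(adc_to_volts(345), 3))  # -> 3 True 1.686
```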

5 Conclusion and Future Scope A smart helmet with advanced technology is proposed in this work. The smart helmet has the features of helmet detection, alcohol consumption detection, and accident detection. A force-sensing resistor (FSR), an alcohol sensor, and an accelerometer are used for sensing the required quantities, and the complete hardware setup is built on a breadboard and later on a printed circuit board. The required control is achieved by writing the code in the C language and implementing it on an Arduino board. GSM and GPS modules are used for the communication.


The proposed work is successfully tested for functionality by observing the longitude and latitude information on the cell phone in the event of an accident. An LED is used to indicate the presence of the helmet and the absence of alcohol consumption. It was observed that in the absence of the helmet the LED is OFF, which can be interpreted as the relay being 'low', and the motorcycle does not start. Here, a threshold value has been set arbitrarily for the tilt of the vehicle, which may lead to misinterpretation of the situation while riding through sharp turns; this is the main drawback of this work. Hence, it is necessary to practically observe the tilt and set the required threshold. There are still many new features that can be implemented in this project, such as:
• A Bluetooth speaker can be embedded in the helmet for navigation instructions.
• Pits and manholes on the road in poorly illuminated environments can be sensed, and a warning message can be displayed on the helmet screen through augmented reality.
• The location can be continuously monitored using GPS and GSM.


Analyze and Compare the Parameters of Microstrip Rectangular Patch Antenna Using FR4, RT Duroid, and Taconic Substrate
Prakash Kuravatti

Abstract For several decades, the demand for small-size electronic systems has been increasing. Due to advancements in integrated circuits, the physical dimensions of systems have become compact. As the demand for improved integration schemes increases and as systems migrate to higher frequencies, microwave designers look for the next generation of microwave materials with improved versatility. There is also an increasing demand for small and low-cost antennas with the reduction in size of electronic systems. Due to their compatibility with microwave integrated circuits, patch antennas are among the most attractive antennas. Hence, an experimental investigation of a rectangular microstrip patch antenna is carried out on three different substrates: FR4, RT Duroid, and Taconic. The measured outcomes for S11, resonant frequency, radiation efficiency, and radiation pattern for the planned patch antenna design with openings in the ground plane are presented. The main aim of this work is to identify a suitable substrate for the microstrip patch antenna which can deliver the required antenna parameters.

1 Introduction 1.1 Microstrip Patch Antenna Microstrip antennas are attractive due to their light weight, conformability, and low cost. These antennas can be integrated with printed stripline feed networks and active devices. This is a relatively new area of antenna engineering. The radiation properties of microstrip structures have been known for some time, and later application of this type of antenna began when conformal antennas were required for military hardware. Rectangular and circular microstrip resonant patches have been used broadly in a variety of array configurations.


A major contributing factor to the present advance of microstrip antennas is the recent revolution in electronic circuit miniaturization brought about by developments in large-scale integration. As conventional antennas are often bulky and costly parts of an electronic system, microstrip antennas based on photolithographic technology are seen as an engineering breakthrough [1, 2].

1.2 RT Duroid RT Duroid 5880LZ filled PTFE composites are intended for demanding stripline and microstrip circuit applications. RT Duroid is a low-density, lightweight material for high-performance, weight-sensitive applications. The low dielectric constant of RT Duroid 5880LZ laminates is uniform from board to board and constant over a wide frequency range. Applications include airborne patch antennas, lightweight feed networks, military radar systems, missile guidance systems, and point-to-point digital radio antennas. The dielectric constant of RT Duroid is 2.2.

1.3 FR-4 Epoxy FR-4 is a composite material made of woven fiberglass cloth with a flame-resistant epoxy resin binder. 'FR' stands for flame retardant and signifies that the material complies with the standard UL94V-0. The FR-4 designation was created by NEMA; bromine, a halogen, is used to impart the flame-retardant properties of FR-4 glass-epoxy laminates. Taconic TLC: Taconic TLC substrates are specifically designed to meet the low-cost targets of newly emerging commercial RF/microwave applications. Taconic TLC substrates are produced in several thicknesses. Both materials show excellent mechanical and thermal stability and cost less than conventional PTFE substrates.

1.4 HFSS The acronym stands for High Frequency Structure Simulator. It was introduced in the 1990s and is a simulation tool for complex 3D geometries based on the finite element method (FEM) with adaptive mesh generation and refinement. It has mainly two vendors, namely Agilent and Ansoft.
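For context, the following Python sketch applies the standard transmission-line-model design equations for a rectangular patch (these equations are not given in the paper) to the three substrates. The design frequency and the FR-4 and Taconic permittivities are assumed values; only the RT Duroid permittivity of 2.2 is stated above, and the 1.55 mm height is the z value quoted with Table 1.

```python
import math

C = 3e8  # speed of light in m/s

def patch_dimensions(f_r: float, eps_r: float, h: float):
    """Width and length of a rectangular patch from the standard transmission-line model.
    f_r: resonant frequency (Hz), eps_r: substrate relative permittivity, h: substrate height (m)."""
    w = C / (2 * f_r) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / ((eps_eff - 0.258) * (w / h + 0.8))
    length = C / (2 * f_r * math.sqrt(eps_eff)) - 2 * dl
    return w, length

if __name__ == "__main__":
    # Illustrative comparison at an assumed 2.4 GHz design frequency and 1.55 mm substrate
    # height. Permittivities: RT Duroid 5880 = 2.2 (stated above); FR-4 ~ 4.4 and
    # Taconic TLC ~ 3.2 are typical datasheet values and are assumptions here.
    for name, er in [("FR4", 4.4), ("RT Duroid", 2.2), ("Taconic TLC", 3.2)]:
        w, l = patch_dimensions(2.4e9, er, 1.55e-3)
        print(f"{name}: W = {w * 1000:.1f} mm, L = {l * 1000:.1f} mm")
```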


2 Results and Discussion 2.1 Using FR4 Substrate In Fig. 1, the x-axis represents frequency in GHz and the y-axis represents dB(S(1,1)); the design shows a return loss of −10.5 dB. In Fig. 2, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a gain of around 4 dB is obtained. In Fig. 3, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a directivity of around 6.8 dB is obtained.

Fig. 1 Return loss obtained using FR4 substrate of design

Fig. 2 Gain obtained using FR4 substrate of design


Fig. 3 Directivity obtained using FR4 substrate of design

2.2 Using RT Duroid Substrate In Fig. 4, the x-axis represents frequency in GHz and the y-axis represents dB(S(1,1)); the design shows a return loss of −12.5 dB. In Fig. 5, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a gain of around 6.9 dB is obtained. In Fig. 6, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a directivity of around 7.1559 dB is obtained.

Fig. 4 Return loss obtained using RT Duroid substrate of design


Fig. 5 Gain obtained using RT Duroid substrate of design

Fig. 6 Directivity obtained using RT Duroid substrate of design

2.3 Using Taconic Substrate In Fig. 7, the x-axis represents frequency in GHz and the y-axis represents dB(S(1,1)); the XY plot gives a return loss of −7 dB. In Fig. 8, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a gain of around 6.9 dB is obtained. In Fig. 9, the xz-plane plot shows radiation intensity (phi) versus beam pattern (theta), and a directivity of around 7.12 dB is obtained (Table 1).
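As a quick sanity check on the reported return-loss values, the reflection coefficient magnitude and VSWR can be recovered from S11 with standard relations; this small Python helper is illustrative and not part of the paper's HFSS workflow.

```python
def s11_to_vswr(s11_db: float):
    """Convert S11 in dB to the reflection coefficient magnitude and VSWR."""
    gamma = 10 ** (s11_db / 20)        # S11 in dB is negative, so |Gamma| < 1
    vswr = (1 + gamma) / (1 - gamma)
    return gamma, vswr

# Return-loss values reported above for the three substrates.
for substrate, s11 in [("FR4", -10.5), ("RT Duroid", -12.5), ("Taconic", -7.0)]:
    g, v = s11_to_vswr(s11)
    print(f"{substrate}: |Gamma| = {g:.3f}, VSWR = {v:.2f}")
```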


Fig. 7 Return loss obtained using Taconic substrate of design
Fig. 8 Gain obtained using Taconic substrate of design

Fig. 9 Directivity obtained using Taconic substrate of design


Table 1 Values of return loss, gain, and directivity when the substrate dimensions are (x = 60, y = 60, z = 1.55)

Parameter | FR4 | RT Duroid | Taconic
Return loss | −10.5 dB | −12.5 dB | −7 dB
Gain | 4 dB | 6.9 dB | 6.9 dB
Directivity | 6.8 dB | 7.1559 dB | 7.12 dB

2 and F ≤ 4, then the tweet is not suggested, i.e., for least recommended. Rule 5: else tweet not recommended.

3.5 Relevant Feedback Process Algorithm The algorithm for the relevant feedback mechanism is given in Algorithm 1. As mentioned in the above procedures, the algorithm is performed in two stages. In the first stage, an initial tweet ranking is carried out by returning the searched tweets for the current keywords and correlating them with the searched keywords. In the second stage, each ranked tweet is identified by the rank it attains, and based on the identified rank the feedback ranking is performed.


These two stages address the problem of the random nature of large tweet sets: a random reference set is used to compute the entropy of the tweet information and the frequency of occurrence of tweet keywords. They also address the measure of keyword quality, by learning the returned tweet keywords and their co-occurrence with the existing keywords, so that the current keywords are updated and the final rank is produced with the final keywords.

Algorithm 1: Relevant Feedback Process
Input:
  UT: user tweets
  TS: the tweets searched for similarity
  KT: tweets with keywords
  TK: the keywords for searching tweets
  CK: the current keyword set
  RT: the random text tweet set
  RN: the random number tweet set
  NR: new rank
  RC: current rank
  RFb: feedback rank
  TR: relevant tweet set
Output:
  FK: final keywords
  FR: final rank

UT ← {}
for each word w ∈ TK do
    T(w) ← TwitterSearch(w)
    RT(w) + RN(w) ← T(w)
    KT(w) ← 0
    for each tweet t ∈ T(w) do
        if (CK ∩ t) ≠ ∅ then
            KT(w) ← KT(w) + 1
        end if
    end for
    R(w) ← RankScore(KT(w), T(w))
    UT ← UT ∪ {w, TS(w)}
end for
NR ← Rank(UT)
for each w ∈ NR do
    if w ≠ CK and w ≠ TS then
        if KT(w) + TK(w) > NR then
            if NR > TR(w) then
                ew ← Entropy(KT(w), TK(w))
                RC ← NR
                if NR < TR(w) then
                    UT ← UT ∪ {w, ew}
                    RFb ← RC
                end if
            end if
        end if
    end if
end for
FK ← RankAndSelect(UT, RFb)
FR ← RFb
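The following Python sketch is a simplified rendering of the two-stage idea in Algorithm 1: keywords are first scored by how often their returned tweets co-occur with the current keyword set, and are then re-ranked using an entropy-based feedback measure. The helper names and the scoring details are illustrative and deliberately simpler than the pseudocode above.

```python
import math

def rank_score(keyword_hits: int, total_tweets: int) -> float:
    """Stage 1 score: fraction of a keyword's returned tweets that contain current keywords."""
    return keyword_hits / total_tweets if total_tweets else 0.0

def entropy(p: float) -> float:
    """Binary entropy of the co-occurrence frequency, used as the feedback quality measure."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def relevant_feedback(search_tweets, query_keywords, current_keywords):
    """Two-stage sketch: (1) retrieve tweets per keyword and score co-occurrence with the
    current keyword set; (2) re-rank using the entropy of each keyword's score as feedback."""
    scores = {}
    for word in query_keywords:
        tweets = search_tweets(word)
        hits = sum(1 for t in tweets if current_keywords & set(t.lower().split()))
        scores[word] = rank_score(hits, len(tweets))
    feedback = {w: entropy(s) for w, s in scores.items()}
    final_keywords = sorted(query_keywords,
                            key=lambda w: (scores[w], -feedback[w]),
                            reverse=True)
    return final_keywords, scores

if __name__ == "__main__":
    # Tiny in-memory "search engine" standing in for the Twitter search call.
    fake_index = {"delay": ["train delay again", "no delay today"],
                  "food": ["food was cold", "great food and service"]}
    ranked, s = relevant_feedback(lambda w: fake_index.get(w, []),
                                  ["delay", "food"], {"train", "service"})
    print(ranked, s)
```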

4 Comparative Results and Discussions The proposed RS is designed with society-impact data; the need for a huge amount of data requires a database to be maintained and organized according to the nature of the tweets.


The data are chosen from different Twitter user accounts and saved in .csv file format. The Twitter users' reviews and comments are stored as existing user information. The dataset contains society reviews and Twitter user feedback in the form of user votes, user views, and user ranks.

4.1 Experiments Each Twitter tweet is tokenized using the designed Twitter API, with # and @ followed by token numbers. The NER system of [14] is used for named entities, and [15] is used for part-of-speech tags. The user tweet data are divided into training and testing sets. The process of the proposed recommender system application is explained through the steps outlined below (a preprocessing sketch follows this list):
1. Start the application.
2. The Twitter user queries the application system by providing the tweet search criteria.
3. The application checks for similar user queries, and tweets matching the query are filtered from the Web into the application database.
4. The application rates and ranks the tweets by comparing them through the reviews and log process.
5. The application checks the query against the matched key tweets; if it matches, the recommender system collects all the related query tweet metadata, otherwise it is discarded.
6. The application saves all the metadata in the server location.
7. The application repeats this process until the user's query tweets are completed and the tweet information is saved.
8. Stop the application.
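A minimal cleaning step in the spirit of the preprocessing described above is sketched below; the stop-word list and regular expressions are illustrative, since the paper does not specify them (the paper uses NLTK-style resources for this step).

```python
import re
import string

# A tiny illustrative stop-word list; a full stop-word corpus would normally be used.
STOPWORDS = {"a", "an", "the", "is", "was", "and", "but", "of", "it", "to", "in", "on"}

def clean_tweet(text: str) -> list:
    """Lower-case a tweet, strip URLs, @mentions, #hashtags and punctuation, drop stop words."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)    # remove links
    text = re.sub(r"[@#]\w+", " ", text)          # remove @mentions and #hashtags
    text = text.translate(str.maketrans("", "", string.punctuation))
    return [w for w in text.split() if w not in STOPWORDS]

print(clean_tweet("Train delayed again this morning #IndianRailways @railservice http://t.co/xyz"))
# -> ['train', 'delayed', 'again', 'this', 'morning']
```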

4.2 Evaluation Metrics The evaluation metrics used to evaluate the proposed system are precision, recall, and f-measure. They are defined as follows:

Precision = (total number of recommended tweets / (number of tweets + total number of recommended tweets)) × 100%

Recall = (total number of recommended tweets / (number of target tweets + total number of recommended tweets)) × 100%

F-measure = 2 × (Precision × Recall) / (Precision + Recall)


Table 2 Classification matrix model for metric analysis (rows: predicted class/observations; columns: actual class/expectation)
True positive (TP): correct result | False positive (FP): unexpected result
False negative (FN): missing result | True negative (TN): correct absence of result

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP is true positives, TN is true negatives, FP is false positives, and FN is false negatives. For the classification of tweets, the true positive, true negative, false positive, and false negative counts are used to compare the results of the classifier under test with the survey methods, as illustrated in Table 2.
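The metric definitions above translate directly into code. The sketch below implements the accuracy and F-measure exactly as written, and also includes the conventional confusion-matrix forms of precision and recall for reference; the example values are illustrative, not results from the paper.

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Accuracy = (TP + TN) / (TP + TN + FP + FN), exactly as defined above."""
    return (tp + tn) / (tp + tn + fp + fn)

def f_measure(precision: float, recall: float) -> float:
    """F-measure = 2 * (Precision * Recall) / (Precision + Recall), as defined above."""
    return 2 * precision * recall / (precision + recall)

# Conventional confusion-matrix forms of precision and recall (Table 2 terminology);
# the paper states its own tweet-count ratios, so these are given only for reference.
def precision_cm(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall_cm(tp: int, fn: int) -> float:
    return tp / (tp + fn)

if __name__ == "__main__":
    # Illustrative counts, not from the paper.
    tp, tn, fp, fn = 80, 70, 20, 30
    p, r = precision_cm(tp, fp), recall_cm(tp, fn)
    print(round(accuracy(tp, tn, fp, fn), 3), round(f_measure(p, r), 3))
```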

4.3 Analysis For training purposes, society-impact tweets such as reviews on social media are used. These reviews are shown in Table 3. Around 6000 tweets are used in the proposed work, of which 5000 are used for training. These tweets are simple and short to analyze. The processed tweets are shown in Table 4. Cleaning of the tweets is very important because of the enormous number of similar features present in the user tweets. Tweets are collected through the Twitter API, the script is run on the .csv file, and the tweets are generated in the API application server location. Next, the features are extracted from the training data.

Table 3 Social media reviews in training data
Social media review | Class
My view during a drive (8 h). No water, no halt. First, complain to our best transport system | NEGATIVE
Happy to hear you are at the back seat for an action. Give us a break to speak with you | POSITIVE

Table 4 Twitter tweet and processed tweet
Tweet type | Process
Twitter tweet | Hi, I ordered non-vegetarian last night, but got nons with vegetarian, instead of it. Don't mind terribly, and just they cheated me
Processed tweet | Ordered, night, instead, mind, cheated


After extraction, the accuracy of each classifier for society reviews on the training data is computed. The analysis is shown in Table 5. The evaluation metrics, compared over a group of the dataset which contains about 8000 tweets, are illustrated in Table 6. The comparative improvement in execution time is depicted in Table 7. From these results, the proposed recommender system using the relevant feedback mechanism provides accurate and high-quality recommendations.

Table 5 Classification accuracy results with Twitter tweet data
Twitter data | Bernoulli NB classifier | Linear SVC classifier | Logistic regression classifier | Negative | Positive
Customer | 76 | 72 | 73 | 47.2 | 58.2
Personal and kind | 80 | 71 | 71 | 59.22 | 43.66
Airport delivery | 75 | 66 | 70 | 81.20 | 22.59

Table 6 Comparison of performance metrics with survey works
Method | Precision | Recall | F-measure
Liu et al. [16] | 0.56 | 0.60 | 0.59
Liu et al. [16] | 0.02 | 0.53 | 0.01
Zhang et al. [17] | 0.25 | 0.34 | 0.35
Chang et al. [18] | 0.43 | 0.29 | –
Lin et al. [19] | 0.62 | 0.51 | –
Verma and Virk [3] | 0.69 | 0.67 | 0.68
Ramzan et al. [20] | 0.80 | 0.69 | 0.74
Proposed work | 0.82 | 0.75 | 0.85

Table 7 Comparison of execution time with survey works
Method | Recommendation time
Bouras and Tsogkas [21] | 12 s
Jazayeriy et al. [4] | 28 s
Liu et al. [16] | 27 s
Ramzan et al. [20] | 2.6 ms
Proposed work | 2.1 ms


5 Conclusions In this paper, a relevant feedback-based recommender system is proposed, which has proven its ability to process user tweet data at any level of analysis in a Java environment with a classifier library, permitting an improvement in the response time of the designed API for generating true recommendations. In the proposed work, the RS uses opinion and sentiment analysis to extract the user tweet information and produce the final rank and keywords in the designed API. The proposed work makes use of lexical and semantic analysis to learn the sentiment of the user tweets. The use of fuzzy rules has been helpful in analyzing the tweet rank based on the user tweet. Entropy calculations are used in the proposed work to compute the difference between suggested and similar tweets and to provide accurate recommendations based on the type of user. The processing time of the API application is comparatively very low; this is made possible by reducing the tweet extraction time and through the number of trained and tested queries. The comparative precision, recall, and f-measure are improved with respect to the related survey works.

6 Future Work In the future, the recommender system could be extended to classify present, past, and future tweets based on a validation and reorganization process in a dynamic Twitter environment, according to the Twitter user's blog navigation and feedback analysis.

References 1. Tan, Z., He, L.: An efficient similarity measure for user based collaborative filtering recommender systems inspired by the physical resonance principle. IEEE Access 5, 27211–27228 (2017) 2. Fasahte, U., Gambhir, D., Merulingkar, M., Pokhare, A.M.P.A.: Hotel recommendation system. Imperial J. Interdisciplinary Res. 3(11) (2017) 3. Verma, A., Virk, H.: A hybrid genre-based recommender system for movies using genetic algorithm and kNN approach. Int. J. Innovations Eng. Technol. 5(4), 48–55 (2015) 4. Jazayeriy, H., Mohammadi, S., Shamshirband, S.: A fast recommender system for cold user using categorized items. Math. Comput. Appl. 23(1), 1 (2018) 5. Rosenthal, S., Farra, N., Nakov, P.: SemEval-2017 task 4: sentiment analysis in Twitter. In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pp. 502–518 (2017) 6. Ibrahim, M., Bajwa, I.: Design and application of a multivariant expert system using Apache Hadoop framework. Sustainability 10(11), 4280 (2018) 7. Turney, P.D.: Learning algorithms for keyphrase extraction. Inf. Retrieval 2(4), 303–336 (2000) 8. Zhao, W.X., Jiang, J., He, J., Song, Y., Achananuparp, P., Lim, E.-P., Li, X.: Topical keyphrase extraction from twitter. ACL 379–388 (2011) 9. El-Kishky, A., Song, Y., Wang, C., Voss, C.R., Han, J.: Scalable topical phrase mining from text corpora. VLDB 8(3), 305–316 (2014)


10. Ortigosa, A., Martín, J.M., Carro, R.M.: Sentiment analysis in Facebook and its application to e-learning. Comput. Hum. Behav. 31, 527–541 (2014) 11. Feldman, R.: Techniques and applications for sentiment analysis. Commun. ACM 56(4), 82–89 (2013) 12. Wilson, T., Wiebe, J., Hoffmann, P.: Recognizing contextual polarity in phrase-level sentiment analysis. In: Proceedings of the conference on human language technology and empirical methods in natural language processing, pp. 347–354. Association for Computational Linguistics (October, 2005) 13. Agarwal, A., Xie, B., Vovsha, I., Rambow, O., Passonneau, R.: Sentiment analysis of twitter data. In: Proceedings of the workshop on languages in social media, pp. 30–38. Association for Computational Linguistics (June, 2011) 14. Alan, R., Clark, S., Mausam, Etzioni, O., et al.: Named entity recognition in tweets: an experimental study. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1524–1534. Association for Computational Linguistics (2011) 15. Owoputi, O., O’Connor, B., Dyer, C., Gimpel, K., Schneider, N., Smith, N.A.: Improved partof-speech tagging for online conversational text with word clusters. In: Proceedings of NAACLHLT, pp. 380–390 (2013) 16. Liu, H., He, J., Wang, T., Song, W., Du, X.: Combining user preferences and user opinions for accurate recommendation. Electron. Commer. Res. Appl. 12(1), 14–23 (2013) 17. Zhang, J., Peng, Q., Sun, S., Liu, C.: Collaborative filtering recommendation algorithm based on user preference derived from item domain features. Physica A 396, 66–76 (2014) 18. Chang,Z., Arefin, M.S., Morimoto, Y.: Hotel recommendation based on surrounding environments. In: Proceedings of the 2013 IIAI International Conference on Advanced Applied Informatics (IIAIAAI), pp. 330–336. IEEE, Matsue, Japan (August–September, 2013) 19. Lin, K.P., Lai, C.Y., Chen, P.C., Hwang, S.Y.: Personalized hotel recommendation using text mining and mobile browsing tracking. In: Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 191–196. IEEE, Hong Kong, China (October, 2015) 20. Ramzan, B., Bajwa, I.S., Jamil, N., Amin, R.U., Ramzan, S., Mirza, F., Sarwar, N.: An intelligent data analysis for recommendation systems using machine learning. Hindawi, Sci. Programming 2019, 1–20, Article ID 5941096 (2019) 21. Bouras, C., Tsogkas, V.: Improving news articles recommendations via user clustering. Int. J. Mach. Learn. Cybernet. 8(1), 223–237 (2017)

Intelligent Sentiments Information Systems Using Fuzzy Logic
Roop Ranjan and A. K. Daniel

Abstract Sentiment analysis plays an important role in the present era. Nowadays, the Internet has become a source of text-based information on social media such as Twitter and Facebook, and it is growing day by day, covering all fields of knowledge. Today, sentiment analysis has received considerable attention toward opinion and subjectivity in human life. Sentiment analysis (SA) is an interesting and emerging research field of text mining, also called opinion mining. More and more people are using these social media platforms for expressing their feelings, emotions, and opinions. This research paper proposes a model using fuzzy logic to express the feelings of users based on the services offered by Indian Railways. The analysis shows that there is 25–35% positivity toward the service(s).

1 Introduction In today's era, social networking sites have become a medium for sharing emotions and opinions. Sentiment analysis (SA), or opinion mining (OM), is the mathematical study of the emotions and opinions of human beings [1]. These data are helpful for predicting the results of political activities and sports activities, new government planning, research, and brand promotion. Opinions are generally depicted in a subjective manner that describes people's sentiments and expressions about a subject or topic. Opinions may be positive or negative to a certain degree; this degree of the attribute is called polarity. For example, consider the following opinion of a customer.


“The battery life time of the cell phone is too short, but the camera is good.” Here, the customer's opinion expresses negativity about the battery-life feature of the cell phone, while the camera attribute carries a positive expression. There are many algorithms and methods available for the implementation of sentiment analysis; they are classified as rule-based systems, automatic systems, and hybrid systems. In this paper, a model is proposed using machine learning and fuzzy logic to express the feelings of users toward the services offered by Indian Railways. The organization of the paper is as follows: related work for measuring user sentiments in Sect. 2, the proposed model and design in Sect. 3, analysis and results in Sect. 4, and the conclusion.

2 Related Work Various tools and methodologies have been used for measuring the sentiments of users on public forums and analyzing performance on a huge volume of tweets related to public systems. Méndez et al. [1] proposed a system that efficiently discovers customers' problems in a timely manner and identifies trends and problems related to bus-operating firms in Santiago, Chile. Jo et al. [2] tackled the problem of discovering the aspects of evaluated reviews and how sentiments are expressed for different aspects; they proposed a probabilistic generative model in which each word in a given sentence belongs to only one aspect. Srivastava et al. [3] proposed a model for rainfall prediction using fuzzy logic with temperature and humidity as parameters; soil is used as a constant parameter, while the other parameters vary with the weather conditions. Bouazizi [4] suggested a pattern-based technique for detecting sarcasm in tweets, proposing four sets of features that cover the different kinds of sarcastic remarks as per their definition. Lopes Rosa et al. [5] presented a knowledge-based recommendation system (KBRS) for emotion-based health tracking to detect users with potential psychological disruption, specifically distress, depression, and mental pressure; the system is based on ontologies and sentiment analysis. Daniel et al. [6] presented a prediction model for maize production using a fuzzy logic technique in the northeast region and showed how the presented model reflects significant growth in maize production in that region. Krening et al. [7] presented an explanation-based learning (EBL) system that learns from training data together with advice; they developed agent-based software that learned how to play a game, and the performance with object-focused suggestions was found to be better than when no suggestion was given. Hai et al. [8] proposed extracting semantic aspects and aspect-level sentiments from reviews for prediction.


Kumar et al. [9] proposed a system to analyze news-channel data on social media to find the sentiment of people. Saad and Yang [10] presented an approach for Twitter sentiment analysis by building a balancing and scoring model and afterward classifying tweets into several ordinal classes using machine learning classifiers. In this paper, we analyze the tweets of users over a period of one month, i.e., from October 1, 2019, to October 31, 2019. This period was selected because, during it, the rush is at its peak compared with the rest of the year in India. We developed a model using machine learning and fuzzy logic, and we also used the RESTful-API-based tool Tweepy for processing the tweets.

3 Proposed Model and Design Twitter is used as the data source for retrieving the dataset. A Twitter developer account is created, and the user is authenticated by Twitter through authentication keys. The proposed model retrieves the data from Twitter using Tweepy and performs the analysis. The model uses fuzzy-logic-based density levels (low, medium, and high) for performing the analysis. The proposed model is divided into the segments shown in Fig. 1. The proposed algorithm uses three steps for sentiment information retrieval, as follows:
1. The first step is authentication, where the model is authenticated by Twitter using the Tweepy API by providing valid credentials for accessing the dataset.
2. In the second step, tweet extraction, the dataset is collected from Twitter using the Twitter API. Here, RESTful APIs are used for fetching data from various sources in a distributed environment.
3. In the final step, the fetched tweets, which are unprocessed and full of spelling mistakes, noise, and various abbreviations, are processed using the Natural Language Toolkit (NLTK), a Python-based API for processing and cleaning data. The final analysis is then performed on the cleaned and processed dataset.

Fig. 1 Proposed model for information retrieval sentiment analysis


Table 1 Proposed algorithm
1. Input: Tweets
2. Output: Sentiment orientation
3. Main procedure()
4. begin
5.   User_authentication()
6.   Tweet_extraction()
7.   Analyze_sentiments()
8. end
9. User_authentication()
10. begin
11.   Reading consumer key
12.   Authenticating user
13.   If authentication successful
14.   Then
15.     Read tweets
16.   Else
17.     Authentication error
18. end
19. Tweet_extraction()
20. begin
21.   Read tweets
22.   Clean tweets
23.   Feature extraction
24.   Pass token to sentiment classifier
25. end
26. Analyze_Sentiment()
27. begin
28.   Fuzzy logic model
29.   Determine sentiment polarity
30. end

Proposed Algorithm: The proposed algorithm for sentiment information retrieval from the users is given in Table 1.
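A minimal sketch of the three steps of the proposed algorithm is given below. It assumes Tweepy's OAuth1 handler and the v4-style `search_tweets` method for steps 1 and 2, and a trivial keyword rule stands in for the paper's fuzzy-logic model in step 3; credentials and the real model are omitted.

```python
import tweepy  # Twitter API client referred to in the paper

def authenticate(consumer_key, consumer_secret, access_token, access_secret):
    """Step 1: authenticate against Twitter (Tweepy OAuth1 handler)."""
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    return tweepy.API(auth)

def extract_tweets(api, keyword, count=100):
    """Step 2: fetch raw tweets for a keyword (method name assumes Tweepy >= 4.0)."""
    return [status.text for status in api.search_tweets(q=keyword, count=count)]

def fuzzy_polarity(tweet: str) -> str:
    """Step 3 placeholder: the paper feeds cleaned tweets to its fuzzy-logic model;
    here a trivial keyword rule stands in for that model."""
    positive, negative = {"good", "happy", "thanks"}, {"delay", "dirty", "worst"}
    words = set(tweet.lower().split())
    if words & negative:
        return "NEGATIVE"
    if words & positive:
        return "POSITIVE"
    return "NEUTRAL"

# Usage (credentials omitted):
# api = authenticate(CK, CS, AT, AS)
# for t in extract_tweets(api, "Indian Railways"):
#     print(fuzzy_polarity(t))
```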

4 Analysis and Results The proposed model retrieves the data through a Python language tool. The model classifies the data and, using the polarity classification model, produces the sentiments of the users for a given keyword. Table 2 uses three membership functions to show the various degrees of the input functions. The probable output values are given in Table 3 after removing the outliers from the dataset. The fuzzy relationships are defined in Table 4 for selecting the optimal density points, and the following rule sets are considered.

Table 2 Input function
Input | Membership
Morning time (MT) | Very early (VE) (5 am–6 am); Early (E) (6 am–7 am); Late (LM) (7 am–9 am)
Evening time (ET) | Early (EE) (6 pm–7 pm); Evening (EV) (7 pm–8 pm); Late (LE) (8 pm–10 pm)

Table 3 Output function
Input | Membership
Density | 0, 0.1, 0.25, 0.40, 0.50, 0.75, 0.9, 1.0, 0

Table 4 Fuzzy relationships between morning time, evening time, and density
Morning time | Evening time | Density
Very early (VE) | Early (EE) | 0.1
Very early (VE) | Evening (EV) | 0.25
Very early (VE) | Late (LE) | 0
Early (E) | Early (EE) | 0.40
Early (E) | Evening (EV) | 0.50
Early (E) | Late (LE) | 0.75
Late (LM) | Early (EE) | 0
Late (LM) | Evening (EV) | 0.90
Late (LM) | Late (LE) | 1.0

Table 5 Rule sets

Rule 1 If morning time is very early (VE) and evening time is early (EE), then density is 0.1 …………………… Rule 9 If morning time is late (LM) and evening time is evening (EV), then density is 1.0
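Because every rule maps a (morning time, evening time) membership pair to a single density, the whole rule set reduces to a table lookup. The small Python sketch below copies the mapping from Table 4 and is illustrative only.

```python
# Density lookup implementing the rule set (pairs and densities copied from Table 4).
DENSITY = {
    ("VE", "EE"): 0.1,  ("VE", "EV"): 0.25, ("VE", "LE"): 0.0,
    ("E",  "EE"): 0.40, ("E",  "EV"): 0.50, ("E",  "LE"): 0.75,
    ("LM", "EE"): 0.0,  ("LM", "EV"): 0.90, ("LM", "LE"): 1.0,
}

def density(morning: str, evening: str) -> float:
    """Return the fuzzy density for a (morning time, evening time) membership pair."""
    return DENSITY[(morning, evening)]

# The two zero-density pairs correspond to the rules discarded in the analysis below.
print(density("E", "EV"))   # -> 0.5
```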

The rule sets, defined as aggregations of the fuzzy rules to generate a fuzzy output, are given in Table 5, and the rules are designed as shown above. Based on the fuzzy rule sets, the outputs of two rules, Rule 3 and Rule 7, are discarded because no density points were found for them. The proposed fuzzy rule sets therefore provide seven density points, i.e., 0.1, 0.25, 0.40, 0.50, 0.75, 0.90, and 1.0. Using these density points, the observations for the polarity of the datasets are given in Tables 6, 7, 8, 9, and 10. Extensive experiments are performed on the given dataset, and the following graphs represent the positivity and negativity of the tweets.

Table 6 Observation of dataset of size 1500 users (Set 1 and Set 2: 8-Oct-2019)
Density | Set 1 (morning) positivity | Set 1 (morning) negativity | Set 2 (evening) positivity | Set 2 (evening) negativity
0.1 | 32 | 28 | 24 | 27
0.25 | 71 | 138 | 65 | 112
0.4 | 28 | 42 | 21 | 38
0.5 | 63 | 166 | 23 | 115
0.75 | 35 | 30 | 20 | 26
0.9 | 28 | 12 | 16 | 2
1 | 6 | 8 | 4 | 11


Table 7 Observation of dataset of size 2000 users (Set 1 and Set 2: 10-Oct-2019)
Density | Set 1 (morning) positivity | Set 1 (morning) negativity | Set 2 (evening) positivity | Set 2 (evening) negativity
0.1 | 200 | 94 | 165 | 101
0.25 | 140 | 72 | 85 | 56
0.4 | 92 | 33 | 97 | 30
0.5 | 121 | 47 | 86 | 36
0.75 | 16 | 11 | 32 | 12
0.9 | 51 | 17 | 72 | 22
1 | 20 | 42 | 36 | 45

Table 8 Observation of dataset of size 3000 users (Set 1 and Set 2: 12-Oct-2019)
Density | Set 1 (morning) positivity | Set 1 (morning) negativity | Set 2 (evening) positivity | Set 2 (evening) negativity
0.1 | 263 | 144 | 269 | 134
0.25 | 295 | 82 | 301 | 75
0.4 | 177 | 52 | 182 | 48
0.5 | 129 | 91 | 132 | 84
0.75 | 43 | 9 | 44 | 9
0.9 | 5 | 21 | 28 | 20
1 | 11 | 40 | 18 | 37

Table 9 Observation of dataset of size 3500 users (Set 1 and Set 2: 14-Oct-2019)
Density | Set 1 (morning) positivity | Set 1 (morning) negativity | Set 2 (evening) positivity | Set 2 (evening) negativity
0.1 | 135 | 75 | 53 | 81
0.25 | 185 | 120 | 164 | 93
0.4 | 85 | 45 | 78 | 65
0.5 | 142 | 45 | 142 | 46
0.75 | 61 | 17 | 65 | 14
0.9 | 3 | 7 | 20 | 32
1 | 22 | 35 | 28 | 48

The sentiments of the tweets are represented in Figs. 2, 3, 4, 5, and 6 for the different population sizes. These graphs show the trend of the sentiments of the tweets. Based on these observations, the model defines a function Φ for the positive orientation of users, i.e., the percentage of positivity, in the tweet sets about Indian Railways, as given below:


Table 10 Observation of dataset of size 4000 users (Set 1 and Set 2: 16-Oct-2019)
Density | Set 1 (morning) positivity | Set 1 (morning) negativity | Set 2 (evening) positivity | Set 2 (evening) negativity
0.1 | 560 | 420 | 386 | 240
0.25 | 300 | 220 | 252 | 148
0.4 | 230 | 152 | 238 | 73
0.5 | 264 | 180 | 240 | 103
0.75 | 48 | 45 | 89 | 41
0.9 | 130 | 80 | 140 | 56
1 | 61 | 116 | 52 | 85

Fig. 2 Observation on dataset of 1500 size

Fig. 3 Observation on dataset of 2000 size


Fig. 4 Observation on dataset of 3000 size

Fig. 5 Observation on dataset of 3500 size

Fig. 6 Observation on dataset of 4000 size


Table 11 Positive orientation of users
Population | Φ1 (%) | Φ2 (%)
1500 | 35 | 29
2000 | 33 | 30
3000 | 35 | 30
3500 | 29 | 25
4000 | 31 | 27

Φi = (|(+)ve − (−)ve| / ((+)ve + (−)ve)) × 100

where Φ1 denotes the observation for the first set and Φ2 the observation for the second set on the different populations, dates, and times, as given in Table 11. The performance of the dataset indicates that the positive orientation of the users lies between 25 and 35%. The average positive orientation of the users in the morning hours was observed as 32.2%, and in the evening hours as 28.2%. The overall average positive orientation of users toward Indian Railways was found to be 30.4%.

5 Conclusion The performance of the system shows that people in the early hours are more positive than in the late hours. This may depend on various factors such as workload stress, travelling, interaction with other persons, and other factors. Overall, the observed orientation of the users is positive, and the model's output reflects the positivity of the users for the given set of data. However, our research was limited to analyzing the positive orientation of the users; negative and neutral orientations can also be analyzed in the future. The proposed model provides the sentiments toward the services offered by Indian Railways, and the analysis shows that there is 25–35% positivity toward the service(s). Acknowledgements The authors would like to thank Indian Railways, whose services cater to the users who have shown their positive sentiments toward Indian Railways.

References 1. Méndez, J.T., Lobel, H., Parra, D., Herrera, J.C.: Using Twitter to infer user satisfaction with public transport: the case of Santiago, Chile, pp. 2169–3536(c). IEEE Access (2018)


2. Jo, Y., Oh, A., Parra, D., Herrera, J.C.: Aspect and sentiment unification model for online review analysis, WSDM’11. Hong Kong, China. ACM 978-1-4503-0493-1/11/02 (9–12, February 2011) 3. Srivastava, R., Sharma, P., Daniel, A.K.: Fuzzy logic based prediction model for rainfall over agriculture in Northeast region 9(2), 191–195, ISSN: 0976–5697(April, 2018) 4. Bouazizi, M., Otsuki, T.: A pattern-based approach for sarcasm detection on Twitter. IEEE Access 4, 5477–5488 (2016) 5. Lopes Rosa, R., M. Schwartz, G., Vicente Ruggiero, W., Zegarra Rodriguez, D.: A knowledgebased recommendation system that includes sentiment analysis and deep learning. IEEE Trans. Ind. Inf. 2012–2019 (2018) 6. Daniel, A.K., Sharma, P., Srivastava, R.: Fuzzy based prediction model using rainfall parameter for north East India maize production. In: 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON) (2018) 7. Krening, S., Harrison, B., Feigh, K.M., Isbell, C.L., Riedl, M., Thomaz, A.: Learning from explanations using sentiment and advice in RL. IEEE Trans. Cognitive Dev. Syst. 9(1), 44–55 (2017) 8. Hai, Z., Cong, G., Chang, K., Cheng, P., Miao, C.: Analyzing sentiments in one go: a supervised joint topic modeling approach. IEEE Trans. Knowl. Data Eng. 29(6), 1172–1185 (2017) 9. Kumar, N., Nagalla, R., Marwah, T., Singh, M.: Sentiment dynamics in social media news channels. Online Soc. Netw. Media 8, 42–54 (2018) 10. Saad, S.E., Yang, J.: Twitter sentiment analysis based on ordinal regression. IEEE Access 7, 163677–163685 (2019)

Possibility Study of PV-STATCOM with CHB Multilevel Inverter: A Review
K. M. Nathgosavi and P. M. Joshi

Abstract Two-level and three-level inverters are used in a conventional grid-connected solar photovoltaic (PV) system. These inverters are not suitable for high-power applications. One of their adverse effects is that a large filter is required on the AC side to obtain good power quality and meet grid codes. Due to this, the system becomes inefficient, unreliable, bulky, and costly. As the conventional grid-connected solar PV system is idle in the absence of the sun, its utilization factor is very low. With two-level and three-level inverters, the grid-connected PV system thus has a number of disadvantages: (i) it is not appropriate for high power ratings, (ii) the filter size is large, (iii) it is not able to extract maximum power, and (iv) the utilization factor is low. In this paper, a review is carried out keeping in mind the reduction of all these drawbacks. How a multilevel inverter can be used more effectively, with its advantages, is discussed in detail. With the help of the STATCOM controller, the utilization of the system can be enhanced, which is elaborated in this paper. The use of the PV inverter as a STATCOM, called a PV-STATCOM, is also highlighted along with its advantages.

1 Introduction As is well known, the grid-integrated photovoltaic system consists of a PV array, inverter, filter, isolation transformer, circuit breakers, and the grid. The arrangement of all these components in sequence is shown in Fig. 1. The transformer performs two functions: first, to step up the inverter output voltage to match the grid voltage, and second, to maintain galvanic isolation between the grid and the inverter.


In a conventional PV system, two-level and three-level inverters are used, controlled with the help of a digital controller. Maximum power point tracking (MPPT) is carried out from the voltage and current of the PV array. The main task is to synchronize the PV array output with the grid, so closed-loop control is used, in which the grid voltage angle is taken from a phase-locked loop (PLL). With the help of the abc-to-dq0 transformation, the direct- and quadrature-axis current references are generated. The direct current reference is generated from the MPPT output, and the quadrature current reference is generated from the reactive power reference. In a conventional system, negligible importance is given to the quadrature current reference. Inverter power control is done with a PI controller, where the reference and actual values of the direct and quadrature currents are compared. The PI controller's output is used as the input to a gate pulse generator to produce the gate pulses for the inverter.
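As a rough illustration of the control loop just described, the following Python sketch applies a Park (abc-to-dq) transformation to measured phase currents and regulates the d- and q-axis currents with PI controllers. The transform convention, the gains, and the reference values (Id* from the MPPT, Iq* from the reactive power command) are illustrative assumptions and are not taken from any specific controller in the reviewed papers.

```python
import math

def abc_to_dq(ia, ib, ic, theta):
    """Park transform (amplitude-invariant convention) from phase currents to d-q quantities."""
    d = (2 / 3) * (ia * math.cos(theta)
                   + ib * math.cos(theta - 2 * math.pi / 3)
                   + ic * math.cos(theta + 2 * math.pi / 3))
    q = -(2 / 3) * (ia * math.sin(theta)
                    + ib * math.sin(theta - 2 * math.pi / 3)
                    + ic * math.sin(theta + 2 * math.pi / 3))
    return d, q

class PI:
    """Discrete PI regulator used for both the d-axis and q-axis current loops."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Outer references: Id* from the MPPT (real power), Iq* from the reactive-power command.
pi_d, pi_q = PI(0.5, 20.0, 1e-4), PI(0.5, 20.0, 1e-4)
# Illustrative balanced phase currents; theta would come from the PLL in a real controller.
id_meas, iq_meas = abc_to_dq(10.0, -5.0, -5.0, 0.0)
vd_cmd = pi_d.step(12.0 - id_meas)   # Id* = 12 A (assumed MPPT output)
vq_cmd = pi_q.step(0.0 - iq_meas)    # Iq* = 0 A (unity power factor)
print(round(id_meas, 2), round(iq_meas, 2), round(vd_cmd, 3), round(vq_cmd, 3))
```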

2 PV Inverter as a STATCOM In this section, a review of PV inverters is carried out and the findings are presented. In [1], various inverter topologies are reviewed for grid-connected PV applications. Depending on the commutation, two kinds of inverter configuration, self-commutated and line-commutated, are explained in detail. In addition, the centralized inverter, string inverter, and multistring inverter are presented, and the application of multilevel inverters in PV systems is discussed briefly. A review of micro-inverters is presented in [2]. That paper focuses on the requirements and standards of grid-connected PV systems which must be fulfilled, and reviews of string inverters, central inverters, and micro-inverters are given in detail. It is a challenging task to improve the utilization of a conventional PV inverter, as it supplies active power to the grid only in the daytime and not at night, due to the unavailability of the sun. It is possible to mitigate this problem if the PV inverter operates as a STATCOM. When the PV inverter operates as a STATCOM, it regulates real and reactive power flow; to do this, the quadrature-axis reference is set as per the demand for reactive power. The most suitable utilization of the PV system, using the PV inverter as a STATCOM, is presented in [3]. In that paper, the PV inverter is utilized as a STATCOM for 24 h, i.e., during the day as well as at night. During the day, when solar radiation is present at low intensity, it acts as a partial STATCOM, and during the night it acts as a STATCOM with its full capacity. This utilization concept for the PV inverter is validated by simulation work. Varma et al. [4] discussed how the PV inverter can operate as a STATCOM during the day. The reactive power compensation as a STATCOM depends on the inverter rating and the maximum power point at that instant; this means that the inverter can provide reactive power support only with the capacity left over after the real power is delivered. DC link voltage control and reactive power compensation control for a 24-h operated grid-connected PV system are presented in detail with simulations in [5, 6]. In [7], the application of the PV inverter for harmonics mitigation and reactive power compensation is presented, where the controller for this inverter is designed so that it can act as an active filter as well as a STATCOM.


Fig. 1 Grid-connected photovoltaic system

It is very essential to consider various power quality problems such as voltage flicker, voltage transients, and harmonics while integrating a photovoltaic generation system into the grid. Han et al. [8] present a comparison between different power quality issues in the PV system with a conventional PV inverter and with a PV-STATCOM. When a fault occurs, it is very difficult to maintain voltage stability for loads which are very sensitive. In [9], it is described in detail how the stability improves as the PV inverter starts acting as a PV-STATCOM. Figure 2 shows a clear representation of the PCC voltage in both cases, i.e., for the conventional PV inverter and when the PV inverter acts as a STATCOM. It is clear that the PCC voltage is nearly stable when the PV inverter acts as a STATCOM. In [10], the author presents testing of a PV-STATCOM. In that paper, three tests are performed to check the proposed system's performance. First, it is checked in RSCAD software to analyze the controller. In the next step, hardware-in-loop (HIL) simulation is done to validate the control algorithm. In the third stage, a 10 kW laboratory model is tested. From this testing, the conclusion is made that the system can be implemented to control voltage and improve the power factor.


Fig. 2 Voltage at PCC a PV inverter b PV-STATCOM

Varma et al. [11] present the steady-state and dynamic response of the PV-STATCOM controller in hardware-in-loop (HIL) simulation. A PV-STATCOM without a DC-to-DC converter is presented in [12]. As there is no DC-DC converter, the system becomes economical and its size is reduced; the main task is to control the DC link voltage, which is done by the STATCOM controller. In [13], the author discusses different technical challenges and power quality issues while integrating distributed generators into the grid; in addition, protection and stability are also highlighted. Kow et al. [14] highlighted the adverse effects of a PV system on the grid and presented a detailed review of how to mitigate these effects and improve power quality with the help of different compensation techniques and voltage regulator devices. In [15], the author presented the islanding effect in detail and carried out a detailed review of various anti-islanding techniques. Sliding mode control based on an extended state observer, used for different converters in grid integration applications, is presented in [16]. A voltage regulation controller for the DC link capacitor with an external control loop is highlighted; the external control loop gives the current reference to the inner current control loop to maintain the desired power factor. An extended state observer (ESO) and super-twisting algorithm (STA) are briefly discussed in that paper. To mitigate interruptions and unavailability in the outer voltage control, an STA + ESO-based controller is used instead of the conventional PI controller. A few PV inverters that are present in the market are listed in Table 1. In [17], the authors present a 9.4 MW solar power system that can perform a number of operations such as active as well as reactive power control and power factor correction; to meet the grid standards, the authors present an algorithm. Zeng et al. [18] present a detailed review of different power quality issues in a micro-grid and the importance of inverters performing various operations along with active power flow. An inverter which can perform various operations is called a multifunction inverter.


Table 1 High-power-rated solar PV inverters available in the market
Sr. No. | Manufacturer | Model | Rating of the equipment | Key features
1 | Sungrow Power Supply Co., Ltd | SG 2500 | 2800 kVA, 1000 V DC, and 315 V AC | Best suited for outdoor applications
2 | HUAWEI | SUN 8000–500 KTL | 500/600 kVA, 1000 V DC, and 320 V AC | Reactive power control feature is available
3 | Hitachi–Hirel | HIVERTER—NP201i Series | 1000 kW, 1000 V DC, and 300 V AC | Reactive power control feature is available
4 | Hitachi–Hirel | HIVERTER—NP201i Series | 1250 kW, 1000 V DC, and 350 V AC | Reactive power control feature is available
5 | GE | PSC—1000 MV—L—QC | 1000 kW, 1500 V DC, and 550 V AC | Reactive power control feature is available
6 | Schneider | Conext Core XC 733–NA | 733 kVA, 1000 V DC, and 407 V AC | Reactive power control feature is available
7 | Ingeteam | PowerMaxter 840 × 360 indoor | 917 kW, 920 V DC, and 360 V AC | Reactive power control feature is available
8 | TMEIC | PVL-L0500E | 600 kW, 100 V DC, and 300 V AC | Reactive power control feature is available

The author also reviewed recent multifunction inverter topologies with their control techniques, even for selecting a proper combination of inverter and control topology for a particular application; the combinations are presented in a comparative way. Various configurations of inverters for PV integration to the grid, along with three control topologies, are presented in [19]; the author briefly explains abc, dq0, and alpha-beta controls. Both of the above works [18, 19] have not considered controllers for inverters of more than three levels. As the demand for solar power inverter capacity increases, preference is given to the multilevel inverter instead of the two-level inverter. A two-level inverter has some drawbacks compared with a multilevel inverter, such as higher total harmonic distortion (THD), the requirement of large AC filters, and inefficiency in maximum power point tracking. In [20], the author presents a comparative study between two-level and multilevel inverters; the conclusion is that the multilevel inverter is suitable for high power ratings with reduced filter size. Along with this, the author also conducts a detailed review of multilevel inverters with a single DC source and with multiple DC sources. A grid-integrated solar PV system is reviewed in [21].


The operation of the solar cell and different MPPT techniques are explained in brief, and a comparison is presented between the MPPT technologies. The DC-DC converter plays an important role in a PV-integrated circuit, and various DC-DC converters are discussed in detail. Next to the DC-DC converter is the inverter; different inverter structures such as centralized, string, multistring, and modular are discussed. Also, the single-phase half H-bridge, full H-bridge, flying capacitor, neutral-point-clamped, and other topologies are compared and explained in brief. In [22], an overview of small-scale, medium-scale, and large-scale grid-connected and off-grid systems is presented. Newly emerged PV converter topologies for grid integration are also discussed in detail, including the two-level voltage source inverter, the neutral-point-clamped inverter, and symmetric and asymmetric multilevel inverters. The author also presents different PV system structures, such as string, multistring, and modular, having various DC-DC converters. In that paper, technical and legal requirements are also presented, including efficiency, leakage current, installation cost, isolation, anti-islanding, and various standards. Romero-Cadaval et al. [23] present in detail various grid-integrated PV system components with their functions, and different structures are also presented in detail. Types of grid-integrated PV systems with and without a transformer, and with and without a DC-DC converter, are presented in detail in that paper. A comparative study of various MPPT techniques, such as perturb and observe (P and O), incremental conductance (IC), current sweep, fractional VOC, and fractional ISC, is presented in detail. Different grid synchronization techniques for single-phase as well as three-phase systems are compared by considering parameters such as immunity to distortion, frequency variation, unbalance, dynamic response, price, and complexity. On the basis of reliability, power quality, and the possibility of integration and standardization, various anti-islanding techniques are reviewed. A comparison between the cascade H-bridge (CHB) and the conventional two-level inverter for PV applications is presented in [24]. The author presents the design of the CHB and makes a comparison based on efficiency and cost; component-level determination and loss calculation are carried out and compared with the conventional two-level PV inverter. In [26], a comparison between the modular multilevel converter (MMLC) and the cascade H-bridge is presented for the STATCOM application, and the controller design for reactive power control operation is discussed in brief.

3 CHB Inverter Application for PV-STATCOM For high rating applications, instead of conventional two-level inverter multilevel inverter (MLI) is preferred as it is having a number of advantages such as better THD, reduced dv/dt, and reduced device losses. In [25], author presented various topologies of multilevel inverter based on the requirement of the DC source. Basic classification of MLI topologies is shown in Fig. 3. The author also discussed various modulation and control techniques of MLI. Out of different MLI types, CHB MLI is explained in brief.

Possibility Study of PV-STATCOM with CHB Multilevel …

585

Fig. 3 Types of MLI based on DC sources

The author highlighted the operation of the MLI as a multifunction inverter. The three main types of MLI topologies are CHB, diode-clamped, and flying capacitor; the control and application of these three types are presented in [27, 28]. For high-power applications, out of the three topologies, the CHB and diode-clamped inverter topologies are very popular and most used. Mittal et al. [29] present different multilevel configurations, their controls, modulation techniques, and applications in detail. A single-phase grid-integrated solar power conditioning system without an isolation transformer is presented in [30]; the author discusses in detail the CHB, flying capacitor, diode-clamped half-bridge, and active neutral-point-clamped topologies. Boonmee et al. [31] present a comparative study of CHB and neutral-point-clamped multilevel inverters for the grid-connected PV inverter application. From this comparison, it is clear that the CHB inverter is superior for extracting solar power compared with the neutral-point-clamped inverter. The comparison is presented in Table 2. In [32], the author carried out a review of the CHB MLI and its control topologies. It is found that the CHB inverter is most appropriate for grid-connected PV-STATCOM applications. The various advantages of the CHB inverter are: a small filter is required, as it has low output THD; a high voltage level can be achieved easily without a transformer by using a number of modules; the voltage stress, cost, and size of the components in an inverter module are low; and, as this configuration consists of many modules, it is easy to replace a faulty module. In [33], the author presents a comparative study of CHB multilevel inverters for different numbers of output levels (5, 7, 9, and 17 levels). From that comparative study, the author concludes that as the number of output levels increases the THD decreases, but the adverse effect is that the complexity of the circuit and controller increases. In [34], the author discusses in detail symmetrical and asymmetrical CHB multilevel inverters, and also presents phase-shift and level-shift PWM techniques and the hybrid CHB multilevel inverter in detail. The CHB multilevel inverter is most suitable for grid-connected PV inverter as well as STATCOM operation, as the requirement of individual DC links in the CHB configuration is met without any additional cost. The operation of the PV-STATCOM controller is based on the power flow between the grid and the PV system. The power flow depends on the phase angle and the voltage amplitude.


Table 2 NPC and CHB multilevel inverters comparison

Neutral-point-clamped inverter | Cascade H-bridge multilevel inverter
6 × (N − 1) IGBTs required | 6 × (N − 1) IGBTs required
Either odd or even levels generated | Only odd levels generated
N − 1 capacitors required | 1.5 × (N − 1) capacitors required
Only one DC source is required | A separate DC source is required for each H-bridge
Capacitor voltage balancing is required | Capacitor voltage balancing is not required
Special care is needed in module design as the number of levels rises | Module design is easy, as there is a separate H-bridge for every source
For high-voltage applications the required DC voltage is higher, so high-capacity components are needed, which increases their cost and size | As separate H-bridges are used, the input-side DC voltage is distributed, so low-rated components suffice and component size and cost are reduced
If any circuit component fails, the circuit stops working | If a component fails, the output is not affected much; continuity is maintained as only the faulty bridge stops working

and voltage amplitude. In a grid-connected PV system, one AC source is the grid and the other is the PV inverter output. In such a grid-connected system, real power is controlled by controlling the voltage phase angle, and reactive power is controlled by controlling the voltage amplitude.
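As a rough numerical illustration of this control principle (not taken from the paper), the classic two-source power-flow relation through a predominantly inductive coupling reactance can be sketched as follows; the per-unit voltages, angle, and reactance are assumed example values.

```python
import math

def power_flow(v_grid, v_inv, delta_deg, x_line):
    """Active and reactive power injected by the inverter into the grid
    through reactance x_line: P depends mainly on the phase angle delta,
    Q mainly on the difference in voltage amplitudes."""
    delta = math.radians(delta_deg)
    p = v_inv * v_grid * math.sin(delta) / x_line
    q = (v_inv ** 2 - v_inv * v_grid * math.cos(delta)) / x_line
    return p, q

# Raising the angle mainly raises P; raising |V_inv| mainly raises Q (per-unit values).
print(power_flow(v_grid=1.00, v_inv=1.00, delta_deg=10, x_line=0.1))
print(power_flow(v_grid=1.00, v_inv=1.05, delta_deg=0,  x_line=0.1))
```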

4 Challenges to Design PV-STATCOM by Using CHB MLI

• More voltage and current sensing devices are required, which increases the cost of the system; it is therefore essential to develop an accurate voltage detection method to reduce cost.
• In a large system the complexity is higher, so there is less coordination between the synchronizing reference signal and the carrier signal.
• DC voltage unbalance is greater in PV applications, which results in malfunctioning.
• At a low modulation index, the number of generated levels is also low, so it is essential to find an optimum modulation index.
• In the CHB, all DC links are isolated from each other by the modules, which is not acceptable in a system where the PV array has negative grounding. In that case, a DC-DC converter with a high-frequency transformer is required, which increases the cost, size, and intricacy of the system.


5 Conclusion

The conventional two-level PV inverter has many disadvantages: (a) it is not appropriate for high-power applications, (b) a large filter is required, (c) it is inefficient in collecting maximum power, and (d) its utilization factor is low. To overcome these disadvantages, the multilevel inverter is preferred over the conventional two-level inverter. This paper presents the use of a cascade H-bridge multilevel inverter as a STATCOM as well as a PV inverter; together it becomes a PV-STATCOM. It is observed that, just as a two-level inverter can operate as a PV-STATCOM and perform multiple functions such as active and reactive power control, power factor correction, and active filtering, the same is possible with the cascade H-bridge inverter, with some additional advantages. A review of multilevel inverter topologies is also carried out, and from that comparative review it is concluded that the CHB MLI is best suited for PV-STATCOM. Various already validated studies are also reviewed in this paper. Various control topologies are also discussed for MLI as well as STATCOM operation, from which it is concluded that active power flow control depends on the phase angle and reactive power flow depends on the voltage amplitude.

References

1. Jana, J., Saha, H., Bhattacharya, K.D.: A review of inverter topologies for single-phase grid-connected photovoltaic systems. Renew. Sustain. Energy Rev. (2016)
2. Hasan, R., Mekhilef, S., Seyedmahmoudian, M., Horan, B.: Grid-connected isolated PV micro inverters. Renew. Sustain. Energy Rev. 67, 1065–1080 (2017)
3. Varma, R.K., Khadkikar, V., Seethapathy, R.: Nighttime application of PV solar farm as STATCOM to regulate grid voltage. IEEE Trans. Energy Convers. 24(4), 983–985 (2009)
4. Varma, R.K., Rahman, S.A., Vanderheide, T.: New control of PV solar farm as STATCOM (PV-STATCOM) for increasing grid power transmission limits during night and day. IEEE Trans. Power Deliv. 30(2), 755–763 (2015)
5. Varma, R.K., Das, B., Axente, I., Vanderheide, T.: Optimal 24 hr utilization of a PV solar system as STATCOM (PV-STATCOM) in a distribution network. In: Proceedings of IEEE Conference Publications, pp. 1–8 (2011)
6. Varma, R.K., Rangarajan, S.S., Axente, I., Sharma, V.: Novel application of a PV solar plant as STATCOM during night and day in a distribution utility network. In: Proceedings of IEEE Conference Publications, pp. 1–8 (2011)
7. Seo, H.R., Kim, G.H., Jang, S.J., Kim, S.Y., Park, S., Park, M., Yu, I.K.: Harmonics and reactive power compensation method by grid-connected photovoltaic generation system. In: Proceedings of IEEE Conference Publications, pp. 1–5 (2009)
8. Han, J., Khushalani-Solanki, S., Solanki, J., Schoene, J.: Study of unified control of STATCOM to resolve the power quality issues of a grid-connected three phase PV system. In: Proceedings of IEEE Conference Publications, pp. 1–7 (2012)
9. Varma, R.K., Rahman, S.A., Sharma, V., Vanderheide, T.: Novel control of a PV solar system as STATCOM (PV-STATCOM) for preventing instability of induction motor load. In: Proceedings of IEEE Conference Publications, pp. 1–5 (2012)
10. Varma, R.K., Siavashi, E.M., Das, B., Sharma, V.: Novel application of a PV solar plant as STATCOM (PV-STATCOM) during night and day in a distribution utility network: Part 2. In: Proceedings of IEEE Conference Publications, pp. 1–8 (2012)


11. Varma, R.K., Siavashi, E.M., Das, B., Sharma, V.: Real-time digital simulation of a PV solar system as STATCOM (PV-STATCOM) for voltage regulation and power factor correction. In: Proceedings of IEEE Conference Publications, pp. 157–163 (2011)
12. Toodeji, H., Farokhnia, N., Riahy, G.H.: Integration of PV module and STATCOM to extract maximum power from PV. In: Proceedings of IEEE Conference Publications, pp. 1–6 (2009)
13. Mahmud, N., Zahedi, A.: Review of control strategies for voltage regulation of the smart distribution network with high penetration of renewable distributed generation. Renew. Sustain. Energy Rev. 64, 582–595 (2016)
14. Kow, K.W., Wong, Y.W., Rajkumar, R.K., Rajkumar, R.K.: A review on performance of artificial intelligence and conventional method in mitigating PV grid-tied related power quality events. Renew. Sustain. Energy Rev. 56, 334–346 (2016)
15. Karimi, M., Mokhlis, H., Naidu, K., Uddin, S., Bakar, A.H.: Photovoltaic penetration issues and impacts in distribution network–a review. Renew. Sustain. Energy Rev. 53, 594–605 (2016)
16. Liu, J., Vazquez, S., Wu, L., Marquez, A., Gao, H., Franquelo, L.G.: Extended state observer-based sliding-mode control for three-phase power converters. IEEE Trans. Ind. Electron. 64(1), 22–31 (2017)
17. Bullich-Massagué, E., Ferrer-San-José, R., Aragüés-Peñalba, M., Serrano-Salamanca, L., Pacheco-Navas, C., Gomis-Bellmunt, O.: Power plant control in large-scale photovoltaic plants: design, implementation and validation in a 9.4 MW photovoltaic plant. IET Renew. Power Gener. 10(1), 50–62 (2016)
18. Zeng, Z., Yang, H., Zhao, R., Cheng, C.: Topologies and control strategies of multifunctional grid-connected inverters for power quality enhancement: a comprehensive review. Renew. Sustain. Energy Rev. 24, 223–270 (2013)
19. Hassaine, L., Olias, E., Quintero, J., Salas, V.: Overview of power inverter topologies and control structures for grid connected photovoltaic systems. Renew. Sustain. Energy Rev. 30, 796–807 (2014)
20. Krishna, R.A., Suresh, L.P.: A brief review on multi level inverter topologies. In: Proceedings of IEEE Conference Publications, pp. 1–6 (2016)
21. Mahela, O.P., Shaik, A.G.: Comprehensive overview of grid interfaced solar photovoltaic systems. Renew. Sustain. Energy Rev. 68, 316–332 (2017)
22. Kouro, S., Leon, J.I., Vinnikov, D., Franquelo, L.G.: Grid-connected photovoltaic systems: an overview of recent research and emerging PV converter technology. IEEE Ind. Electron. Mag. 9(1), 47–61 (2015)
23. Romero-Cadaval, E., Spagnuolo, G., Franquelo, L.G., Ramos-Paja, C.A., Suntio, T., Xiao, W.M.: Grid-connected photovoltaic generation plants: components and operation. IEEE Ind. Electron. Mag. 7(3), 6–20 (2013)
24. Sastry, J., Bakas, P., Kim, H., Wang, L., Marinopoulos, A.: Evaluation of cascaded H-bridge inverter for utility-scale photovoltaic systems. Renew. Energy 69, 208–218 (2014)
25. Latran, M.B., Teke, A.: Investigation of multilevel multifunctional grid connected inverter topologies and control strategies used in photovoltaic systems. Renew. Sustain. Energy Rev. 42, 361–376 (2015)
26. Vivas, J.H., Bergna, G., Boyra, M.: Comparison of multilevel converter-based STATCOMs. In: Proceedings of IEEE Conference Publications, pp. 1–10 (2011)
27. Sandhu, M., Thakur, T.: Multilevel inverters: literature survey—topologies, control techniques and applications of renewable energy sources—grid integration. Int. J. Eng. Res. Appl. 4(3), 644–652 (2014)
28. Colak, I., Kabalci, E., Bayindir, R.: Review of multilevel voltage source inverter topologies and control schemes. Energy Convers. Manag. 52(2), 1114–1128 (2011)
29. Mittal, N., Singh, B., Singh, S.P., Dixit, R., Kumar, D.: Multilevel inverters: a literature survey on topologies and control strategies. In: Proceedings of IEEE Conference Publications, pp. 1–11 (2012)
30. Patrao, I., Figueres, E., González-Espín, F., Garcerá, G.: Transformerless topologies for grid-connected single-phase photovoltaic inverters. Renew. Sustain. Energy Rev. 15(7), 3423–3431 (2011)


31. Boonmee, C., Somboonkit, P., Watjanatepin, N.: Performance comparison of three level and multi-level for grid-connected photovoltaic systems. In: Proceedings of IEEE Conference Publications, pp. 1–5 (2015)
32. Tuteja, A., Mahor, A., Sirsat, A.: A review on mitigation of harmonics in cascaded H-bridge multilevel inverter using optimization techniques. Int. J. Emerg. Technol. Adv. Eng. 4(2), 861–865 (2014)
33. Sankar, D., Babu, C.A.: Cascaded H bridge multilevel inverter topologies for PV application: a comparison. In: Proceedings of IEEE Conference Publications, pp. 1–5 (2016)
34. Suresh, Y., Panda, A.K.: Investigation on hybrid cascaded multilevel inverter with reduced dc sources. Renew. Sustain. Energy Rev. 26, 49–59 (2013)

Design and Development of Wireless Sensor for Variable Temperature and for Various Security Purposes Prabhakar Singh and Minal Saxena

Abstract The word sensor simply denotes the conversion of a non-electrical, physical, or chemical quantity into an electrical signal; the quantity measured is called the measurand. There are many autonomous devices using sensors within clusters of networks and spatially distributed networks to provide more facilities (Rowayda and Sadek in Future Comput. Inf. J. 3(2):166–177, 2018) (Bosman et al. in Inf. Fusion 33:41–56, 2017) [1, 2]. Routing protocols in WSNs have been enhanced into hierarchical routing protocols because of their energy-saving capability, network scalability, and network topology stability (Zhang et al. in J. Softw. Eng. Appl. 03(12):1167–1171, 2010) [3]. This sensor will not suffer from poor SNR compared to Arduino/DS18B20 temperature sensors, because an optimal dynamic stochastic resonance (SR) processing method is introduced to improve the SNR of the received signal under certain conditions, which has not been done in the others. SNR = 10 log(s/n) in dB, where σ = 1, 0.1, or 0.01 depending on circumstances, which is quite low compared to other sensors' noise ratios. From a results perspective, pulling data from the sensor approximately every 4 s produced many errors throughout the day (Cheng and Chang in Expert Syst. Appl. 39(10):9427–9434, 2012) [4]. WSNs are emerging as a popular and essential way of providing a pervasive computing environment for numerous applications. There are periods when the sensor can go several minutes before obtaining an error-free value; similarly, it can also go several hours without any errors, so the behavior is very intermittent. The errors range from "no sensors found" to CRC errors. These problems will be removed in this sensor by the use of DSR, and the percentage of errors will fall to only ~0.001–1%, depending on variable weather, location, and environment.

P. Singh (B) · M. Saxena SIRT, Bhopal, India e-mail: [email protected] M. Saxena e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_57


1 Introduction

The working of the sensor will be based on FIFO, automatic, manual, and GPS modes. The minimum voltage required for operation will be from 0 to 5 V, and packet switching of data will be used to communicate with the main manager during operation and implementation. The duration of communication will be much shorter than for other sensors for internode communication. The coating on the sensor is done on a DEA and DSC basis so that maximum efficiency and high performance can be achieved [5]. Moreover, it has to operate at adverse temperatures as well, so in that case a point-to-point interaction will be made such that calculation and operation can be performed easily and maximum optimization will be achieved through interconnectivity and communication [6]. The main purpose of this sensor is to make it work for all different types of security and monitoring purposes. Gallium nitride, often called the future silicon, will be used as the material; by using it, the work function will improve compared to silicon-made sensors [7]. It is far better than silicon because of its standby time, operation, and performance at temperature extremes, which cannot be matched by silicon-made sensors; the problems faced in implementation and design with silicon will be overcome, and the lifetime will increase compared to silicon, which is used as a wafer-like structure in design. Gallium nitride is used only because of its superior operation at different temperatures. Proper synchronization of transmitter and receiver is very necessary to avoid the loss of data between transmitter and receiver, which makes intercommunication easy. Each data packet is assigned a name, and packets are duly checked by the CRC method to detect errors and avoid data loss (Fig. 1).

Fig. 1 Bit packet transfer of data to save energy

Fig. 2 Sensor's operational work (block diagram: power supply sense, 64-bit ROM and 1-wire CRC, memory and control logic, scratchpad, high/low temperature trigger registers, configuration register, 8-bit CRC generator, temperature sensor)

2 Sensor Systems

This sensor will work under all conditions and will fetch the desired result through several careful calculations, monitoring, and examination of the present and past values, so that linearity can mostly be maintained in each result [2]. In the given figure, the first pin is the ground supply, pin 2 is the data input/output, and pin 3 is the VDD supply. There is one common-wire communication line between different sensors at different locations. A pull-up network will be used so that the sensor can start operating at very low voltage (Fig. 2).

3 Working

The time slots of different sensors cannot conflict, but if a conflict somehow arises, the manager has the task of resolving it. Interrupts will be cleared by raising flags until the first conflict resolves, and the utmost care is always taken to avoid conflicts by assigning the timing values. The timing values are assigned such that all sensors assembled or working together have a fixed duration of time to interact with each other and send data to the manager. The task manager assigns the further mode of operation in which it resolves all system-related work; it interacts with each sensor and takes data for further implementation and operation. The data can be saved for further related work. Interaction with GPS will be done in case the sensor is operated automatically. Since it has to work in all weather conditions, it will use a microcontroller (MSP430), and by configuring its seven different low-power modes a significant amount of energy can be saved during operation; the reduction of sensing tasks and simplification of the data processing algorithm can also be useful in implementing data for further related operations. The regulation of timing will control operation-related work at each moment of time; since the spacing between sensors will be approximately 10–15 m, they will function on a one-to-one basis and will send a report at every instant of time to the task manager. It will work as follows (Fig. 3).
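As a rough sketch of the fixed-duration, conflict-free reporting described above (not from the paper), the snippet below assigns each sensor a non-overlapping slot within a repeating frame; the slot length, guard time, and sensor identifiers are invented for illustration.

```python
def build_schedule(sensor_ids, slot_ms=50, guard_ms=5):
    """Give every sensor a fixed, non-overlapping window in each frame so
    node-to-manager transmissions cannot conflict."""
    schedule, t = {}, 0
    for sid in sensor_ids:
        schedule[sid] = (t, t + slot_ms)       # (start, end) in ms within the frame
        t += slot_ms + guard_ms
    return schedule, t                         # per-node windows, total frame length

slots, frame_len = build_schedule(["S1", "S2", "S3", "S4"])
for sid, (start, end) in slots.items():
    print(f"{sid}: transmit {start}-{end} ms of every {frame_len} ms frame")
```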


Fig. 3 Sensor use in different types of areas

4 Proposed Algorithm

Dynamic voltage scaling is a power management technique in computer architecture. Increasing the voltage applied to a component is called overvolting, and decreasing it is called undervolting, depending on circumstances. Through it, a reduction in the energy consumption of a microprocessor can be accomplished without impacting peak performance. This approach varies the processor voltage under software control to meet dynamically varying performance requirements. These algorithms are applied to a benchmark suite specifically targeted at PDA devices. Frequency in this context refers to the clock frequency, i.e., the frequency of operation of a CPU, so the term dynamic frequency scaling refers to changing the clock frequency of the CPU at run time. The performance of the processor depends on two metrics, CPU response time and CPU throughput; here, performance means only response time. We increase the clock frequency of the CPU to reduce its response time and improve its performance, but beyond a certain limit we also need to increase the voltage input to the CPU to maintain its stability at the higher clock frequency, which in turn increases the power consumption and heat dissipation of the CPU, thereby shortening its lifespan. On the other hand, we can reduce the clock frequency of the CPU below the standard values, allowing us to undervolt the CPU and hence reduce its power consumption, but this has a negative impact on CPU performance. So dynamic frequency scaling is a technique to balance performance and power consumption; it refers to a continual variation of the clock frequency to optimize the performance and power consumption of the CPU. Performance is roughly proportional to clock frequency, while dynamic power consumption is proportional to V²·f, so power grows faster than linearly once the supply voltage must also be raised with frequency.
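A toy model of the frequency/voltage/power trade-off described above is sketched below; it is not the paper's algorithm, and the voltage-frequency relation and constants are illustrative assumptions only.

```python
def dvfs_point(freq_ghz, v_at_1ghz=0.9, k_v=0.25, c_eff=1.0):
    """Toy DVFS operating point: performance tracks frequency, while dynamic
    power follows P ~ C * V^2 * f; because the required voltage itself rises
    with frequency, power grows super-linearly."""
    voltage = v_at_1ghz + k_v * (freq_ghz - 1.0)   # assumed linear V-f relation
    power = c_eff * voltage ** 2 * freq_ghz        # dynamic power, arbitrary units
    performance = freq_ghz                         # response time ~ 1 / performance
    return voltage, power, performance

for f in (0.6, 1.0, 1.4, 1.8):
    v, p, perf = dvfs_point(f)
    print(f"{f:.1f} GHz: V = {v:.2f} V, power = {p:.2f} a.u., performance = {perf:.2f}x")
```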

Fig. 4 Strong pull-up network for supplying the sensor during temperature conversion (microprocessor I/O pin with a 4.7 kΩ pull-up between the sensor data line and a +3 V to +5.5 V VDD supply)

5 System Specification

The sensor will have an eight-bit CRC stored in the most significant byte of the sixty-four-bit ROM. The bus master computes a CRC value from the first fifty-six bits of the sixty-four-bit ROM and compares it to the value stored within the sensor, to determine whether the ROM data has been received error-free by the bus master. The equivalent polynomial for the CRC is CRC = X⁸ + X⁵ + X⁴ + 1. The sensor will also generate an eight-bit CRC value using the same polynomial function and will provide the value to the bus master solely to validate the transfer of the data bytes. In each case where a CRC is used for data validation, the bus master must calculate a CRC value using the same polynomial function as stated before and compare the calculated value either to the eight-bit CRC value stored in the sixty-four-bit ROM portion of the sensor or to the eight-bit CRC value computed within the sensor, which is read as the ninth byte when the scratchpad is read. The comparison of the CRC values and the decision to continue with an operation are determined entirely by the bus master; there is no circuitry in the sensor that prevents a command sequence from proceeding if the CRC stored in or calculated by the sensor does not match the value generated by the bus master. The one-wire CRC can be generated using a polynomial function generator consisting of a shift register and XOR gates. The shift register bits are initialized to zero. Then, starting with the least significant bit (LSB) of the family code, one bit at a time is shifted in; after the eighth bit of the family code has been entered, the serial number is entered. After the forty-eighth bit of the serial number has been entered, the shift register contains the CRC value. Shifting in the eight bits of the CRC should return the shift register to all zeroes (Fig. 4).
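A minimal Python sketch of the 1-Wire CRC-8 computation described above (LSB-first shift register with XOR taps for the polynomial X⁸ + X⁵ + X⁴ + 1); the family code and serial number bytes in the example are made up for illustration.

```python
def crc8_onewire(data: bytes) -> int:
    """Dallas/Maxim 1-Wire CRC-8, polynomial X^8 + X^5 + X^4 + 1,
    processed LSB first with the shift register initialized to zero."""
    crc = 0
    for byte in data:
        for _ in range(8):
            mix = (crc ^ byte) & 0x01   # incoming bit XOR register LSB
            crc >>= 1
            if mix:
                crc ^= 0x8C             # reflected polynomial taps
            byte >>= 1
    return crc

# Hypothetical 56-bit ROM: 1-byte family code followed by a 6-byte serial number.
rom = bytes([0x28, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06])
crc = crc8_onewire(rom)
print(hex(crc))
# Shifting the CRC byte itself back through the register returns it to zero.
assert crc8_onewire(rom + bytes([crc])) == 0
```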

6 Results

Due to proper synchronization of the network elements with each other, the output rarely varies. Sometimes exceptions can occur: there could be a change in the temperature graph or a change in equipment, due to which the graph will vary for different apparatus or circuits, or a sudden change in temperature due to which the circuit


Fig. 5 SNR improvement compared to other sensors

will stop working. The SR-based detector is evaluated under SNRi = −30 dB and SNRi = −25 dB, respectively. It can be found that an SNR gain of 9.1424 dB under SNRi = −30 dB and of 10.3117 dB under SNRi = −25 dB is obtained by using the proposed approach, which can significantly improve the corresponding spectrum sensing performance of the CR networks. In addition, the SNR-wall comparisons between the energy detector and the proposed optimal SR-based detector are made under different noise uncertainties ρ = 1 dB, ρ = 0.1 dB, and ρ = 0.01 dB. It can be observed that the SNR walls of the proposed approach are much lower than the corresponding SNR walls of the conventional energy detection method under the same noise uncertainty. At the same time, under the same SNRi and noise uncertainty ρ, the sampling complexity can be reduced significantly by using the proposed approach, which considerably improves the spectrum sensing performance of the energy detector. In other words, if we compare the noise uncertainties of both approaches under the same sampling complexity, the noise uncertainty of the proposed approach is also reduced compared with the conventional energy detection method. So, in comparison with other sensors, there is a substantial improvement in SNR, as the deviation and attenuation improve considerably compared to other temperature sensors, which have very high attenuation and noise (Figs. 5, 6, 7, 8, 9 and 10).

7 Conclusion

The efficient use of energy in sensor nodes is the most important issue in wireless sensor networks, in which routing between the sensor nodes is considered the most important factor. In this paper, we proposed a new routing protocol in order to extend the lifetime of sensor networks. This protocol is developed from the LEACH protocol by considering the energy and distance of nodes in the WSN during CH election. However, this protocol applies only when the BS is within the sensor area; if the BS is far from the sensor area, we cannot apply it. In the future, we will study the energy distribution of nodes in the case where the BS is far from the sensor area to improve the lifetime of the whole network. In this paper, we also presented the subtleties of integrating wireless sensor networks into the Internet in order to control security appliances. We delineated the architecture for deploying a real-world test bed. The presented architecture is simple and can easily be adopted for similar deployments. We highlighted relevant problems, mainly the IPv4-to-IPv6 gateway. As future work, we intend


Fig. 6 Block diagram of the temperature sensor with interface

Fig. 7 ADC subsystem for the temperature sensor

Fig. 8 Thermosystem graph generation by MATLAB


Fig. 9 Temperature variation while simulation in MATLAB

Fig. 10 Temperature variation in signal-to-noise ratio

to further research the middleware system component to support heterogeneous wireless sensor motes, and thus not to limit deployment to specific motes, e.g., Tiny OS ones.


References

1. Sadek, R.A.: Hybrid energy aware clustered protocol for IoT heterogeneous network. Future Comput. Inf. J. 3(2), 166–177 (2018)
2. Bosman, H.H.W.J., Iacca, G., Tejada, A., Wortche, H.J., Liotta, A.: Spatial anomaly detection in sensor networks using neighbourhood information. Inf. Fusion 33, 41–56 (2017)
3. Zhang, Z., Zhao, H., Zhu, J., Li, D.: Research on wireless sensor networks topology models. J. Softw. Eng. Appl. 03(12), 1167–1171 (2010)
4. Cheng, S.T., Chang, T.Y.: An adaptive learning scheme for load balancing with zone partition in multi-sink wireless sensor network. Expert Syst. Appl. 39(10), 9427–9434 (2012)
5. Aghdam, S.M., Khansari, M., Rabiee, H.R., Salehi, M.: WCCP: a congestion control protocol for wireless multimedia communication in sensor networks. 13, 516–534 (2014)
6. Magno, M., Boyle, D., Brunelli, D., O'Flynn, B., Popovici, E., Benini, L.: Extended wireless monitoring through intelligent hybrid energy supply. IEEE Trans. Ind. Electron. 61(4), 1871 (2014)
7. Carlos-Mancilla, M., Olascuaga-Cabrera, J.G., Lopez-Mellado, E., Mendez-Vazquez, A.: Design and implementation of a robust wireless sensor network. In: Proceedings of the 23rd International Conference on Electronics, Communication and Computing (CONIELECOMP '13), pp. 230–235. Cholula, Mexico (2013)
8. Ciccozzi, F., Crnkovic, I., Di Ruscio, D., Malavolta, I., Pelliccione, P., Spalazzese, R.: Model-driven engineering for mission-critical IoT systems. IEEE Softw. 34(1), 46–53 (2017)
9. Iacca, G.: Distributed optimization in wireless sensor networks: an island network framework. Soft Comput. 17(12), 2257–2277 (2018)
10. Sciancalepore, S., Piro, G., Boggia, G., Grieco, L.A.: Application of IEEE 802.15.4 security procedures in open WSN protocol stack. IEEE Stand. Educ. E-Mag. 2(4), 4th quarter (2014)
11. Mamun, Q.: A qualitative comparison of different logical topologies for wireless sensor networks. Sensors 12(11), 14887–14913 (2012)

Analysis of Cloud Forensics Challenges and Solutions Ashish Revar, Abhishek Anand, and Ishwarlal Rathod

Abstract Cloud computing is a rising technical paradigm shift that moves computational tasks from local devices to non-localized servers. Because of this, cloud computing creates various challenges for cyber-forensics professionals. A conventional digital forensics expert works in an environment in which the system elements are within an individual's physical reach and the control boundaries are adequately defined. With the cloud, loss of control and loss of security are widely known concerns. A traditional organization's network infrastructure is local, so it is easy to seize, store, or investigate; the less consistent cloud service model makes this more difficult due to a mixture of technical concerns confronting cyber-forensics in the cloud.

1 Understanding Cloud

The National Institute of Standards and Technology (NIST) describes cloud computing as "… a standard for facilitating suitable, on-demand network passage to common provisions of self-arrangeable computing resources (e.g., servers, networks, storage, services, and applications) that can be expeditiously provisioned and discharged with nominal administrative struggle or service provider communication." The commercial comfort brought by cloud computing has attained widespread acceptance, so large data centers can be deployed at lower expenditure.

A. Revar (B) · A. Anand · I. Rathod Department of Computer Science and Information Technology, Symbiosis University of Applied Sciences, Indore, India e-mail: [email protected] A. Anand e-mail: [email protected] I. Rathod e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_58


Table 1 Greatest concerns surrounding cloud adoption (Source: CIO research)

Concern | Respondents (%)
Security | 45
Integration with current systems | 26
Less control over data | 26
IT governance troubles | 19
Regulatory/compliance concerns | 19
Capability to get back to legacy systems | 11

Application and data resources get separated due to the virtualization support of the cloud. The on-demand model gives clients the freedom to avoid the wasted expense of over-allocating resources. As cloud computing becomes increasingly well known, its misuse in crime will probably arise. Security experts will require upgraded forensic algorithms if they need to extract evidence from cloud-based environments. Cloud solution providers and clients have to set up their structures to match these legal demands or suffer penalties and other legal consequences. Moreover, they have to do so without breaching local privacy legislation or unintentionally giving away competitive secrets (Table 1).

2 Case Study

Security controls in cloud computing are largely the same as in any IT environment. Because of the service models used and virtualization, cloud computing may present different risks to an organization than conventional IT arrangements. Cloud computing involves losing control while keeping accountability, even if the operational duty falls upon one or more third parties.

Example 1: Amazon's EC2 IaaS offering includes vendor responsibility for security up to the hypervisor only, which means Amazon can provide security controls such as physical security, environmental security, and virtualization security only. The client itself is responsible for the security controls related to the rest of the IT stack, including the operating system, applications, and data.

Example 2: An RMS has decided to move its intranet-based application to a well-known CSP. The point is to eliminate additional equipment and maintenance costs by using cloud elasticity; because of this, labor, physical requirements, IT infrastructure, and management issues may be reduced. A promising approach. After a few months, a board member needs the details of a contractor who worked in security and who was terminated a month earlier. The issue that arises here is that all of the documents the contractor had access to, together with that individual's records from the new application, were firmly placed in the cloud six months back.


3 Constraints with Traditional Digital Forensics in the Cloud

The growth of the cloud constrains access to physical objects, which are available only in virtual environments. In this cloud model, networks, disks, and memory are shared, and conventional ownership boundaries are fundamentally blurred. Improved techniques and algorithms are greatly needed to find legally defensible digital evidence in the cloud. In cloud environments, it is not certain that the system can be restored to a previous state that is a known-good configuration [1, 2].

3.1 Flooded by Justice

The demands of cloud forensics could be expensive as lawsuits and investigations become increasingly complex. McKinsey and Company recently reported that electronic discovery requests were growing by 50% every year. This is reflected in a Socha Consulting LLC study [3] showing growth in e-discovery expenses from $2.7 billion in 2007 to $4.6 billion in 2010.

4 What Is Cloud Forensics?

Cloud forensics is a cross-discipline blending cloud computing and digital forensics. Digital forensics applies computer science principles to recover electronic evidence for presentation in a court of law. Cloud forensics can be considered a subset of network forensics; network forensics deals with forensic examination of networks to find confirmations. Cloud services are available because of network connectivity; hence, cloud forensics follows the methodical principles of network forensics with a few adjusted procedures designed for cloud environments. Security experts know that cyber-forensics will look for data, metadata, log records, and report attachments from analysis and investigatory targets parked in the cloud. This paper centers on the digital forensics challenge presented by the shift to cloud computing. Accordingly, data security becomes a key concern for cloud forensics. Several questions [4] have been raised:


• Access control: Are there pertinent controls over access to personally identifiable information (PII) when stored in the cloud, so that only persons with a need to know will be able to access it?
• Structured versus unstructured: How is the information stored to enable the organization to access and manage the data in the future?
• Integrity/availability/confidentiality: What techniques are implemented to maintain data integrity, availability, and confidentiality in the cloud?
• Encryption: Several laws and guidelines require that certain kinds of PII be stored only when encrypted. Is this requirement supported by the CSP?

When acquiring digital artifacts from the cloud, whether for defense, presentation in a court of law, or the internal examination of employee abuse, fundamental forensic standards and procedures apply. The forensic process is broken into four distinct steps:
1. Collection: artifacts (digital evidence and supporting material) that are essential in cases of crime are gathered.
2. Preservation: artifacts are captured or frozen, creating a snapshot in time exact enough to be confirmed.
3. Filtering: artifacts are examined to eliminate insignificant or duplicate entries and to retain items thought to be of significant value.
4. Demonstration: the action in which evidence is presented to support the inquiry.

The challenge is to provide sufficiently competent forensic information from the cloud to demonstrate the event or activity that occurred. It is not possible to make a bit-for-bit duplicate of the evidence; however, we can take a snapshot of the existing information from the cloud and reconstruct access to the cloud asset through records (evidenced by firewall records from the client and provider side, as well as access logs on the PC used to access the cloud resource). The prevailing complication remains to persuade the different parties that this event happened in the way presented. Comparable methodologies continue to be used in unlawful situations where digital evidence is used as corroborating attestation versus legal proof. One theory is that an event cannot be overlooked or dismissed if there is considerable supporting data that validates the case. Due to the variable nature and constant fluidity of the cloud, forensics becomes complicated as file and directory structures vary. Admittedly, cloud systems compromise the state of evidence in forensics due to (1) the off-shoring of data and (2) the methods followed in determining file rotation with various metadata corrections (such as action logs and file access metadata).

4.1 The Issue with Cloud Forensics

Shared (multi-tenant) hosting, coordination concerns, and methods to isolate the information in records are the vital considerations that add to the complexity


of cloud forensics execution. Numerous cloud service providers (CSPs) are as yet unaware of such issues, which may be problematic for a future legal procedure.

Cloud privacy? An email snooping scandal at Google [5] has provoked fierce criticism and has worried clients of cloud services into silence. Such sensitive data require special protection, not just to prevent fraud and identity theft, but also to comply with privacy laws. The Gmail scandal puts cloud security in a tricky situation, and obstacles to cloud computing adoption take center stage. With cloud computing, law enforcement has neither physical custody of the evidence nor of the system on which it resides, and numerous clients may hold the key to a particular cloud.
• What will be the procedure to apply the law to determine the portion of the media where the evidence exists?
• What will be the guideline to assure that a forensic expert has extracted every piece of evidence required to further investigate, comprehend, and document?

An added barrier arises from the massive databases employed in customer engagement management systems and information schemas, which present-day forensics cannot address. Forensic investigators follow customary evidence-gathering techniques while documenting the steps taken by law enforcement during the seizure and examination stages. These strategies may suffice in a few cases, but they lack the factual authority specific to forensics as involved in cloud computing. The scalable benefit of the cloud may likewise come under debate regarding jurisdiction. An unconventional concern recognized in the cloud is those clouds that physically reside on a foreign server.
• Which legal vicinity makes the statutes enforceable in such circumstances? Is there any possibility that they can have any authority?
• Will the country in question be cooperative regarding obtaining evidence?

A couple of forensic challenges [1] are location and time:
• Location: The victim system or PC must be located before the commencement of the forensic process. Traces of a virtual machine (VM) are available only if found, as the VM may reside on scattered, globally located physical drives; data may get erased from a striped multi-disk array system, or the forensic evidence may live inside another cloud vendor's storage framework, requiring court orders to retrieve.
• Time: The Network Time Protocol (NTP) is helpful for synchronizing every involved entity to a consistent time once the data source is identified and located. If a forensic expert has a difficult time persuading legal counsel that the event marks from client-side records match the time stamps on provider-side log files, the forensics may be hard to defend.


5 Forming a Cloud Forensics Procedure

Organizations handle various government and industry regulations relating to the preservation of data connected with taxes, securities, and business rules. At the same time, they have to stay compliant with other laws relating to the destruction of data that is no longer required. Cloud computing also raises new questions about who owns the data and about the client's expectations of privacy. Laws on the legal protections applying to data in the cloud vary from country to country. Our social standards are moving away from the storage of personal data on local computer storage and toward keeping such data in the "cloud," on servers owned by service providers. Such data can then be generated and accessed by personally operated computing devices. That information is no less classified or private just because it is stored on a server owned by someone else. Cloud-based electronic discovery tools [3] may help keep these expenses down. Organizations like Autonomy, Orange, Kazeon, and Clearwell have launched hosted services for collecting, preserving, and analyzing digital evidence. Many companies will begin investing in e-discovery infrastructure, and, by 2012, organizations without this infrastructure will spend 33% more to meet these requests, according to Gartner research [6]. The complex nature of cloud computing may prompt specialization. Forensic experts for the cloud may follow the medical-field analogy, in which one must be a general forensics specialist while there are various specialized zones of cloud computing. In any case, as with any tool, examiners need to get the most benefit at the least cost. Thus, we need to think in relation to different tools: we will not spend a huge number of dollars when we can get the equivalent data more easily, and we do not go into everything 100%; we look at what is needed to comprehend the wrongdoing.

5.1 Tools for Performing

Current forensic tools rely on conventional forensic approaches, including formal techniques to acquire data and a structured method to analyze artifacts (data) in order to reconstruct or validate some sequence of events or recover missing data. Existing forensic tools fall into these two common categories [1]:
• Unvarying (static): Static-examination forensic tools analyze stationary data, the contents of storage devices, or NetFlow data obtained within a declared acquisition process.
• Active (live): Live forensic tools collect and analyze "live" system data, accommodating the demands of volatility, performing memory analysis, and providing techniques for encryption key reconstruction.


A couple of tool collections exist because of forensic advancement to reproduce and document refined incidents. However, cloud models break this paradigm because data are hard to find, acquisition is impossible when the location is flawed, and examination is non-existent without acquisition. A third wave of forensic tool advancement is needed to support cloud forensics analysis. Cloud forensic tools must be a hybrid of the present static and live collection and investigation techniques, and all require the intelligence to note, and moreover predict, artifacts based on forensic heuristics. The main aspect a cloud tool changes is the collection technique where traditional forensic tools fit. Forensic tools must visualize the physical and logical data locations, since acquisition is a challenge in the cloud. The visualization must indicate plausible and implausible artifacts (data), easing the collection burden and privacy measures. Implausible artifacts should be explained as such in digital form, and the explanations should be confirmations carried into the evidence presentation. The cloud itself can be used as a discovery engine for fast and exact forensic judgments by cloud forensic tools in this visualization. Forensic collections containing unrecoverable artifacts should be submitted to the cloud site for heuristic and signature-based analysis. This is similar to antivirus engines and other binary analysis engines as the number of submissions increases, thereby allowing forensic examiners to convert incomplete collections into solid presentations.

6 Conclusion

Cloud forensics still needs to be addressed with respect to numerous technical and physical issues. Traditional forensic tools and procedures are helpful if they are used in an adapted way, while other issues necessitate that new methodologies and frameworks be developed. A Service Level Agreement (SLA) provides confirmation that the CSPs are a part of your team and can collect and provide adequate forensic artifacts when required. So a solid working relationship should be developed with CSPs, alongside the development of new tools and frameworks.

References

1. Zimmerman, S., Glavach, D.: Cyber forensics in the cloud. Inf. Assurance Technol. Anal. Center 14(1), 4–7 (2011)
2. Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Security Alliance (December 2009). http://www.cloudsecurityalliance.org/csaguide.pdf. Accessed March 2011
3. http://searchcloudcomputing.techtarget.com/feature/


4. Mather, T., Kumaraswamy, S., Latif, S.: Chapter 7—Privacy. Cloud Security and Privacy, 1st edn. O'Reilly Media Inc. (September 2009)
5. http://searchcloudcomputing.techtarget.com/news/1520288/Google-Gmail-scandal-opens-cloud-security-can-of-worms
6. Gartner Says Worldwide Cloud Services Market to Surpass $68 Billion in 2010. http://www.gartner.com/it/page.jsp?id=1389313. Accessed March 2011
7. Mell, P., Grance, T.: The NIST definition of cloud computing, National Institute of Standards and Technology. Inf. Technol. Lab. Version 15, 10-7-09 (accessed March 2011)
8. Brown, C.L.T.: Chapter 1 Computer Forensics Essentials. Computer Evidence: Collection and Preservation, 2nd edn. Cengage Learning (2010). Books24x7. http://common.books24x7.com/book/id33937/book.asp. Accessed March 2011

Cohesion Measure for Restructuring Sarika Bobde and Rashmi Phalnikar

Abstract Object-oriented programming is widely adopted in recent software development. The development of a well-designed software system is needed to reduce software maintenance costs. On the other hand, the internal structure of a software system deteriorates due to prolonged maintenance operations. In such cases, restructuring is one strategy to strengthen the system's overall internal structure without changing its external behavior. Another restructuring strategy is to apply refactoring to the current system. Code refactoring is an effective software development approach to improving the internal structure of the program. Through refactoring, the quality of the program can be enhanced through maintenance and improvement in reliability. Code refactoring is done without any modification of its features. Cohesion is used to assess a software system's design quality and is a main pillar of good object-oriented software design. Using software metrics, the quality of object-oriented classes that require code refactoring is assessed. This work proposes the need for refactoring and focuses on exploring how to use object-oriented metrics as guidance for where code refactoring may be applied. We present an object-oriented software metric, i.e., the cohesion metric, and analyze the need for the metric in restructuring.

1 Introduction

Nowadays, object-oriented software development is widely used. While significant cost and energy are spent testing code for release, most studies indicate that about half of the overall effort is expended on maintaining the software after it has been released, and maintenance [1, 2] is expensive.

S. Bobde (B) · R. Phalnikar School of Computer Engineering and Technology, MITWPU, Pune, India e-mail: [email protected] R. Phalnikar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_59


Maintainability refers to the ease with which software can be changed [3], where possible changes include bug fixes, design enhancements, and adaptations of the software to different environments. Refactoring methods are useful to make the software more maintainable while reducing the cost of maintenance. Refactoring denotes changes that maintain the code's external behavior while improving its internal structure [4]. Since refactoring preserves established behavior, it reduces some of the associated costs of modification, such as the cost of modifying source code, test cases, and documentation. The basic idea of cohesion is to identify classes that are not well designed. If a class does not represent any object properly, but is merely a collection of unrelated members, then the relationship between its members may not be strong. Cohesion is strong if a class correctly identifies the features of objects and all members are closely related to each other. Object-oriented software metrics are used to help programmers identify inadequately structured classes that are too large or too small, or attributes and methods that are not allocated properly in the class hierarchy [5, 6]. Of particular interest are metrics that determine the cohesion of a class (how well the class members work together) and those that measure the coupling between classes (the extent to which classes depend on other classes). Cohesion and coupling are related to maintainability [7, 8]. In general, there is a need to maximize cohesion and minimize coupling [5]. The rest of the paper is organized as follows: Section 2 describes object-oriented software metrics; Section 3 discusses the proposed work with the results of the cohesion metrics used to identify the need for refactoring.

2 Software Metrics for Object-Oriented Systems

Different types of software metrics can be used for many purposes, and different metrics are better suited to certain purposes than others. Object-oriented metrics are the type of software metrics used to measure the quality of object-oriented systems. Nowadays, object-oriented systems are widely used in industry for software development, so we concentrate on an object-oriented system, i.e., Java. We are interested in using an object-oriented metric to identify Java classes that require refactoring and to validate that refactoring has improved the quality of the classes.

2.1 Refactoring Through Object-Oriented Metrics

Researchers and programmers now focus on cohesion because a well-cohesive module is always superior to a less cohesive module in terms of maintainability, modifiability, reusability, and comprehensibility. The major existing cohesion measures have been reviewed in [2].


In general, cohesion defines how strongly members are related to each other within a group. In a structured system, cohesion represents how closely the processing elements in a module are related [9, 10]. In a maximally cohesive module, all processing elements are related to the single function of the module. The concept of cohesion has been adapted from structured design into object-oriented design [5]. Classes, methods, and attributes in object-oriented programming loosely correspond to modules, processing elements, and data in structured systems for purposes of cohesion calculation. Ideally, the class's methods and attributes are closely related to each other, and this is expressed in their access patterns. As cohesion is considered an integral part of high-quality object-oriented design, several attempts have been made to quantify it, from both a structural and a conceptual viewpoint; the literature includes over 40 different cohesion metrics [5]. Chidamber and Kemerer [5] considered that a good-quality class should have a set of attributes frequently used in the class methods. Their metric, Lack of Cohesion in Methods (LCOM), compares the number of dissimilar pairs of methods to the number of similar pairs, where two methods are directly linked if they share a common attribute. LCOM is zero if there are more similar method pairs than dissimilar ones; the most cohesive class has an LCOM score of 0, and there is no upper limit on the scores of non-cohesive classes [5]. Henderson-Sellers wanted a lack-of-cohesion metric normalized to the range 0.0–1.0, where 0.0 indicates ideal cohesion (every method accesses every attribute) and 1.0 indicates a total lack of cohesion. LCOM* is defined in the following equation:

$$\mathrm{LCOM}^{*} = \frac{\frac{1}{a}\sum_{i=1}^{a} n(A_i) \; - \; m}{1 - m}$$

where a represents the total number of attributes, m represents the total number of methods, and n(A_i) represents the number of methods that access attribute A_i [6]. The Tight Class Cohesion (TCC) and Loose Class Cohesion (LCC) metrics [11] compute the number of method pairs that are related. TCC and LCC scores range from 0 (least cohesive) to 1 (most cohesive).
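The following is a minimal, illustrative implementation of LCOM* and of a simplified, direct-connection variant of TCC (not the paper's code); the example class, its attributes, and its methods are hypothetical.

```python
from itertools import combinations

def lcom_star(method_attrs, all_attrs):
    """Henderson-Sellers LCOM*: 0.0 when every method accesses every
    attribute, 1.0 when no attribute is shared. method_attrs maps each
    method name to the set of class attributes it accesses."""
    m, a = len(method_attrs), len(all_attrs)
    if m <= 1 or a == 0:
        return 0.0
    mean_access = sum(
        sum(attr in attrs for attrs in method_attrs.values()) for attr in all_attrs
    ) / a
    return (mean_access - m) / (1 - m)

def tcc_direct(method_attrs):
    """Simplified TCC: fraction of method pairs sharing at least one attribute."""
    pairs = list(combinations(method_attrs.values(), 2))
    if not pairs:
        return 0.0
    return sum(1 for x, y in pairs if x & y) / len(pairs)

# Hypothetical class with three attributes and three methods.
attrs = {"price", "quantity", "discount"}
methods = {
    "total":          {"price", "quantity"},
    "apply_discount": {"price", "discount"},
    "reset":          {"quantity"},
}
print(lcom_star(methods, attrs), tcc_direct(methods))
```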

3 Proposed Approach

In the proposed approach, the Java classes of the software are taken as input. We then identify the classes that need refactoring with the help of object-oriented metrics. Member variables and the set of member functions are identified from the classes present in the software. A vector-based approach can be used to find the relationships in the representation of member variables. The fuzzy c-means clustering algorithm can be used to cluster member functions based on the similarity values [11].


The aim is to discover a connection between the object-oriented metrics and the refactoring process. Furthermore, we need to analyze whether object-oriented metrics can be used to identify classes that are preferable to refactor. The metrics used in this methodology are Lack of Cohesion in Methods (LCOM), LCOM*, Tight Class Cohesion (TCC), and Loose Class Cohesion (LCC), which calculate the similarity between two entities. Once a class that needs refactoring is identified, we apply the clustering algorithm to its classes. Clustering aims to group similar or related elements, and the strength of the relationship between elements in a component can be calculated using clustering analysis. To achieve these qualities, fuzzy c-means clustering can be used to assist software developers in class-level refactoring [11]. This is achieved by identifying dissimilarity between entities and identifying similar pairs; a similarity metric is used to discover the relatedness among pairs of member functions (Fig. 1).

Fig. 1 Design of proposed work
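To illustrate the fuzzy c-means step in the flow of Fig. 1, a plain NumPy sketch is given below (it is not the paper's implementation); the method-versus-attribute access matrix, the number of clusters, and the fuzzifier are assumed example values.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: X is a (methods x attributes) access matrix;
    returns the fuzzy membership matrix U (methods x clusters) and the
    cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per method
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Rows = member functions, columns = member variables they access (hypothetical).
X = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)
U, centers = fuzzy_c_means(X)
print(np.round(U, 2))    # soft assignment of each method to a candidate class
```

Methods whose membership rows concentrate in the same cluster are candidates to stay together (or to be extracted together) during refactoring.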

Table 1 Summary of the system studied

Name of software | LOC | Class count | Package count
Coffee shop management system | 2782 | 33 | 6
Document manager | 3251 | 35 | 2
Payroll management system | 456 | 4 | 1


Table 2 Cohesion metric before clustering

Name of software | LCOM | TCC | LCC
Coffee shop management system | 0.283 | 0.271 | 0.263
Document manager | 0.201 | 0.197 | 0.183
Payroll management system | 0.257 | 0.251 | 0.239

Table 3 Cohesion metric after clustering

Name of software | LCOM | TCC | LCC
Coffee shop management system | 0.201 | 0.201 | 0.198
Document manager | 0.117 | 0.113 | 0.109
Payroll management system | 0.172 | 0.169 | 0.166

Table 3 shows that there is a decrease in the values of LCOM, TCC, and LCC. Since the values of the cohesion metrics after clustering are less than the values before clustering, the cohesion is better after refactoring. Cohesion measures are used for many purposes; here, we used the existing cohesion metrics to distinguish, from a range of test classes, those classes that may require refactoring.

4 Conclusion

Class-level refactoring is of great importance in software engineering for reducing maintenance cost. In this paper, we proposed a clustering technique to assist refactoring by grouping similar methods and classes. We use software metrics to identify the classes that need refactoring. We experimented on different software systems to identify the need for refactoring, and the results of our tests show that restructuring based on the cohesion metric inputs improves the utility of the metrics in recognizing classes that require refactoring. We also compared the performance of the three approaches, and it is observed that the lack of cohesion metric gives a better result than TCC and LCC. This research can be extended and generalized to a large set of applications, including different types of systems developed using an object-oriented language such as Java.


References 1. Pressman, R.S.: Software engineering: a practitioner’s approach, 4th edn. McGraw-Hill, New York (1997) 2. Sommerville: Software Engineering. International Computer Science Series, 5th edn. AddisonWesley Pub. Co, Wokingham, England (1996) 3. IEEE ISO: International Standard—ISO/IEC 14764 IEEE Std 14764–2006—Software Engineering—Software Life Cycle Processes -Maintenance. IEEE, 2 editions, Sept 2006 4. Fowler, M., Beck, K., Brant, J., Opdyke, W., Robert, D.: Refactoring: Improving the Design of Existing Code. Addison-Wesley, Boston (1999) 5. Chidamber, S.R., Kemerer, C.F.: A metrics suite for object oriented design. IEEE Trans. Softw. Eng. 20(6), 476–493 (1994) 6. Brian Henderson-Sellers: Object-Oriented Metrics: Measures of Complexity. Prentice-Hall, Inc. (1996) 7. Aggarwal, K.K., Singh, Y., Kaur, A., Malhotra, R.: Investigating effect of design metrics on fault proneness in object-oriented systems. J. Obj. Technol. 6(10):127–141 (2007) 8. Chidamber, S.R., Darcy, D.P., Kemerer, C.F.: Managerial use of metrics for object-oriented software: an exploratory analysis. IEEE Trans. Softw. Eng. 24(8):629–639 (1998) 9. Briand, L.C., Daly, J.W., Wüst, J.: A unified framework for cohesion measurement in objectoriented systems. In: Proceedings Fourth International Software Metrics Symposium (1997) 10. Bieman, J.M., Kang, B.-K.: Cohesion and reuse in an object-oriented system. In: SIGSOFT Software Engineering Notes, Proceedings of the 1995 Symposium on Software Reusability, volume 20 of SSR’95, p p259–262, Seattle, Washington, United States,. ACM (1995) 11. Bobde, S., Phalnikar, R.: Restructuring of object oriented system using clustering technique. In: International Conference on Computational Science and Application, pp 419–425. Springer Singapore (2019)

Analysis of F-Shape Antenna with Different Dielectric Substrate and Thickness Radhika Raina, Komal Jaiswal, Shekhar Yadav, Dheeraj Kumar, and Ram Suchit Yadav

Abstract A comparative analysis of the F-shape antenna with variation in material and thickness is presented in this paper. The proposed antenna has a size of 40 mm × 30 mm × (0.8 mm/1.6 mm). The proposed antenna is simulated by using three different substrates, i.e., FR4 (1.6 mm), RT Duroid® 5880 (0.8 mm) and RT Duroid® 5880 (1.6 mm). When FR4 is used as a substrate, the percentage bandwidth at lower resonance frequency, i.e., 2.40 GHz is 15.45% and at the higher resonance frequency, i.e., 26.93 GHz is 74.5%. Further same design is simulated using RT Duroid® 5880 (0.8 mm) as a substrate, the percentage bandwidth in the lower resonant frequency, i.e., 3.29 GHz is 15.29% and at the higher resonance frequency, i.e., 17.34 GHz is 26.2%. Later RT Duroid® 5880 (1.6 mm) is used as a substrate, the percentage bandwidth at lower resonance frequency, i.e., 2.77 GHz is 19.63% and at higher resonance frequency, i.e., 19.84 GHz is 13.67%. Defected Ground Structure is used in the proposed antenna. All the proposed antennas are simulated by using High Frequency Structure Simulator (HFSS) software version 13.0.

1 Introduction Demand in the wireless communication of a single device which can work for multiple application has increased; hence, researchers are inclined to work on the multiple band, wideband and ultrawide band. Now a day’s requirement of antenna is that it should be compact in size, low cost, light weight, low profile and the simple fabrication process. However microstrip patch antenna has some major limitations R. Raina · K. Jaiswal (B) · S. Yadav · R. S. Yadav Department of Electronics and Communication, University of Allahabad, Allahabad, Uttar Pradesh, India e-mail: [email protected] D. Kumar Department of Physics and Electronics, Rajdhani College, University of Delhi, Delhi, India © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_60


like low gain, low efficiency and narrow bandwidth [1, 2]. To overcome these drawbacks, various techniques such as slots, notches, DGS, variation of the dimensions and variation of the dielectric material can be used for bandwidth enhancement and gain improvement. DGS influences the input impedance and current distribution of the antenna as a result of the transmission-line impact on capacitance and inductance [3, 4]. The comparative analysis of the reported antennas and the proposed antennas is depicted in Table 1. It can be clearly seen that the size of the proposed antenna is smaller than that of all the reported antennas. In this work, an F-shape microstrip patch antenna along with DGS etched on different substrates (FR4 of thickness 1.6 mm and RT Duroid® 5880 of thickness 0.8 mm and 1.6 mm) is proposed. It is observed that antenna Design_1, designed on RT Duroid® 5880 with a thickness of 0.8 mm, operates in the frequency ranges 3.02–3.52 GHz and 14.20–18.50 GHz with impedance bandwidths of 15.29% and 26.2% and gains of 1.11, 4.54 and 6.35 dBi. Antenna Design_2 is designed on FR4 with a thickness of 1.6 mm; it is observed that the antenna resonates at 2.40, 9.4, 15.06 and 26.93 GHz with peak gains of −0.48, 6, −0.36 and 9.2 dBi and impedance bandwidths of 15.45, 23.38, 5.04 and 74.5%. The radiating patch of antenna Design_3 is etched on RT Duroid® 5880 with a substrate thickness of 1.6 mm, and the antenna resonates at 2.77, 11.68, 14.02 and 19.84 GHz with impedance bandwidths of 19.63, 31.79 and 13.67% and gains of 0.5, 8.86, 4.5 and 4.3 dBi.

Table 1 Comparative analysis of the proposed antenna with other reported antennas

Reference | Size of antenna (mm3) | Frequency band (resonance frequency) (GHz) | Bandwidth percentage (%) | Peak gain (dBi)
[5] | 50 × 35 × 1.6 | 2.05–2.86 (2.4); 5.55–6.14 (5.81) | 32.99; 10.11 | 3.7; 3.57
[6] | 40 × 35 × 1.6 | 2.12–2.77 (2.44); 4.91–5.50 (5.18) | 26.58; 11.33 | 1.87; 2.88
[7] | 110 × 110 × 6.6 | 0.91–0.933 (0.92); 2.40–2.57 (2.45) | 2.49; 6.84 | 3.8; 8.9
[8] | 40 × 40 × 1 | 2.4–2.485 (2.4); 5.15–5.825 (5) | 3.48; 12.30 | 3.2; 5.5
[9] | 21.4 × 59.4 × 1.6 | 2.21–2.70 (2.4); 5.04–6.03 (5.2) | 19.95; 17.88 | 2; 5
[10] | 45 × 80 × 0.8 | 2.4–2.484 (2.4); 5.15–5.35 (5.2) | 3.43; 3.80 | 2; 2.33–2.7
Proposed antenna Design_1 | 40 × 30 × 0.8 | 3.02–3.52 (3.29); 14.20–18.50 (15.17, 17.34) | 15.29; 26.2 | 1.11; 4.54, 6.35
Proposed antenna Design_2 | 40 × 30 × 1.6 | 2.15–2.51 (2.40); 8.27–10.46 (9.4); 14.68–15.44 (15.06); 18.98–41.52 (26.93) | 15.45; 23.38; 5.04; 74.5 | −0.48; 6; −0.36; 9.2
Proposed antenna Design_3 | 40 × 30 × 1.6 | 2.48–3.02 (2.77); 10.87–14.98 (11.68, 14.02); 18.33–21.02 (19.84) | 19.63; 31.79; 13.67 | 0.5; 8.86, 4.5; 4.3
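The "percentage bandwidth" figures quoted above and in Table 1 are consistent with the usual fractional-bandwidth definition computed from a band's edge frequencies (commonly the |S11| ≤ −10 dB limits; that criterion is an assumption here, not stated by the paper). The short helper below reproduces, for example, the 15.29% value for Design_1's lower band; it is a standard formula written for illustration, not code from the paper.

```python
# Fractional (percentage) bandwidth from a band's edge frequencies.
def percent_bandwidth(f_low_ghz: float, f_high_ghz: float) -> float:
    """100 * (f_high - f_low) / f_center, with f_center the arithmetic mean."""
    f_center = (f_low_ghz + f_high_ghz) / 2.0
    return 100.0 * (f_high_ghz - f_low_ghz) / f_center

print(round(percent_bandwidth(3.02, 3.52), 2))    # ~15.29 (Design_1, lower band)
print(round(percent_bandwidth(14.20, 18.50), 2))  # ~26.3  (Design_1, upper band)
```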

2 Antenna Design The geometrical configuration of the proposed F-shaped antenna with DGS is arrived at in steps: antenna "a" forms the F-shape with a radiating-patch upper arm of dimension (3 × 7) mm2; antenna "b" is evolved by varying the size of Wpatch to (3 × 8) mm2; antenna "c" is formed by increasing the upper-arm width by 1 mm, making the Wpatch dimension (3 × 9) mm2; antenna "d" is designed by further increasing the width of the upper arm to 10 mm; antenna "e" is formed by introducing an extended arm of dimension (6 × 3) mm2 at the end of the upper arm of the patch; finally, in antenna "f" a DGS of dimension (17 × 30) mm2 is etched on antenna "e". The antenna dimensions are shown in Table 2. In the second step, all the antennas are further analyzed using RT Duroid® 5880 with a dielectric constant of 2.2 and a loss tangent of 0.002, with thicknesses of 0.8 and 1.6 mm, and an FR4 substrate with a thickness of 1.6 mm (Table 3). Table 2 Parametric analysis of the antenna designs

Parameter (mm) | Design_a | Design_b | Design_c | Design_d | Design_e | Design_f
Lsub | 40 | 40 | 40 | 40 | 40 | 40
Wsub | 30 | 30 | 30 | 30 | 30 | 30
Wpatch | 7 | 8 | 9 | 10 | 10 | 10
Lpatch | 34 | 34 | 34 | 34 | 34 | 34
F | 3 | 3 | 3 | 3 | 3 | 3
Pw | 4 | 4 | 4 | 4 | 4 | 4
PL | 3 | 3 | 3 | 3 | 3 | 3
EW | – | – | – | – | 3 | 3
EL | – | – | – | – | 9 | 9
WDGS | – | – | – | – | – | 30
LDGS | – | – | – | – | – | 17


Table 3 Comparative analysis of antenna designs using different substrate material Substrate Antennas

Frequency band (GHz)

Resonance frequency (GHz)

Gain (dBi)

FR4 1 (1.6 mm)

6.39–6.60

6.4

2.83

3.23

C

8.68–8.97

8.75

3.08

3.28

X

9.74–10.56

10.17

2

3

4

7.9

8.07

X

13.66–14.24 13.91

3.71

4.15

Ku

15.71–39.57 39.18, 37.90

10.56, 7.01

86.3

Ku, K and Ka

6.18–6.31

6.27

3.49

2.08

C

9.57–10.27

9.98

7.04

7.05

X

17.20–38.79 19.76, 27.24

4.3, 4.97

77.12

Ku, K and Ka

5.96–6.10

5.9

3.3

2.32

C

9.55–10.15

9.8

7.21

6.09

X

15.45–25.14 20.05, 24.33

7.7, 1.54

47.7

Ku and K

9.53–10.09

6.7

5.70

X

9.86

15.40–25.23 22.20, 24.59 5

4

Ku and K C

6.8

2.35

2.80

9.49–10.07

9.88

7.2

5.93

X

12.93–13.90 13.58

0.24

7.23

Ku

16.76–45.20 19.33, 32.07

10.5, 5.10

91.80

Ku, K, Ka and Q

– 0.48

15.45

S

2.40 9.4

6

23.38

X

– 0.36

5.04

Ku

18.98–41.52 26.93,31.97 6.7, 3.6

74.5

K, Ka and Q

10.54–10.81 10.67

9.54

2.52

X

14.55–17.57 16.97

6.51

18.80

Ku

10.42–10.69 10.58

11.02

2.55

X

16.25–17.65 17.09

12.80

8.25

Ku

14.68–15.44 15.06

3

6.19,7.04 48.38

6.68–6.87

6 (Proposed 2.15–2.51 antenna_Design_2) 8.27–10.46

RT 1 Duroid® 5880 (0.8 mm) 2

Percentage Operating bandwidth band

9.61–9.94

10.94

2.97

X

14.36–15.03 14.67

9.78

10.9

4.55

Ku

16.51–17.40 17.30

12.97

5.24

Ku

9.59–9.72

10.26

1.34

9.67

X (continued)


Table 3 (continued) Substrate Antennas

5

Frequency band (GHz)

Resonance frequency (GHz)

Gain (dBi)

Percentage Operating bandwidth band

14.36–17.55 15, 17.07

9.9, 12.46

19.99

Ku

14.38–15.91 15

10.6

10.10

Ku

1.11

15.29

S

4.54, 6.35

26.2

Ku and K

6 (Proposed 3.02–3.52 3.29 antenna_Design_1) 14.20–18.50 15.17, 17.3 RT 1 Duroid® 5880 (1.6 mm) 2

8.54–8.64

8.23

1.16

X

11.29–12.32 11.78

8.58

3.7

8.72

X and Ku

13.64–14.37 14.08

12.29

5.21

Ku

9.68–9.90

4.89

2.24

X

19.71–20.30 20

7.5

2.94

K

7.86–8.04

9

2.26

C and X

13.37–13.93 13.68

10.01

4.10

Ku

18.80–20.61 19.39

7.98

9.18

K

13.21–13.87 13.58

9.25

4.87

Ku

19.20–19.61 19.37

2.2

2.11

K

13.19–13.75 13.48

10.73

4.15

Ku

18.87–19.51 19.18

3.83

3.33

K

6 (Proposed 2.48–3.02 2.77 antenna_Design_3) 10.87–14.98 11.68

0.5

19.63

S

8.86

31.79

X and Ku

18.33–21.02 14.02

4.5

13.67

K

19.84

4.3

3

4 5

9.78 7.98

3 Results and Discussions The proposed F-shaped antenna with DGS is simulated by varying the substrate: FR4 with a thickness of 1.6 mm and RT Duroid® 5880 with thicknesses of 0.8 and 1.6 mm. The antenna behavior is observed and compared in terms of S11, gain, VSWR, group delay, radiation efficiency and radiation pattern. The comparative analysis of Proposed Design_1, Proposed Design_2 and Proposed Design_3 with respect to S11 and gain is shown in Fig. 1d, e. It shows that Proposed Design_1 radiates at 3.02–3.52 and 14.20–18.5 GHz with gains of 1.11, 4.54 and 6.35 dBi; Proposed Design_2 operates at 2.15–2.51, 8.27–10.46, 14.68–15.44 and 18.98–41.52 GHz with a peak gain of 9.2 dBi and an impedance bandwidth of 74.5%; whereas Proposed Design_3 operates at 2.77, 14.02 and 19.84 GHz with a peak gain of 8.86 dBi and a maximum impedance bandwidth of 31.79%. The VSWR of the proposed designs is depicted in Fig. 3. It can be clearly seen that the VSWR


Fig. 1 Principle figure of the proposed antenna a front view, b ground view, c side view, d S11 as a function of frequency, e gain as a function of frequency

of all the proposed designs lies between 1 and 2 over the frequency range, which is acceptable. The purpose of the group delay is to assess the transmission pulse distortion of the ultrawide band in the time domain. It can be clearly seen from Fig. 4 that the group delay of all the proposed antennas lies between −1 and 1 ns, which is acceptable for a good antenna; the group delay of proposed Design_3 stays within about −0.7 to 0.25 ns, which means the transmission time dispersion is very small, so proposed Design_3 can be said to be the best among all the proposed designs. The radiation efficiency of the proposed antennas is shown in Fig. 5. It can be


clearly seen that the radiation efficiency of Proposed Design_1 and Proposed Design_3 is more than 80%, whereas the radiation efficiency of Proposed Design_2 varies from 40 to 98%. The E- and H-plane radiation patterns of Proposed Design_1 at different resonating frequencies are presented in Fig. 6 at (a) 15.2 GHz and (b) 17.8 GHz, and they show that the antenna radiates omnidirectionally. Figure 7a–c shows the co-polarized E-plane and H-plane radiation patterns of Proposed Design_2 at 2.4, 9.5 and 20 GHz. The co-polarized E-plane and H-plane radiation patterns of Proposed Design_3 also show omnidirectional behavior, as shown in Fig. 8a–c (Fig. 2).
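For readers relating the VSWR curves of Fig. 3 to the S11 plots, the standard conversion (general antenna theory, not a result of this paper) is:

```latex
% Reflection coefficient magnitude from S11 (in dB), and the corresponding VSWR:
\[
  \lvert\Gamma\rvert = 10^{S_{11}/20}, \qquad
  \mathrm{VSWR} = \frac{1 + \lvert\Gamma\rvert}{1 - \lvert\Gamma\rvert}
\]
% e.g. S11 = -10 dB gives |Gamma| ~ 0.316 and VSWR ~ 1.93 (below 2).
```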

4 Conclusion In this paper, three different antenna designs with an F-shaped radiating patch are proposed. All three antennas are based on thickness variation, different dielectric substrates (FR4 and RT Duroid® 5880) and the DGS technique. The substrate of the proposed Design_1 antenna is RT Duroid® 5880 (0.8 mm); this design achieves its highest percentage bandwidth (26.2%) with 6.35 dBi gain and maintains positive gain over the whole 2–20 GHz range (cf. Fig. 1 and Table 1). It is applicable to the S-, Ku- and K-bands (wireless communication, satellite communication, radar and astronomical purposes). The substrate of the proposed antenna Design_2 is FR4 epoxy (1.6 mm); this design achieves its highest percentage bandwidths of 74.5 and 23.38% with corresponding peak gains of 9.2 and 6 dBi (cf. Fig. 1 and Table 1). It is applicable to the X-, K-, Ka- and Q-bands (weather monitoring, air traffic control, maritime vessel traffic control, defense tracking and vehicle speed detection, modern radars, satellite communication). The proposed Design_2 is also cost effective. The substrate of the proposed antenna Design_3 is RT Duroid® 5880 (1.6 mm); it has percentage bandwidths of 31.79, 19.63 and 13.67% with peak gains of 8.86, 0.5 and 4.3 dBi (cf. Fig. 1 and Table 1) and is applicable to S-, X-, Ku- and K-band applications (wireless communication, satellite communication, radar and astronomical purposes). It achieves more than 82% radiation efficiency over the entire frequency range (1–20 GHz), its VSWR lies between 1 and 1.5, and its group delay is closer to 0.25 ns than that of the other designs (cf. Figs. 3, 4 and 5).


Fig. 2 Parametric analysis of proposed antenna a antenna “a”, b antenna “b”, c antenna “c” size 9 mm (W patch ), d antenna “d” size 10 mm (W patch ), e antenna with extension, f proposed antenna with DGS, g parametric analysis of all the antenna with respect to S 11 as a function of frequency for (i) antenna “a” (ii) antenna “b” (iii) antenna “c” (iv) antenna “d” (v) antenna “e” (vi) antenna “f” For RT Duroid® 5880 thickness 0.8 mm, FR4 with thickness 1.6 mm and RT Duroid® 5880 with thickness 1.6 mm

Analysis of F-Shape Antenna with Different Dielectric Substrate …

Fig. 2 (continued) Fig. 3 VSWR versus frequency graph


Fig. 4 Group delay versus frequency graph

Fig. 5 Radiation efficiency versus frequency graph

Fig. 6 Radiation pattern with respect to E-plane and H-plane for RT Duroid® 5880 with thickness 0.8 at a 15.2 GHz and b 17.4 GHz frequency


Fig. 7 Radiation pattern with respect to E-plane and H-plane for FR4 at a 2.4 GHz, b 9.5 GHz and c 20 GHz frequency

Fig. 8 Radiation pattern with respect to E-plane and H-plane for RT Duroid® 5880 thickness 1.6 mm at a 2.8 GHz, b 11.7 and c 20 GHz frequency

References 1. Yadav, S., Jaiswal, K., Patel, A.K., Singh, S., Pandey, A.K., Singh, R.: Notch-Loaded Patch Antenna with Multiple Shorting for X and Ku Band Applications. Springer Nature Singapore Pte. Ltd. (2019) 2. Jaiswal, K., Patel, A.K., Yadav, S., Yadav, R.S., Singh, R.: Christmas tree shaped proximity coupled microstrip patch antenna for multiple ultrawide band application. In: International Conference on Computer Communication and Informatics (ICCCI-2018), 04–06 Jan 2018, Coimbatore, India 3. Khandelwal, M.K., Kanaujia, B.K., Kumar, S.: Defected ground structure: fundamentals, analysis and applications in modern wireless trends. Hindawi Int. J. Antennas Propag. 2017, Article ID 2018527,22 4. Bhatia, S.S., Sahni, A., Rana, S.B.: A novel design of compact monopole antenna with defected ground plane for wideband applications. Prog. Electromagn. Res. 70, 21–31 (2018) 5. Panda, J.R., Kshetrimayum, R.S., A printed 2.4 GHz/5.8 GHz Dual-band monopole antenna with a protruding stub in the ground plane for WLAN and RFID applications. Prog. Electromagn. Res. 117, 425–434 (2011)


6. Panda, J.R., Kshetrimayum, R.S.: An F-shaped printed monopole antenna for dual-band RFID and WLAN applications. Microw. Opt. Technol. Lett. 53(7) (2011) 7. Liu, Q., Shen, J., Yin, J., Liu, H., Liu, Y.: Compact 0.92/2.45 GHz dual-band directional circularly polarised microstrip antenna for handheld RFID reader applications. IEEE Trans. Antennas Propag. https://doi.org/10.1109/tap.2015.2452954 8. Ren, W.: Compact dual-band slot antenna for 2.4/5 GHz WLAN applications. Prog. Electromagn. Res. B 8, 319–327 (2008) 9. Jo, S., Choi, H., Shin, B., Oh, S., Lee, J.: A CPW-fed rectangular ring monopole antenna for WLAN applications. Int. J. Antennas Propag. 2014, Article ID 951968, 6 p, 9067 (2004) 10. Yeh, S.-H., Wong, K-L: Integrated F-shaped monopole antenna for 2.4/5.2 GHz dual-band operation. Microw. Opt. Technol. Lett. 34(1) (2002)

Analyzing Forensic Anatomization of Windows Artefacts for Bot-Malware Detection Vasundhra Gupta, Mohona Ghosh, and Niyati Baliyan

Abstract In order to analyse and detect Bot-malware early stage infections in user machine, we need approaches that can complement the current anti-virus and signature-based approaches for Bot-malware Detection. Our in-depth study forensically investigates various artefacts of Windows Registry which can be utilized to uncover traces of Bot-malware infection in the system. Further, we suggest system resource usage monitor (SRUM), a new diagnostic feature launched with Windows 8 as a source of potential artefact for Bot-malware early infection detection. This study may assist forensic experts to detect Bot-malware at the system level in the absence of logging or when the malicious application has been purposely removed by the attacker.

1 Introduction Windows has always been one of the most prominent operating systems across the globe [1], reason being its user-friendly operations via simple GUI. However, it is one of the most targeted operating systems by attackers as well. According to a report by Symantec, over 144 million malware attacks targeted Windows as compared to 4 million for Mac in 2018 [2]. Windows operating system has the capability to perform numerous operations in backstage, so that various programs get installed on our machine and run satisfactorily. It requires Windows Registry which is a powerful hierarchical database, for retrieval of information about system’s configuration. This information is regularly referenced by various applications and programs. Since V. Gupta · M. Ghosh (B) · N. Baliyan Indira Gandhi Delhi Technical University for Women, New Church Road, Kashmere Gate, Delhi 110006, India e-mail: [email protected] V. Gupta e-mail: [email protected] N. Baliyan e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_61

627

628

V. Gupta et al.

Windows Registry holds a plethora of information, it is often attacked by malware to infect the Windows operating system. Its structure contains five main classes called hives [3], which have keys and sub-keys within them that store the actual data. Hence, these require protection against unauthorized usage. Malware is software created by cyber attackers with the intention of performing malicious activities such as gaining unauthorized access to confidential data, corrupting data, gaining unauthorized privileges, causing damage to a computer or network, and spamming. These attackers leverage the ignorance of victims and lure them into downloading something undesirable for the system [4]. Due to the massive rise in the number of malware samples and their stealthy variants, it has become strenuous to detect and respond to them immediately. Conventional anti-virus software deploys virus signatures in order to trace malware infection; this requires updating the signature databases with new virus signatures and patterns frequently to guarantee detection accuracy, which is time consuming. This fosters the need for forensically modern approaches that detect early infection within a Windows victim machine. With this study, we suggest approaches that can yield forensically sound evidence for Bot-malware analysis and detection at an early stage.

2 Related Work This section focuses on the work done in the area of Windows artefacts analysis to detect malicious activities in the system. In [5], string monitoring, registry monitoring and file monitoring are used to distinguish benign executables from malicious executables in Windows machine. Some of these are expensive in terms of the time consumed, due to lack of specific monitoring. With [6], Microsoft Windows Prefetch files are employed to detect various kinds of evasive malware. In [7] UserAssist key, a fragment of Windows Registry as a resource is utilized for program executional analysis. This work involves comparative analysis with other similar artefacts. In [8], authors analysed those Windows artefacts which vary with Windows versions and have compatibility issues with most forensic tools. This required manual analysis to reduce redundancy in data and acquire better results. In [9], a survey is presented about various manual analysis mechanisms used in malware incident handling and detection. This work analyses those Windows artefacts which have not been thoroughly utilized in forensic investigation but can be leveraged to detect early infection of Bot-malware in Windows machine. Further, we present a preliminary analysis of new diagnostic feature launched with Windows 8, known as SRUM. We find this artefact immensely useful in construction of activity timelines of various applications, primarily when the application itself has been deleted/removed.

Analyzing Forensic Anatomization of Windows Artefacts . . .

629

3 Automatic Startup Locations Windows Registry has several locations, which provide programs to execute at system startup or after reboot. While this feature is useful, it turns into a vulnerability as any program can be configured to be added to the start-up during installations or later [10, 11]. In this section, we list the registry keys which can be exploited to acquire malicious persistence and have the potential to find traces of malware. Run/RunOnce Keys: Run Keys are the easiest way to gain persistence in the system depending on the privilege level of the key infected. In case of RunOnce, specified command runs once and then eventually gets deleted. Notable feature of Run keys are that they get ignored in Safe Mode. For any application/program to execute even under Safe Mode, one needs to prefix a value name with an asterisk (*) in RunOnce Keys. In addition, prefixing a value name with an exclamation point (!) in RunOnce Keys can cause delaying deletion until after a particular command runs [12]. Potential Malware Traces: If malware manages to acquire NT AUTHORITY/ SYSTEM OR administrative privileges, it exploits Run/RunOnce keys with System Level privileges1 at; HKEY_LOCAL_MACHINE\Software\Microsoft\ Windows\CurrentVersion\Run and HKEY_LOCAL_MACHINE\Software\ Microsoft\Windows\CurrentVersion\RunOnce. If malware fails to tamper administrative keys, it creates malicious entry with User Level privileges2 at; HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run and HKEY_CURRENT_USER\Software\Microsoft\Windows\ CurrentVersion\RunOnce. For these keys, key frequency of occurrence can be monitored for Bot-malware vs. legitimate programs to perform behavioral anomaly detection. Winlogon Keys: Winlogon is a user mode process which gets loaded at startup by wininit.exe. It is responsible for handling interactive user logon and logoffs. Potential Malware Traces: Winlogon utilizes the value present in the Userinit key to be able to launch logon scripts at; HKLM\Software\Microsoft\WindowsNT\ CurrentVersion\Winlogon\Userinit. The Userinit.exe is pointed by Userinit key, if this key is tampered by the Bot-malware, along with Userinit.exe, Winlogon will launch malicious exe as well. Secondly, Explorer.exe is pointed by the path; HKLM\Software\Microsoft\WindowsNT\CurrentVersion\Winlogon \Shell. It may be noted that this key contains just the name Explorer.exe and not its path as it is launched from \windows.3 Hence, it should only contain this name and no other path. Thirdly, Winlogon manages secure attention sequence (SAS) popularly known as (Ctrl-Alt-Delete). Keys under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify are utilized to notify event handlers 1 Applicable

to all user accounts. to only that user account through which malware gained entry. 3 Windows Storage Directory. 2 Applicable

630

V. Gupta et al.

at the time of SAS, to load a particular DLL. This DLL can be tampered by Botmalware to launch a malicious DLL as the event of SAS occurs. Table 3 in Appendix consolidates Auto-Run locations examined in this study.

4 Program Execution Locations When it comes to detecting initial infection in the system, it becomes essential for Bot-malware to execute in the system at least once. This gives an opportunity to look for artefacts where program execution details are stored. Further, to construct activity timeline, multiple artefacts need to be analysed . Prefetch: It was initially launched with Windows XP. Whenever an application is run in the system, Windows operating system creates Prefetch files (.pf) containing the information about the files loaded/accessed by that application at C : \Windows\ Prefetch [7]. Basically, Prefetch speeds up the loading/boot-up process for an application by optimizing its load time for future execution. There are three kinds of Prefetch file given in Table 1: Boot trace, application, hosting application [13]. Each has its own relevance and a unique naming convention. We can check the status of (enable/disable) of Prefetch configuration from the Windows Registry HKLM\SYSTEM\CurrentControlSet\Control\SessionManager\ MemoryManagement\PrefetchParameters. If the value is (0), Prefetch has been disabled, and otherwise for nonzero value, it is enabled. Forensic Standpoint Relevance: In Bot-malware analysis, Prefetch files can reveal name of the executable and its path, number of times of execution,4 using the creation timestamp determination of the time when that executable was first run, last modification time to find last run of that executable, information about files/directories accessed by that executable, other artefacts such as volume—path, creation time and serial number. [14]. Forensic investigators can utilize above artefacts for correlation with other artefacts and thereby construct activity timeline. Prefetch files can be used as a cross-reference in log analysis, to double check the executions that happened on the system. Notably, once Prefetch files have been deleted, a persistence mechanism of the malware can be found only on the boot trace file. This has a file named NTOS.EXE having information such as device path and full path. Henceforth, Prefetch files can play a significant role in tracing initial infection of the Bot-malware. UserAssist: This key was introduced with Windows NT4 as a component in Windows Registry [7]. There is a hive file for each user named NTUSER.DAT file which consists of UserAssist key in registry at HKCU\Software\Microsoft\Windows\ CurrentVersion\Explorer\UserAssist [7]. Under UserAssist key, there exist at least two sub-keys having global unique identifiers (GUID’s). It is found that encrypted values are stored in these GUID’s which can be decoded using ROT-13 encryption. 4 In

Windows 8, last eight execution timestamps are captured.

Analyzing Forensic Anatomization of Windows Artefacts . . . Table 1 Prefetch file types Prefetch file type Boot trace

Application

Hosting application

Function

631

Naming convention

Utilized by Windows OS for speeding up the boot process

Single file named (NTOSBOOT-B00DFAAD.pf) which actually means NT operating system boot where B00DFAAD is a hash which constitutes for uninitialized data. This file is largest Prefetch file Responsible for speeding up Name of the exe, followed by a the application launch process dash and then a hash of the run location of the application (CLOCK.EXE-99FAF17F.pf) Responsible for speeding up Same as that of Application the application launch process Prefetch File, but with a hash for those executables which of eight characters computed spawn system processes using application’s path and the command line used to start the application. Since there are executables like rundll32.exe, svchost.exe etc. which spawn various system processes as a single executable/path will have several Prefetch files

Forensic Standpoint Relevance: As the name suggests, it is a user-related artefact which records information such as applications, programs, shortcuts and control panel objects accessed by a specific user. As forensic investigators, when one wants to track traces of malware, then program execution history and activities are of immense importance. UserAssist key also records information regarding all the external media connected to a system. This key has anti-forensic scrubbing capabilities when compared with Prefetch files which get deleted on running privacy/clean up tools in the system. Thus, UserAssist key keeps the information stored in it intact even after privacy/cleanup tools are run [7]. AppCompatCache: Application compatibility cache resolves application compatibility in Windows systems with a shim infrastructure [8], which is responsible for determining whether an application needs to be shimmed or not. Forensic Standpoint Relevance: As the shim infrastructure by Microsoft is implemented, it generates metadata of the executables present in Windows Registry. This metadata can be forensically useful in analysing process execution in Windows machines. Information like file size, last execution time, full file path, etc., can be found. Files with extension .exe, .dll and .bat are logged by AppCompatCache. This metadata gets serialized to Windows Registry on system restart or reboot.

632

V. Gupta et al.

Fig. 1 SRUM link with windows registry [16]

5 System Resource Usage Monitor (SRUM) With Windows 8, Microsoft launched a new feature called as system resource usage monitor which is a diagnostic feature which keeps track of the system resource consumption of all the programs and applications running in the machine [15]. There are five major system consumption parameters, namely connectivity, application resource usage, network resource usage, energy usage and Windows push notifications which are monitored by SRUDB.dat5 HKLM\SOFTWARE\Microsoft\Win− dowsNT\CurrentVersion\SRUM\Extensions. Each parameter is assigned a unique GUID shown in Fig. 1 in appendix. The five parameter data is read by a system process known as Svchost.exe. The data is then stored in Windows Registry at the location mentioned above, once per hour, and there is flush of data from Windows Registry to the file SRUDB.dat located at \Windows\System32\sru. In human unreadable format and the file SRUDB.dat being an OS file cannot be extracted directly. We require tools like FTK Imager to extract SRUDB.dat and give this file as input to SRUM-DUMP.exe. We finally obtain an .xls file having system resource data of all the five parameters. Forensic Standpoint Relevance: It has immense value for file usage and information research as well as for event intervention [15]. SRUM protects and records the paths and names of every application that has been executed or is currently executing on our system, even the ones the attackers deleted. It stores the user’s SID that executes the program, and this helps to track the attacker, who used a transitory account to acquire privileges, that was flushed upon deletion. As evidence, one can find names of all of the networks that our system have established connection with and the duration of connection. In addition, it stores intricate details like battery usage of an application, CPU time segregated by background and foreground time, number of bytes that were read and written from the hard drive by the application, etc. This collected data is stored in the \Windows\System32\sru directory which contains a file called SRUDB.DAT. The file is found to follow Windows Extensible Storage Engine ESE database format. It is specifically designed such that even the cleansing tools used for protection of privacy like CCleaner, CleanAfterMe, or Privacy Eraser, etc., do not 5 Available

in Windows Registry at.

Analyzing Forensic Anatomization of Windows Artefacts . . .

633

Fig. 2 SRUM spreadsheet after software installation

Fig. 3 SRUM spreadsheet after software was purposely uninstalled from the system

have the feature to currently touch SRUM data at all [16]. SRUM is quint essential for scenarios where there is absence of log-keeping feature, the possibility of log logging is negligible, there is no IDS information, the client removes malicious software, and the attacker deletes accounts and corrects the system, or re-installs the latest version of the software, as seen in SRUM spreadsheet extracted Figs. 2 and 3 in Appendix. SRUM Preliminary Execution: Initially, a software name Spotify6 was installed and executed. This entry got stored by the SRUM. For the purpose of acquiring evidence regarding execution of this software, SRUM excel was extracted by means of forensic tools like FTK Imager7 and SRUM-DUMP.exe.8 Figure 2 shows the entry of Spotify software execution. Later, this software was purposely uninstalled to check whether removal of software does impact the stored SRUM entry or not. It can be clearly seen from Fig. 3 that SRUM entry obtained during Spotify execution is still intact. Hence, we suggest SRUM, as a significant source in Bot-malware early infection detection and analysis, primarily because of retaining the traces of even deleted applications. Table 2 summarizes the relevance of various artefacts undertaken in this study in evidence collection.

6 Spotify

Music Application: https://www.spotify.com/in/.

7 https://marketing.accessdata.com/ftkimager4.2.0. 8 https://github.com/MarkBaggett/srum-dump.

634

V. Gupta et al.

Table 2 Forensic relevance of sources in evidence collection of host-based executables

Evidence source | Complete file path | Installation timestamps | Modified timestamps | Last execution timestamps | Execution count | Traces of deleted applications
Prefetch | Y | Y | Y | Y | Y | –
UserAssist | Y | N | N | Y | Y | –
AppCompatCache | Y | N | Y | N | N | –
SRUM (SRUDB.dat) | Y | N | Y | Y | Y | Y
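To make the SRUM workflow described above concrete, the following sketch filters the spreadsheet produced by srum-dump (extracted from SRUDB.dat with FTK Imager, as in the preliminary execution) for rows that mention a given application. The workbook file name, sheet layout and "Application" column name are assumptions made for illustration, not outputs documented by the paper.

```python
# Search a srum-dump spreadsheet for rows naming a particular application.
# Assumes an exported workbook "srum_output.xlsx" with an "Application"
# column on each sheet; adjust names to match the actual export.
import pandas as pd

def find_app_entries(workbook_path: str, app_name: str) -> pd.DataFrame:
    sheets = pd.read_excel(workbook_path, sheet_name=None)  # dict of DataFrames
    hits = []
    for sheet_name, df in sheets.items():
        if "Application" not in df.columns:
            continue
        match = df[df["Application"].astype(str).str.contains(app_name, case=False, na=False)]
        if not match.empty:
            hits.append(match.assign(Sheet=sheet_name))
    return pd.concat(hits, ignore_index=True) if hits else pd.DataFrame()

print(find_app_entries("srum_output.xlsx", "Spotify"))
```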

6 Conclusion With this study, we reviewed the various potential artefacts in Windows Registry in terms of Auto-run locations and Process Execution locations that can be investigated forensically to obtain sound evidence and traces, to detect and analyse Bot-malware at an early infection phase. From the study, it is evident that SRUM can be utilized as a means of gathering evidence in a system when there is absence of log-logging or the application has been removed. Further, this analysis may assist forensic experts to detect Bot-malware at the system level, without the need for inspection at the payload or traffic level. It can serve as a starting point for security researchers, to detect malware in case logs are unavailable or malicious application has been intentionally removed by the attacker.

7 Appendix See Table 3.

Table 3 Auto-Run locations for malware detection and analysis

Registry key | Location | Privilege level
Run | HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run | System
Run | HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run | User
RunOnce | HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\RunOnce | System
RunOnce | HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce | User
BootExecute | HKLM\SYSTEM\CurrentControlSet\Control\hivelist | System
Winlogon | HKLM\Software\Microsoft\WindowsNT\CurrentVersion\Winlogon\Userinit | System


References 1. Desktop Operating System Worldwide. https://gs.statcounter.com/os-market-share/desktop/ worldwide 2. 2019 Internet Security Threat Report. https://www.symantec.com/security-center/threatreport 3. Alghafli, K.A., Jones, A., Martin, T.A.: Forensic analysis of the windows 7 registry. JDFSL 5(4), 5–30. http://ojs.jdfsl.org/index.php/jdfsl/article/view/141 (2010) 4. Shaikh, A.: Botnet Analysis and Detection System. Napier. http://www.soc.napier.ac.uk/~bill/ botnet_alan.pdf (2010) 5. Satrya, G.B., Cahyani, N.D., Andreta, R.F.: The detection of 8 type malware botnet using hybrid malware analysis in executable file windows operating systems. In: Proceedings of the 17th International Conference on Electronic Commerce 2015, p. 5. ACM (2015) 6. Alsulami, B., Srinivasan, A., Dong, H., Mancoridis, S.: Lightweight behavioral malware detection for windows platforms. In: 2017 12th International Conference on Malicious and Unwanted Software (MALWARE), pp. 75–81. IEEE (2017) 7. Singh, B., Singh, U.: Program execution analysis in windows: a study of data sources, their format and comparison of forensic capability. Comput. Secur. 74, 94–114 (2018). https://doi. org/10.1016/j.cose.2018.01.006 8. Duranec, A., Topolˇci´c, D., Hausknecht, K., Delija, D.: Investigating file use and knowledge with windows 10 artifacts. In: 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). pp. 1213–1218. IEEE (2019) 9. Wael, D., Azer, M.A.: Malware incident handling and analysis workflow. In: 2018 14th International Computer Engineering Conference (ICENCO). pp. 242–248. IEEE (2018) 10. Farmer, D.J.: A forensic analysis of the windows registry. Forensic Focus (2007) 11. Singh, A., Venter, H.S., Ikuesan, A.R.: Windows registry harnesser for incident response and digital forensic analysis. Aust. J. Forensic Sci. 1–17 (2018) 12. Raja, P.K.: Run keys in the Registry. https://www.symantec.com/connect/blogs/run-keysregistry (2008) 13. Infosecuritygeek: Prefetch Forensics. https://infosecuritygeek.com/prefetch-forensics/ (2018) 14. McQuaid, J.: Forensic analysis of prefetch files in windows. https://www.magnetforensics. com/blog/forensic-analysis-of-prefetch-files-in-windows/ (2019) 15. Center, S.I.S.: Sans: System resource utilization monitor. https://isc.sans.edu/forums/diary/ SystemResourceUtilizationitor/21927/ (2017) 16. Khatri, Y.: Forensic implications of system resource usage monitor (SRUM) data in windows 8. Digital Invest. 12, 53–65 (2015)

Low-Power Two-Stage OP-AMP in 16 nm Gopal Agarwal and Vedvyas Dwivedi

Abstract Low-power two-stage OP-AMP is presented here. The OP-AMP receives 0.9 V supply voltage with variation of 0.8–1 V. Designed OP-AMP was simulated in 16 nm CMOS technology (PTM—Predictive technology models) with variation in supply voltage and temperature. The overall gain of two-stage OP-AMP was found to be greater than 40 dB for 0–80 °C temperature range. The nominal and worst-case power dissipation achieved are 873 nW and 3.3 µW, respectively.

1 Introduction Advancements in technology are pushing the circuit design to low-power, highdensity and high-performance realm. Low-power requirement demands circuit operation at very low voltage levels with nano-ampere bias currents. Some of the circuit design such as cascoding to achieve high gain in amplifiers fails in low voltage environment. Low voltage and low current entail subthreshold operation of MOSFETs in the circuit design and hence lower gm values and low driving capability. High packaging density requires the utilization of very short channel devices. MOSFETs with channel length as low as 16 nm are being utilized in various types of integrated circuits. Such low lengths bring to fore effects which could be neglected in design with large length MOSFETs, for example, gate-dielectric leakage and lower output resistance. Such short channel effects complicate the designing of circuits.

G. Agarwal (B) · V. Dwivedi C.U. Shah University, Surendranagar, Gujarat, India e-mail: [email protected] V. Dwivedi e-mail: [email protected] G. Agarwal IIIT, Surat, India © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_62


Present work is focused on one circuit block two-stage OP-AMP in 16 nm CMOS technology. Section 2 focuses on the designing of the OP-AMP and Sect. 3 describes the simulated results.

2 Proposed Two-Stage OP-AMP The MOSFET level circuit diagram of the two-stage OP-AMP is shown in Fig. 1. It consists of the regular two stages, the first stage is the differential amplifier stage followed by a NMOS common source amplifier as the second stage. Most of the overall gain of the OP-AMP is achieved in second stage. The reference current for the OP-AMP is obtained via a PMOS-NMOS divider circuit. This eliminates the requirement of a current/voltage reference block in the overall design and hence achieves simplification. But, this way the current is heavily dependent on supply voltage and the gain of the OP-AMP cannot be guaranteed to be constant. However, as seen in Sect. 3 simulation results, the gain of the OP-AMP achieved is always >40 dB as required under all expected operating conditions. The small-signal diagram for the two-stage OP-AMP design is shown in Fig. 2. The design equation for gain of the OP-AMP is as follows:   Av = −gm2,3r01 ∗ (−gm7r02 ) where gm2,3 = transconductance of input differential pair

Fig. 1 Proposed two-stage OP-AMP


Fig. 2 Small-signal model for proposed two-stage OP-AMP

r01 = small-signal output resistance at the first-stage output = r03 ∥ r06, and r02 = small-signal output resistance at the second-stage output = r07 ∥ r010. The OP-AMP is designed to have an overall gain of above 40 dB under all operating conditions.
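Writing the design equation out with these resistances (a restatement of the expressions already given above; the dB conversion is added as the standard definition, not taken from the paper):

```latex
\[
  A_v \;=\; \bigl(-g_{m2,3}\,r_{01}\bigr)\bigl(-g_{m7}\,r_{02}\bigr)
        \;=\; g_{m2,3}\,g_{m7}\,(r_{03}\parallel r_{06})\,(r_{07}\parallel r_{010}),
  \qquad
  A_{v,\mathrm{dB}} \;=\; 20\log_{10}\lvert A_v\rvert .
\]
```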

3 Simulation Results The OP-AMP was designed in 16 nm CMOS technology using predictive technology models (PTM) and simulated in Symica software with Symspice as simulator. Following Table 1 describes sizes of transistors in final design (refer Fig. 1 for transistor names). Table 2 mentions the important results in nominal condition (V dd = 0.9 V, V inn = Vinp = 0.5 V, T = 27 °C). Table 1 Transistor sizes in the final design Reference current generator

PMOS

M4

1.6 µ/1.6 µ

NMOS

M0

1.6 µ/1.6 µ

Differential amplifier

NMOS

M1

1.6 µ/1.6 µ

NMOS

M2

16 µ/1.6 µ

NMOS

M3

16 µ/1.6 µ

PMOS

M5

96 µ/1.6 µ

PMOS

M6

96 µ/1.6 µ

NMOS

M7

16 µ/1.6 µ

PMOS

M10

6 µ/1.6 µ

Second stage

Table 2 Important results in nominal condition

Reference current

166 nA

Differential amplifier

Current = 164 nA, gm2,3 = 1.95

µA V ,

gds, 1st stage = 10.62 n Second stage

Current = 636 nA, gm7 = 13.7 gds, 2nd stage = 45.9 n

µA V ,

640

G. Agarwal and V. Dwivedi

Fig. 3 Open-loop gain plot in nominal condition nominal condition (V dd = 0.9 V, V inn = Vinp = 0.5 V, T = 27 °C) with C L = 5pF

Table 3 Loop gain results V dd

0.8 V

Temperature (°C)

0

0.9 V

Gain (dB)

48

45.18

41

59.46

56

50.63

67.14

62.77

56.25

BW (Hz)

24.34k

24.34k

12k

28k

17.46k

12.81k

13.9k

9.56k

6.06k

PM

45°

60°

52°

36°

36°

36°

22.5°

15°

26°

27

80

0

1V 27

80

0

27

80

The designed OP-AMP was simulated for open-loop gain with a load capacitance of C L = 5pF. The open-loop gain plot is shown in Fig. 3. The loop gain is 55 dB with a phase margin of 35°. Sufficient loop gain of 55 > 40 dB is achieved. Phase margin can be further improved by utilizing a sound frequency compensation scheme around the designed OP-AMP. Table 3 demonstrates the loop gain results with supply voltage and temperature variation. The gain is always greater than 40 dB as required. Table 4 demonstrates the power dissipation results of the designed OP-AMP. As can be seen, the worst-case power dissipation for the designed OP-AMP is 3.3 µW.

4 Conclusion A two-stage OP-AMP in 16 nm CMOS technology is presented. Overall gain of >40 dB with a worst-case PM of 15° is achieved. The worst-case power dissipation for the designed OP-AMP was found to be 3.3 µW.

0

73.35n

427.87 n

342n

Temperature (°C)

Reference current (A)

Total current (A)

Power (W)

0.8 V

V dd

Table 4 Power-dissipation results

27

283n

354.32 n

60.87n

80

239n

298.54 n

52.54n 873n

969.75 n

1152n

1.28 µ

27 166.36n

218.46n

0

0.9 V 80

619n

687.62 n

124.93n

3300n

3.3 µ

563.28n

0

1V 27

2320n

2.32 µ

397.97n

80

1370n

1.37 µ

266.16n

Low-Power Two-Stage OP-AMP in 16 nm 641

642

G. Agarwal and V. Dwivedi

References 1. Sansen, W.: Analog CMOS from 5 micrometer to 5 nanometer. In: Digest of Technical Papers— IEEE International Solid-State Circuits Conference, vol. 58, pp. 22–27 (2015) 2. Lundager, K., Zeinali, B., Tohidi, M., Madsen, J.K., Moradi, F.: Low power design for future wearable and implantable devices. J. Low Power Electron. Appl. 6(4) (2016) 3. Dörrer, L., Kuttner, F., Conzatti, F., Torta, P.: Analog circuits in 28 nm and 14 nm FinFET. In: Hybrid ADCs, Smart Sensors for the IoT, and Sub-1V & Advanced Node Analog Circuit Design, pp. 281–295. Springer International Publishing (2018) 4. Ragheb, A., Journal, H.K.: Ultra-low power OTA based on bias recycling and subthreshold operation with phase margin enhancement. Microelectronics 60, 94–101 (2017) 5. Omran, H., Alhoshany, A., Alahmadi, H.: A 33fJ/step SAR capacitance-to-digital converter using a chain of inverter-based amplifiers. IEEE Trans. Circuits Syst. I, no. Regular Paper, pp. 64.2, 310–321 (2016) 6. Flandre, D., Jespers, P., Circuits, F.S.: A gm/ID based methodology for the design of CMOS analog circuits and its application to the synthesis of a silicon-on-insulator micropower OTA. IEEE J. Solid-State Circuits 31(9), 1996 (1996) 7. Flandre, D., Viviani, A., Eggermont, J.: Improved synthesis of gainboosted regulated-cascode CMOS stages using symbolic analysis and gm/ID methodology. IEEE J. Solid, vol. 32(7), 1006 (1997) 8. Jespers, P.: The gm/ID methodology, a sizing tool for low-voltage analog CMOS circuits: the semi-empirical and compact model approaches (2009) 9. Jespers, P., Murmann, B.: Systematic design of analog CMOS circuits (2017) 10. Krishnan, N.A.M.M., Vasundhara Patel, K.S., Jadhav, M.: Comparative study of gm/ ID methodology for low-power applications. In: Lecture Notes in Electrical Engineering, vol. 545, pp. 949–959 (2019) 11. Mahattanakul, J., Chutichatuporn, J.: Design procedure for two-stage CMOS opamp with flexible noise-power balancing scheme. IEEE Trans. Circuits Syst. I: Regular Pap. (2005) 12. Allen, P., Holberg, D.: CMOS analog circuit design (2011) 13. Kumar, V.: High bandwidth low power operational amplifier design and compensation techniques. Iowa State University, Digital Repository, Ames (2009)

Computation of Hopf Bifurcation Points in the Magnetic Levitation System Sudarshan K. Valluru , Anshul Gupta, and Aditya Verma

Abstract This research brief aims to study the bifurcation analysis of an electromagnetic levitation (maglev) system. The bifurcation analysis involves first performing the numerical analysis, and then the simulations using MATLAB are analyzed to verify the results. The study requires derivation of a mathematical model for the maglev system, which can be used for the bifurcation analysis. The bifurcation analysis involves a numerical comparison of the system equations with the predefined constraints. The input voltage is being used as the bifurcation parameter, which is by ball’s position, ball’s velocity, and the current flowing through the circuit. The analysis allows us to compute a combination of values of parameters, when simulations are performed to check system characteristics, periodic oscillations at these values are obtained. Hence, it proves the existence of Hopf bifurcation at the computed value of system parameters.

1 Introduction The electromagnetic levitation system is used for research purposes in electromechanical, electronics, and control system fields. It is a synergic amalgam of sensors, control system, and the actuation system. It is used for various industrial applications and future engineering sciences such as super-high-speed rail, small and huge

S. K. Valluru (B) · A. Gupta · A. Verma Center for Control of Dynamical Systems and Computation, Department of Electrical Engineering, Delhi Technological University, Delhi 110042, India e-mail: [email protected] A. Gupta e-mail: [email protected] A. Verma e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_63

643

644

S. K. Valluru et al.

turbines, centrifuge of a nuclear reactor, heart pump, and electromagnetic suspension. The common point in all applications is the lack of contact and thus no wear and tear and friction, which makes the maglev system highly efficient [1–3]. The research on the control systems for maglev systems has been in action for decades now, and many of them have converged to voltage-feedback linearization and control, which requires the usage of a very accurate mathematical model. Instead of conventional works, this paper studies the bifurcation points of magnetic levitation system during dynamic envelope, which is helpful to design new controllers. It was seen that the system shows Hopf bifurcation when system characteristics switch to or from periodic orbit when a system parameter is changed. This paper has been divided into five sections; introduction to the system and this research brief is in Sect. 1; Section 2 presents a mathematical model for the system; the bifurcation analysis is illustrated in Sect. 3. In Sect. 4, the result has been discussed, and Sect. 5 concludes the brief [4–6].

2 Mathematical Modeling of Magnetic Levitation System The bifurcation analysis of the maglev system requires the determination of the mathematical model. The mathematical model for the magnetic levitation system is derived by utilizing the electrical and mechanical state of the system [4]. The systematic diagram for the electromagnetic levitation system is shown in Fig. 1. It shows the various components of the system and the overall functioning of the system. The inductance of the coil is a nonlinear function depends on the position of ball defined as, 2K x

(1)

di(t) dL(x) + i(t) ∗ + R ∗ i(t) dt dt

(2)

L(x) = L 1 + By Kirchhoff’s voltage law, u(t) = L(x) ∗

The two forces under which the ferromagnetic ball levitates are: 1. Force due to gravitational field (mg) 2. Force due electromagnetic field (F em ). Balancing the forces, we get: m

d2 x = mg − Fem dt 2

(3)

Computation of Hopf Bifurcation Points in the Magnetic …

645

Fig. 1 Pictorial representation of magnetic levitation system

And F em can be calculated using the Eq. (4) and Maglev parameters from Table 1 [10]: Fem = − ∂∂Wxm and Wm = 21 Li 2 Fem = −

i 2 dL(x) 2 dx

(4)

Using (1)–(4) we create the differential model of the system as: x1 (t) = x2 (t) x2 (t) = g −

Table 1 Description of Maglev’s physical parameters

K x32 (t) mx12 (t)

Symbol

Parameter

Value

m

Mass of ball

0.0216 kg

L1

Inductance of coil without the ball

0.237 H

K

Constant

8.47e–05 s−1

g

Gravitational acceleration

9.81 m/s2

R

Resistance of current coil circuit

100 

646

S. K. Valluru et al.

x3 (t) =

  1 2K x2 (t)x3 (t) U (t) − Rx3 (t) + L x12 (t)

(5)

where, x 1 = ball’s position, x 2 = velocity of the ball, and x 3 = current flowing in the coil. The input voltage is taken as, U = K 1 (xe − x(1)) + K 2 (ve − x(2)) + K 3 (ae − x  (2))

(6)

where, K 1 is the position control gain, x e is the reference position, K 2 is the voltage control gain, ve is the reference voltage, K 3 is the acceleration control gain, and ae is the reference acceleration.

3 Bifurcation Analysis for Electromagnetic Levitation System To determine the fixed points, setting the system Eq. (5) to zero: 0 = x2 (t) K x32 (t) 0=g− mx12 (t)   2K x2 (t)x3 (t) 1 U (t) − Rx3 (t) + 0= L x12 (t)

(7)

So, after solving, we find out Fixed Points to be: u x(1) = ± R



U c ; x(2) = 0; x(3) = mg R

(8)

Calculating the linearized model of the system: ⎤ ⎡ x1 (t) 0 ⎢  ⎥ ⎢ 2cx(3)2 ⎣ x2 (t)⎦ = ⎣ mx(1)3 − 4cx(2)x(3) − x3 (t) L x(1)3 ⎡

K1 L

1 0 2cx(3) − L x(1)2

⎤⎡

0 K2 L

2cx(3) − mx(1) 2 2cx(2) R − − L x(1)2 l

K3 l

x1 (t)



⎥ ⎥⎢ ⎦⎣ x2 (t)⎦ x3 (t)

(9)

Computation of Hopf Bifurcation Points in the Magnetic …

647

To determine the eigenvalues for the maglev system, |A − λI | = 0. 0−λ 2cx(3)2 mx(1)3 4cx(2)x(3) − L x(1)3 −

K1 L

1 0−λ 2cx(3) − KL2 L x(1)2

0 2cx(2) L x(1)2

2cx(3) − mx(1) 2 − Rl − Kl 3

=0 − λ

(10)

On calculating Determinant and simplifying we get Eq. (11): 

   2 K3 2cx(2) 4c x(3)2 R 2K 2 cx(3) 2cx(3)2 λ +λ − +λ + − − L L x(1)2 L m L x(1)4 m L x(1)2 mx(1)3 2 2 2 12c x(2)x(3) 2K 1 cx(3) (2c + K 3 )x(3) +( + − )=0 (11) 5 mlx(1) mlx(1)2 mlx(1)3 3

2

As calculated before in Eq. (8), we have equilibrium Points: u x(1) = ± R



U c ; x(2) = 0; x(3) = mg R

Solving for P0 , P1 and P2 , by Eq. (12):    mg 2Rg K 1 − (K 3 + r ) P0 = LU c    K2 2r mg mg 2Rg P1 = − − + U L c LU (K 3 + R) P2 = L

(12)

On substituting values and simplifying: 529.822 (K 1 − (K 3 + 6.4)50.17) U   K2 11.44 125.568 − − 50.02 + P1 = U 0.237 U (K 3 + 6.4) P2 = 0.237

P0 =

Also, U = K 1 (xe − x(1)) + K 2 (ve − x(2)) + K 3 (ae − x  (2)). The maglev system is actuated with the feedback voltage with components of distance, velocity, and acceleration. Now, solving for reference point (x, v, i) = (0.02, 0, 0), for the reference point, the value of acceleration is zero [7].      U K x(3)2 c + K 2 (0 − 0) + K 3 0 − g + U = K 1 0.02 − R mg mx(1)2

648

S. K. Valluru et al.

On solving: U=

K 1 ∗ 0.02 (1 + 0.00311K 1 )

(13)

For Hopf bifurcation [8]; P0 , P1 > 0 and P1 P2 = P0 . From P0 and equating it to zero, we get: K3 =

1.64775K 12 + 2.362725K 1 − 169600.6846 26500.10697 + 82.411512K 1

(14)

Also, from equating P1 = 0, we have: K 2 = −10.07582 +

11.44 K 1 ∗ 0.02

(15)

Now for Hopf bifurcation, P2 * P1 = P0 . On solving, we have: 

1.64775K 12 + 2.362725K 1 − 169600.6846 + 6.399957 26500.10697 + 82.411512K 1   −78289.63917 + 1.77892 − K 1 = 0 ∗ K1



(16)

Values of K 1 can be obtained by solving the above equation Eq. (16) and, the values for K 2 and K 3 can be calculated [9] subsequently from Eqs. (14) and (15).

4 Results and Discussion MATLAB simulations are performed to check the validity of the solution. The solution of the Eq. (16) for reference value xe = 0.02, ve = 0, ae = 0 gives K 1 = 0.00215, K 2 = 266030, K 3 = −6.4, which represents the value of control parameters for which Hopf bifurcation occurs in the system. Simulation of the model for these values gives a limit cycle as obtained in Figs. 2 and 3 which shows the occurrence of Hopf bifurcation in system. In the figures shown below, x-axis denotes the distance of the ball from the reference zero of the system, similarly y-axis shows the velocity of motion of the ball and z-axis represents the current in the coil during the motion of the ball. Figure 2 is obtained by running simulation for a small time period to observe the shape of limit cycle for the calculated value of control signal (U) for


Fig. 2 System characteristics with calculated values of the control signal (U) for the maglev system


Fig. 3 Limit cycle obtained with the system for the calculated values of the control signal (U) for Hopf Bifurcation

Figure 3 is obtained by running the simulation for a longer time period with the same control signal calculated earlier, whereas the same parameters can be adjusted so that the system exhibits stable characteristics, as shown in Fig. 4. Similarly, the equations can be used to calculate when the controller will lead the system into Hopf bifurcation for different values of the reference point.

5 Conclusion

The mathematical model of the maglev system with a feedback controller is used to derive the equations that give the values of the parameters for which the system undergoes Hopf bifurcation. The parameter values of the actuation signal U for Hopf bifurcation were calculated as K1 = 0.00215, K2 = 2.6603e+05, and K3 = −6.4000, and when the system characteristics were plotted at these values, a limit cycle was obtained. When the value of K2 was adjusted, the limit cycle vanished. So, it can be concluded that Hopf bifurcation exists at the computed values.


Fig. 4 Stable system characteristics observed with randomized values of U of the maglev system (K1 = −1622.193906, K2 = −10.4284, K3 = −38.8176)

Similarly, the analysis can be conducted for various dynamic systems to obtain the conditions for Hopf bifurcation.

References 1. Lundberg, K.H., Lilienkamp, K.A., Marsden, G.: Low-cost magnetic levitation project kits. IEEE Control Syst. Mag. 24(5), 65–69 (2004) 2. Shiao, Y.S.: Design and implementation of a controller for a magnetic levitation system. Proc. Natl. Sci. Counc. 11(2), 88–94 (2001) 3. Yaghoubi, H.: The most important maglev applications. J. Eng. (2013) 4. Dolga, V., Lia, D.: Modeling and simulation of a magnetic levitation system. Ann. Oradea Univ. Fascicle Manag. Technol. Eng. 6, 16 (2007) 5. Valluru, S.K., Singh, M., Verma, A., Gupta, A.: Bifurcation curves for electrical DC motors. In: 2018 2nd IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), Delhi, India, pp. 953–957 (2018) 6. Zhang, L., Huang, L., Zhang, Z.: Hopf bifurcation of the maglev time-delay feedback system via pseudo-oscillator analysis. Math. Comput. Model. 52(5–6), 667–673 (2010) 7. Lorenz, E.: The butterfly effect. World Scientific Series on Nonlinear Science Series A 39, 91–94 (2000) 8. Kendall, Bruce: Cycles, chaos, and noise in predator–prey dynamics. Chaos, Solitons Fractals 12, 321–332 (2001). https://doi.org/10.1016/S0960-0779(00)00180-6 9. Roger, G.: Dynamic and control requirements for EMS maglev suspension. In: Maglev 2004 PROCEEDINGS, vol. 2, pp. 926–934 (2004) 10. Feedback instruments Ltd. 33–942. Magnetic levitation manuals (2016)

Face Sketch-Image Recognition for Criminal Detection Using a GAN Architecture Sunil Karamchandani and Ganesh Shukla

Abstract One of the important cues in solving crimes and apprehending criminals is matching sketches with digital face images. This paper addresses the problem of matching a forensic sketch to a gallery of mugshot images. A feature-based technique is implemented and compared against the proposed generative adversarial network (GAN). The designed GAN projects alternating accuracy for the generator and the discriminator for various batch sizes, dropouts, and learning rates, and is able to identify the corresponding image in the CUHK database. Simulation results show that while feature-based matching fails, the holistic method, trained as an innovation process, produces promising results.

1 Introduction

Sketch recognition is a technique of identification in the absence of an input but in the presence of the ground truth. A face sketch recognition algorithm must minimize the error between the sketch drawn by hand and the photograph available in the database. Face sketch recognition has been automated using computer interaction for combining facial features in crime investigations. It eliminates the need for individual biometric modalities. With the advent of machine learning, holistic recognition methods have long replaced the conventional component-related methods. The holistic methods treat the comprehensive details as a single entity rather than the atomistic view where each biometric (eyes, nose, mouth, and even ears) assumes an identity by itself. Most of the component-related methods suffer from the disadvantage of component localization. In order to overcome this, evolutionary or genetic algorithms need to be applied, which increase the computational complexity of the algorithms.

S. Karamchandani (B) · G. Shukla
Dwarkadas J. Sanghvi College of Engineering, Mumbai, India
e-mail: [email protected]
G. Shukla
e-mail: [email protected]



One can argue that the computations also increase in the case of the deep learning techniques used in the holistic approach. However, the trade-off is the much higher accuracy achieved by these methods [1]. Samma et al. [2] propose a component-based sketch recognition algorithm using tools of artificial intelligence. The facial components are localized using evolutionary optimization algorithms, the features of each component are extracted, and the matching is performed at the individual component level. The optimization algorithms, however, work only in combination with a Q-learning algorithm and still perform well only on a few databases. Mugdha et al. [3] incorporate the multiscale local binary pattern (MLBP) and the scale-invariant feature transform (SIFT) as the discriminating features. On implementing the algorithm, we observed that the sheer size of the feature vectors, even for a relatively standard-size database such as CUHK, was a hurdle in terms of the computing time required. In this paper, we implement a feature-based approach and propose a holistic algorithm for analyzing the dichotomy of the face sketch pairs.

2 Related Work

The literature contains an exhaustive list of both feature-based and holistic algorithms for face-image matching. Klare et al. [4] use the modalities of the sketch as components, where they calculate a feature descriptor for each patch. The correspondence between the two modalities is compared using the L2 norm as the separating distance. This represents a very crude manner of pattern recognition and hence does not provide a reasonable accuracy. Wang et al. [5] propose a Markov random field for photograph-sketch synthesis and recognition. In this case, the generated models pose an NP-hard problem for the calculation of the normalization constant. This makes the generated models intractable, and they need to be approximated. The authors of [6] employ a multiscale circular Weber's local descriptor for finding the appropriate sketch-photograph pair, but this requires not only preprocessing of the sketches but also an optimized algorithm to elevate the feature space. Weber's descriptor is essentially the set of orientations in the face image, and the matching is done with the help of the descriptor's histogram. Xing et al. [7] work with GANs for sketch-pair recognition; however, they perform synthesis of fiducial points on the face, which does not quite require the complexity of the GAN algorithm. Zhang et al. [8] utilize the GAN, but to increase the robustness of the matching algorithm. This robustness concentrates mainly on the illumination and the pose of the image and the sketch.

3 Implementation of Feature-Based Model Our approach essentially will be to implement and verify the algorithm of face sketch recognition, which solves the sketch recognition problem for 2-D image of faces, using the local feature-based discriminant analysis (LFDA), to discriminate between


the multiscale local binary patterns (MLBPs) and the scale-invariant feature transform (SIFT) as feature descriptors. In this work, the SIFT and MLBP algorithms are used for feature extraction within an LFDA framework for dimensionality reduction and matching. The block diagram of the model is shown in Fig. 1. The model is applied on the CUHK database. The MLBP features of the sketch-image pair are plotted in Fig. 2, while Fig. 3 shows the SIFT features. The dichotomy of the MLBP features at the input and output makes it impossible to generate a matching between the pairs. Normally, an MLBP should also work with any orientation of the image. The SIFT features have an intermediate stage wherein the derivative of the Gaussian is involved. The SIFT algorithms applied on the sketch and the image involve the presence of noise, as shown in Fig. 4. As observed in Fig. 3, the Gaussian features have a grainy output. Due to the aforementioned inconsistencies in the features of the sketch-image pair, the linear discriminant algorithm (LDA) does not provide any discrimination

Fig. 1 Block diagram of MLBP and SIFT-based face sketch–pair recognition

Fig. 2 MLBP features extracted from a sketch, b grayscale image


Fig. 3 Derivative of Gaussian (DOG) required for SIFT


Fig. 4 SIFT transform on a sketch and b image

between the sketch-image pair. The results reported in the literature do not conform to the illustrated feature diagrams.
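For illustration, the following Python sketch extracts SIFT keypoints with OpenCV and a simple uniform LBP histogram with scikit-image for a sketch-photo pair. These are stand-ins for the MLBP/SIFT descriptors discussed above, and the file names are placeholders rather than the authors' CUHK loader.

```python
# Hedged sketch of the two feature extractors (SIFT and an LBP histogram).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    # Uniform LBP produces points + 2 distinct codes; return a normalized histogram.
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

sketch = cv2.imread("sketch.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
photo = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_s, desc_s = sift.detectAndCompute(sketch, None)
kp_p, desc_p = sift.detectAndCompute(photo, None)

# A large distance between the LBP histograms of the pair reflects the
# feature mismatch discussed above.
dist = np.linalg.norm(lbp_histogram(sketch) - lbp_histogram(photo))
print(len(kp_s), len(kp_p), dist)
```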


3.1 GAN-Based Holistic Approach

A GAN is a composite of a generator and a discriminator, a boon for unsupervised learning. Each of the composites is a convolutional neural network (CNN) by itself. However, no max pooling exists between the layers of the CNN; it is instead replaced with strides. The backpropagation algorithm is at the heart of both the generator and the discriminator. The GAN follows an innovation process wherein a random Gaussian vector is generated to form a latent space. The latent vectors are made to adaptively converge to the photographs, which are provided as images to the discriminator. The discriminator uses an optimization objective given by the cross-entropy function

$$
\max_{D} V(D) = \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr] + \mathbb{E}_{z\sim p_{z}(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
\tag{1}
$$

The discriminator maximizes the log-probability assigned to the real photographs D(x) (term 1 of the equation) and drives down the probability D(G(z)) assigned to samples generated from the latent space (term 2 of the equation); the generator, on the contrary, performs the reverse.

3.1.1 Design of the GAN

The discriminator model is fed a 200 × 248 image and generates a binary opinion defining whether the sketch image is real or fake. The discriminator design is deployed with a leaky rectified linear unit (LeakyReLU) activation function with a slope of 0.12, batch normalization, 1 × 1 stride downsampling, and the adaptive algorithm with a learning rate of 0.0002 and a momentum of 0.3. The LeakyReLU adds an element of nonlinearity to the CNN, even when compared to the sigmoid function. The CUHK dataset is loaded and scaled. The GAN is implemented on the Keras platform. The discriminator and the generator play a cat-and-mouse game. However, they are manipulated by a controller (the adversarial model) which acts as a communicator between the two. The controller is modeled as an innovation process: it draws random (whitened) data in the latent space and uses the generator to create a random image, conventionally known in the GAN as the latent-space sample. This is then given to the discriminator, which compares it with the photographs of the sketches and judges whether it is real or fake. Corresponding to the result, the discriminator updates its weights using the adaptive algorithm. The movement of the slope is given by the following equation; logically, the slope of the discriminator has a positive sign, indicating that the adaptive algorithm follows a gradient ascent

$$
\nabla_{\theta_d}\,\frac{1}{m}\sum_{i=1}^{m}\Bigl[\log D\bigl(x^{(i)}\bigr) + \log\Bigl(1 - D\bigl(G\bigl(z^{(i)}\bigr)\bigr)\Bigr)\Bigr]
\tag{2}
$$


Table 1 Hyperparameters with RMSprop gradient descent

Model               | Learning rate | Decay
Discriminator model | 0.02          | 6e-8
Adversarial model   | 0.00001       | 3e-8

Here, D(x) is the discriminator's probability estimate that a photograph image is not fake, G(z) is the distribution of the random vector which generates the fake images, and D(G(z)) is the discriminator's probability estimate that a fake image from the generator comes from the photograph database. Correspondingly, an error vector is generated with the modified weights. These are then used to train the generator model. The generator model works in the reverse; thus, the adaptive algorithm is employed as a gradient descent with decreasing slope, given by

$$
\nabla_{\theta_g}\,\frac{1}{m}\sum_{i=1}^{m}\log\Bigl(1 - D\bigl(G\bigl(z^{(i)}\bigr)\bigr)\Bigr)
\tag{3}
$$

Since the first term in (2) is a measure of the photographs only, it is removed from the error minimization of the generator. The loss parameter for the discriminator and the generator is traced during each weight update. The generator eliminates the drawbacks of PCA and represents the nonlinear feature space as the latent space, which represents the compressed sketch image.
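A minimal sketch of this controller-driven training step is given below, assuming `generator`, `discriminator`, and a stacked `adversarial` model (generator followed by the discriminator with its weights frozen, compiled with binary cross-entropy) built with Keras; the batch handling and labels are illustrative and not the authors' exact code.

```python
# Hedged sketch of one adversarial training step (Eqs. (2) and (3)).
import numpy as np

def train_step(generator, discriminator, adversarial, real_images,
               batch_size=64, latent_dim=100):
    # 1) Discriminator update: real photographs labelled 1, generated images labelled 0
    #    (gradient ascent on Eq. (2), realized here as descent on binary cross-entropy).
    z = np.random.normal(size=(batch_size, latent_dim))
    fake_images = generator.predict(z, verbose=0)
    x = np.concatenate([real_images[:batch_size], fake_images])
    y = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
    d_loss = discriminator.train_on_batch(x, y)

    # 2) Generator update through the adversarial model (Eq. (3)): fresh latent vectors
    #    are labelled 1 so that the generator is pushed to fool the discriminator.
    z = np.random.normal(size=(batch_size, latent_dim))
    a_loss = adversarial.train_on_batch(z, np.ones((batch_size, 1)))
    return d_loss, a_loss
```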

3.1.2 GAN Execution

Dropout values are varied from 0.2 to 0.6 in both GAN configurations, since the images coming from the generator resemble noise due to the high variance assumed in the random data. The adversarial network is trained with mini-batch sizes of 32, 64, 128, and 256 to observe the convergence of the gradient descent algorithm with a 5 × 5 kernel size. It is observed that larger mini-batch sizes converge more slowly, which is expected, but provide a more precise estimate of the error with less noise in the training process. The configuration works well on the dataset, as it does not get stuck in any valley and reaches the global minimum or maximum, as the case may be. RMSprop is a gradient descent algorithm normalized with the moving average of the most recent squared gradients, basically the RMS value of the current gradient. The main hyperparameters of all three networks are the batch size and the learning rate. Table 1 gives the initial learning rate for each model.

4 Results and Discussions The general architecture of the designed GAN system is shown in Fig. 5. The CNN model for the generator is


Fig. 5 System architecture of the designed GAN

Layer | Configuration
Dense layer | 62 × 50 × 128; input dim = 100; depth = 128; mini-batch normalization, 'relu' activation, RMSprop with momentum 0.9
Upsampling and transposed convolution layer-1 | 124 × 100 × 64, kernel size = 5, single stride; mini-batch normalization, 'relu' activation, RMSprop with momentum 0.9
Upsampling and transposed convolution layer-2 | 248 × 200 × 32, kernel size = 5, single stride; mini-batch normalization, 'relu' activation, RMSprop with momentum 0.9
Transposed convolution layer-3 | 248 × 200 × 16, kernel size = 5, single stride; mini-batch normalization, 'relu' activation, RMSprop with momentum 0.9
Transposed convolution layer-4 (output) | 248 × 200 × 1, kernel size = 5, single stride; sigmoid activation function
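A minimal Keras sketch consistent with the generator table above is shown below: a 100-dimensional latent vector is projected to 62 × 50 × 128 and upsampled twice to 248 × 200 with a single-channel sigmoid output. The padding choices and the exact placement of batch normalization are assumptions about details the table leaves open.

```python
# Hedged Keras sketch of the generator CNN described in the table.
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(62 * 50 * 128),                 # projected dense block (large layer)
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Reshape((62, 50, 128)),
        layers.UpSampling2D(),                       # 124 x 100
        layers.Conv2DTranspose(64, 5, strides=1, padding="same"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.UpSampling2D(),                       # 248 x 200
        layers.Conv2DTranspose(32, 5, strides=1, padding="same"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Conv2DTranspose(16, 5, strides=1, padding="same"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.Conv2DTranspose(1, 5, strides=1, padding="same", activation="sigmoid"),
    ])
```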

The CNN model for the discriminator is:

Layer | Configuration
Convolution layer-1 | 248 × 200 × 32, kernel size = 5, single stride; LeakyReLU = 0.12; dropout = 0.2
Convolution layer-2 | 244 × 196 × 64, kernel size = 5, single stride; LeakyReLU = 0.2; dropout = 0.28
Separable convolution layer-3 | 240 × 192 × 128, kernel size = 5, single stride; LeakyReLU = 0.2; dropout = 0.32
Convolution layer-4 | 236 × 188 × 256, kernel size = 5, single stride; LeakyReLU = 0.2; dropout = 0.15
Flatten layer | Dense = 10 with activation 'relu'; Dense = 1 with activation 'sigmoid'
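A corresponding Keras sketch of the discriminator table is given below, with the LeakyReLU slopes and dropout rates listed above. The padding of the first layer is an assumption needed to reproduce the listed output sizes, and the decay of 6e-8 from Table 1 is noted only in a comment to keep the call version-independent.

```python
# Hedged Keras sketch of the discriminator CNN described in the table.
from tensorflow.keras import layers, models, optimizers

def build_discriminator(input_shape=(248, 200, 1)):
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 5, strides=1, padding="same"),      # 248 x 200 x 32
        layers.LeakyReLU(0.12),
        layers.Dropout(0.2),
        layers.Conv2D(64, 5, strides=1, padding="valid"),      # 244 x 196 x 64
        layers.LeakyReLU(0.2),
        layers.Dropout(0.28),
        layers.SeparableConv2D(128, 5, strides=1, padding="valid"),  # 240 x 192 x 128
        layers.LeakyReLU(0.2),
        layers.Dropout(0.32),
        layers.Conv2D(256, 5, strides=1, padding="valid"),     # 236 x 188 x 256
        layers.LeakyReLU(0.2),
        layers.Dropout(0.15),
        layers.Flatten(),
        layers.Dense(10, activation="relu"),
        layers.Dense(1, activation="sigmoid"),                 # real vs. fake opinion
    ])
    # Table 1: RMSprop, learning rate 0.02 (decay 6e-8 omitted from the call).
    m.compile(loss="binary_crossentropy",
              optimizer=optimizers.RMSprop(learning_rate=0.02),
              metrics=["accuracy"])
    return m
```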

Fig. 6 Converse behavior of the generator and the discriminator (training loss plotted against iteration for both networks)

Figure 6 shows the loss functions for the discriminator and the generator as controlled by the adversarial network; the two curves are the reverse of each other. The parameters are batch size = 6 and learning rate = 0.002, with 700 epochs and 100 iterations. The x-axis represents the iterations, and the y-axis the contrary characteristics of the generator and discriminator.

5 Conclusion

A GAN is a deep neural network architecture comprising two neural networks competing against each other. It is trained in an adversarial manner to generate data mimicking some distribution. In our proposed method, the GAN is trained on 88 images of the CUHK database. Our aim is for the generator to fool the discriminator in order to achieve better accuracy. To get better results, we increase the number of iterations from 100 to 700 to train the model thoroughly. We also vary the dropout from 20 to 60%; this produces a drastic change in the discriminator loss (D-loss), but as the number of iterations increases, the D-loss decreases. Changing some of the parameters affects the model considerably at first, but as the iterations increase the model returns to a stable state. We train the model at a learning rate of 0.002 so that our generator learns the parameters slowly from the discriminator's feedback. Initially, we give 100-dimensional noise, i.e., a randomly distributed sample, as input to the generator, and this is the starting point for its learning. From this, the generator produces a fake image to fool the discriminator; fake images are labelled y = 0 initially, and as the generator gradually learns, the discriminator starts producing y = 1 for them,


at which point the generator fools the discriminator successfully; this means the parameters are being learned correctly as the iterations increase. We train the discriminator and the generator separately to learn the loss while simultaneously updating the parameters. Our objective is for the discriminator's output on real images, D, to be as close as possible to its output on the generator's images. We observe that the GAN is a better classifier and data synthesizer than a plain CNN, and it also works well with scarce data, as in our dataset of 88 images; it learns easily even when data is short, and in our case the generator starts learning well after about 125 iterations, which is a good start. The GAN is a state-of-the-art model and is well suited for segmentation and classification. With the help of the binary cross-entropy, the GAN easily classifies the images.

References 1. Hu, W., Hu, H.: Fine tuning dual streams deep network with multi-scale pyramid decision for heterogeneous face recognition. Neural Process. Lett. 1–19 (2018) 2. Samma, H., Suandi, S.A., Mohamad-Saleh, J.: Face sketch recognition using a hybrid optimization model. Neural Comput. Appl. 1–16 (2018) 3. Joshi, M.M., Phadke, G.: Face sketch recognition based on SIFT and MLBP (2016) 4. Klare, B., Li, Z., Jain, A.K.: Matching Forensic Sketches to Mug Shot Photos. IEEE Trans. Pattern Anal. Mach. Intell. 33, 639–646 (2011) 5. Wang, X., Tang, X.: Face photo-sketch synthesis and recognition. IEEE Trans. Softw. Eng. 31(11), 1955–67 (2009) 6. Bhatt, H.S., Bharadwaj, S., Singh, R.: Memetically optimized MCWLD for matching sketches with digital face images. IEEE TIFS 7(5), 1522–1535 (2012) 7. Gao, X., Zhong, J., Tao, D., Li, X.: Local face sketch synthesis learning. Neurocomputing 71(10–12), 1921–1930 (2008) 8. Zhang, S., Jil, R., Hu. J.: Robust face sketch synthesis via generative adversarial fusion of priors and parametric sigmoid. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) (2018) 9. Wei, X., Wang, H., Guo, G., Wan, H.: A general weighted multi-scale method for improving lbp for face recognition. In: International Conference on Ubiquitous Computing and Ambient Intelligence, vol. 8867, pp. 532–539. Springer Cham (2014). 10. Momin, H., Tapamo, J.R.: Automatic detection of face and facial landmarks for face recognition. In: International Conference on Signal Processing, Image Processing and Pattern Recognition, vol. 260, pp. 244–253(2011) 11. Juhong, A., Pintavirooj, C.: Face recognition based on facial landmark detection. In: 10th Biomedical Engineering International Conference (BMEiCON), Hokkaido, pp. 1–4 (2017) 12. Wang, Q., Boyer, K.L.: Feature learning by multidimensional scaling and its applications. In: XXVI Conference on Graphics, Patterns and Images, Arequipa, pp. 8–15 (2013) 13. O’Connor, B., Roy, K.: Facial recognition using modified local binary pattern and random forest. Int. J. Artif. Intell. Appl. (IJAIA) 4, 6 (2013) 14. Abbas, Z.A., Duchaine, B.: The role of holistic processing in judgments of facial attractiveness. Perception 37, 1187–1196 (2008) 15. Gregory, K., Richard, Z., Ruslan, S.: Siamese neural networks for one-shot image recognition. In: Proceedings of the 32nd international conference on machine learning, Lille, France (2015)

Studying Network Features in Systems Biology Using Machine Learning Shubham Mittal and Yasha Hasija

Abstract The systems biology approach has been responsible for holistically understanding human health and network biology in the past few decades. Likewise, machine learning methods have advanced rapidly in recent times for improved predictions. Current biological research finds itself at an intersection with machine learning techniques for efficient analysis of biological data. This has opened an interdisciplinary field for a better understanding of biological networks such as those involving essential genes and proteins, drug targets, gene and protein-protein interactions, unique genetic profiles, dynamic network biomarkers, and complex diseases. Exploring these collaborations is essential for understanding the advantages and drawbacks of current machine learning-based network biology models. A discussion on the opportunities and challenges that these techniques offer would provide better insights for improving them. This would help extract valuable information from complex biological data, hopefully forge better partnerships between researchers pioneering in related fields, and address the need to improve biomedicine and healthcare.

1 Introduction

Systems biology is simply a research approach which emphasizes the comprehension that the whole is greater than the sum of the parts. While previous studies followed a reductionist approach, where each component of a biological system was individually studied by separate groups of scientists, the systems biology approach involves studying these components together. More importantly, the study of the interactions between these components, the networks that they create, and how a change in one component affects the others are the key factors involved in developing a systems biology understanding.

S. Mittal · Y. Hasija (B)
Department of Biotechnology, Delhi Technological University, Delhi 110042, India
e-mail: [email protected]
S. Mittal
e-mail: [email protected]



Needless to say, this approach has been instrumental in holistically understanding human health and living systems and holds great potential in studying disease biology and discovering novel drugs. It also enables us to make use of extensive biological data to uncover biomarkers, create patient genetic profiles and multiomics networks, build multi-scale models, design advanced biomedical devices, and thereby provide better healthcare [1, 2]. Integration of data in systems biology takes place through the construction of networks. A network structure comprises 'nodes', which represent biological components (DNA, proteins, or metabolites), and 'edges', connecting the nodes, which describe the relationships between the interacting nodes. Biological networks comprise complex data which is usually categorized and studied in isolation [3, 4]. However, analyzing the data all together would provide more significant results due to large data curation. For this reason, biological networks are created; these can be gene regulatory networks, disease gene interaction networks, protein-protein interaction networks, metabolic networks, cell signalling networks, or drug interaction networks. But analysis of biological networks is not without its challenges. For example, in the case of network biology for complex disease analysis, data from high-throughput technologies is noisy and contains both false positive and false negative edges. Also, there is a lack of information and understanding about the nature of interactions (edges) and pathway mechanisms. Lately, machine learning techniques are overcoming these drawbacks. The extensive data represented in biological networks finds its match in machine learning, which is proving to be highly successful for analyzing and extracting useful information from large datasets (Fig. 1).

Fig. 1 Overview of systems biology approach


Machine learning techniques create predictive models which are constructed on an original algorithm and the provided dataset. The data presented as input to the model comprises certain 'features' and 'labels' given across several samples. Features are the weights across samples, either mathematically transformed or raw, while the prediction of the given labels is the objective of the model, that is, the model's output. A typical machine learning model follows these steps: (i) processing the input data; (ii) training the model (based on an underlying algorithm that characterizes the learning of the rules); and (iii) making predictions on novel data using the trained model [5]. In the training process, the model essentially learns how to interpret the input data into precise predictions of the labels by establishing certain model factors. These factors are estimated after multiple passes through the data end-to-end. Here, the factors are discovered, inaccuracies are rectified, the performance of the model is evaluated, and the entire routine is repeated. The training process is repeated until the errors are minimized, that is, until the performance of the model is maximized. Upon identifying the ideal factors, the model is used to analyze and make predictions on new data [6]. Features defined in input from biological networks can include several types of data, including but not limited to the genomic sequence, expression profiles of genes, protein-protein interactions, copy number alterations, or concentrations of metabolites. Features can be categorical (e.g., gene functional annotation), continuous (e.g., expression values of genes), or binary (e.g., genes off or on). Like features, labels can be categorical (e.g., disease stage), continuous (e.g., rate of growth), or binary (e.g., non-pathogenic or pathogenic). Machine learning methods can be used to perform classification or regression tasks depending on the kind of labels, discrete or continuous [5]. The purpose of training machine learning models is to use them to make predictions on independent datasets known as the test data. The validation of a successful model is complete when it gives the same accuracy on the training data and the test data. However, sometimes the model gives higher accuracy on the training data than on the test data. This is called overfitting, and it occurs when the factors of the model fit too exclusively to the training data, which decreases its ability to make accurate predictions on any other data. This can also occur when the model is too complex, that is, when there are too many factors. The opposite happens when the model gives a higher accuracy on the test data than on the training data. This is called underfitting and might occur when the model is too simple. Both underfitting and overfitting are responsible for cases of poor performance of machine learning methodologies. Underfitting can be tackled by increasing the complexity of the model, while overfitting can be remedied either by decreasing the complexity of the machine learning model or by providing more data for training [7]. Moreover, with biological data, it becomes imperative to perform 'feature selection' at the beginning of training to select only those features which align with the input labels. Additional features may not be informative or may simply cause overfitting. Furthermore, proper cleaning, normalization, and formatting of the input data must be done. This would not just filter out the incomplete data and retain the complete data but would also increase the quality of the input data. And a better-quality input


data together with an optimized quantity of training data would thereby result in better-quality output data [7–9]. This review gives an overview of machine learning approaches in the analysis of various biological networks. Furthermore, it discusses the challenges and opportunities brought about by exploring this area of systems biology.
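The workflow just described (feature selection, splitting the data into training and test sets, and checking for overfitting by comparing the two accuracies) can be illustrated with a short scikit-learn sketch; the random arrays below merely stand in for network-derived features and binary labels.

```python
# Illustrative sketch of feature selection, train/test splitting, and an overfitting check.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(500, 200)          # placeholder features (e.g. expression or topology)
y = np.random.randint(0, 2, 500)      # placeholder binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(f_classif, k=20).fit(X_train, y_train)       # feature selection
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train_sel, y_train)
train_acc = model.score(X_train_sel, y_train)
test_acc = model.score(X_test_sel, y_test)
print(train_acc, test_acc)            # a large gap between the two indicates overfitting
```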

2 Machine Learning Methods for Biological Network Analysis 2.1 Prediction of Essential Genes and Proteins Essential genes and proteins are those which are crucial for the existence and continuation of race of an organism to the extent that their omission will lead to infertility or lethality. Therefore, it is very important to identify essential genes and proteins to understand the survival requirements of organisms and to find genes involved in human diseases and to identify drug targets. Experimental methods for identifying essential genes are expensive, time-taking, and arduous. In contrast, computational methods give fast results and also propose strong hypotheses for experimental validation. Specifically, machine learning methods have made rapid progress in gene prediction by overcoming the limitations of previous methods. Support vector machine (SVM) and ensemble learning-based methods are the most used machine learning algorithms. Other algorithms used are weighted k-nearest neighbours (WKNN), naïve Bayes (NB), gene expression programming (GEP), genetic algorithm, and neural network (NN) [10] (Fig. 2).

2.2 Prediction of Druggable Targets In the past few years, there has been a remarkable increase in the collection of large biological ‘omics’ data. However, this change has not translated significantly into improving vital applications in biomedicine such as in the development of better drugs. Researchers have since started looking for better avenues owing to expensive experimental methods, limited resources, and low target to drug ratio. Also, predicting potential drug targets in the early stages of drug development is essential to ensure success in later clinical stages. Computational methods, specifically machine learning methods have proved to be valuable assets in successfully predicting drug targets. Better computational systems and an increase in the number of machine-based prediction techniques have added to this success. These methods utilize attributes of known drug targets to predict unknown targets. They also consider the protein sequence properties, amino acid composition, structural properties, sequence’s role in biological networks, and gene expression profiles for


Fig. 2 Levels of system organization in network biology

a more accurate target prediction. Machine learning algorithms used frequently are SVMs [11], ensemble of classifiers [12], decision trees, radial basis function Bayesian networks [13], and logistic regression [14]. Maintaining data standards would further help in assessing varying methods of target prediction (Table 1).

2.3 Gene Interactions and Protein–Protein Interactions Genetic interaction is essentially the study of functional interactions between genes. This primarily affects phenotype. Since genetic interactions form the basis of observed phenotypes or behavioral traits, predicting GIs will help us uncover evolutionary relationships, understand complex disease phenotypes, and signalling pathways. The emergence of network analysis approach boosted the study of GIs and brought new momentum and dimensions to it. Physical interactions between proteins are caused by biochemical events that are induced by bonds such as hydrogen bonds and electrostatic forces. Proteins very rarely act alone and are usually regulated by other molecules or proteins which interact in a wide network. Therefore, the study of protein–protein interaction is necessary. In silico methods used to study GI and PPI networks include both machine learning and other computational methods. These


Table 1 Application of machine learning algorithms on biological networks

Network type | Organism | Machine learning approach used | Application | References
Gene co-expression network | Saccharomyces cerevisiae | Neural network, support vector machine | Global characteristics of protein dispensability and evolution | [15]
 | Pseudomonas aeruginosa, Escherichia coli | Ensemble learning | Identification of essential genes | [16]
 | Several bacterial species | Naïve Bayes | Prediction of essential genes | [17]
 | Schizosaccharomyces pombe | Feature-based weighted Naïve Bayes model (FWM) | Prediction of essential genes | [18]
Protein interaction network | Saccharomyces cerevisiae | Gene expression programming | Prediction of essential proteins | [19]
Transcriptional regulatory network | Escherichia coli | Decision tree | Identification of essential genes, enzymes, drug targets | [20]
Metabolic network | Pseudomonas aeruginosa, Salmonella typhimurium, Escherichia coli | Support vector machine | Identifying essential genes, enzymes, drug targets | [21]
 | Saccharomyces cerevisiae | Decision tree | Integrative approach toward prediction of essential genes | [22]

include decision trees, network connectivity, SVMs, logistic regression, graph diffusion kernel, flux balance analysis, weighted sum, and semantic similarity method [23].


2.4 Mapping the Human Interactome with Machine Learning Models The wide number of biochemical interactions between molecules inside the human body is responsible for the continuation of life processes. These interactions at the individual level may be gene interactions or protein–protein interaction, or even interactions among metabolites but when all of them are studied together, considering their regulatory, inductor, or inhibitory effects, do we gain insights into the networks at a macrolevel. This ‘whole picture’ scenario is referred to as the human interactome. Machine learning models used in the study of human interactome provide useful results. Efficient data representation is crucial to the success of machine learning models; these are One-Hot (OH) Encoding, Conjoint Triad (CT), Substitution Matrix Representation (SMR), and Position Specific Scoring Matrix (PSSM). The machine learning models used are k-nearest neighbor (KNN), SVMs, and stacked autoencoder. Deep neural networks and bilinear-CNNs are especially useful in this approach [24].
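As a concrete illustration of two of the representations named above, the sketch below builds a one-hot encoding and a conjoint-triad (CT) descriptor for a protein sequence; the seven-class amino-acid grouping used here is one common choice and is an assumption rather than a prescription from the cited work.

```python
# Hedged sketch of one-hot and conjoint-triad protein sequence representations.
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
GROUPS = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]   # 7 conjoint-triad classes
CLASS_OF = {aa: g for g, group in enumerate(GROUPS) for aa in group}

def one_hot(seq):
    mat = np.zeros((len(seq), len(AMINO)))
    for i, aa in enumerate(seq):
        mat[i, AMINO.index(aa)] = 1.0
    return mat

def conjoint_triad(seq):
    # Count the frequency of every ordered triad of amino-acid classes (7^3 = 343 bins).
    vec = np.zeros(7 ** 3)
    classes = [CLASS_OF[aa] for aa in seq if aa in CLASS_OF]
    for a, b, c in zip(classes, classes[1:], classes[2:]):
        vec[a * 49 + b * 7 + c] += 1
    return vec / max(len(classes) - 2, 1)

print(one_hot("MKTAYIAK").shape, conjoint_triad("MKTAYIAK").sum())
```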

2.5 Molecular Systems Biology of Complex Diseases The study of network biology with respect to human diseases has resulted into a novel field known as network medicine. The factors associated with complex diseases do not function independently but function collectively in a complex network. In the context of network biology, now, we integrate the study of molecular biomarkers, disease susceptibility genes, gene interactions, PPIs, etc., to identify novel network biomarkers. These network biomarkers are more robust when compared to molecular biomarkers and give us a more precise understanding of a disease condition. Traditional methods for network analysis usually fail to find patterns in a complex disease network, however, machine learning algorithms (e.g., CNNs, RNNs) for classification and clustering are highly advanced and can easily cluster by identifying patterns. Dynamical Network Biomarkers (DNB) is a new model for detecting biomarkers in complex diseases based on complex network theory and non-linear dynamical theory. DNBs can take advantage of network information to solve mechanisms of disease initiation and development and thereby increase the accuracy of diagnosis and prognosis of complex diseases. Unlike molecular and network biomarkers, DNBs can also identify pre-diseased state [25].

3 Opportunities and Challenges It is evident from the above discussion that with the advent of advanced machine learning algorithms, we find immense opportunities for their application in biological networks. However, at this intersection, we also find certain challenges that need


to be addressed for better results and accurate predictions. Firstly, machine learning methods are highly data-hungry and require large datasets in order to prevent overfitting and improve performance. Keeping that in mind, current biological datasets exist in multi-omics states, and this makes for a large repository of data, as the requirement demands. However, these datasets are often orders of magnitude too small for the effective application of advanced machine learning algorithms such as deep learning, or their quality is not standard enough. Therefore, the current need is to invest in creating curated datasets of network biology with a focus on both optimal data quality (experimentally validated) and data quantity (for ML analysis). Secondly, multi-omics datasets can be expensive, and creating complementary databases with imaging data, which are easy to create and ideal for analysis by deep learning algorithms, offers a meaningful alternative. The challenge of sparse biological datasets being too small for machine learning analysis can also be countered by investing in the creation of machine learning algorithms designed specifically for such data, while the challenge of too little data for effective analysis can be addressed by generating computational data with the properties of real data. For image analysis using machine learning, this is usually achieved by using Generative Adversarial Networks (GANs). GANs have a DNN architecture, and they either create new data having properties like the training data (generative model) or analyze new data and decide whether it belongs to the training data or not [5]. Either way, the training data is increased enough to perform significant analysis. Thirdly, the 'black box' problem in the latest machine learning models is especially problematic in network biology. In novel CNN, DNN, and RNN algorithms, the input data gets altered so much by the final stages of training that it is difficult to understand how the final features came to be decided, which is why the output data loses its biological perspective. It prevents us from gaining an understanding of the fundamental mechanisms in the biological networks, which eventually limits the usefulness of the model. Developing architectures with transparency in the model steps would help their applicability in network biology.

4 Conclusion We are at a junction where computational methods and machine learning techniques are increasingly finding utility in understanding network biology data. While commendable progress has been made in this field, we have a long way to go both in mining significant biological data and in creating optimized datasets of biological networks. Furthermore, machine learning algorithms hold high potential in extracting valuable data from complex biological data and improving them would certainly enable us to predict an exciting and profound future for network biology.


References 1. Prokop, A., Csukás, B.: Systems Biology (2013) 2. Palaniappan, S.K., Yachie-Kinoshita, A., Ghosh, S.: Computational systems biology. In: Encyclopedia of Bioinformatics and Computational Biology: ABC of Bioinformatics (2018) 3. Saitou, N.: Network. In: Brenner’s Encyclopedia of Genetics, 2nd edn. (2013) 4. Ma’ayan, A.: Introduction to network analysis in systems biology. Sci. Signal. (2011) 5. Camacho, D.M., Collins, K.M., Powers, R.K., Costello, J.C., Collins, J.J.: Next-generation machine learning for biological networks. Cell (2018) 6. Tiwari, A.K.: Introduction to machine learning. In: Ubiquitous Machine Learning and Its Applications (2017) 7. Domingos, P.: A few useful things to know about machine learning. Commun. ACM (2012) 8. Chandrashekar, G., Sahin, F.: A survey on feature selection methods. Comput. Electr. Eng. (2014) 9. Saeys, Y., Inza, I., Larrañaga, P.: A review of feature selection techniques in bioinformatics. Bioinformatics (2007) 10. Zhang, X., Acencio, M.L., Lemke, N.: Predicting essential genes and proteins based on machine learning and network topological features: a comprehensive review. Front. Physiol. (2016) 11. Lin, C.-J., Hsu, C.-W., Chang, C-C.: A practical guide to support vector classification. BJU Int. (2008) 12. Li, J., et al.: Application of random forest and generalised linear model and their hybrid methods with geostatistical techniques to count data: predicting sponge species richness. Environ. Model. Softw. (2017) 13. Yao, L., Rzhetsky, A.: Quantitative systems-level determinants of human genes targeted by successful drugs. Genome Res. (2008) 14. D. Emig et al., “Drug Target Prediction and Repositioning Using an Integrated Network-Based Approach,” PLoS One, 2013 15. Chen, Y., Xu, D.: Understanding protein dispensability through machine-learning analysis of high-throughput data. Bioinformatics (2005) 16. Deng, J., et al.: Investigating the predictability of essential genes across distantly related organisms using an integrative approach. Nucleic Acids Res. (2011) 17. Cheng, J., et al.: Training set selection for the prediction of essential genes. PLoS One (2014) 18. Cheng, J., et al.: A new computational strategy for predicting essential genes. BMC Genom. (2013) 19. J. Zhong, J. Wang, W. Peng, Z. Zhang, and Y. Pan, “Prediction of essential proteins based on gene expression programming.,” BMC Genomics, 2013 20. da Silva, J.P.M., et al.: In silico network topology-based prediction of gene essentiality. Phys. A Stat. Mech. Its Appl. (2008) 21. Plaimas, K., Eils, R., König, R.: Identifying essential genes in bacterial metabolic networks with machine learning methods. BMC Syst. Biol. (2010) 22. Acencio, M.L., Lemke, N.: Towards the prediction of essential genes by integration of network topology, cellular localization and biological process information. BMC Bioinf. (2009) 23. Boucher, B., Jenna, S.: Genetic interaction networks: better understand to better predict. Front. Genet. (2013) 24. Schreiber, K.: Net-PPI: Mapping the Human Interactome with Machine Learned Models. Signature redacted LIBRARIES ARCHIVES. Massachusetts Institute of Technology (2008) 25. Barabási, A.L., Gulbahce, N., Loscalzo, J.: Network medicine: a network-based approach to human disease. Nat. Rev. Genet. (2011)

Smart Predictive Healthcare Framework for Remote Patient Monitoring and Recommendation Using Deep Learning with Novel Cost Optimization Anand Motwani , Piyush Kumar Shukla , and Mahesh Pawar

Abstract Purpose The aim of this study is to propose a smart predictive healthcare framework for patients suffering from chronic diseases who are under observation at home. To appropriately predict the patient's actual health status and for better recommendation and assistive services, the framework utilizes a novel Deep Learning (DL) model. The DL model utilizes the big data of patients' vital signs and context data such as activity, medication, and symptoms collected through Ambient Assisted Living (AAL) systems. Method We applied a DL model with novel cost optimization for categorical prediction. Our model is a component of the Intelligent Module at the patient's end. The experimental study is carried out on patients suffering from Chronic Blood Pressure (BP) disorders. The imbalanced dataset was collected over a period of 1 year and sampled every 15 min. Result The highest overall accuracy achieved for the proposed model is 99.97%, which is up to 8.8% better than one of the existing models. The F-score for emergency cases has been enhanced by 12%, 39%, and 12% for Hypertensive, Hypotensive, and Normotensive patients, respectively. Conclusion The experimental outcomes reveal that the proposed model can predict patients' conditions (emergency, warning, alert, and normal) with more accuracy. Also, our model is able to handle imbalanced big data, high variability of vital signs, and all kinds of BP patients. Thus, we consider that the proposed framework is valuable for the management of chronic diseases.

A. Motwani (B) · P. K. Shukla · M. Pawar University Institute of Technology, RGPV, Bhopal 462033, India e-mail: [email protected] P. K. Shukla e-mail: [email protected] M. Pawar e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_67



1 Introduction As per recent statistical data, chronic diseases such as cancers, diabetes, chronic respiratory disease, and Cardiovascular Disease (CVD), etc. accounts for around 71% of all deaths globally [1]. Also, the average human lifespan has increased by 5 years during 2000–2016 but by 2020 the chronic diseases account for around 80% of all deaths globally [2]. The direct impacts of this rise in the average human lifespan are a lack of caregivers to serve an increasing number of elderly patients and an increase in healthcare budgets [3]. The world requires modern healthcare monitoring and recommender systems to analyze the big data collected through the patient’s care delivery environment. The analysis derives insights, determines patient health status, and predicts the disease. The systems should be intelligent in order to predict the health condition by analyzing a patient’s lifestyle, physical health records, and social activities. Thus, Smart Predictive Healthcare Framework is becoming a key platform for healthcare services. In this context, such systems have become indispensable tools in determining leading risk factors of chronic diseases later in life. This intensive research in smart healthcare monitoring and recommender systems involves a fusion of several domains of Information and Communication Technologies (ICT). The domains that led to the innovations in the healthcare sector [1, 4–6] are mobile communication [4, 7, 8], Wireless Sensor Networks (WSNs) [9], Internet of Things (IoT) [10], big data [11], wearable computing, and Wireless Body Area Network (WBAN) [12]. These devices and technologies are bringing hospitals better awareness of the medical equipment, clinicians, staff, and patients in their care delivery environments. These systems connected to human bodies generate big data. The continuous monitoring of patients using AAL systems is also a source of big data [13]. Processing big data and performing real-time actions in critical situations is a challenging task [14]. The emergence of Artificial Intelligence and sub-domains has provided new dimensions and opportunities for improved analytics and predictions. Machine Learning (ML) and DL are able to predict disease patterns and the association with symptoms [15]. Healthcare and Recommender systems have empowered wellness among people through knowledge generated using data from various sources including reviews by healthcare experts. Predictive modeling based on ML and DL can augment health prospects, lessen disease progression, and discover the reasons for diseases by analyzing patient’s reviews, symptoms, history, and vital signs in real-time. Health professionals also get benefitted as predictive modeling helps them in retrieving valuable knowledge from information and clinical guidelines, thus enabling them to deliver high-quality health remedies for patients. Although the models [12, 16–18] developed to predict chronic disease but have not been validated outside the setting [17–19]. Most of the earlier models only categorize disease onset as “Yes” and “No”. So, predictive models based on ML is an urgent need to gain insights and early diagnosis using disease symptoms. context-aware


AAL systems [20] evolved as recent cloud-based architectures put patients at risk in case of service unavailability and connection interruption. So, there is a need for healthcare frameworks and models that generates efficient classification of patients’ health status, supports personalized and generic medical rules, and performs early diagnosis of disease on the basis of symptoms. Also, the frameworks must support collection, aggregation, storage, and processing of realtime patient data to generate accurate insights. The key features of the proposed framework with an optimized DL model which is simple yet powerful, are robustness, fault-tolerant, context-aware, high-performance, scalable, and adaptive. The study is outlined in five sections. In Sect. 2, we present the architecture of the proposed smart predictive healthcare framework, DL model, and novel cost optimization. The experimentation details are presented in Sect. 3. The results and discussions are presented in Sect. 4. In Sect. 5, the conclusions and future research directions are stated.

2 Materials and Methods Deep Learning models deal with varied modeling problems, such as classification or regression by defining a suitable cost function [21]. The choice of cost or loss function must suit the modeling problem, for instance, Categorical Cross Entropy (CCE) is suitable for multiclass classification problems. The proposed predictive DL model is optimized with an adaptive learning rate method. The goal of this study is to predict the class of the patient’s health condition (emergency, alert, warning, and normal) by adapting the novel CCE-based cost function and implement the DL model with this novel cost function.

2.1 Synthetic Data Generation Typically no real dataset exists that contains long-term monitoring of patients with chronic disease (BP) that is collected through IoT sensors [3]. So, a synthetic data set is generated in the format as shown in Table 1. For this study, the vital signs are taken from Physionet MIMIC-II database for three real patients over one year [22]. The synthetic data set is generated based on real vital signs produced by e-Medical IoT kits (MySignals) [5] for one year with a sampling rate of 15 min. The dataset contains ambient conditions such as room temperature and activity information coupled with vital signs. The range and circumstantial classification according to the medical model and actions to be taken are considered accordingly [3]. In a previous study [23] of biomedical data analysis, the reliability of synthetic data generation for big data similar to real data for monitoring of extended period has been proven.

Table 1 Synthetic dataset sample of BP patients

Time stamp | HR | SBP | DBP | RR | SpO2 | Act | L_Act | Amb | Med | Symp | Class
01-01-2018 00:00 | 67 | 110 | 75 | 15 | 97 | 6 | 5 | 0 | 0 | 0 | 1
07-04-2018 22:30 | 98 | 127 | 88 | 7 | 92 | 2 | 6 | 1 | 1 | 26 | 2
07-12-2018 03:45 | 106 | 163 | 117 | 14 | 91 | 3 | 3 | 0 | 0 | 8 | 3
01-01-2019 04:15 | 179 | 53 | 106 | 20 | 65 | 3 | 4 | 2 | 1 | 55 | 4

(HR = heart rate, SBP = systolic BP, DBP = diastolic BP, RR = respiratory rate, SpO2 = oxygen saturation, Act = activity, L_Act = last activity, Amb = ambient condition, Med = medication, Symp = symptoms.)



Fig. 1 A smart predictive healthcare framework for remote patient monitoring and recommendation

The architectural framework proposed in this work is shown in Fig. 1. It encompasses three layers (Sect. 2.2).

2.2 Framework Description

Layer-1 (AAL). This layer administers the AAL systems that monitor and record the patient's vital signs and ambient conditions (temperature, humidity, etc.). It is facilitated by open-source electronic health platforms such as the MySignals platform [14]. The e-health platforms support a variety of connectivity options and also facilitate the inclusion of custom medical sensors.

Layer-2: Local Intelligent Module (LIM). This layer is based on edge devices and is therefore also known as the edge layer. It collects, stores, and processes the data that arrives over the intermediate communication protocols. The module works in both offline and online modes. It comprises the IoT gateway, the Local Processing and Storage Unit (LPSU), and the proposed DL model (see Sect. 2.3).

Layer-3: Cloud-Oriented Module (COM). This module comprises the patient's personalized information, assistive services, and knowledge databases, and is therefore also known as the knowledge module. It comprises two or more clouds with proper security. The Online Patient Database (OPDB) is synchronized with the LPSU for patient-specific rules and updates [3]. The caregivers, medical experts, assistive services, etc. are part of this layer. The patients are remotely monitored and responded to by this team in case of alerts generated by the LIM.


2.3 Proposed Predictive Model

The novel Deep Learning model (see Fig. 2), its novel cost optimization function, and the algorithmic steps are presented below. It is responsible for predicting and categorizing the patient's health state on the local side. The model added here is in contrast to the model proposed in [3], which resides at the cloud side and gets downloaded when required. In this work, the framework utilizes its own prediction model based on the patient's vital signs and the present context from the AAL systems. In case of internet disconnection, in the absence of cloud services, or in an emergency, the model utilizes the currently saved context and performs efficient classification and prediction. The model is intended to demonstrate the highest categorical classification accuracy. After correctly determining the patient's health status, layer 2 takes the necessary and appropriate actions to call assistive services, doctors, and caregivers.

Model Input. Vital signs and AAL data.

Pre-processing and Normalization. Pre-process the data and convert it to numeric form. Further, the data is normalized using the z-score.

Feature Engineering. The features are extracted as per patient activity and context.

Model Training. Train the model with

$$
Z = \sum_{i=1}^{m} W_i^{h} X_i + b
$$

where W^h represents the weight set at layer h, and the input dataset features are represented by X, with i ranging from 1 to m. The proposed DL model comprises five layers, including the input and output layers. The numbers of nodes in layers 1-5 are 12, 24, 12, 6, and 4, respectively. After calculating the probability score, it is fed into

Fig. 2 Novel predictive model with novel Categorical Cross Entropy loss function for classification of BP disorder


a novel cost optimization function (see Sect. 2.4). Finally, the outputs are squeezed using softmax activation function to get the correct prediction.
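A minimal Keras sketch of the five-layer network described above (12-24-12-6-4 nodes with a softmax output over the four health-status classes) is shown below. The hidden-layer activations are assumptions, and the standard categorical cross-entropy used here would be replaced by the novel cost function of Sect. 2.4.

```python
# Hedged Keras sketch of the 12-24-12-6-4 predictive network.
from tensorflow.keras import layers, models

def build_predictive_model(n_features=12, n_classes=4):
    # The 12-node input layer matches the number of input features
    # (vital signs plus context data from the AAL systems).
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(24, activation="relu"),
        layers.Dense(12, activation="relu"),
        layers.Dense(6, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),   # emergency/alert/warning/normal
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```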

2.4 New Cost Function

Deep Neural Networks (DNN) are trained using an optimization algorithm which requires an initial estimate of the loss or cost on one or more training examples. The derivative of the loss or cost is then propagated backward through the network to update the weights. A poor selection of the cost function calculates large error values, leading the network to fail to train, or it can also produce useless models. To handle the variability of errors for different inputs, we developed a new cost function by expanding the Categorical Cross Entropy (CCE). The adaptive learning rate method (Adam) is then used for parameter optimization. Adam calculates individual learning rates for different parameters and minimizes the cost or objective function in the DNN. For the proposed model, if the CCE of an individual probability input, i.e., E(W), is greater than the average of all E(W), then the new CCE is rationalized using Eqs. (1) and (2):

$$
z_i = y_i \log(\hat{y}) - E(W), \qquad \text{if } y_i \log(\hat{y}) > E(W)
\tag{1}
$$

Then again calculating the individual costs using (2) will give optimal error values and so the gradients.

$$E(W) = -\sum_{i=1}^{k} z_i \qquad (2)$$

The novel loss/cost function helps the DL algorithm converge better with the smaller learning rates used by optimization algorithms. Also, a model that predicts perfect probabilities has a cross entropy or log loss of 0.0. Figure 3 compares the loss with the novel cost function against the Cross Entropy loss; the loss values are plotted along with the predicted and actual probabilities. Our predictive DL model uses the novel cost function proposed here.
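A small numerical sketch of one possible reading of Eqs. (1) and (2) is given below: per-sample cross-entropy terms that exceed the batch mean are reduced by that mean before summation. The function name, the vectorized formulation, and the toy data are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def modified_cce(y_true, y_pred, eps=1e-12):
    """One possible reading of Eqs. (1)-(2): terms above the mean CCE are
    rationalized by subtracting that mean before the final summation."""
    y_pred = np.clip(y_pred, eps, 1.0)
    terms = -np.sum(y_true * np.log(y_pred), axis=1)      # per-sample CCE
    mean_term = terms.mean()                               # average of all E(W)
    adjusted = np.where(terms > mean_term, terms - mean_term, terms)
    return adjusted.sum()

# Toy example: two one-hot labels over the four health-status classes.
y_true = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
y_pred = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.2, 0.5, 0.1]])
print(modified_cce(y_true, y_pred))
```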

3 Experiments

To analyze the performance, several experiments were conducted with the proposed predictive DL model and the novel cost optimization function. The predictive model in layer 2 classifies and predicts the real health status of patients in order to send alerts to caregivers and the patient's social network and to call assistive services. The count of each


Fig. 3 Comparison of cross entropy loss and novel loss function

Table 2 Class distribution for patient data

Patient type         Emergency   Alert   Warning   Normal
Hypertensive (P1)    175         2404    23,347    9307
Hypotensive (P2)     148         1627    14,003    19,455
Normotensive (P3)    109         1186    21,421    12,517

class label, for each type of patient in the imbalanced dataset, is given in Table 2. Training and testing splits are taken as 70% and 30%, respectively. The proposed model (see Fig. 2) is executed on a system with an Intel Core i3 (5th generation), 8 GB RAM, and 4 cores, running Windows 10 (64-bit). Well-matched versions of the essential mathematical, ML, DL, scientific, and graph libraries, including Scikit-learn, Keras, and Google TensorFlow, were installed and utilized.

4 Results and Discussions

To determine whether the prediction model will provide correct recommendations to patients being monitored remotely, its performance is compared with the benchmark NN model and with one of the best classifiers (Naive Bayes) in IHCAM-PUSH [3] for all three types of patients.


4.1 Evaluation Metrics

The model is evaluated and compared on the basis of the following standard parameters:

Classification Accuracy. It measures the correctness of a classification model. The accuracy of all models for all patient types has been compared (see Fig. 4) and is reported here.

Precision. It is the fraction of true positive results over all results returned as positive.

Sensitivity. It is the fraction of all actual positive cases that the model correctly returns. It is also known as Recall.
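For reference, the standard definitions behind these metrics, in terms of true/false positives and negatives (TP, FP, TN, FN), are:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}, \quad F\text{-}\mathrm{Score} = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$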

Fig. 4 Comparison of accuracy for patient P1, P2, and P3

Fig. 5 Comparison of F-score values: average and emergency class


Table 3 Comparison of precision (emergency) and recall (emergency) of novel DL (proposed) model with benchmark neural network (NN)

                  Hypertensive (P1)      Hypotensive (P2)       Normotensive (P3)
                  Precision   Recall     Precision   Recall     Precision   Recall
NN                1.00        0.80       1.00        0.86       1.00        0.79
Novel DL model    1.00        0.83       1.00        0.84       1.00        0.89

F-Score. It is the weighted average of precision and recall and ranges between 0 (lowest) and 1 (ideal score). Figure 5 depicts the average F-score and the F-score (Emergency) for all types of patients generated using three models, including the proposed model. The performance of the proposed predictive DL model is shown in Table 3. The highest accuracy achieved is nearly 99.90%. The F-score, a good measure based on precision and recall, is highest for the proposed method. Despite the high class imbalance, the average F-score and the F-score for the emergency class are greater than 0.90 for all patient types. Thus, we can conclude that the model performs equally well in predicting the Emergency, Alert, Warning, and Normal cases.

5 Conclusion and Future Directions

The proposed Healthcare Monitoring framework monitors remote patients suffering from chronic diseases such as BP disorder and diabetes in real time. It enables caregivers and hospitals to provide better care to patients under supervision at home by monitoring the patients' vital signs and contexts (activities and ambient conditions) in real time. It is apparent from the results that the proposed framework performs equally well in predicting the Emergency, Alert, Warning, and Normal cases at the local end. The following features distinguish the proposed framework from others:
• Robust: Robustness is contributed by the integration of personalized and generic medical rules.
• Fault-tolerant: Works in offline mode, i.e., in the absence of cloud services, with high-performance learning.
• Context-aware: Monitoring and recommendation are done on the basis of the patient's context and ambient conditions.
• Powerful: Employs high-performance offline learning with powerful and novel DL algorithms (a cognitive technique).
• Responsive: Deployed on the local system, in contrast to other models that load a cloud-based learner onto the local system.
• Simple: Able to handle large, unstructured, and imbalanced datasets.
• High performance: Provides higher F-score, prediction accuracy, precision, and categorical accuracy.


• Scalable: It accommodates big data analysis with the power of Deep Learning.
• Adaptive: It is adaptive to the latest technologies like Cloud Computing, IoT, Machine Learning, and AI-enabled devices.

In the future, the proposed framework can be implemented with a Convolutional Neural Network (CNN) or with other DL algorithms. Our novel context-aware framework can be extended for monitoring patients suffering from other chronic diseases such as cancer. Cloud Computing and Cloud-based Social Networking Services (SNS) [24] can be a prospective opportunity in the future of the healthcare domain. In the future, the proposed framework will be tested for 'Quality of Service (QoS), energy and other performance parameters' [25] in a cloud computing environment.

References 1. Organization, W.H.: World health statistics 2019: monitoring health for the SDGs, sustainable development goals (2019) 2. WHO, W.J.G.W.H.O.: The world health report 2003: shaping the future. 204 (2003) 3. Hassan, M.K., El Desouky, A.I., Elghamrawy, S.M., Sarhan, A.M.: Intelligent hybrid remote patient-monitoring model with cloud-based framework for knowledge discovery. Comput. Electr. Eng. 70, 1034–1048 (2018) 4. Hämäläinen, M., Li, X.: Recent advances in body area network technology and applications. Int. J. Wireless Inf. Netw. 24(2), 63–64 (2017) 5. Libelium Comunicaciones Distribuidas S.L..: MySignals SW eHealth and Medical IoT Development Platform Technical Guide. http://www.libelium.com/downloads/documentation/mys ignals_technical_guide.pdf (2019). Accessed 12/01/2020 6. Negra, R., Jemili, I., Belghith, A.: Wireless body area networks: Applications and technologies. Procedia Comput. Sci. 83, 1274–1281 (2016) 7. Gope, P., Hwang, T.: BSN-care: a secure IoT-based modern healthcare system using body sensor network. IEEE Sens. J 16(5), 1368–1376 (2015) 8. Stüber, G.L.: Principles of Mobile Communication. Springer, Berlin (2017) 9. Sohraby, K., Minoli, D., Znati, T.: Wireless Sensor Networks: Technology, Protocols, and Applications. Wiley, New York (2007) 10. Fortino, G., Giannantonio, R., Gravina, R., Kuryloski, P., Jafari, R.: Enabling effective programming and flexible management of efficient body sensor network applications. IEEE Trans. Human-Mach. Syst. 43(1), 115–133 (2012) 11. Normandeau, K.: Beyond volume, variety and velocity is the issue of big data veracity. Inside Big Data (2013) 12. Lont, M., Milosevic, D., van Roermund, A.: Wireless body area networks. In: Wake-up receiver based ultra-low-power WBAN, pp 7–28. Springer, Berlin (2014) 13. Mahdavinejad, M.S., Rezvan, M., Barekatain, M., Adibi, P., Barnaghi, P., Sheth, A.P.: Networks: machine learning for Internet of Things data analysis: A survey. Dig. Commun. Netw. 4(3), 161–175 (2018) 14. Rathore, M.M., Ahmad, A., Paul, A., Wan, J., Zhang, D.: Real-time medical emergency response system: exploiting IoT and big data for public health. J. Med. Syst. 40(12), 283 (2016) 15. Sun, G., Chang, V., Yang, G., Liao, D.: The cost-efficient deployment of replica servers in virtual content distribution networks for data fusion. J. Med. Syst. 432, 495–515 (2018) 16. Mohammadzadeh, N., Safdari, R.: Patient monitoring in mobile health: opportunities and challenges. Med. Arch. 68(1), 57 (2014)


17. Maheswar, R., Kanagachidambaresan, G., Jayaparvathy, R., Thampi, S.M.: Body Area Network Challenges and Solutions. Springer, Berlin (2019) 18. Collins, G.S., Omar, O., Shanyinde, M., Yu, L.-M.: A systematic review finds prediction models for chronic kidney disease were poorly reported and often developed using inappropriate methods. J. Clin. Epidemiol. 66(3), 268–277 (2013) 19. Echouffo-Tcheugui, J.B., Kengne, A.P.: Risk models to predict chronic kidney disease and its progression: a systematic review. PLoS Med 9(11), e1001344 (2012) 20. Sarker, V.K., Jiang, M., Gia, T.N., Anzanpour, A., Rahmani, A.M., Liljeberg, P.: Portable multipurpose bio-signal acquisition and wireless streaming device for wearables. In: 2017 IEEE Sensors Applications Symposium (SAS) 2017, pp. 1–6. IEEE 21. Koshimizu, H., Kojima, R., Kario, K., Okuno, Y.: Prediction of Blood Pressure Variability Using Deep Neural Networks. Int. J. Med. Inf. 104067 (2020) 22. Saeed, M., Villarroel, M., Reisner, A.T., Clifford, G., Lehman, L.-W., Moody, G., Heldt, T., Kyaw, T.H., Moody, B., Mark, R.G.: Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II): a public-access intensive care unit database. Critical Care Med. 39(5), 952 (2011) 23. Alam, F., Mehmood, R., Katib, I., Albeshri, A.: Analysis of eight data mining algorithms for smarter Internet of Things (IoT). Procedia Comput. Sci. 98, 437–442 (2016) 24. Bajaj, G., Motwani, A.: Improving reliability of mobile social cloud computing using machine learning in content addressable network. In: Social Networking and Computational Intelligence. Lecture Notes in Networks and Systems, vol. 100. pp. 85–103. Springer, Singapore (2020) 25. Kaushar, H., Ricchariya, P., Motwani, A.: Comparison of SLA based energy efficient dynamic virtual machine consolidation algorithms. Int. J. Comput. Appl. 102(16), 31–36 (2014)

Enhanced Question Answering System with Trustworthy Answers C. Valliyammai, V. P. Siddharth Gupta, Puviarasi Gowrinathan, Kalli Poornima, and S. Yaswanth

Abstract Community-based question answering sites such as Yahoo! Answers, Quora, and Stack Overflow have emerged as an effective means of information seeking on the Web. Anyone can obtain answers to their questions by posting them for other participants on these sites. However, not all questions get immediate answers from other users. Questions which are not interesting enough for the community may suffer from “starvation”. Such questions may take days/months to get satisfactory answers. This delay in response can be avoided by searching similar questions in an archive of previously answered questions or by forwarding to users who might potentially answer. Another approach would be to combine multiple valid answers into one and then generate a well-summarized answer. The proposed framework reduces starvation of new questions in Stack Overflow by profiling users based on interests and forwarding the questions to the right users to obtain trustful, complete, and relevant answers.

C. Valliyammai (B) · V. P. Siddharth Gupta · P. Gowrinathan · K. Poornima · S. Yaswanth Madras Institute of Technology Anna University, Chennai, India e-mail: [email protected] V. P. Siddharth Gupta e-mail: [email protected] P. Gowrinathan e-mail: [email protected] K. Poornima e-mail: [email protected] S. Yaswanth e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_68


1 Introduction

Asking questions and demanding answers to them has always been human nature. The process of searching for answers includes cogitation and the consultation of different information sources, one of the biggest of which nowadays is the World Wide Web. The information is, however, of limited use if it is not possible to pose queries to obtain data that is potentially relevant for answering a given question. Once answer selection has been performed, it has to be decided which answer is relevant given the question. The biggest challenge for computers in fetching answers has always been interpreting the questions asked by humans in their natural language. The basic subtasks include question analysis, candidate document retrieval, and answer selection [1]. Community question answering often poses additional difficulties stemming from the informal language used in many Web forums from which the data is crawled. The Web forums are restricted to a certain topic, for example, programming, but this topic can span a wide range of subdomains with different vocabulary, in our example object-oriented programming, functional programming, and different programming languages. Another factor, apart from irrelevant answers, that greatly reduces the quality of answers is the availability of spam content all over the Web. With spam content, users promote their own material or infect the systems of other users by sharing malicious software. By fundamentally integrating machine learning, the amount of spam content can be reduced and thus the quality of answers provided by the question answering system can be improved.

2 Related Work

In [2], the author proposes a scheme for controlling the quality of answers using various features such as user reputation, positive comments by users, and whether an answer is classified as verified by the admin. It also demonstrates the application of a lightweight spam classification mechanism to maintain the quality of answers and to avoid displaying misleading or harmful information. The reputation of a user is of utmost importance in community-based question answering systems. In [3], a model has been presented for computing users' importance scores in Q&A systems. The author proposes a model based on the PageRank algorithm, which indirectly calculates the importance score of a user, and designs a model capable of running in a MapReduce environment for implementation on large-scale systems. Ranking relevant answers is a crucial step while displaying existing answers from the question-answer pool, because the goal of a question answering system is to provide useful information to a user in the most concise way possible. In [4], the author has described a system for finding answers in a community forum whose approach relies on several semantic similarity features based on fine-tuned word embeddings. Ranking of answers will always remain an unsolved problem. The quality of


an answer depends on the amount of content fitted into the shortest form possible. Even humans give different ranks to the same answer, due to the difference in expectations of different users. Hence, based on previously ranked answers, it can be predicted whether a particular user will be satisfied with an answer or not. In [5], the author provides an insight into grading essays using a Bayesian independence classifier and k-nearest-neighbor classifiers. The proposed system is tested against manual correction with an accuracy of 0.97. The work in [5] elaborated on the usage of textual features while ranking different answers. In [6], the author provides a method to rate the quality of an answer using non-textual features such as click counts and users' recommendations. Other mentioned features may include copy counts, number of answers, the answerer's activity level, editor's recommendation, etc. Kernel density estimation (KDE) is used for feature conversion, and the authors conclude that using the Gaussian kernel gives more influence to closer data points. A study with live data gathered within a community, demonstrating the training of models with reasonable accuracy and recall, leads to more sophisticated and more useful real-time Q&A support [7]. Informational and conversational questions are two faces of the same coin. In [8], the author describes the need to distinguish informational and conversational questions. Their work classifies the answers for a factual question as "True," "False," and "Nonfactual." Bidirectional Encoder Representations from Transformers (BERT), a deep bidirectional transformer, is used for classification purposes.

3 Proposed Work

3.1 Ideology

The goal is to formulate a starvation-free question answering system using the data from Stack Overflow for the programming community. Question answering depends heavily on a good search corpus, for without documents containing the answer little can be retrieved. However, questions are not always answered; in this case the user who asked the question will most likely be starving, as many questions in the Q&A community take months to be answered. Even if the question exists in the question pool, there may be many answers by different users for that particular question. In this case, the work of the question answering system becomes a little more complicated, as the system ranks the answers based on their relevance and then produces the best answer among the lot.


Fig. 1 Workflow of proposed question answering system

3.2 Workflow

The workflow of the proposed QA system, which provides quality answers with minimal spam content along with related answers, is shown in Fig. 1. The Stack Overflow dataset acquired from Kaggle is completely unprocessed, so a few preprocessing techniques are applied to the input set to remove noise and distortion. Then, the features are extracted from the dataset to obtain more informative and non-redundant input to train the model.

3.3 XGBoost Algorithm

In Q&A systems, the first step on receiving a new question is to search the existing pool of questions using a classification algorithm. There are many classification algorithms, and current Q&A systems typically operate on TF-IDF classification. While TF-IDF compares questions based on the frequency of the terms in the two questions, it fails to recognize the co-occurrence of words and their positions with respect to each other. Using the XGBoost algorithm together with a multi-layer neural network reduces the number of false positives in the classification process.
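As a hedged illustration of this idea, the sketch below feeds simple bag-of-words features for a question pair to an XGBoost classifier; the feature construction, the toy data, and the hyper-parameters are assumptions for demonstration only, not the exact pipeline used in this work.

```python
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import hstack
from xgboost import XGBClassifier

# Toy question pairs: label 1 = similar/duplicate, 0 = unrelated.
pairs = [("how to sort a list in python", "python sort list example"),
         ("join two tables in sql", "center a div in css")]
labels = [1, 0]

vec = CountVectorizer().fit([q for pair in pairs for q in pair])
X = hstack([vec.transform([p[0] for p in pairs]),
            vec.transform([p[1] for p in pairs])]).tocsr()   # concatenate both questions' features

clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(X, labels)
print(clf.predict_proba(X)[:, 1])   # similarity-like score between 0 and 1
```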


3.4 Naive Bayes

The Naive Bayes classifier is a classification algorithm based on Bayes' theorem with a naive independence assumption between features. The dataset is split into a feature matrix (the input features) and a response vector (the prediction or output variable). A question answering system consists of millions of questions, so searching this question pool would take a very long time despite the efficiency of the algorithm. To speed up the search, the processing power of the system can be improved by using parallel computing and by increasing the server specifications. Beyond that, another approach is to reduce the question set in which the algorithm looks for similar questions. To do this, the questions can be divided into separate categories so that only the relevant category is searched. The questions can be categorized based on their keywords using the Naive Bayes algorithm.
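A minimal sketch of this categorization step with scikit-learn is shown below; the tiny training set, the category labels, and the pipeline choices are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy question titles with their language categories.
titles = ["how to reverse a list", "select rows where id is 5",
          "read a json file", "inner join on two tables"]
langs = ["python", "sql", "python", "sql"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(titles, langs)

# Route a new question to its category before searching that pool.
print(clf.predict(["select all rows from two tables"]))   # expected: ['sql']
```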

4 Implementation Details and Result

4.1 Similar Question Identification

Each question asked by a user needs to be searched against the existing question pool to check whether it has already been asked. As the system needs to identify similar questions, a neural model using XGBoost has been implemented to calculate the similarity between two questions. In order to train a model, a dataset consisting of questions with similarity and dissimilarity relations is needed. The Kaggle question-pair dataset consists of questions in pairs and can be used to train the classifier model. The accuracy of the model trained with this dataset is 0.76, which is considered good accuracy for an XGBoost model. The input to train the model consists of two input strings, which are sent to the model in vectorized form generated by the count vectorizer functions of the sklearn library. The model has dense layers to distribute the parameters passed by the previous layer evenly to the next layer. The simplest way to develop the model is using the Keras library, a framework on top of TensorFlow. LSTMs are used to consider one word before and after the current word while training the model; this ensures that the model can differentiate between "operating systems," "operating," and "systems." The dataset used for training the proposed model is a question-pair dataset generated from a question answering community called "Quora," as shown in Fig. 2. This dataset has been published on the Kaggle Web site for users' reference.


Fig. 2 Distribution of question lengths for each programming language

4.2 Classification Based on Content/Title

In question answering systems, question classification can first be used to determine the new question's category so that only that category is searched. For example, if the question asked by the user is related to SQL, the question answering system need not search the question pools of Java, Python, or anything else. After classifying the question, if forwarding is needed, the system forwards the question to a user who is an expert or well-versed in that category. So, for user profiling, the system needs to compute reputation based on the questions answered in each category and the viewer response to the answers posted. In this module, the aim is to classify questions retrieved from the Stack Overflow database based on the language they are related to. The data is retrieved using an SQL query: first, the required data is queried from the Kaggle Cloud using the BigQuery Client API, and the post-questions table is used to classify the questions by language. Before this, a part of the dataset has to be labeled with the language each question belongs to, which is done by identifying the language names appearing in the question. Visualizing the data based on the length of questions for each programming language did not prove effective, as all the languages were spread across a wide range of question lengths (Fig. 2 shows this distribution), so this method was judged ineffective. Next, the code from each question was taken separately and the data was visualized based on the syntax of the code. From the heat map plotted for this syntax-based visualization, the observations were: the square bracket count, quote count, operator count, and period count occur most frequently in Python; for JavaScript, quotes and operators clearly appear most frequently; for Java, the occurrences of each special character are fairly even; in SQL, these special characters rarely occur


aside from arithmetic operators, largely because SQL uses white space and new lines to denote complete statements. The frequent quotes in JavaScript, in turn, are due to the fact that JavaScript is used in conjunction with HTML, in which text denoted by quotes occurs more frequently.
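A small sketch of this code-feature idea is given below: it simply counts the special characters that the heat-map analysis found discriminative. The exact feature set and names are assumptions for illustration.

```python
# Hypothetical code-feature extractor: counts of discriminative special characters.
def code_features(snippet: str) -> dict:
    return {
        "square_brackets": snippet.count("[") + snippet.count("]"),
        "quotes": snippet.count("'") + snippet.count('"'),
        "operators": sum(snippet.count(op) for op in ["+", "-", "*", "/", "="]),
        "periods": snippet.count("."),
    }

print(code_features("df['col'] = df.a + df.b"))            # Python-looking snippet
print(code_features("SELECT name FROM users WHERE id = 5"))  # SQL-looking snippet
```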

4.3 Computing User Reputation

In case a question is not found in the existing question pool, the question is forwarded to users in order to avoid starvation. To forward the question efficiently, the system needs to send it to the users who are most likely to provide a satisfying answer. The users with the highest reputation are assumed to be the most likely to answer, so users who have the highest reputation in the question's category are the first to receive a notification for help. The user reputation calculated is a combination of two types of trust: direct trust and aggregate trust. Aggregate trust is the reputation of a user calculated from the acceptance of their answers by other users. The aggregate trust of a user is 0 initially, increases by 50 when one of the user's answers receives an up vote, and decreases by 10 when one of them receives a down vote. The person up voting or down voting also has their reputation changed accordingly (+5 for up voting an answer and -1 for down voting an answer). Using direct trust, we can predict the users who are most likely to give a satisfying answer to a particular user based on past interactions (i.e., the answers A receives from B). After the user reputation is calculated, the users at the top of the reputation table are chosen to forward the question to. The question can also be forwarded to users who are most likely to be online in the near future [9]; the frequent answering time is computed by counting the number of answers given within every hour.
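The aggregate-trust bookkeeping described above can be sketched as follows; the data structures and function name are assumptions used only to make the update rule (+50/-10 for the answerer, +5/-1 for the voter) concrete.

```python
# Hypothetical aggregate-trust ledger; every user starts at 0.
reputation = {}

def vote(answerer: str, voter: str, up: bool) -> None:
    reputation.setdefault(answerer, 0)
    reputation.setdefault(voter, 0)
    reputation[answerer] += 50 if up else -10   # effect on the answer's author
    reputation[voter] += 5 if up else -1        # effect on the person voting

vote("alice", "bob", up=True)
vote("alice", "carol", up=False)
# Users at the top of this table are the first candidates to forward a question to.
print(sorted(reputation.items(), key=lambda kv: -kv[1]))
```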

4.4 Experimental Result

The multinomial Naive Bayes classifier was evaluated with and without TF-IDF scaling, and this process was repeated for the bag of words concatenated with the extracted code features. The accuracy for multinomial Naive Bayes with and without TF-IDF is 0.90 and 0.82, respectively. It was observed that multinomial Naive Bayes without TF-IDF has a greater AUC when compared with the multinomial Naive Bayes with TF-IDF classification report. The ROC curves plotted in Figs. 3 and 4 were the result of a classifier model developed without consideration of code features, while Figs. 5 and 6 represent the ROC curves for classification models developed with code features considered. Using some feature engineering and natural language processing techniques, a classifier was trained to classify questions from Stack Overflow based solely on their titles and content with a precision, recall, and F1-score of approximately 0.93. These results were achieved using the multinomial Naive Bayes classifier in conjunction with the extracted features from the

Fig. 3 Multinomial Naive Bayes classification report

Fig. 4 Multinomial Naive Bayes with TF-IDF classification report

Fig. 5 Multinomial Naive Bayes with code features


Fig. 6 Multinomial Naive Bayes with TF-IDF and code features

code blocks of the text. Similar questions are identified using the XGBoost algorithm, which reaches a total accuracy of 0.76: the question pair is processed by the model, which outputs a decimal value between 0 and 1 representing the similarity of the questions. After identifying similar questions, the next step is to classify the question into categories based on language. This has been achieved using a neural model that successfully classifies the questions into "Java," "Python," "SQL," "JavaScript," or "Others." The Naive Bayes algorithm produces an accuracy of 0.93, whereas the neural model produces an accuracy of 0.80.

5 Conclusion and Future Work

The main goal of the project is to create a user-friendly question answering system that produces quality, spam-free answers with minimal starvation time. Since many answers will be present for a question, the most appropriate answer needs to be produced first. The best results for question classification were achieved using the multinomial Naive Bayes classifier in conjunction with the features extracted from the code blocks of the text. The proposed QA system can be extended to calculate user reputation, rank answers, and forward questions efficiently for fetching reliable answers. User reputation can be computed as a part of user profiling, where the activity of the users can be tracked to predict their area of expertise, frequently active duration, etc.


References 1. Lee, U., Kim, J., Yi, E. Sung, J., Gerla, M.: Analyzing crowd workers in mobile pay-foranswer QA. In: Proceedings of SIGCHI Conference on Human Factors Computer System (2013), pp. 533–542. Punyakanok, V., Roth, D., Yih, W.: The importance of syntactic parsing and inference in semantic role labeling. Comput. Linguist. 34(2), 257–287 (2008) 2. Lin, Y., Shen, H.: SmartQ: a question and answer system for supplying high-quality and trustworthy answers. IEEE Trans. Big Data 4(4):600–613 (2018) 3. Long, P., Anh, N., Vi, N., Quoc, L., Thang, H.: A meaningful model for computing users’ importance scores in QA systems. In: Proceedings of 2nd Symposium on Information Communication Technology, pp. 120–126 (2011) 4. Mihaylov, T., Nakov, P.: Ranking relevant answers in community question answering using semantic similarity based on fine-tuned word embeddings 5. Larkey, L.: Automatic essay grading using text categorization techniques. In: 21st Annual ACM/ SIGIR International Conference on Research and Development in Information Retrieval, pp. 90–95 (1998) 6. Jeon, J., Croft, B., Lee, J., Park, S.: A framework to predict the quality of answers with nontextual features. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 228–235 (2006) 7. Richardson, M., White, R.W.: Supporting synchronous social QA throughout the question lifecycle In: Proceedings of 20th International Conference on World Wide Web, pp. 755–764 (2011) 8. Stammbach, D., Varanasi, S., Neumann, G.: DOMLIN at SemEval-2019 Task 8: automated fact checking exploiting ratings in community question answering forums. In: Proceedings of the 13th International Workshop on Semantic Evaluation, June 2019 9. Richardson, M., White, R.W.: Supporting synchronous social Q&A throughout the question lifecycle. In: Proceedings of 20th International Conference on World Wide Web, pp. 755–764 (2011) 10. John, B., Chua, A., Goh, D.: What makes a high-quality user generated answer? Internet Comput. 15(1), 66–71 (2011) 11. Harper, F., Raban, D., Rafaeli, S., Konstan, J.: Predictors of answer quality in online QA sites. In: Proceedings of SIGCHI Conference on Human Factors in Computing Systems, pp. 865–874 (2008) 12. Negi, S., Daudert, T., Buitelaar, P.: SemEval-2019 Task 9: suggestion mining from online reviews and forums. In: Proceedings of the 13th International Workshop on Semantic Evaluation, June (2019)

Rumour Containment Using Monitor Placement and Truth Propagation Amrah Maryam and Rashid Ali

Abstract With the dynamic rise of Online Social Networks (OSNs), the way information is shared and communicated has transformed tremendously. OSNs show a dual nature: on one hand they help propagate news and information, while on the other they may also become a platform for the diffusion of rumours and misinformation. It is therefore necessary to track misinformation sources by placing network monitors, to affirm the trustworthiness of OSNs to their consumers. In this paper, we imitate a scenario in which a rumour has already been diffused into the network by its malicious users and the network administrators have identified the rumour and successfully pointed out the set of users contaminated by it. Our intent is to identify those suspected users originally accountable for spreading the rumour and then bound their ability to spread rumours into the network. We propose three heuristics for rumour containment in this work, namely: Source Identification using the Mother Node Approach, Monitor Placement using Articulation Points, and Truth Propagation using Eigenvector Centrality. Additionally, to deliver concurrent working of the proposed scheme, we combine the monitor placement and truth propagation objectives as well. Experimental results on real-world Stanford datasets show that the heuristics are significantly successful in identifying the suspected sources and in preventing the dissemination of rumour into the network.

A. Maryam (B) · R. Ali Department of Computer Engineering, Zakir Hussain College of Engineering and Technology, Aligarh Muslim University, Aligarh, Uttar Pradesh 202002, India e-mail: [email protected] R. Ali e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_69


1 Introduction

In this modern era, OSNs have emerged as a source of entertainment, information interchange, argument, advertising, publicity, and a lot more. They have completely revolutionized the prior ways of disseminating and acquiring information [1–3]. In OSNs, users post various things like their opinions, ideas, career interests, and other forms of expression [4]. These user activities can be used to detect the misinformation posted or shared by different users in OSNs. Various studies have been carried out to distinguish between true information and misinformation. For example, Qazvinian et al. showed how to use various features covering different types of memes for distinguishing misinformation [5]. Similarly, Kwon et al. showed how linguistic, structural, and temporal features can be used to distinguish misinformation from true information [6]. The major contributions can be listed as follows: (1) we define the source identification problem using the Mother Node Approach and the misinformation detection problem named Monitor Placement using Articulation Points; (2) to lessen the consequences of already diffused misinformation, we define truth propagation using Eigenvector Centrality to propagate the correct information; (3) the performance of our heuristics is validated on real network traces of the Gnutella, Epinions, Wiki-Vote, and Slashdot datasets [7].

2 Literature Survey

Giakkoupis et al. [8] proposed the PUSH-PULL rumour spreading model, describing a general framework for information diffusion in networks. Jin et al. [9] proposed an epidemiological model by studying rumour outbreaks on Twitter datasets. Nguyen et al. [10] effectively identified a cluster K of suspected rumour-generator nodes by proposing a paradigm called the K-Suspector problem. Amoruso et al. [3] concentrated on two problems: Source Identification (SI) and Monitor Placement (MP). They reduced the SI problem to the Maximum Spanning Arborescence/Branching problem, while the MP problem is based on the approximation of k-unbalanced cuts. For verification of the heuristics used, they considered three real-world networks: Gnutella, Wiki-Vote, and Epinions. Zhang et al. [2] studied the misinformation detection problem and proved its equivalence to the influence maximization problem. They proposed the τ-Monitor Placement problem to find an optimal monitor set to recognize rumours injected into the network, and they also proved its #P-complete hardness. For verifying the heuristic used, they considered the Twitter, Slashdot, and Epinions datasets.


3 Preliminaries and Problem Definition

3.1 Graph Representations and Diffusion Model

We depict the OSN as a directed weighted graph G = (V, E, w), where V represents the network nodes, E represents the direct links between nodes, and w represents the probability that node x sends information to node y [4]. There are three commonly used information diffusion models, namely (i) the Linear Threshold model of diffusion, (ii) the Independent Cascade (IC) model of diffusion, and (iii) the Push-and-Pull model of diffusion. In this work, we have used the simplest of these, the IC model of information diffusion [8].
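A minimal simulation of the IC model is sketched below; the toy graph, the edge probabilities, and the fixed random seed are illustrative assumptions, not part of the datasets used in this paper.

```python
import random

def independent_cascade(graph, weights, seeds, rng=random.Random(0)):
    """graph: node -> list of out-neighbours; weights: (u, v) -> activation probability w."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets a single chance to activate each neighbour.
                if v not in active and rng.random() < weights.get((u, v), 0.0):
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active

g = {1: [2, 3], 2: [4], 3: [4], 4: []}
w = {(1, 2): 0.8, (1, 3): 0.5, (2, 4): 0.6, (3, 4): 0.4}
print(independent_cascade(g, w, seeds=[1]))   # set of contaminated nodes
```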

3.2 Problem Description

Here, we imitate a scenario where misinformation has already been diffused into the network by some corrupt nodes, and the administrators have recognized the misinformation and successfully pointed out the cluster of nodes contaminated by the spread. Using the mother node approach, we detect the corrupt suspected nodes primarily responsible for this misinformation spread. Our aim is then to place monitors near the suspected nodes to bound their capability of spreading rumours, and we also identify the dominant nodes for broadcasting the correct information so as to diminish the effect of the previously diffused misinformation.

4 Implementation

4.1 Source Identification Using Mother Node Approach

We depict a social network as a directed graph G = (V, E) with a cluster of users A contaminated by the misinformation. We first run the IC model of information diffusion from some random n nodes and let the misinformation propagate through the network. After that, we let the system analyze the spread and find the sources initially responsible for it. We apply the proposed mother node heuristic, where a mother node is a node in the graph from which all the other nodes can be reached by some path. A given graph may have any number of such nodes. Mother nodes are considered as source nodes because it is only from them that information can diffuse to the entire connected network. Hence, our objective is to find all the mother nodes in the connected component of the infected graph. Algorithm 1 represents the Mother Node approach.


Algorithm 1 Mother node
Input: Graph G
Output: Set of mother nodes M
for all u ∈ G.vertices() do setLabel(u, UNEXPLORED)
for all e ∈ G.edges() do setLabel(e, UNEXPLORED)
for all v ∈ G.vertices() do
    if getLabel(v) = UNEXPLORED then Depth First Search(G, v)
    if all vertices are reachable from v then add v to M

Efficiency of model by DIC (except 'JOHN')
• BIC Selector: This selector penalizes models of greater complexity. It certainly performs better than DIC, with the word 'JOHN' as an exception, but fails to achieve better efficiency than CV.
• DIC Selector: This selector works by computing the difference between the evidence of the model given the corresponding dataset and the mean of the anti-evidences taken along the model. The result is the best generative model for the correct class and the worst generative model for competing classes. Overall, this makes the model more accurate for the classification task. The model output tends to be more complex.

5 Part III: Recognizer

5.1 Recognizer Tutorial

Train on the full training set using modelSelector and load the test set (Fig. 7).

6 Results We experiment with four different feature sets and three different model selection methods. Our aim is to find out which method gives us the lowest Word Error Rate (WER). Anything less than 0.6 (60%) can be considered an efficient method (Table 2; Fig. 8).

Table 2 Results of the experiment

Method   WER      Total correct
DIC      0.5280   84
CV       0.5617   78
BIC      0.5449   81

Fig. 8 Graphical comparison (WER % and total correct for the DIC, CV, and BIC selectors)

7 Conclusion

It does not come as much of a surprise that the results of the model returned by the DIC selector are better than those of any other selector, since it selects a model that is more accurate for the classification task. The DIC selector with the polar features performed best in my experiments with selectors and features, having a WER of 0.5280. When using ground features, I obtained a WER of 0.5617 with the DIC selector. The BIC selector with polar features also produced a relatively generic model with very good results and a WER of 0.5449; the logic of penalizing more complex models used by BIC makes the model that it selects generic. As per ref [1], phonological feature-based tandem models may give better performance, but that requires a label for each frame while training, and the system fails when apogee labels are removed. By using Hidden Markov Models, it is easy to insert a new class of posture or to delete an existing class of posture, as cited in ref [2]. Thus, Hidden Markov Models are a very efficient method for sign language recognition.

References 1. Kim, T., Livescu, K., Shakhnarovich, G.: American sign language fingerspelling recognition with phonological feature-based tandem models (2012) 2. Liang, R.H., Ouhyoung, M.: A sign language recognition system using Hidden Markov model and context sensitive search 3. ElBadawy, M., Elons, A.S., Shedeed, H.A.: Arabic sign language recognition with 3D convolutional neural networks (2017) 4. Tolba, M.F., Elons, A.S.: Recent development in sign language recognition systems 5. Artificial intelligence, a modern approach by Stuart J Russel and Peter Norvig 6. Nicole, R.: Title of paper with only first word capitalized. J. Name Stand. Abbrev. (in press) 7. Olofsson, T.: Bayesian model selection for Markov, Hidden Markov and multinomial models (2007)


8. Anantha Rao, G., Syamala, K., Kishore, P.V.V.: Deep convolutional neural networks for sign language recognition (2018) 9. Dreuw, P., Rybach, D., Deselaers, T., Zahedi, M.: Spech recognition techniques for a sign language system. In: Interspeech (2007) 10. Liwicki, S., Everingham, M.: Automatic recognition of fingerspelled words in British sign language. In: CVPR (2009) 11. Fang, Y., et al.: A real time hand gesture recognition method. In: Proceedings International Conference on Multimedia Expo (2007) 12. Bowden, R., Windridge, D., Kadir, T., Zisserman, A., Brady, M.: A linguistic feature vector for the visual interpretation of sign language. In: ECCV (2004)

Attendance Monitoring Using Computer Vision Akanksha Krishna Singh, Mausami, and Smita Kulkarni

Abstract Monitoring attendance is a very important aspect in any environment where attendance is crucial. However, many of the available attendance monitoring methods are time taking, disturbing and it requires more human intervention. The proposed system is aimed at developing a less disturbing, cost-effective, and more efficient automated student attendance management system using neural networks. In this framework, CNN is used as a classifier to recognize the facial images captured using Raspberry Pi. This system enables accurate attendance. Its targeted users are educational institutes. Hence, this prototype avails an effective solution for replacing an existing system with an embedded attendance system.

1 Introduction Most educational institutes are always concerned about the presence of students’ in the respective institute. Participation of students’ helps in successfully producing an intended result [1]. Also, the teaching environment becomes more interesting and informative as the number of students increases in a class. So, monitoring attendance is the most used way to increase the count in the class. Mostly attendance is taken by calling out the names and marking the attendance on papers. However, these paper attendances are not as effective as it is time-consuming and easy to be manipulated [2]. So, technology had to play its role in this field just as well as it has done in other fields. The present paper proposes an effective way of attendance monitoring using computer vision. A. K. Singh (B) · Mausami · S. Kulkarni MIT Academy of Engineering, Alandi, Pune 412105, India e-mail: [email protected] Mausami e-mail: [email protected] S. Kulkarni e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_72




This attendance monitoring system is divided into various steps, but the most important steps include detection and recognition of one’s face and marking the attendance. Convolution Neural Network algorithm is used as the face recognition technique. The proposed system includes taking the images of all the students present in the class and making their database which can be further given to the classifier for the training purpose. Later, at the time of attendance monitoring, the image can be snapped using the camera module. This image will act as an input to the system which will be recognized and the attendance will be marked. Lastly, the attendance report will also be generated and mailed to the respective faculties. The paper flow is organized as follows. Section 2 explains the previous work. Section 3 introduces the proposed framework followed by Sect. 4 which presents results and conclusions.

2 Literature Survey In [3] the authors came up with the automatic attendance monitoring and managing system for academic institutes. The proposed framework collects attendance electronically by using fingerprint sensor and records are saved on a computer server. Devices like LCD screens and fingerprint sensors are installed at the entrance of each classroom. Attendance will be marked as students place their finger on the fingerprint sensor and it gets detected. Once the identification is done, the attendance report is updated accordingly in the database, and students are notified through the LCD screen. The proposed system has successfully developed a reliable and efficient system with an accuracy of 98%. In [4] the authors have proposed an ear biometrics attendance monitoring system. Ears are never affected by the facial expression. So, the proposed system extracts the geometrical features of the ear from the captured image of the student’s ear. First, database is created by taking the images of all the students, using which identification is done. Edge detection is used to extract the features of the captured image. These features are saved as a vector form in a database. Later these extracted feature vector corresponding to a particular image is compared with the vector database. This system has an overall efficiency of 85%. In [5], the authors have tried to solve the attendance monitoring problem using the RFID technology. This technology enables automatic wireless identification with the help of active and passive tags present in it. The proposed system captures the ID card’s image as the student’s flash it in front of RFID reader. Then the captured image is sent to an online server for the attendance recording purpose. This technology enables semi-automated approach with the help of active and passive tags present in it. The attendance record is sent to the online server instantly for data protection. This will reduce the time utilized in taking manual attendance. The present system can be upgraded by including more modules and features in the system. In [6] authors have used face as a biometric to recognize an individual. In this paper, there is a comparison between the modular PCA and conventional PCA based


face recognition algorithm. This method helps in eliminating the problems faced in conventional PCA, such as those caused by illumination and facial pose. Both algorithms are tested with two image datasets, one consisting of images with different poses and the other of images with different illumination. In modular PCA, the PCA algorithm is applied to the sub-images obtained by dividing each image. It was inferred that modular PCA performs better under all these conditions; as a result, modular PCA outperformed conventional PCA. There are many more techniques for marking attendance. The most widely used attendance monitoring systems are based on technologies like GPS, RFID, and biometrics (fingerprint and iris). In [7], researchers designed and implemented an attendance system based on password authentication. In [8], the author introduced an iris-based system used for taking the attendance of employees. This type of system may prove inconvenient for people who wear hard contact lenses or glasses, as these might produce glare which may obstruct the iris. Also, a similar kind of project [9] was implemented based on the location of a person: this system marks attendance when the location of an employee and the location of the organization are the same.

3 Methodology

The attendance monitoring system model in Fig. 1 describes the process of attendance monitoring using a convolution neural network and the generation of attendance reports. First, the Raspberry Pi camera is used to capture the real-time image, and the Haar cascade classifier is used to process it: the classifier crops the facial part of the image and saves it in the Raspberry Pi memory. The CNN model is trained using the facial database, and this trained model is then used to classify the real-time captured image. On the basis of the result obtained from the model, the attendance sheet is generated, saved on the server, and mailed to the faculties concerned.

3.1 Dataset Generation

The proposed system was tested in an educational institute. The dataset includes the images of four students and is used only to train the CNN model, being divided into training and testing parts in a 4:1 ratio. The images of each student are captured using the laptop camera with the Haar cascade as the classifier for face detection, as shown in Fig. 2. Using this classifier, the face of the person is cropped from the image captured by the camera. This is implemented using a pretrained Haar cascade .xml file for the frontal face. The Haar Cascade classifier is a method in which a cascade function is trained with specific input images that are faces and non-faces, so that it detects whether a given input image is a face or non-face [10]. It is based


Fig. 1 Attendance monitoring system model

on a technique called Haar Wavelet that analyzes pixels in the image into squares by using cascade function. As the camera gets ON, it searches for the face in the frame. As soon as it detects the face, the camera captures the face image. The dataset has been created by taking 500 images of each student. Images are taken in different positions for better database as shown in Fig. 3. 400 images of each students are given for training the model and the rest are for validating the model. As the size of captured images may vary from person to person so the images have been resized to a uniform size.


Fig. 2 Face detection

Fig. 3 Student's photograph in different positions: a front view, b right view, c left view, d top view

3.2 Real-Time Image Capturing

Raspberry Pi (RPi). The RPi can be considered a small computer. The processor inside it is the Broadcom BCM2835 System on Chip (SoC), which is the heart of the RPi system. SoC means that most of the components, such as the central and graphics processing units and the communications system, sit on a single chip beneath the 256 MB memory chip at the center of the RPi's PCB.

Raspberry Pi Camera. The camera is a 5 MP sensor with a resolution of 2592 × 1944 pixels for still images, and it supports 1080p video at 30 frames per second and 720p at 60 frames per second. Here, the Raspberry Pi is connected to the camera and to a personal computer through a mobile hotspot. Image acquisition is done using the camera controlled by the Raspberry Pi: as students come in front of the camera, it detects and captures the facial image. All these captured images are stored in the Raspberry Pi's memory and are further fed to the trained CNN model. The whole system programming is done using Python [11].


3.3 Face Classification

CNN Algorithm. A Convolution Neural Network consists of hidden layers that are convolutional layers, pooling layers, fully connected layers, and normalization layers [12]. The algorithm learns and remembers the features of the images in order to guess the label (name) of a new image fed to it. Here, the CNN algorithm is used for multiclass classification. The CNN architecture comprises a first layer with 32 nodes and an input shape of (64, 64, 3), with 3 signifying a color image, and a second layer with 64 nodes, the two layers having kernel sizes of 5 and 3, respectively. Both layers are followed by pooling layers and the ReLU activation function, which works quite well in neural networks. The output of these layers is flattened and then connected to fully connected layers. Finally, a dense layer with SoftMax as the activation function is the output layer, comprising as many nodes as the number of people in the database. The SoftMax function interprets the output as a probability distribution, and the model predicts the output having the highest probability. The CNN model is trained with the images in the created dataset: 80 percent of the images are used for training and 20 percent for validation. The model is trained based on the number of images per student and the applied augmentation method; illumination of the images has also been taken care of, which increases the accuracy of the result. The model is tested on the real-time images captured by the camera module connected to the Raspberry Pi processor. The trained model identifies the student and gives a unique array as an output [13]; based on this output array, the name of the student is determined.
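A minimal Keras sketch of the architecture just described (32 filters with kernel size 5, then 64 filters with kernel size 3, each followed by pooling and ReLU, then flattening and a softmax output with one node per student) is given below; the pooling sizes, the width of the intermediate dense layer, and the compilation settings are assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

num_students = 4   # one output node per person in the database
model = Sequential([
    Conv2D(32, kernel_size=5, activation="relu", input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=2),
    Conv2D(64, kernel_size=3, activation="relu"),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(128, activation="relu"),               # assumed hidden width
    Dense(num_students, activation="softmax"),   # probability over the students
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```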

3.4 Attendance Report

The attendance report is generated using the Google Drive API and Python. The output label of the image is compared with the report data, and present or absent is marked accordingly. The Google sheet is updated through Python with the help of libraries such as gspread and oauth2client; the program accesses and updates the attendance sheet using a .json file that holds the Google service account credentials. On every working day, while the system is running, the sheet is updated and students are marked absent or present. After the captured image data is classified, the sheet is updated based on the classifier output, and the Google sheet is mailed to the concerned faculties automatically every week to maintain the attendance record, as shown in Fig. 4.
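A hedged sketch of this reporting step is shown below; the sheet name, worksheet layout, credential file name, and helper function are assumptions, with only the gspread/oauth2client calls themselves taken from those libraries' documented APIs.

```python
import gspread
from oauth2client.service_account import ServiceAccountCredentials

scope = ["https://spreadsheets.google.com/feeds",
         "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name("credentials.json", scope)
sheet = gspread.authorize(creds).open("Attendance").sheet1   # assumed sheet name

def mark_attendance(name: str, date: str, predicted_names: set) -> None:
    status = "Present" if name in predicted_names else "Absent"
    sheet.append_row([date, name, status])

# Names predicted by the CNN for today's captured images.
mark_attendance("AKS", "2020-02-10", predicted_names={"AKS", "Tahir"})
```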


Fig. 4 Attendance report mailed to concerned faculties

4 Results and Discussion The attendance system was tested for the students whose images were used in a database. The raspberry pi camera recorded the images of all the targeted students. These images were tested with the model. This predicted result is presented in Table 1. The above Table 1 lists the result obtained when the real-time images were fed to the trained CNN model. Column 1 gives the output array given by the model, column 2 gives the integer values obtained after array conversion using argmax function. This function returns the indices of the maximum element of the array. The last column gives the names of the students based on the integer value. The CNN model used for face recognition achieved an accuracy of 97%. The model was trained with a finite number of images, i.e., 500 images per student with applied image augmentation. This led to the enrichment of the initial dataset which improved the overall accuracy. By analyzing the images stored in the database, it was observed that the noise condition and the orientation of face images affect the recognition process. Overall accuracy can be improved by including all the possible orientation of face images and images with different illumination in the database by applying some image processing methods. Table 1 Output of CNN model

Output array     Integer values   Predictions
[1. 0. 0. 0.]    0                Arti
[0. 1. 0. 0.]    1                AKS
[0. 0. 0. 1.]    3                Tahir
[0. 0. 1. 0.]    2                Mausami



5 Conclusion This paper introduces an effective attendance monitoring system using raspberry pi and CNN that can replace the existing manual practice of marking the attendance. This system is secure and reliable and can be easily constructed in any workplace or institute with minimal hardware. This prototype is cheaper, easy to use, and low power design and it also requires a less troublesome process for installation unlike other devices like biometric and RFID. In the current approach, the CNN model gives an accuracy of 97% of detecting the faces of the students.

References 1. Stanca, L.: The effects of attendance on academic performance: panel data evidence for introductory microeconomics. J. Econ. Edu. 37(3), 251–266 (2006) 2. Samet, R., Tanriverdi, M.: Face recognition-based mobile automatic classroom attendance management system. In: 2017 International Conference on Cyberworlds, pp. 253–256, IEEE, Chester, UK (2017) 3. Nawaz, T., Pervaiz, S., Korrani, A., Azhar-Ud-Din, : Development of academic attendence monitoring system using fingerprint identification. IJCSNS Int. J. Comput. Sci. Netw. Secur. 9(5), 164–168 (2009) 4. Jawale, J.B., Bhalchandra, A.S.: Ear based attendance monitoring system. In: 2011 ICETECT, pp. 724–727, IEEE, Nagercoil, India (2011) 5. Kassim, M., Mazlan, H., Zaini, N. and Salleh, M.K.: Web-based student attendance system using RFID technology. In: ICSGRC 2012, pp. 213–218, IEEE, Selangor, Malaysia (2012) 6. Gottumukkal, Rajkiran, Asari, Vijayan K.: An improved face recognition technique based on modular PCA approach. Pattern Recogn. Lett. 25(4), 429–436 (2004) 7. Shoewu, O., Olaniyi, O.M., Lawson, A.: Embedded computer-based lecture attendance management system. Afr. J. Comput. and ICT 4(3), 27–36 (2011) 8. Kadry, Seifedine, Smaili, Khaled: A design and implementation of a wireless iris recognition attendance management system. Inf. Technol. Cont. 36(3), 323–329 (2007) 9. Uddin, M.S., Allayear, S.M., Das, N.C. and Talukder, F.A.: A location based time and attendance system. Int. J. Comput. Theo. Eng. 6(1) (2014) 10. Priadana, A., Habibi, M.: Face detection using haar cascades to filter Selfie face image on Instagram. In: International Conference of Artificial Intelligence and Information Technology (ICAIIT), pp. 6–9, IEEE, Yogyakarta, Indonesia (2019) 11. Guerra, H., Cardoso, A., Sousa, V., Leitão, J., Graveto, V., Gomes, L.M.: Demonstration of programming in python using a remote lab with raspberry Pi. In: 2015 3rd Experiment International Conference (exp.at’15), pp. 101–102, IEEE, Ponta Delgada, Portugal (2015) 12. Asfia, Y., Tehsin, S., Shahzeen, A., Khan, U.S.: Visual person identification device using raspberry Pi. In: The 25th conference of FRUCT association, pp. 422–427, Helsinki, Finland (2019) 13. Haldar, R., Chatterjee, R., Sanyal, D.K., Mallick, P.K.: Deep learning based smart attendance monitoring system. In: Proceedings of the Global AI Congress 2019. Advances in Intelligent Systems and Computing, vol. 1112. Springer, Singapore (2019)

Heart Disease Prediction Using Ensemblers Learning Meenu Bhatia and Dilip Motwani

Abstract Heart disease describes a range of conditions that affect the heart. With heart disease rates increasing worldwide, there is a need to detect the manifestations of heart stroke at an early stage and thereby forestall it. It is impractical for a common man to spend much on costly tests like the ECG, so there should be a framework in place which is convenient and at the same time dependable in foreseeing the chances of coronary illness. Thus, it will be useful to build an application that can anticipate the vulnerability to a coronary illness given basic attributes like age, alcohol, smoking, cholesterol, lifestyle, gender, stress, etc. [3]. Machine learning algorithms have been shown to be accurate and reliable and are hence utilized in the proposed framework. The main aim is to propose a system/application consisting of two modules: Doctor Login and Patient Login. The doctor can record the case details along with the case history of the patient, whereas the patient can view their entire medical history. The proposed model works on Ensemblers Learning, that is, the Bagging, Boosting, and Stacking algorithms, combined as a hybrid approach that trains and tests on the dataset. Deaths due to cardiovascular diseases increased from 13 lakh in 1990 to 28 lakh in 2016, and the number of prevalent cases of cardiovascular illnesses increased from 2.57 crore in 1990 to 5.45 crore in 2016.

1 Introduction There is no lack of records in regard to the medical side effects of patients suffering heart strokes. In any case, the potential they need to assist us with foretelling comparative prospects in apparently sound grown-ups are going unnoticed. For instance: M. Bhatia (B) · D. Motwani Department of Computer Engineering, Vidyalankar Institute of Technology, Mumbai, India e-mail: [email protected] D. Motwani e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_73


According to the Indian Heart Association, half of the heart strokes happen under 50 years old and 25% of all heart strokes occur under 40 years old in Indians. Urban population is thrice as defenseless against coronary failures as a rustic population. It is critical to gather important information relating all components identified with our field of study, train the framework according to the proposed calculation of AI and anticipate how more grounded the chance of a patient to get a coronary illness. With the end goal of patients entering information, it is proposed to utilize the effectively accessible sensors in watches and phones to gauge the straightforward elements. To start with the work, gather information from every single viewpoint towards the objective of the framework. In any case, the examination was toward the primary driver or the components which have a solid effect on the heart wellbeing. A few unchangeable variables are like age, sex, and family foundation yet there are a few parameters like pulse rate, blood pressure and so on which can be kept in charge by certain following measures. A few specialists propose a sound eating regimen, others state ordinary exercise keeps the heart solid. The parameters which are considered for the assessment in arranging the structure are as follows.

• Age
• Gender
• Blood Pressure
• Heart Rate
• Genetics
• Cholesterol

Hence, all heart infections are classified as cardiovascular ailments. A few types of heart ailments are:

• Coronary heart disease: Also called coronary artery disease (CAD), it is the most prevalent kind of heart ailment across the world. It occurs when plaque deposits obstruct the coronary arteries, leading to a reduced supply of blood and oxygen to the heart.
• Angina pectoris: It is a medical term for chest pain that occurs because of an inadequate supply of blood to the heart. Also called angina, it is a warning sign for heart attack. The chest pain comes in intervals lasting a few seconds or minutes.
• Congestive heart failure: It is where the heart cannot pump enough blood to the rest of the body. It is commonly known as heart failure.
• Cardiomyopathy: It is the weakening of the heart muscle or a change in the structure of the muscle because of inadequate heart pumping. Some of the common causes of cardiomyopathy are hypertension, alcohol usage, viral illnesses, and congenital defects.
• Congenital heart disease: It refers to the development of an abnormal heart because of a deformity in the structure of the heart or in its working. It is a kind of inborn illness that children are born with.


• Arrhythmias: It is associated with a problem in the rhythmic pattern of the heartbeat (pulse). The beat can be slow, fast, or irregular. These abnormal pulses are caused by a short circuit in the heart's electrical system.
• Myocarditis: It is an inflammation of the heart muscle usually brought on by viral, parasitic, and bacterial infections affecting the heart. It is a rare disease with few signs, such as joint pain, leg swelling, or fever.

2 Literature Review Coronary (Heart) illness is one of the hugest reasons for mortality on the planet today. Expectation of cardiovascular infection is a basic test in the zone of clinical information examination. AI (ML) has been demonstrated to be successful in helping with settling on choices and expectations from the enormous amount of information created by the medicinal services industry. Additionally, numerous ML systems being utilized in late advancements in various territories of the Internet of Things (IoT). Various contemplates give just a look into anticipating coronary illness with ML procedures. Right now, the novel strategy targets finding huge highlights by applying AI methods bringing about improving the precision in the forecast of cardiovascular illness. The forecast model is presented with various blends of highlights and a few known arrangement systems. This delivers an improved presentation level with a precision level of 88.7% through the expectation model for coronary illness with the crossbreed irregular woods with a direct model (HRFLM) [1]. These days, wellbeing sickness is expanding step by step because of a way of life, genetic. Particularly, coronary illness has become progressively normal nowadays such is the reality of individuals is in danger. Every individual has various qualities for blood weight, cholesterol, and heartbeat rate. In any case, as per medicinally demonstrated outcomes, the ordinary estimations of blood pressure is 120/90, cholesterol is and heartbeat rate is 72. This paper gives an overview of various grouping strategies utilized for anticipating the hazard level of every individual depending on age, sex, blood pressure, cholesterol, beat rate. The patient hazard level is arranged utilizing information mining characterization strategies, for example, Naïve Bayes, KNN, Decision Tree Algorithm, Neural Network, and so forth. Accuracy of the hazard level is high when utilizing a more noteworthy number of characteristics [2]. Concealed examples and connections can be separated from huge information sources utilizing information mining. Information mining combines factual investigation, AI, and database innovation. Information mining has been applied in a few zones of clinical administrations such has disclosure of connections among analysis information and put away clinical information. Current clinical conclusion is a composite procedure which requires exact patient information, numerous long stretches of clinical experience, and decent information on the clinical writing [3]. The prediction of coronary illness is the most entangled undertaking in the field of clinical sciences which cannot be observed with a naked eye and comes instantly anywhere, anytime. So there emerges a need to build up a choice emotionally


supportive network for identifying coronary illness. A coronary illness expectation model utilizing information mining system called choice tree calculation which helps clinical specialists in identifying the infection depends on the patient’s clinical information. Right now, propose a proficient choice tree calculation strategy for coronary illness expectation. To accomplish the right and practical treatment PC-based frameworks can be created to make a great choice. Information mining is a groundbreaking new innovation for the extraction of concealed prescient and significant data from huge databases, the fundamental target of this task is to build up a model which can decide and separate obscure information (patterns and relations) related with a coronary disease from a past coronary ailment database record. It can fathom confounded questions for distinguishing coronary illness and in this manner help clinical specialists to settle on brilliant clinical choices [4]. Quick and rapid development is found in human services benefits over recent years. Coronary illness causes a great many passing around the world. Numerous remote correspondence advancements have been created for coronary illness expectations. Information mining calculations are valuable in the location and finding of coronary illness. Right now, it is completed on a few single and cross breed information mining calculations to distinguish the calculation that best suits the coronary illness expectation with an elevated level of precision [5].

3 Problem Statement Heart illness can be managed effectively and adequately with a blend of way of life changes, medication, and at times with medical procedures. With the correct treatment, the side effects of coronary illness can be decreased, and the working of the heart improved. The point of consolidating numerous classifiers is to get better execution as contrasted and an individual classifier. The overall objective of our work will be to predict accurately with a few tests and attributes with the presence of heart disease. Attributes considered form the primary basis for tests and gives accurate results more or less. Many more input attributes can be taken but our goal is to predict with few attributes and faster efficiency the risk of having heart disease. Choices are frequently made dependent on specialists’ instinct and experience instead of on the rich information covered up with the informational collection and databases. This training prompts undesirable inclinations, blunders, and unnecessary clinical costs which influence the nature of administration gave to patients. The executives of cardiovascular breakdown can be mind-boggling and are frequently one of a kind to every patient; nonetheless, there are general rules that ought to be followed. Avoidance of intense intensifications can slow the movement of a cardiovascular breakdown just as expands the security and generally speaking prosperity of the patient. At the point when a patient who has intense congestive cardiovascular breakdown is readmitted, the expense and weight to the patient increments. Applying machine learning algorithm to detect heart disease failure. The research questions we plan to answer in this work are:


1. How do we overcome the limited list of attacking existing systems with the goal of detecting new attacks? 2. How effective is the new approach compared to existing approaches? 3. What is the effect of using hybrid algorithms?

3.1 Objectives The fundamental goal of this exploration is to build up a heart prediction system. The system and the framework can find and concentrate concealed information related to ailments from a verifiable heart informational collection. Heart disease prediction system intends to use information mining systems on clinical informational index to aid the forecast of the heart infections.

3.2 Specific Objectives • Provides a new approach to concealed patterns in the data. • Helps avoid human biasness. • To implement Ensemblers Learning that classifies the disease as per the input of the user. • Reduce the cost of medical tests.

4 Proposed System Coronary failure prediction system is implemented in this system using Ensemblers Learning given the input CSV document or manual data entry to the system. The ensemblers algorithms are applied accessing the data set and also by users’ input, heart attack can be predicted on several factors. The proposed system will add some more parameters significant to a heart attack with their weight, age, and by consulting expertise doctors and medical experts. The coronary failure forecast framework intended to help identify different risk levels of heart attack. Flow Chart See Fig. 1. A. Dataset The idea of the dataset is to diagnostically predict whether or not a patient will have heart disease, based on definite diagnostic measurements integrated into the dataset.


Fig. 1 Flowchart of the system

The Cardiovascular Disease dataset consists of 70,000 records described by the following attribute list (Fig. 2). The first phase is data collection, and the data is taken from the Cardiovascular Disease dataset. The dataset contains features like age, height, weight, gender, genetics, blood pressure, cholesterol, stress, and lifestyle, which are used to predict whether chances of heart disease can be

Fig. 2 Dataset description


identified. After that, the dataset is divided into two sets, one for training where most of the data is used and the other one is testing. In the training set, four different classification algorithms have been fitted for the analysis performance of the model. The algorithms used are Ensemblers Learning. After the system has done learning from training datasets, newer data can be given by the user. The final model generates the output using the knowledge it gained from the data on which it was trained. In final phase, the accuracy of each algorithm is predicted and gets to know which particular algorithm will give more accurate results for the prediction of coronary failure prediction. B. The system has two Main Modules • Patient Login and Doctor Login. • Patient Login includes only where the patient can view all the case history if the patient is registered. • Doctor login includes generating case paper of patient, patient’s case history, medications, updates, and patient details. Block Diagram See Fig. 3. The well-known Ensembler learning; Bagging, Boosting, and Stacking were applied utilizing the three base classifiers. Information mining assignments incorporate information grouping, information affiliation and information characterization is a procedure of finding a model to anticipate fitting classes of the obscure information. The essential idea of characterization comprises of two stages: model development and forecast. When constructing the grouping model, the information used to develop model may have commotion or imbalanced data. The outcomes uncovered that stowing with choice tree performs well on the very unevenness and high dimensional enormous datasets. Workflow See Fig. 4. (I) Bagging (Bootstrap Aggregating) Multiple models of same learning algorithm prepared with different subsets of dataset randomly picked from the skilled data set (Fig. 5). (II) Boosting Little variations on Bagging algorithm. In this algorithm, select that points which give wrong predictions. Test with training dataset and select data points with wrong predictions, again this set is trained with ensemble model to get the modified results. Repeat this step to test and train the models for higher accuracy (Fig. 6) [1].
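The paper does not list its implementation; the sketch below shows how the bagging and boosting stages described above could be set up with scikit-learn. The file name and the target column are placeholders, and the train/test split mirrors the division of the dataset mentioned earlier.

```python
# Hypothetical sketch, not the authors' code: bagging and boosting on the
# cardiovascular dataset; "cardio.csv" and the "cardio" target column are placeholders.
import pandas as pd
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("cardio.csv")
X, y = df.drop(columns=["cardio"]), df["cardio"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Bagging: many base learners, each trained on a random bootstrap subset of the data
bagging = BaggingClassifier(n_estimators=100, random_state=42)
# Boosting: learners added sequentially, focusing on previously misclassified points
boosting = AdaBoostClassifier(n_estimators=100, random_state=42)

for name, model in [("Bagging", bagging), ("Boosting", boosting)]:
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))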


Fig. 3 Block diagram

(III) Stacking Stacking, additionally called stacked speculation includes preparing a learning calculation to join the forecasts of different calculations. The various calculations are prepared utilizing the accessible information, at that point combiner calculation is prepared to make the last forecast utilizing all the expectations of every single other calculation (Fig. 7). Here, m*n size of data taken as input. This training data is then fed to different models and gets the predictions from these models and combines them to create m*M size of a new matrix where the number of models represents using M. Second-level Model uses these data. The final prediction generated using the Second-level Model. To create training data for Second-level Model: • Like K-fold cross-validation training data split into K-folds • On K-1 section base model is fitted and for Kth part predictions model were created • To calculate the base model’s performance on test set, first base model is fitted on whole trained data.
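A minimal stacking sketch in the same spirit (again an assumption, not the authors' code) is given below; the cv argument reproduces the K-fold construction of the second-level training data described above, and the example reuses the train/test split from the previous sketch.

```python
# Hypothetical sketch of stacking: out-of-fold predictions of the base models
# become the input features of a second-level (meta) model.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

base_models = [
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ("svm", SVC(probability=True, random_state=42)),
]
# cv=5: each base model's meta-features are built from 5-fold out-of-fold predictions
stacking = StackingClassifier(estimators=base_models,
                              final_estimator=LogisticRegression(),
                              cv=5)
stacking.fit(X_tr, y_tr)              # X_tr/y_tr from the bagging/boosting sketch
print("Stacking", stacking.score(X_te, y_te))
```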


Fig. 4 Workflow of the system

Fig. 5 Bagging


Fig. 6 Boosting

Fig. 7 Stacking

5 Conclusion Accuracy, speed, precision, lift, and relative improvement are the measures by which we can comment on the working of the algorithm, regardless of the strategy used. In broader terms, we need to work on substantially upgrading the Stacking ensemble design by utilizing other transformative algorithms. Results from different sources have been presented to show how these algorithms compare


on ensemble-based optimization. Further examinations are possible on meta-heuristic algorithms. Additionally, the algorithm gives nearly dependable output based on the information provided by the users. If the number of people using the framework increases, then awareness about the prediction of heart disease will grow and the rate of people dying from heart diseases will eventually decrease. The preliminary outcomes show that a large number of the rules support better detection of heart ailments and even assist the heart specialist in their diagnostic decisions. Acknowledgements I want to extend my genuine gratitude to all who helped me with this work. I want to earnestly thank Dr. Dilip Motwani for his guidance, steady supervision, and provision of vital information regarding the project, as well as for his help in completing this work. I also want to offer my thanks to my family and the members of Vidyalankar Institute of Technology for their kind cooperation and support.

References 1. Mohan, S., Thirumalai, C., Srivastava, G.: Effective heart disease prediction using hybrid machine learning techniques. IEEE (2019) 2. Princy, R.T., Thomas, J.,: Human heart disease prediction system using data mining techniques. (Princy, R.T., Research Scholar Department of Information Technology Christ University faculty of engineering, Bangalore, India-560060; Thomas, J., Department of Computer Science and Engineering Christ University faculty of engineering, Bangalore, India-560060) (2016) 3. Suvarna, C., Sali, A., Salmani, S.: Efficient heart disease prediction system using optimization technique. IEEE (2017) 4. Pattekari, S.A., Parveen, A.: Prediction system for heart disease using Naïve Bayes. (Department of Computer Sci & Engg Khaja Nawaz College of Engineering) 5. Gnaneswar, B., Jebarani, M.E.: A review on prediction and diagnosis of heart failure. (Department of ECE Sathyabama University Chennai, India)

Impact of Influencer Credibility and Content on the Influencer–Follower Relationships in India Adithya Suresh, Akhilraj Rajan, and Deepak Gupta

Abstract Social media platforms have gained an edge over main stream media today. With the advent of the social media age, we see the rise of new marketing styles and influencer marketing tops the list. Our study aims to identify the impact of credibility and content factors of social media influencers on the relationship that they have established with their followers; and how these relationship factors in turn affect the behavioral response of the followers. There has been relatively little research done in India in this context. As the number of influencers across social media platforms keep increasing and influencer marketing has become a more mainstream form of marketing this study becomes very relevant so as to identify the dynamics of the relationship between the influencer and followers. An online survey of 224 followers of social media influencers was conducted from Tier 1 and Tier 2 cities across India. The data was analyzed in Stata using partial least squares path (PLS SEM) modeling. The results identify trustworthiness to be the most important credibility factor for the influencer. Commitment was found to be the most important relationship factor followed by control mutuality and satisfaction. This study has many practical implications in today’s influencer marketing.

1 Introduction Influencer marketing is the next big thing in marketing. Compared to the traditional style of celebrity based endorsements, influencer marketing is where companies take the help of influencers on social media platforms such as Facebook, Instagram, A. Suresh (B) · A. Rajan · D. Gupta Amrita School of Business, Amrita Vishwa Vidyapeetham, Coimbatore, India e-mail: [email protected] A. Rajan e-mail: [email protected] D. Gupta e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_74


YouTube etc., who have a good follower base and can endorse the company’s products in a much more efficient manner [1] and at a much lesser cost and effort for the company’s marketing team. Social Media Influencers are micro celebrities whose content that may be in the form of blogs, posts, videos or photos document their everyday life, these may be trivial or very common activities related to their line of work or interests. These influencers have the ability to shape the perspectives, opinions and interests of the people who follow them. This makes it a very interesting case of what attributes about the influencer and about their content lead to an impact on their relationship. The followers develop a direct relationship with the influencer and not the company. Therefore it’s important to find out how the relationship between the influencer and follower can lead to a favorable action for the company in terms of purchase intention for the products endorsed by the influencer and the eWOM.

2 Literature Review Our theoretical framework includes both the drivers as well as the marketing consequences of the relationship between the influencers and their followers. We know from literature that influencers are better endorsers than celebrities [1]. Source credibility factors include trustworthiness, expertise and attractiveness [2]. Latest research on influencer credibility suggests trustworthiness and information involvement as significant variables [3]. Credibility factors also seem to have significant impact on purchase intention [4]. Credibility factors refers to how trustworthy (like whether they are open about their endorsements [5]), how attractive in terms of physicality and attribute wise and how much of an expert the influencer is with respect to their line of work or interests [6, 7]. Influencer content has played a major role in gratifying the needs of their followers from both a hedonic and utilitarian basis [8]. Informativeness refers to the how utilitarian the content put by the influencer is and Entertainment Value refers to the hedonic aspects [9] of the content [6]. These two together form the content factors of the influencers content. The relationship factors are divided into Trust, Control Mutuality, Commitment and Satisfaction [10]. Trust refers to the level of confidence that interacting parties have in each other and their willingness to open themselves to the other party [11]. Control Mutuality refers to the degree of control each party have in the relationship and how much they listen and respond to each other. Commitment refers to the extent to which the influencer and follower believe that the relationship is worth investing in, to maintain and promote. Satisfaction refers to how favorable and pleased the parties feel about each other [12]. Here eWOM or electronic word of mouth essentially means the extent to which the follower is ready to suggest, comment, review, share posts etc. about the influencer or the products they endorse. Finally Purchase intention refers to the degree to which the follower is inclined to purchase the products endorsed by the influencer other [6].


Fig. 1 Conceptual model with lines indicating each hypothesis

3 Conceptual Model See Fig. 1.

4 Hypothesis We hypothesize a positive correlation from Trustworthiness, Attractiveness, Expertise, Informativeness and Entertainment Value to Trust, Commitment, Satisfaction and Control Mutuality as shown in the diagram (H-1 to H-20). We hypothesize a positive correlation between Trust, Commitment, Satisfaction and Control Mutuality to Purchase intention and eWOM as shown in the diagram (H-21 to H-28). Illustrative Hypotheses: H-1: Trustworthiness is positively related to Trust H-2: Trustworthiness is positively related to Commitment H-20: Entertainment Value is positively related to Control Mutuality H-21: Trust is positively related to Purchase Intention H-22: Trust is positively related to eWOM.

5 Methodology The research process was initiated with a structured online questionnaire. It was distributed to people 18+ years of age across Tier 1 and Tier 2 cities in India using quota sampling. The questionnaire initially focused on whether


Table 1 Scales for variables and their source
Scale item | Adapted from
Trustworthiness, Attractiveness, Expertise | Construction and validation of a scale to measure celebrity endorsers' perceived expertise, trustworthiness, and attractiveness, Roobina Ohanian
Informativeness, Entertainment value | Voss et al. [9]
Trust, Control mutuality, Commitment, Satisfaction | Hon and Grunig [10]
eWOM, Purchase intention | Evans et al. [11]

the respondent followed an influencer or not on social media. Those who did not follow any social media influencer were not to respond to the survey. And those who followed social media influencers were asked to name the influencer that they followed the most. This influencer was to be the focus while answering the questions based on all the credibility, content, relationship and the action variable questions across the questionnaire. The later part of the questionnaire focus on the demography of the respondent. Out of the 289 respondents who attempted the questionnaire, 224 of them were following social media influencers and these form the basis of our current analysis. The data was analyzed using the partial least squares path (PLS SEM) modeling framework in Stata. We decided to use this model as it enables us to estimate complex models with many constructs, indicator variables and structural paths without imposing distributional assumptions on the data (Table 1).

6 Empirical Results and Implications All items scales used have high reliability as their Cronbach’s Alpha, D.G and Rho scores are all above 0.85. The factor loading for all the variables is between 0.7 and 0.9 showing that the factors extract sufficient variance from the variables (Tables 2 and 3). Our study has brought out some very interesting results. We find that trustworthiness seems to be the most salient among all the influencer credibility and content factors put together. It has a strong impact on both the trust factor in the relationship and also control mutuality. This means that the more trustworthy the follower perceives the influencer to be, the more the follower will feel treated well and assured about whatever the influencer claims. The strong positive correlation between trustworthiness and control mutuality suggests that the more trustworthy the influencer
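For reference, a small sketch of the Cronbach's alpha reliability check mentioned above is shown below (the PLS path estimation itself was done in Stata and is not reproduced here); the response matrix is placeholder data, not the survey responses.

```python
# Illustrative only: Cronbach's alpha for one multi-item scale.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 8, size=(224, 4))   # 224 respondents, 4 items, 7-point scale
print(round(cronbach_alpha(demo), 3))
```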


Table 2 PLS path coefficients between source credibility and content factors with relationship factors Variables

PLS path coefficients Trust

Commitment

Trustworthiness

0.281**

Expertise

0.177

Attractiveness Entertainment value Informativeness

0.225*

Control mutuality

Satisfaction

0.279**

0.210*

−0.013

0.119

0.176

−0.007

0.078

−0.049

−0.027

0.025

0.139

0.130

−0.086

−0.045

−0.107

0.144* −0.137

Note * p < 0.1; **p < 0.05; ***p < 0.01

Table 3 PLS path coefficients between relationship factors and behavioral factors
Variables | Purchase intention | eWOM
Trust | 0.018 | 0.090
Commitment | 0.333*** | 0.337***
Control mutuality | 0.233** | 0.164
Satisfaction | 0.206** | 0.055
Note * p < 0.1; **p < 0.05; ***p < 0.01

is perceived to be by the followers, the more the follower feels that he/she is being given sufficient importance in the relationship and that the influencer listens to them. Trustworthiness also tends to have a positive impact on both commitment and satisfaction.1 This means that the more trustworthy the influencer is perceived to be, the more long term relationship they tend to have with the influencer and the more loyal they become toward the influencer. This also means that the more trustworthy the influencer is perceived to be, the more the follower tends to be happy and content about their relationship. Trustworthiness seems to be a very strong characteristic, even above expertise. Therefore we may say that expertise need not necessarily play an important role in developing a strong relationship with the followers. The influencer needs to be able to behave in ways that can actually make him/her to be perceived as more trustworthy. This can be by being more open about paid endorsements or being extensively careful about the credibility of the information that they put out. This is a prospective area for further research. We also find that attractiveness does not seem to have an impact on the relationship factors. While it might be intuitive to say that the more attractive the influencer is the more followers they may have, the reality does not seem to look so. An honest and modest person seems to make better progress in their relationship with followers than just a very attractive influencer.

Footnote 1: P = 0.10.


Analyzing the impact of influencer content, we find that informativeness does not have an effect on any of the relationship factors. However entertainment value seems to have a significant correlation with satisfaction in the relationship.2 This means that the more entertaining the content put up by the influencer, the happier and content the follower feels about the relationship. This suggest that the influencer must identify the important entertaining components of their target group of followers, and must meet their expectations. When it comes to the relationship factors we find that commitment tends to be the most important component. If the follower feels that the influencer is making efforts to have a long term relationship with them, they are ready to purchase products that are endorsed by the influencer and is the only case in which they are ready to engage in suggesting, reviewing or commenting about the products endorsed by the influencer online (eWOM). We also find that control mutuality and satisfaction have a strong correlation with purchase intention. These results reiterate the findings of a recent study by Dhanesh and Duthler [12], where commitment emerges as one of the strongest relationship factors. However, unlike Dhanesh and Duthler, we find that satisfaction and control mutuality have a strong correlation with purchase intention; i.e., the follower will engage in purchase of products endorsed by the influencer if they feel that they have sufficient control in the relationship and the influencer is ready to listen to them. The follower should also feel happy about the relationship for the purchase to happen. We also find—along the lines of Dhanesh and Duthler—that trust is more like a hygiene factor and is not sufficient enough to make purchase or electronic word of mouth.

7 Conclusion This study was limited to the Indian population and could be done more effectively if sampling was more stratified. We can conclude from the study that the most important attribute of an influencer is trustworthiness. The positive relationship between entertainment value of content and satisfaction suggests that the influencers could benefit from creating content higher in entertainment value. Commitment as a relationship factor is the strongest variable that will result in positive behavioral response. Control mutuality and satisfaction are the next important relationship factors for purchase to happen. The test for mediation for the relationship factors between the credibility and content factors as independent variables and behavioral factors as dependent variables offers scope for further research.

Footnote 2: P = 0.10.


References 1. Schouten, A.P., Janssen, L., Verspaget, M.: Celebrity vs. influencer endorsements in advertising: the role of identification, credibility, and product-endorser fit. Int. J. Adv. 39(2), 258–281 (2019) 2. Kahle, L.R., Homer, P.M.: Physical attractiveness of the celebrity endorser: a social adaptation perspective. J. Consum. Res. 11(4), 954–961 (1985) 3. Xiao, M., Wang, R., Chan-Olmsted, S.: Factors affecting YouTube influencer marketing credibility: a heuristic-systematic model. J. Media Bus. Stud. 15(3), 188–213 (2018) 4. Sokolova, K., Kefi, H.: Instagram and YouTube bloggers promote it, why should I buy? How credibility and parasocial interaction influence purchase intentions. J. Retail. Consum. Serv. 53 (2020) 5. Stubb, C., Nyström, A.G., Colliander, J.: Influencer marketing. J. Commun. Manage. (2019) 6. Lou, C., Yuan, S.: Influencer marketing: how message value and credibility affect consumer trust of branded content on social media. J. Interact. Advertising 19(1), 58–73 (2019) 7. Ohanian, R.: Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness. J. Advertising 19(3), 39–52 (1990) 8. Kolo, C., Haumer, F.: Social media celebrities as influencers in brand communication: An empirical study on influencer content, its advertising relevance and audience expectations. J. Digital Soc. Media Mark. 6(3), 273–282 (2018) 9. Voss, K.E., Spangenberg, E.R., Grohmann, B.: Measuring the hedonic and utilitarian dimensions of consumer attitude. J. Mark. Res. 40(3), 310–320 (2003) 10. Hon, L., Grunig, J.E.: Guidelines for measuring relationships in public relations. Retrieved from Gainesville, Institute for Public Relations Research, University of Florida, FL (1999) 11. Evans, N.J., Phua, J., Lim, J., Jun, H.: Disclosing Instagram influencer advertising: the effects of disclosure language on advertising recognition, attitudes, and behavioral intent. J. Interact. Advertising 17(2), 138–149 (2017) 12. Dhanesh, G.S., Duthler, G.: Relationship management through social media influencers: effects of followers’ awareness of paid endorsement. Public Relat. Rev. 45(3), 101765 (2019)

Smart Employment System: An HR Recruiter Kajal Jewani , Anupreet Bhuyar, Anisha Kaul, Chinmay Mahale, and Trupti Kamat

Abstract The traditional HR recruitment process is long and time-consuming. The talent search process is restricted due to human limitations. The overwhelming number of candidates, geographical constraints and deception which cannot often be caught by experienced recruiters are some of the problems faced by the sector and there is an urgent need to address the concern with technical solutions. To optimize this entire process of HR interviews, we propose video analytics be used to screen candidates. A candidate’s emotion is extracted from his speech using MelFrequency Cepstral Coefficients (MFCCs) as a major classification feature for the Artificial Neural Network (ANN). Deceptive Impression Management (IM), i.e. an applicant trying to exaggerate his suitability for a job by overestimating his prowess is also taken into consideration when displaying results. Thus, an NLP approach using Linguistic Inquiry and Word Count (LIWC) and Latent Dirichlet Allocation (LDA) is used for text-based measurement of deceptive IM which may help by informing organizations to take a second, more critical review of applicants when a high level of deceptive IM is detected. Finally, the Big five personalities index: Openness, Conscientiousness, Extroversion, Agreeableness, Neuroticism (OCEAN) commonly used by many recruiters, is digitized using Convoluted Neural Networks (CNN) and a personality graph generated, giving a more comprehensive view of the candidate’s K. Jewani (B) · A. Bhuyar · A. Kaul · C. Mahale · T. Kamat Vivekananda Education Society’s Institute of Technology, Sindhi Society, Chembur 400074, Mumbai, India e-mail: [email protected] A. Bhuyar e-mail: [email protected] A. Kaul e-mail: [email protected] C. Mahale e-mail: [email protected] T. Kamat e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_75


personality and fit with the company. The results are eventually presented in the form of a detailed review.

1 Introduction Artificial Intelligence and Deep Learning are fast becoming a force to be reckoned with. Industry 4.0 is the new direction in which businesses are headed as computers communicate, inanimate objects get ‘smarter’, and decisions are often made without human involvement [1]. The process of recruitment is one such area where automation can efficiently support the system. Currently, the entire process is man-powered. Screening resumes efficiently and time-effectively is one of the biggest challenges in talent acquisition: 52% of talent acquisition leaders say the hardest part of recruitment is identifying the right candidates from a large applicant pool [2]. An interviewer can have his own inherent biases regarding a candidate’s caliber or lack thereof. Furthermore, features like micro-expressions, involuntary emotional leakage which can showcase a person’s true emotions, or Deceptive Impression Management wherein candidates project themselves to be more suitable for the job than they actually are difficult to detect by a human. Moreover, interviews that happen in person need people to meet at a centralized location. Many important pieces of information can be overlooked due to human error such as stress levels, candidate behaviour and conversational attitudes. Talent search is restricted due to human limitations. And thus, automating the process can be very helpful and can bring a lot of proficiency. A system that considers all the factors mentioned needs to be implemented so that any loopholes in the recruitment can be reduced and the system can be automated.

2 Literature Survey The paper [3] uses speech analysis to detect candidate’s stress levels. Machine learning is used to detect stress in speech, using mean energy, mean intensity and Mel-Frequency Cepstral Coefficients (MFCCs). The dataset used is the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). We can also check for a variety of emotions like anxiety, surprise, disgust to analyze the overall personality of a candidate. In paper [4], Deceptive Impression Management (IM) is often used by applicants in employment interviews to convince the interviewer. Given the limitations of both traditional self-report deceptive IM measurement and thirdparty ratings, NLP, which broadly refers to the creation of datasets from unstructured text sources, has the potential to assess raw interview content and measure deceptive IM without many of those limitations. The paper, Deep learning-based document modelling for personality detection talks about personality is a combination of multiple things- motivation, behaviour, thought method, etc. They used convolutional Neural networks to represent traits that affected someone’s suitability for a job.


Using text to extract personality traits from a stream of consciousness essays using the CNN training, five networks for the five commonly used personalities of the OCEAN model were trained. Each was a binary classifier that predicted the particular trait to be true or false. They represented each individual essay by aggregating the vectors of its sentence. Concatenated the obtained vectors with the Mairesse features, which were extracted from the texts directly at the pre-processing stage thus improving the method’s performance. The method has provided the best results so far.

3 Proposed Solution Our system attempts to solve the problems associated with the recruiting process by helping it transition into the age of Industry 4.0. An aspiring candidate can log in to the system and will be provided questions which he/she can answer in a time frame of 90 s. The recorded answers are then analyzed through three techniques (Fig. 1): 1. NLP approach to score Deceptive Impression Management using text obtained by speech-to-text from input using certain chosen hypotheses. 2. Speech emotion is classified by ANN using MFCC as a major feature set to train Artificial Neural Network. 3. NLP to identify the Big 5 personalities, i.e. Openness, Agreeableness, Conscientiousness, Extraversion, Neuroticism. At the end of the process an elaborate report is generated which has a personality graph—showing the spectrum of his/her characteristics, an emotion analytic graph which shows the candidate’s positive and negative responses and the levels at which deception is seen. This report is accessible by the recruiter who can use it to choose an employee.

4 Methodology 4.1 Speech Emotion The RAVDESS, used here following [3], is a validated multimodal database of emotional speech and song. The database is gender-balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. The speech database includes eight expressions: neutral, calm, happy, sad, angry, fearful, surprise, and disgust. We have used this dataset for training the Artificial Neural Network (ANN). In order to identify different emotions in human speech, Mel-Frequency Cepstral Coefficients (MFCCs) are chosen as the major feature for the ANN.


Fig. 1 Proposed system design

Mel-Frequency Cepstral Coefficients: The speech produced by the vocal cords, the tongue, the teeth, these are all elements that filter the sound and make it unique for every speaker. The sound is therefore determined by the shape of all these elements. This shape manifests in the envelope of the short-time power spectrum, and the MFCCs represent this envelope (Lyons 2015). These carry a lot of information in the lower spectrum specifically like age, gender and other dynamics of speech Several steps are taken in order to extract these coefficients from an audio file: the framing (a frame size of 3 s is chosen in our case), windowing, FFT (Fast Fourier Transform) in the frequency domain and then passing it through Filter Banks to finally reach the Mel-Frequency Coefficients (Fig. 2). It is possible to extract 26 (deltas) or 39 (deltas-deltas) coefficients according to the needs. The deltas give the trajectories of the coefficients and thus give information about the dynamics of the speech.
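The paper does not name the toolkit used for this step; the sketch below shows one common way of extracting the 13 MFCCs and their deltas from a 3-second clip with librosa (the file path and the mean-pooling choice are assumptions).

```python
# Illustrative MFCC extraction for a 3-second clip; "answer_clip.wav" is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("answer_clip.wav", duration=3.0)     # 3-second audio snippet
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # 13 base coefficients
delta = librosa.feature.delta(mfcc)                       # first-order deltas (26 total)
delta2 = librosa.feature.delta(mfcc, order=2)             # delta-deltas (39 total)

# One fixed-length feature vector per clip for the ANN: average over time frames
features = np.concatenate([mfcc, delta, delta2]).mean(axis=1)
print(features.shape)                                     # (39,)
```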


Fig. 2 Audio processing (from stress detection using speech analysis [3])

An ANN model is trained using these coefficients as features to predict emotions from the audio snippets of the interview, which are then reported. Data augmentation was done for more accuracy using methods like noise adding, time-shifting, changing pitch, and changing speed to create more data samples and to take into account bad network/audio connections.
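The augmentation parameters are not stated in the paper; the values in the following sketch (noise level, shift amount, pitch step, speed factor) are illustrative assumptions.

```python
# Hypothetical sketch of the four augmentation methods mentioned above.
import numpy as np
import librosa

def augment(y, sr):
    noisy = y + 0.005 * np.random.randn(len(y))                    # noise adding
    shifted = np.roll(y, int(0.1 * sr))                            # time shift by 0.1 s
    pitched = librosa.effects.pitch_shift(y=y, sr=sr, n_steps=2)   # pitch up 2 semitones
    faster = librosa.effects.time_stretch(y=y, rate=1.1)           # 10% speed change
    return [noisy, shifted, pitched, faster]

y, sr = librosa.load("answer_clip.wav", duration=3.0)              # placeholder file
print(len(augment(y, sr)), "augmented variants per clip")
```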

4.2 Deceptive Impression Management We used an NLP approach to deal with deceptive IM, wherein we perform hypothesis testing which determines if there is any deceptive behaviour. Deception is used by a candidate in a peccable way to convince the interviewer with their points. Many candidates fake their answers to sound good and impress the interviewer. Linguistic Inquiry and Word Count (LIWC) [5], is a dictionary wherein each word comes under some word category. LIWC helps us define different types of words or patterns used by deceivers when they are lying. Hypothesis testing is done on these word categories and patterns to know how they influence or signify deception. It is a statistical test done to determine if our assumed hypothesis (nothing but a simple statement) is correct or not. We have noted some useful hypothesis [4] as follows: Hypothesis 1 Total number of words used by the interviewee shows that longer length and more words will be positively related to deceptive IM [4]. Hypothesis 2 Negative word-use will predict self-reported scores of deceptive IM such that increased negative word-use will be positively related to self-reported deceptive IM [4].


Hypothesis 3 Positive word-use will predict self-reported scores of deceptive IM such that increased positive word-use will be negatively related to self-reported deceptive IM [4]. Negative words show less confidence from the interviewee’s side whereas positive words indicate the person is confident and true about what he or she is speaking. Hypothesis 4 A dictionary-based measure of pronoun usage will predict self-reported scores of deceptive IM such that other references pronouns (second, third-person pronouns) will collectively be related to deceptive IM [4]. Hypothesis 5 A dictionary-based measure of pronoun usage will predict self-reported scores of deceptive IM such that self-reference (first-person singular) pronouns will collectively be related to less deceptive IM [4]. Self-referencing indicates good ownership of the event you’re speaking about. Using more other references shows less confidence or avoidance of ownership of the event. Non-immediacy also shows that there might be deception because a truthful person who describes an event would be very immediate and subtle about his or her connection to the experience. We use a confidence interval based on the z-statistic. When we give a sample of values of the word count, first-person pronouns, etc., an interval or range of values is generated. This is the expected range of a value that closely relates to the sample given [6]. If our input value lies in between this we can say that there is a similarity between the input value and the sample. More deviation from the interval will indicate more dissimilarity of value with the given sample. The formula [6] used to calculate the interval, (µ) is as given: µ = M ± Z (sM )

(1)

where M is the sample mean, Z is the z-value for the required confidence level (95%, 99%, etc.), and sM is the standard error, sM = √(s²/n). No single test result is sufficient to determine the deception level; a combined analysis is required [4, 5].
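A small sketch of Eq. (1) is given below; the sample of answers is placeholder data, and scipy is used only to look up the z-value for the chosen confidence level.

```python
# Illustrative z-based confidence interval for one cue (e.g., word count), per Eq. (1).
import numpy as np
from scipy import stats

def z_interval(sample, confidence=0.95):
    sample = np.asarray(sample, dtype=float)
    m = sample.mean()
    s_m = sample.std(ddof=1) / np.sqrt(len(sample))    # standard error sM = sqrt(s^2 / n)
    z = stats.norm.ppf(1 - (1 - confidence) / 2)       # e.g. 1.96 for a 95% interval
    return m - z * s_m, m + z * s_m

word_counts = [123, 69, 87, 120, 95, 110]              # placeholder sample of answers
low, high = z_interval(word_counts)
print(f"expected range: {low:.1f} .. {high:.1f}")      # values far outside flag possible deception
```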

4.3 Five Factor Model for Personality Classification We used the Big 5 personality markers to identify the traits of a candidate and his suitability to the job. The Big 5 personality or Five Factor Model (FFM) developed


Table 1 The five factor model as can be used to gauge employability
Personality dimension | Features
Openness (O) | Insightful, imaginative, open to new experiences, learn new things
Conscientiousness (C) | Methodical, organized, thorough
Agreeableness (A) | Considerate, kind, sympathetic, team player
Extraversion (E) | Energetic, talkative, action-oriented, enthusiastic
Neuroticism (N) | Low scores indicate calmness, emotional stability

Fig. 3 Personality classifier workflow

by Ernest Tupes and Raymond Christal is often used by Human Resources to place employees. We have tried to automate the process using Convoluted Neural Networks (CNN) (Table 1; Fig. 3). There are two corpora used to train the model: Essays by 2479 psychology students who wrote whatever came to their mind for 20 min and then fill out forms analyzing their personalities. The second corpus included data from Myers–Briggs Personality Type (MBTI) with 8600 columns which had the type code of a person and 50 things they had said (It was seen that MBTI and FFM model had similarity in parameters like Intuition-Openness, Agreeableness-Feeling, Conscientiousness-Perception, Introversion-Extraversion and could be used interchangeably). Following [7], pre-processing, document-level feature extraction using Mairesse baseline feature set like word count, Filtering, word-level feature extraction and fivelevel classification into the five categories was done. A graph for each candidate was produced which was displayed in the form of a report to the employer to take a final call on. A convoluted neural network with seven hidden layers (which include unigrams, bigrams and trigrams) was used to perform the classification. Various classifiers like Naive Bayes, Random Forest and Support Vector Machine (SVM) were used for classification between which MLP showed the best results.


5 Results MFCCs are very good emotion detection features and are sufficient to successfully classify emotions. The best way to use the MFCCs is to take the first 13 significant values, and the audio length to be used as a feature is 3 s, which was found by plotting audio files of different lengths (Table 2). The model trained using the data augmentation techniques was tested on the split male and female actors' emotion datasets (Figs. 4 and 5). As we can see, the confusion matrices of the male and female models are different. MALE: Angry and Happy are the dominant predicted classes in the male model, but they are unlikely to mix up. FEMALE: Sad and Happy are the dominant predicted classes in the female model, and Angry and Happy are very likely to mix up. Also, mixing multiple data augmentation techniques such as Noise Adding + Shifting or Pitch Tuning + Noise Adding can make the validation accuracy better, enhancing the performance of the trained model. Table 2 Labels for each emotion

Emotion | Angry | Calm | Fearful | Happy | Sad
Label | 1 | 2 | 3 | 4 | 5

Fig. 4 A confusion matrix of female emotion


Fig. 5 A confusion matrix of male emotion

Table 3 Result analysis for deception levels
WC, FPS, OR, Pos, Neg | Deception shown for | Was the answer deceptive ideally? | Result
123,9,0,17,2 | None | No | Successful
69,5,2,9,4 | WC, FPS | Yes | Successful
87,7,3,8,2 | Pos | Yes | Unsuccessful
120,10,3,22,6 | None | No | Successful
Abbreviations WC word count, FPS first-person singular, OR other references, Pos/Neg positive/negative word usage
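For illustration, the sketch below shows how the WC/FPS/OR/Pos/Neg feature vector of Table 3 could be computed from a transcript; the tiny word lists stand in for the LIWC dictionary categories and are not the actual LIWC word lists.

```python
# Illustrative only: building the (WC, FPS, OR, Pos, Neg) vector for one answer.
import re

FPS = {"i", "me", "my", "mine", "myself"}                  # first-person singular pronouns
OTHER = {"you", "he", "she", "they", "them", "we", "us"}   # other-reference pronouns
POS = {"confident", "achieved", "good", "great", "success"}
NEG = {"failed", "problem", "bad", "never", "difficult"}

def deception_features(answer):
    tokens = re.findall(r"[a-z']+", answer.lower())
    count = lambda vocab: sum(t in vocab for t in tokens)
    return [len(tokens), count(FPS), count(OTHER), count(POS), count(NEG)]

print(deception_features("I achieved good results because I never gave up."))
```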

5.1 NLP Module (Hypothesis Testing) The system results show that deception is mainly studied from the first two parameters which are Word Count and First-Person Singular pronouns. These are more important as compared to others. Positive and Negative word usage are related to EQ more (Table 3).

5.2 FFM Model For the personality classifier, the best results were obtained using a convolutional Neural Network with Mairesse features out of which, different classifiers showed results with varied accuracies but the best of the lot was the Multi-level Perceptron Classifier which worked with the underlying neural system to give good results


for every personality dimension [7]. Moreover, diversifying the training data by collaborating with Myers–Briggs Personality datasets increased the accuracy of the classifier. The highest accuracy was seen for Openness (80.39%). Also, since there was no parallel for the parameter of Neuroticism in the MBTI classification slightly lower accuracy is seen in that personality dimension (61.74%).

6 Conclusion This paper tried to find a more efficient way to automate the highly tedious and manually intensive job of a Human Resources Interviewer. The emotion of the speaker as he answers a question is noted to find his natural response to them using MFCC coefficients. The model gave results with an accuracy of 93% using data augmentation techniques along with the coefficients. The speech is also simultaneously converted to text and is psychoanalyzed into five Big personalities: OCEAN with Classification methods like Logistic Regression, Support Vector Machine out of which MLP gave the more accurate results. Finally, the levels of deception are also recorded using five major hypotheses following which z-statistic is used to calculate confidence intervals to find whether the particular answer has high levels of deception. Working with a more specific dataset may better the confidence numbers

7 Future Scope The project can be further developed by using the images obtained from the video and analyzing them to gain information about the candidate from his/her body language. It can also be used to check the micro-expressions (expressions made at 1/25th of a second, the immediate true reaction as has been mentioned in various studies) which have otherwise gone out of the purview of recruiters due to persistence of vision. The resume of a candidate can also be summarized and presented to a recruiter using natural language processing and relevant information tagged so as to further ease the process of the recruiter providing him all the information to evaluate a potential employee at one location.

References 1. Bernard, M.: (2018) What is industry 4.0? Here’s a super easy explanation for anyone, Available at: https://www.forbes.com/sites/bernardmarr/2018/09/02/what-is-industry-4-0-heres-a-supereasy-explanation-for-anyone/#74562a469788. Accessed 12 Apr 2020 2. Min, J.A.: (2016) How artificial intelligence is changing talent acquisition. Available at: https://www.tlnt.com/how-artificial-intelligence-is-changing-talent-acquisition/. Accessed 12 Apr 2020


3. Tomba, K., Dumoulin, J., Mugellini, E., et al.: Stress detection through speech analysis. In: Proceedings of the 15th International Joint Conference on e-Business and Telecommunications. https://doi.org/10.5220/0006855803940398. (2018) 4. Auer, E.M.L.: Detecting deceptive impression management behaviors in interviews using natural language processing. Master of Science (MS), thesis, Psychology, Old Dominion University, https://doi.org/10.25777/yx69-dy97. (2018) 5. Newman, M.L., Pennebaker, J.W., Berry, D.S., Richards, J.M.: Lying words: predicting deception from linguistic styles. Pers. Soc. Psychol. Bull. 29, 665–675 (2003). https://doi.org/10.1177/014 6167203029005010 6. Sullivan, L.: Confidence intervals. Boston University School of Public Health. http://sphweb. bumc.bu.edu/otlt/MPHModules/BS/BS704_Confidence_Intervals/BS704_Confidence_Inte rvals_print.html 7. Majumder, N., Poria, S., Gelbukh, A., Cambria, E.: Deep learning-based document modeling for personality detection from text. IEEE Intell. Syst. 32, 74–79 (2017). https://doi.org/10.1109/ mis.2017.23

Alzheimer’s Disease Prediction Using Fastai Chiramel Riya Francis, Unik Lokhande, Prabhjyot Kaur Bamrah, and Arlene D’costa

Abstract Alzheimer’s disease is a rapidly progressive neurodegenerative disease that can be diagnosed on the basis of numerous criteria such as assessing memory impairment, thinking skills, judging functional abilities, and identifying behavioral changes. Prediction of this disease at an early stage will help take better preventive measures, slowing down the degeneration process. However, the above criteria are not sufficient enough to accurately predict the presence of Alzheimer’s disease which further effects the treatment of the disease leading to hazardous repercussions. Through this paper, we present a system that advances in diagnosis using detection and prediction of Alzheimer’s disease and dementia. Although Alzheimer’s disease occurs in a clinical set of stages, this system will classify the images into two prevalent classes Alzheimer’s disease and non-Alzheimer’s disease. The above system can be further integrated into hospitals for efficient diagnosis of patients, thus facilitating the increase in life expectancy of an Alzheimer’s patient. Apart from predicting the MRI images, the system will be easier to use and integrate with the current system. The model makes use of MRI images from OASIS database which have been transformed and normalized using functions from Fastai library. The model which is developed using transfer learning techniques uses DenseNet algorithm for training data categorizing prevalent data and performing prediction with higher accuracy.

C. R. Francis (B) · U. Lokhande · P. K. Bamrah · A. D'costa
Department of Information Technology, Fr. Conceicao Rodrigues College of Engineering, University of Mumbai, Mumbai, India
e-mail: [email protected]
U. Lokhande e-mail: [email protected]
P. K. Bamrah e-mail: [email protected]
A. D'costa e-mail: [email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
T. Senjyu et al. (eds.), Information and Communication Technology for Intelligent Systems, Smart Innovation, Systems and Technologies 195, https://doi.org/10.1007/978-981-15-7078-0_76


1 Introduction

Alzheimer's disease is a neurodegenerative disorder, the most common form of dementia, and a significant healthcare concern in the twenty-first century. It is estimated that around 5.5 million people above 65 years of age are living with Alzheimer's disease in the USA, where it is the sixth leading cause of death. In 2018, the global expense of treating Alzheimer's disease, including medical costs, social services, and income loss to the families of patients, was over $277 billion, affecting the national economy and burdening the US healthcare system. Alzheimer's disease is therefore a significant personal, medical, and social issue. A large number of Alzheimer's disease patients suffer from aspiration pneumonia, in which food moves involuntarily into the windpipe instead of the esophagus because of the degeneration of essential neurons, gradually advancing toward fatal pneumonia. Although medical reports indicate multiple organ failure, the primary cause of fatality is late or absent detection of Alzheimer's disease. Even when the disease is correctly diagnosed, tracking its progression is expensive over time.

Treatment is believed to be most effective if it is performed before significant downstream damage occurs, that is, before Alzheimer's disease is clinically diagnosed, at the earlier stage of mild cognitive impairment (MCI), also known as the pre-symptomatic stage. Early diagnosis of Alzheimer's disease requires a comprehensive medical assessment, including a detailed review of medical and family history as well as feedback from a family member or other individuals. Apart from the feedback obtained from patients, rigorous testing techniques and strategies are also applied: simple preliminary mental-state testing including short memory tests, a physical and neurological examination, laboratory tests (such as blood tests), and brain imaging such as MRI and PET scans to identify tumors. A physician diagnoses Alzheimer's disease by aggregating the results of these tests and reports, and this requires skill and experience because the symptoms vary among patients.

Common pattern analysis methods as well as some clustering algorithms have been used and appear promising for the early detection of Alzheimer's disease. Applying these machine learning algorithms requires suitable architectural design or preprocessing steps. Classification with machine learning often requires the following steps: extraction of features, selection of features, reduction of dimensionality, and selection of feature-based classification algorithms. This requires expertise and several stages of optimization, which can prove time consuming and inaccurate in complex situations [1]. To overcome these difficulties, deep learning has attracted a growing area of machine learning research that uses raw neuroimaging data, such as MRI images and PET scans, to generate features through a learning approach called progressive learning. This paper focuses on detecting and accurately predicting Alzheimer's disease.


2 Literature Survey

According to [2], pattern classifiers built on longitudinal data yield better performance than those built on cross-sectional data, which is why recurrent neural networks are built on longitudinal data. Multivariate functional principal component (MFPC) scores were used to represent longitudinal markers so that missing or irregular data for the samples could be handled adequately. Deep learning techniques built on recurrent neural networks with long short-term memory (LSTM) are well suited to sequence modeling tasks such as machine translation and functional MRI modeling; RNNs therefore give better results for characterizing longitudinal data. Cognitive functions of the human brain span tasks such as memorizing as well as activities like running and cycling, and various examinations are conducted to study the mental abilities of symptomatic patients over time; the recorded measures are known as longitudinal cognitive measures. The deep learning model used in that paper was an LSTM auto-encoder [3] which learns appropriate features from these longitudinal cognitive measures in order to predict the progression of MCI patients to Alzheimer's disease [4]. The encoded data was combined with baseline imaging data as features to build a prediction model, and the auto-encoder was implemented using TensorFlow [2]. The model was evaluated with concordance (C-index) scores, with patient conversion measured at 6- and 12-month horizons. The model built purely on baseline cognitive measures yielded a C-index of 0.848, while the model with the LSTM auto-encoder yielded C-index values of 0.898 and 0.90 at 6 and 12 months, respectively. However, this technique requires a one-year follow-up to produce accurate results and therefore cannot give an immediate prediction, which is unfavorable.

The performance of a convolutional neural network can be improved by adding a learning block that captures spatial correlations. In [5], the DenseNet architecture was modified to propose a multiple feature reweight DenseNet architecture. A squeeze-and-excitation module (SEM) was introduced into DenseNet through the channel feature reweight DenseNet (CFR-DenseNet). To connect the interdependencies between the features of different convolutional layers, a double squeeze-and-excitation module (DSEM) was introduced and the inter-layer feature reweight DenseNet (ILFR-DenseNet) was constructed. The multiple feature reweight DenseNet (MFR-DenseNet) was then built by combining CFR-DenseNet and ILFR-DenseNet [5]. Experiments were carried out to check the effectiveness of all three types of DenseNet mentioned above; the test error rates obtained on the two datasets of different sizes were 3.57% and 18.27%, respectively. SEM can be added to DenseNet after each convolution layer so that the outputs of all preceding layers are concatenated together; it reduces structural differences between single layers and multiple layers, thereby modeling channel correlations implicitly. Thus, MFR-DenseNet turned out to be the most


effective of the three in the experiments and can be introduced into various projects that feature DenseNet.

According to [6], data samples were collected from the Alzheimer's Disease Neuroimaging Initiative and merged with data obtained from smaller hospitals. The heterogeneity of Alzheimer's disease datasets across these sources was explored, and it was concluded that functional MRI scans from different sources have different sample distributions, resulting in different features in the feature space [6]. The technique involved extracting weighted connections between different features, including functional regions, in a way similar to brain network modeling; the second step was feature selection, and the selected features were then distributed into their respective feature spaces. Principal component analysis (PCA) was replaced with singular value decomposition (SVD), which provides another way of factorizing a matrix into singular values and singular vectors [5]. SVD gave more stable results than PCA, in which values close to zero would be lost. The paper explored data samples not only from medical institutes but also from datasets available online, which improved the accuracy to around 80%, whereas data from the medical institutes alone gave an accuracy of around 50%.
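For illustration only, the following minimal NumPy sketch shows the SVD-based factorization idea just described; the matrix X, its size, and the number of retained components are hypothetical placeholders and not the data or settings used in [6].

```python
import numpy as np

# Hypothetical (samples x features) matrix standing in for extracted connectivity features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X_centered = X - X.mean(axis=0)          # center features before factorization

# SVD factorizes the matrix into singular values and singular vectors,
# avoiding the loss of near-zero values that can occur when PCA is done
# via an explicit covariance eigendecomposition.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5                                    # illustrative number of retained components
X_reduced = U[:, :k] * S[:k]             # projection onto the top-k singular directions
print(X_reduced.shape)                   # (100, 5)
```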

3 Dataset

For 2D MRI images, the OASIS dataset is widely used, and the 2D MRI scans used to train the prediction model in the proposed system were obtained from this source. The objective of the Open Access Series of Imaging Studies (OASIS) is to freely provide neuroimaging data to the scientific and academic community. OASIS-3 is the most recent release in the series of datasets published by OASIS; the intention behind this compilation and free distribution of multi-modal data is to promote findings in neuroscience. Previously released OASIS data has been used for the analysis and enhancement of segmentation algorithms [7]. OASIS-3 provides longitudinal data for examining normal aging as well as Alzheimer's disease. Around 1080 samples of patients from different age groups are used for experimentation and analysis in the proposed system.

4 Proposed System

4.1 Algorithms

Transfer learning is a technique in deep learning and artificial intelligence in which the knowledge gained while solving one problem is stored and applied to other, similar problems [8].


Fig. 1 Proposed System

The system explained in this paper uses transfer learning algorithms. In the earlier stages of system development, ResNet152 was used to train the model. However, due to its lower accuracy, the long time required to train the model, and other inadequacies, it was replaced with a better transfer learning algorithm [9], DenseNet201 (Fig. 1).

ResNet152: ResNet is a neural network that serves as a backbone for a large number of computer vision tasks; its 152-layer variant learns residual representation functions rather than learning the signal representation directly [10]. In deep neural networks, information flows from the first layer to the last layer, and residual connections add identity mappings through which information can flow across the whole network. Let the desired underlying mapping be H(x) and the residual function be F(x). If the identity mapping is optimal, the residual is simply driven to zero (F(x) = 0), which is easier than fitting H(x) = x directly with a stack of nonlinear CNN layers. Input data can thus be passed from one layer to the next without further modification, enabling very deep networks. Very deep plain networks, however, suffer from vanishing/exploding gradients, which can lead to information loss; to address this, a skip/shortcut connection that adds the input x to F(x) is used, as shown in Fig. 2. The formula for the residual mapping is:

Fig. 2 Residual mapping

F(x) = H(x) − x    (1)

Even if the gradient vanishes, the original data can be retained because the value x from the previous layer is carried forward. There are two types of residual connections:

1. The identity shortcut can be used directly if the input and output have the same dimensions:

y = F(x, {W_i}) + x    (2)

2. If the dimensions change: (A) the identity shortcut pads extra zero entries to match the increased dimension, or (B) the dimensions are matched by a projection (done by a 1 × 1 convolution) using the following formula:

y = F(x, {W_i}) + W_s x    (3)
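For illustration, a minimal PyTorch sketch of this residual pattern is given below; the two-convolution residual branch, channel counts, and 1 × 1 projection shortcut are illustrative assumptions and not the exact blocks of the pretrained ResNet152 used in the system.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: y = F(x) + x, with a 1x1 projection (W_s)
    applied to the shortcut when the input and output dimensions differ (Eq. 3)."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Identity shortcut (Eq. 2) when shapes match, 1x1 projection otherwise
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        f = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))  # residual branch F(x)
        return torch.relu(f + self.shortcut(x))                        # y = F(x) + x
```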

The advantages of ResNet are that it reduces the effect of the vanishing gradient problem and achieves high accuracy in image classification. However, it also has disadvantages: a deeper ResNet can require weeks of training, which makes it impractical for some real-world applications.

DenseNet201: DenseNet-201 is a 201-layer deep convolutional neural network [11] pretrained on images from the ImageNet database. Because the model is pretrained, it has already learned feature representations from a wide spectrum of images. The network takes an image input size of 224 × 224 [11]. DenseNet exhibits dense connectivity: every layer receives supplementary inputs from all preceding layers and passes its own feature maps to all subsequent layers. Considering that


Fig. 3 Multiple dense blocks

every layer receives feature maps from all preceding layers, the network can be thinner and more compact and has higher memory efficiency. The growth rate k reflects the additional number of channels contributed by each layer. Each composition layer in DenseNet applies pre-activation batch normalization and a rectified linear unit followed by a 3 × 3 convolution that produces k output feature maps. Model complexity and size can be further reduced by applying batch normalization-ReLU-1 × 1 convolutions before the batch normalization-ReLU-3 × 3 convolutions (Fig. 3). 1 × 1 convolutions together with 2 × 2 average pooling are used as transition layers between two neighboring dense blocks. Feature maps within a dense block share the same size for easy concatenation, and global average pooling followed by a Softmax classifier is applied after the final dense block. Using DenseNet in the proposed system provided the following advantages: the error signal is efficiently propagated back to the early layers because of the direct connections, giving a form of deep supervision in which the early layers are directly supervised by the final classification layer; since every layer receives the outputs of all preceding layers as input, it learns a greater variety of features; and DenseNet combines features of multiple levels of complexity, giving smoother decision boundaries.
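A minimal PyTorch sketch of the dense-layer and transition-layer pattern just described follows; the growth rate, bottleneck width, and channel counts are illustrative assumptions, not the exact configuration of the pretrained DenseNet-201.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One DenseNet composition layer: BN-ReLU-1x1 conv (bottleneck)
    followed by BN-ReLU-3x3 conv, producing k (growth-rate) new channels."""
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        inter = 4 * growth_rate                      # bottleneck width (illustrative)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.conv1 = nn.Conv2d(in_channels, inter, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(inter)
        self.conv2 = nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        # Dense connectivity: concatenate the k new feature maps with all previous ones
        return torch.cat([x, out], dim=1)

class Transition(nn.Module):
    """Transition layer between dense blocks: 1x1 conv plus 2x2 average pooling."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.conv(torch.relu(self.bn(x))))
```

Stacking several such layers forms a dense block like those sketched in Fig. 3; DenseNet-201 uses a much deeper arrangement of these blocks.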

4.2 Explanation

The system consists of a UI that receives MRI images as input and predicts the possibility of Alzheimer's disease from the image provided. 2D MRI images are given as input to the prediction system. The system uses the vision module of the Fastai library, which provides functions for defining the dataset, handling image objects, and performing transformations on them [12]. The dataset of 2D images is converted to a folder structure resembling the ImageNet dataset, with class-named folders that can be loaded using the ImageDataBunch.from_folder function, as Fastai uses a transfer learning approach for training [13]. Image transformation is done using tfms with a specific


target size for the pictures. The 2D MRI image dataset is normalized using the predefined ImageNet statistics. The transformed and normalized dataset thus comprises two classes, Alzheimer's and non-Alzheimer's, and is trained with the pretrained deep learning model DenseNet201 for 20 epochs. The cnn_learner method builds a pretrained model from an architecture suited to the dataset requirements [2]. The trained model is then passed to a classification interpretation object from Fastai's learner module to report, for each sample, the prediction, the actual class, the loss, and the probability of the actual class.
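Assuming the Fastai v1 vision API described above, a minimal training sketch might look as follows; the dataset path, folder and file names, batch size, and the use of fit_one_cycle are illustrative assumptions, not details taken from the original implementation.

```python
from fastai.vision import (ImageDataBunch, ClassificationInterpretation,
                           cnn_learner, get_transforms, imagenet_stats,
                           load_learner, models, open_image)
from fastai.metrics import accuracy

# Assumed folder layout: data/train/<class>/ and data/valid/<class>/,
# with class folders named "Alzheimer" and "NonAlzheimer".
data = ImageDataBunch.from_folder(
    'data',                       # hypothetical dataset root
    ds_tfms=get_transforms(),     # standard Fastai image transforms (tfms)
    size=224,                     # DenseNet-201 input size
    bs=32,                        # illustrative batch size
).normalize(imagenet_stats)       # normalize with ImageNet statistics

learn = cnn_learner(data, models.densenet201, metrics=accuracy)
learn.fit_one_cycle(20)           # the paper trains for 20 epochs

# Statistical analysis of the trained model
interp = ClassificationInterpretation.from_learner(learn)
print(interp.confusion_matrix())

# Export the model, then reload it (e.g., locally) and classify a single scan
learn.export('alz_densenet201.pkl')
predictor = load_learner('data', 'alz_densenet201.pkl')
pred_class, pred_idx, probs = predictor.predict(open_image('sample_mri.png'))
print(pred_class, probs)
```

The exported file can then be loaded on the local Anaconda setup described in Sect. 5 to serve single-image predictions.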

5 Experimental Setup and Result

Experimental Setup: Training of the deep learning model is done using Google Colaboratory. The trained model is saved to a file and then loaded locally through the Anaconda command prompt in order to perform prediction of Alzheimer's disease.

Hardware Requirements: Google Colab provides two Intel® Xeon® processors @2.30 GHz and a Tesla K80 GPU with 12 GB RAM, which is essential for deep learning models [14]. Colab has a cache size of 46,080 KB; the main network requirement for using Colab is a high-speed Internet connection. The local setup uses an Intel® Core™ processor @2.60 GHz, and the minimum requirements for running the system are 4 GB of RAM and a 64-bit OS.

Software Requirements: Anaconda Navigator is used to run the prediction module locally. The programming language is Python and the operating system is Windows.

Result: The accuracy obtained for the model trained with ResNet152 was 73%, whereas training the dataset with DenseNet201 yielded an accuracy of 93%.

Confusion Matrix: Classification results can be represented with a matrix that describes the performance of the classification model by counting the true and false outcomes. The possible outcomes are defined as follows. True Positive (TP): images successfully predicted as Alzheimer's disease positive by the model. False Positive (FP): non-Alzheimer's images falsely predicted as Alzheimer's positive by the model. True Negative (TN): non-Alzheimer's images successfully predicted as non-Alzheimer's. False Negative (FN): Alzheimer's disease positive images falsely predicted as non-Alzheimer's (Table 1).

Table 1 Confusion matrix

                          Predicted Alzheimer's    Predicted non-Alzheimer's
Actual Alzheimer's        372                      68
Actual non-Alzheimer's    14                       626

Classification Rate (CR) or Accuracy: It is defined as the ratio of correctly classified instances to the total number of instances.

Accuracy = (TP + TN) / (TP + TN + FP + FN) = (372 + 626) / (372 + 68 + 14 + 626) = 0.9359    (4)

Detection Rate (DR): It is the ratio of the number of Alzheimer's positive images predicted correctly to the total number of Alzheimer's positive images.

DR = TP / (TP + FN) = 372 / (372 + 68) = 0.8485    (5)

False Positive Rate (FPR): It is defined as the ratio of non-Alzheimer's images predicted positive to the total number of non-Alzheimer's images available in the dataset.

FPR = FP / (FP + TN) = 14 / (14 + 626) = 0.021    (6)

Precision (PR): It is defined as the fraction of MRI images predicted as Alzheimer's disease positive that are actually positive.

PR = TP / (FP + TP) = 372 / (14 + 372) = 0.963    (7)

False Acceptance Rate (FAR): It is defined as the ratio of the number of falsely predicted Alzheimer's disease positive images to the total number of positive cases.

FAR = FN / (TP + FP) = 68 / (372 + 14) = 0.176    (8)

False Rejection Rate (FRR): It is defined as the ratio of the number of falsely predicted Alzheimer's disease MRI images to the total number of negative cases.

FRR = FP / (TN + FN) = 14 / (68 + 626) = 0.020    (9)

Recall: This is the percentage of Alzheimer's disease images correctly identified by the classifier; it is equivalent to the detection rate (DR), and a high recall value is important for the classifier.


F-measure (FM): For a given threshold, FM is the harmonic mean of precision and recall at that threshold.

FM = 2 / (1/PR + 1/Recall) = 2 / (1/0.963 + 1/0.8485) = 0.923    (10)
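As a quick worked check, the ratios in Eqs. (4)-(10) can be re-derived directly from the counts in Table 1 with a few lines of Python; the values are recomputed from the raw counts and may not match the rounded figures above exactly.

```python
# Counts taken from Table 1
tp, fn = 372, 68      # actual Alzheimer's: correctly / incorrectly classified
fp, tn = 14, 626      # actual non-Alzheimer's: incorrectly / correctly classified

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # Eq. (4)
detection = tp / (tp + fn)                    # Eq. (5), detection rate / recall
fpr       = fp / (fp + tn)                    # Eq. (6)
precision = tp / (tp + fp)                    # Eq. (7)
far       = fn / (tp + fp)                    # Eq. (8), as defined above
frr       = fp / (tn + fn)                    # Eq. (9), as defined above
f_measure = 2 / (1 / precision + 1 / detection)  # Eq. (10)

print(f"accuracy={accuracy:.4f} detection={detection:.4f} fpr={fpr:.4f} "
      f"precision={precision:.4f} far={far:.4f} frr={frr:.4f} f1={f_measure:.4f}")
```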

6 Future Scope and Application

Given the restricted availability of neuroimaging data, we presume that additional training samples with lower variance are required to further extract the synergy between the various biological markers. With access to relevant datasets, there is also scope for multi-class classification, determining disease severity and accordingly providing users with medical assistance. To enhance the system's usability, 3D neural images can be used, and disease prediction can be improved with a game that evaluates the user's cognitive ability.

7 Conclusion

In this research, we proposed a deep learning approach for Alzheimer's disease prediction. Traditional systems for Alzheimer's mainly focus on disease detection and are based on standard machine learning algorithms such as regression or SVM-based methods. Compared with current detection methods, which mainly depend on such machine learning techniques, our proposed method uses deep learning to predict the disease with an accuracy of 93.59%. An improvement in binary classification performance was observed with the use of deep learning models, as they automate feature extraction. Furthermore, the use of pretrained models, including ResNet152 and DenseNet201, helped obtain better accuracy and sped up neural network training. This study shows potential for developing similar image-classification-based models for other diseases.

References

1. Maqsood, M., Nazir, F., Khan, U., Aadil, F., Jamal, H., Mehmood, I., Song, O.: Transfer learning assisted classification and detection of Alzheimer's disease stages using 3D MRI scans. Sensors 19(11), 2645 (2019)
2. Li, H., Fan, Y.: Early prediction of Alzheimer's disease dementia based on baseline hippocampal MRI and 1-year follow-up cognitive measures using deep recurrent neural networks. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) (2019)


3. Hong, X., Lin, R., Yang, C., Zeng, N., Cai, C., Gou, J., Yang, J.: Predicting Alzheimer's disease using LSTM. IEEE Access 7, 80893–80901 (2019)
4. Vincent, P., et al.: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11, 3371–3408 (2010)
5. Zhang, K., Guo, Y., Wang, X., Yuan, J., Ding, Q.: Multiple feature reweight DenseNet for image classification. IEEE Access 7, 9872–9880 (2019)
6. Li, F., Tran, L., Thung, K., Ji, S., Shen, D., Li, J.: A robust deep model for improved classification of AD/MCI patients. IEEE J. Biomed. Health Inf. 19(5), 1610–1616 (2015)
7. Oasis-brains.org: OASIS Brains: Open Access Series of Imaging Studies. Available at: http://oasis-brains.org/. Accessed 29 Oct 2019
8. Medium: A comprehensive hands-on guide to transfer learning. Available at: