Lecture Notes in Networks and Systems Volume 756
Series Editor Janusz Kacprzyk , Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas— UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).
Anurag Mishra · Deepak Gupta · Girija Chetty Editors
Advances in IoT and Security with Computational Intelligence Proceedings of ICAISA 2023, Volume 2
Editors Anurag Mishra Department of Electronics Deen Dayal Upadhyaya College University of Delhi New Delhi, India
Deepak Gupta Department of Computer Science and Engineering MNNIT Allahabad Prayagraj, India
Girija Chetty Faculty of Science and Technology University of Canberra Bruce, ACT, Australia
ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-99-5087-4 ISBN 978-981-99-5088-1 (eBook) https://doi.org/10.1007/978-981-99-5088-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
With the advancement of technologies in cyber-physical systems, the Internet of Things, cloud computing and big data, challenges that were traditionally solved by optimizing organizations, business processes and information systems at the whole-enterprise level increasingly require this perspective to be extended to the societal level; without such an extension, technological advances cannot be sustained. In particular, the innovation that originated in Industry 4.0 has evolved into Society 5.0, which is expected to play an important role in bringing about change not only in enterprises but also in society at large. How to design, develop and manage enterprise and societal architectures and systems using information technology therefore receives more attention than ever before. With this objective in mind, the International Conference on Advances in IoT, Security with AI (ICAISA-2023) was organized by Deen Dayal Upadhyaya College, University of Delhi, New Delhi, India, in collaboration with the University of Canberra, Canberra, Australia, and NIT Arunachal Pradesh, Itanagar, Arunachal Pradesh, India, during March 24–25, 2023. We are thankful to our contributors, participants and sponsors—STPI Chennai, REC Limited and Power Finance Corporation Limited, who supported this event wholeheartedly. The conference was organized around thirteen parallel technical sessions, besides the inaugural and valedictory sessions, with tracks chosen to serve the electronics, IT and software industries in particular. Several presentations by Indian and international industry specialists were delivered in these sessions, with a view to establishing a connection between academia and industry so that both can draw fruitful ideas from each other. We are also thankful to all the overseas industry and academic experts who joined us either in person or online.
We are particularly grateful to Dr. Rajendra Pratap Gupta, Mr. Animesh Mishra, Mr. M. S. Bala and Prof. Balram Pani who blessed us in the inaugural session. We are also thankful to Mr. N. K. Goyal for his presence in the valedictory session. We are extremely grateful to Springer Nature, especially Dr. Aninda Bose who agreed to publish two volumes of conference proceedings in the prestigious series of Lecture Notes in Networks and Systems.

New Delhi, India    Anurag Mishra
Prayagraj, India    Deepak Gupta
Bruce, Australia    Girija Chetty
Contents
Comparative Study of Metaheuristic Algorithms for Scheduling in Cloud Computing Based on QoS Parameters (Jyoti Chauhan and Taj Alam) – 1
Impact of Spatial Distribution of Repeated Samples on the Geometry of Hyperplanes (Reema Lalit and Kapil) – 15
IoT-Based Smart Farming for Sustainable Agriculture (Geetan Manchanda, Bhumika Papnai, Aadi Lochab, and Shikha Badhani) – 27
ELM-Based Liver Disease Prediction Model (Charu Agarwal, Geetika Singh, and Anurag Mishra) – 39
Intercompatibility of IoT Devices Using Matter: Next-Generation IoT Connectivity Protocol (Sharat Singh) – 49
Role of Node Centrality for Information Dissemination in Delhi Metro Network (Kirti Jain, Harsh Bamotra, Sakshi Garg, Sharanjit Kaur, and Gunjan Rani) – 59
Biometric Iris Recognition System's Software and Hardware Implementation Using LabVIEW Tool (Rajesh Maharudra Patil, B. G. Nagaraja, M. R. Prasad, T. C. Manjunath, and Ravi Rayappa) – 71
A Unique Method of Detection of Edges and Circles of Multiple Objects in Imaging Scenarios Using Line Descriptor Concepts (Rajesh Maharudra Patil, B. G. Nagaraja, M. R. Prasad, T. C. Manjunath, and Ravi Rayappa) – 85
Robotic Vision: Simultaneous Localization And Mapping (SLAM) and Object Recognition (Soham Pendkar and Pratibha Shingare) – 97
Optimum Value of Cyclic Prefix (CP) to Reduce Bit Error Rate (BER) in OFDM (Mahesh Gawande and Yogita Kapse) – 109
Optimum Sizing of Solar/Wind/Battery Storage in Hybrid Energy System Using Improved Particle Swarm Optimization and Firefly Algorithm (Gauri M. Karve, Mangesh S. Thakare, and Geetanjali A. Vaidya) – 121
Fuzzy Based MPPT Control of Multiport Boost Converter for Solar Based Electric Vehicle (Vishnukant Gore and Prabhakar Holambe) – 137
Image Classification Model Based on Machine Learning Using GAN and CNN Algorithm (Ch. Bhavya Sri, Sudeshna Sani, K. Naga Bavana, and Syed. Hasma) – 147
Role of Natural Language Processing for Text Mining of Education Policy in Rajasthan (Pooja Jain and Shobha Lal) – 159
Multilingual and Cross Lingual Audio Emotion Analysis Using RNN (Sudipta Bhattacharya, Brojo Kishore Mishra, and Samarjeet Borah) – 171
Multi-modality Brain Tumor Segmentation of MRI Images Using ResUnet with Attention Mechanism (Aditya Verma, Mohit Zanwar, Anshul Kulkarni, Amit Joshi, and Suraj Sawant) – 189
CPF Analysis for Identification of Voltage Collapse Point and Voltage Stability of an IEEE-5 Bus System Using STATCOM (Subhadip Goswami, Tapas Kumar Benia, and Abhik Banerjee) – 201
Analysis of Various Blockchain-Based Solutions for Electronic Health Record System (Namdev Sawant and Joanne Gomes) – 211
Coordinated Network of Sensors Over 5G for High-Resolution Protection of City Assets During Earthquakes (Ivelina Daiss, José R. Martí, Amitabh Chhabra, Dragan Andjelic, Carlos E. Ventura, and Andrea T. J. Martí) – 225
Detection of COVID-19 Using Medical Image Processing (Rekha Sri Durga, I. Akhil, A. Bhavya Sri, R. Lathish, Sanasam Inunganbi, and Barenya Bikash Hazarika) – 237
Text Encryption Using ECC and Chaotic Map (P. N. V. L. S. Sneha Sree, Vani Venkata Durga Kadavala, Pothakam Chandu, Savara Murali Krishna, Khoirom Motilal Singh, and Sanasam Inunganbi) – 247
Plant Leaf Disease Detection and Classification: A Survey (Rajiv Bansal, Rajesh Kumar Aggarwal, and Neha Goyal) – 259
Performance Evaluation of K-SVCR in Multi-class Scenario (Vivek Prakash Srivastava, Kapil, and Neha Goyal) – 269
An Ensemble Method for Categorizing Cardiovascular Disease (Mohsin Imam, Sufiyan Adam, Neetu Agrawal (Garg), Suyash Kumar, and Anjana Gosain) – 281
Intrusion Detection System for Internet of Medical Things (Priyesh Kulshrestha, T. V. Vijay Kumar, and Manju Khari) – 293
Veracity Assessment of Big Data (Vikash and T. V. Vijay Kumar) – 305
The Role of Image Encryption and Decryption in Secure Communication: A Survey (T. Devi Manjari, V. Pavan Surya Prakash, B. Gautam Kumar, T. Veerendra Subramanya Kumar, Khoirom Motilal Singh, and Barenya Bikash Hazarika) – 317
Reconstructing Masked Face Using GAN Technique (Chandni Agarwal, Charul Bhatnagar, and Anurag Mishra) – 327
Brain Cancer Detection Using Deep Learning (Special Session "Digital Transformation Era: Role of Artificial Intelligence, IOT and Blockchain") (Shivam Pandey and Shivani Bansal) – 337
Traffic Accident Modeling and Prediction Algorithm Using Convolutional Recurrent Neural Networks (Anil Kumar, Shiv Kumar Verma, and Subhanshu Goyal) – 351
Cyberbullying Severe Classification Using Deep Learning Approach (Idi Mohammed and Rajesh Prasad) – 363
The Six Sigma Methodology Implementation in Agile Domain (Abhay Juvekar, Oscar Leo D'souza, and Anita Chaware) – 375
Toward a Generic Multi-modal Medical Data Representation Model (K. M. Swaroopa, Nancy Kaur, and Girija Chetty) – 385
Universal Object Detection Under Unconstrained Environments (Nancy Kaur, K. M. Swaroopa, and Girija Chetty) – 395
Internet of Things-Based 3-Lead ECG Signal Acquisition System (Pranamya Sinha, Anuja Arora, Sunil Kumar, Daya Bhardwaj, and Ravi Kumar) – 405
Index – 415
Editors and Contributors
About the Editors

Prof. Anurag Mishra holds bachelor's and master's degrees in Physics from the University of Delhi. He completed his M.E. in Computer Technology and Applications and Ph.D. in Electronics, also from the University of Delhi. He has extensive experience of teaching B.Sc. (Hons.), M.Sc., B.Tech. and M.Tech. programs in Electronics and Computer Science. He has about 28 years of experience as a teacher and as an active researcher. He has been a consultant for offshoot agencies of the Ministry of Education, Government of India. Presently, he is nominated as a visitor's nominee in a central university by the Government of India. He has 65 refereed papers in highly cited journals, international conferences and book chapters, three authored books, one edited book and two patents to his credit. He has recently moved into developing medical applications using deep convolutional neural networks. He is an active reviewer of papers for Springer, Elsevier and IEEE Transactions. He is a member of IEEE and also holds membership of the Institute of Informatics and Systemics (USA).

Dr. Deepak Gupta is an assistant professor in the Department of Computer Science and Engineering at Motilal Nehru National Institute of Technology Allahabad, Prayagraj, India. Previously, he worked in the Department of Computer Science and Engineering at the National Institute of Technology Arunachal Pradesh. He received a Ph.D. in Computer Science and Engineering from Jawaharlal Nehru University, New Delhi, India. His research interests include support vector machines, ELM, RVFL, KRR and other machine learning techniques. He has published over 70 refereed journal and conference papers of international repute. His publications have more than 1384 citations with an h-index of 22 and an i10-index of 45 (Google Scholar, 21/06/2023). He is currently a member of the editorial review board of Applied Intelligence. He is the recipient of the 2017 SERB Early Career Research Award in Engineering Sciences, a prestigious early-career award in India. He is a senior member of IEEE and currently an active member of many scientific societies like IEEE SMC, IEEE CIS, CSI and many more. He has served as a reviewer for many scientific journals and various national and international conferences. He was the general chair of the 3rd International Conference on Machine Intelligence and Signal Processing (MISP-2021) and has been associated with other conferences like IEEE SSCI, IEEE SMC, IJCNN, BDA 2021, etc. He has supervised three Ph.D. students and guided 15 M.Tech. projects. He is currently the principal investigator (PI) or a co-PI of two major research projects funded by the Science and Engineering Research Board (SERB), Government of India.

Dr. Girija Chetty holds bachelor's and master's degrees in Electrical Engineering and Computer Science from India and a Ph.D. in Information Sciences and Engineering from Australia. She has more than 38 years of experience in industry, research and teaching at universities and research and development organisations in India and Australia and has held several leadership positions, including head of Software Engineering and Computer Science, program director of ITS courses, and course director for the Master of Computing and Information Technology courses. Currently, she is a full professor in Computing and Information Technology at the School of Information Technology and Systems at the University of Canberra, Australia, and leads a research group with several Ph.D. students, post-docs, research assistants and regular international and national visiting researchers. She is a senior member of IEEE, USA; a senior member of the Australian Computer Society; and an ACM member, and her research interests are in multimodal systems, computer vision, pattern recognition, data mining and medical image computing. She has published extensively, with more than 200 fully refereed publications as invited book chapters, edited books, and papers in high-quality conferences and journals, and she is on the editorial boards and technical review committees of, and a regular reviewer for, several Springer, IEEE, Elsevier and IET journals in areas related to her research interests. She is highly interested in seeking wide and interdisciplinary collaborations, research scholars and visitors for her research group.
Contributors Sufiyan Adam Department of Computer Science, ARSD College, University of Delhi, New Delhi, India Chandni Agarwal GLA University Mathura, Mathura, India Charu Agarwal Ajay Kumar Garg Engineering College, Dr. A.P.J. Abdul Kalam Technical University, Ghaziabad, Uttar Pradesh, India Rajesh Kumar Aggarwal National Institute of Technology, Kurukshetra, India Neetu Agrawal (Garg) Department of Physics, University of Allahabad, Prayagraj, U.P., India
I. Akhil Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Taj Alam Department of Computer Science & Engineering and Information Technology, Jaypee Institute of Information Technology, Noida, Uttar Pradesh, India Dragan Andjelic The University of British Columbia, Vancouver, BC, Canada Anuja Arora Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India Shikha Badhani Department of Computer Science, Maitreyi College, University of Delhi, Delhi, India Harsh Bamotra Acharya Narendra Dev College, University of Delhi, New Delhi, Delhi, India Abhik Banerjee National Institute of Technology, Jote, Arunachal Pradesh, India Rajiv Bansal National Institute of Technology, Kurukshetra, India; JMIT Radaur, Radaur, India Shivani Bansal Assistant Professor, Department of Mathematics, Chandigarh University, Mohali, Punjab, India Tapas Kumar Benia National Institute of Technology, Jote, Arunachal Pradesh, India Daya Bhardwaj Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India Charul Bhatnagar GLA University Mathura, Mathura, India Sudipta Bhattacharya Department of Computer Science and Engineering, GIET University, Gunupur, India A. Bhavya Sri Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Ch. Bhavya Sri Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India Samarjeet Borah Department of Computer Applications, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Gangtok, Sikkim, India Pothakam Chandu Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Jyoti Chauhan Department of Computer Science & Engineering and Information Technology, Jaypee Institute of Information Technology, Noida, Uttar Pradesh, India Anita Chaware Associate Professor, P G Department of Computer Science, SNDTWU, Mumbai, India
Girija Chetty Faculty of Science and Technology, University of Canberra, Bruce, ACT, Australia Amitabh Chhabra Rogers Communications, Brampton, ON, Canada Ivelina Daiss The University of British Columbia, Vancouver, BC, Canada T. Devi Manjari Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Rekha Sri Durga Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Oscar Leo D’souza HCL Technology, Mumbai, India Sakshi Garg Acharya Narendra Dev College, University of Delhi, New Delhi, Delhi, India B. Gautam Kumar Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Mahesh Gawande Electronics and Telecommunication, College of Engineering Pune, Pune, India Joanne Gomes St. Francis Institute of Technology, Mumbai, India Vishnukant Gore Electrical Engineering Department, College of Engineering Pune, Pune, India Anjana Gosain USICT, GGSIPU, New Delhi, India Subhadip Goswami National Institute of Technology, Jote, Arunachal Pradesh, India Neha Goyal M.M. Institute of Computer Technology & Business Management, Maharishi Markandeshwar Deemed to be University, Mullana, Ambala, Haryana, India Subhanshu Goyal Marwadi University, Rajkot, Gujarat, India Syed. Hasma Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India Barenya Bikash Hazarika Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Prabhakar Holambe Electrical Engineering Department, College of Engineering Pune, Pune, India Mohsin Imam Department of Computer Science, ARSD College, University of Delhi, New Delhi, India Sanasam Inunganbi Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Kirti Jain Department of Computer Science, University of Delhi, New Delhi, Delhi, India Pooja Jain Jayoti Vidyapeeth Women’s University, Jaipur, Rajasthan, India Amit Joshi Department of Computer Engineering and IT, COEP Technological University (COEP Tech), Pune, Maharashtra, India Abhay Juvekar IT Consultant, Mumbai, India Vani Venkata Durga Kadavala Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Kapil National Institute of Technology, Kurukshetra, India Yogita Kapse Electronics and Telecommunication, College of Engineering Pune, Pune, India Gauri M. Karve Electrical Engineering Department, PVG’s COET & GKPIM, Pune, India Nancy Kaur Faculty of Science and Technology, University of Canberra, Bruce, ACT, Australia Sharanjit Kaur Acharya Narendra Dev College, University of Delhi, New Delhi, Delhi, India Manju Khari School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India Savara Murali Krishna Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Anshul Kulkarni Department of Computer Engineering and IT, COEP Technological University (COEP Tech), Pune, Maharashtra, India Priyesh Kulshrestha School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India Anil Kumar Galgotias University, Greater Noida, India; Deen Dayal Upadhyaya College, University of Delhi, Delhi, India Ravi Kumar Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India Sunil Kumar Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India Suyash Kumar USICT, GGSIPU, New Delhi, India; Department of Computer Science, Hansraj College, University of Delhi, New Delhi, India Shobha Lal Jayoti Vidyapeeth Women’s University, Jaipur, Rajasthan, India
Reema Lalit National Institute of Technology, Kurukshetra, India; Panipat Institute of Engineering and Technology, Samalkha, India R. Lathish Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Aadi Lochab Department of Mathematics, Maitreyi College, University of Delhi, Delhi, India Geetan Manchanda Department of Mathematics, Maitreyi College, University of Delhi, Delhi, India T. C. Manjunath Electronics and Communication Engineering Department, Dayananda Sagar College of Engineering, Bengaluru, India Andrea T. J. Martí The University of British Columbia, Vancouver, BC, Canada José R. Martí The University of British Columbia, Vancouver, BC, Canada Anurag Mishra Department of Electronics, Deendayal Upadhyay College, University of Delhi, Delhi, India Brojo Kishore Mishra Department of Computer Science and Engineering, GIET University, Gunupur, India Idi Mohammed Computer Science Department, African University of Science and Technology, F.C.T Abuja, Nigeria K. Naga Bavana Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India B. G. Nagaraja Electronics and Communication Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India Shivam Pandey Student, Chandigarh University, Mohali, Punjab, India Bhumika Papnai Department of Mathematics, Maitreyi College, University of Delhi, Delhi, India Rajesh Maharudra Patil Electrical Engineering Department, SKNS College of Engineering Korti, Affiliated to Solapur University, Pandharpur, Maharashtra, India V. Pavan Surya Prakash Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Soham Pendkar College of Engineering, Pune, India M. R. Prasad Computer Science and Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India Rajesh Prasad Computer Science Department, African University of Science and Technology, F.C.T Abuja, Nigeria
Gunjan Rani Acharya Narendra Dev College, University of Delhi, New Delhi, Delhi, India Ravi Rayappa Electronics and Communication Engineering, Jain Institute of Technology, Davanagere, Karnataka, India Sudeshna Sani Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India Namdev Sawant St. Francis Institute of Technology, Mumbai, India Suraj Sawant Department of Computer Engineering and IT, COEP Technological University (COEP Tech), Pune, Maharashtra, India Pratibha Shingare College of Engineering, Pune, India Geetika Singh KIET Group of Institutions, Dr. A.P.J. Abdul Kalam Technical University, Ghaziabad, Uttar Pradesh, India Khoirom Motilal Singh Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Sharat Singh Department of Electronics, Deen Dayal Upadhyaya College, University of Delhi, New Delhi, India Pranamya Sinha Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India P. N. V. L. S. Sneha Sree Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India Vivek Prakash Srivastava National Institute of Technology, Kurukshetra, India K. M. Swaroopa Faculty of Science and Technology, University of Canberra, Bruce, ACT, Australia Mangesh S. Thakare Electrical Engineering Department, PVG’s COET & GKPIM, Pune, India Geetanjali A. Vaidya Electrical Engineering Department, PVG’s COET & GKPIM, Pune, India T. Veerendra Subramanya Kumar Department of CSE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India Carlos E. Ventura The University of British Columbia, Vancouver, BC, Canada Aditya Verma Department of Computer Engineering and IT, COEP Technological University (COEP Tech), Pune, Maharashtra, India Shiv Kumar Verma Galgotias University, Greater Noida, India T. V. Vijay Kumar School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
Vikash School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India Mohit Zanwar Department of Computer Engineering and IT, COEP Technological University (COEP Tech), Pune, Maharashtra, India
Comparative Study of Metaheuristic Algorithms for Scheduling in Cloud Computing Based on QoS Parameters Jyoti Chauhan and Taj Alam
Abstract Cloud computing (CC) has gained huge popularity in the recent era by allowing a pool of computing resources to be shared on demand among various cloud users over the Internet. It offers scalability, flexibility, and a pay-per-use facility to its clients through virtualization technology, which attracts large enterprises that work on distributed computing. One important research issue in cloud computing is task scheduling, which means that cloud tasks need to be appropriately mapped to the existing cloud resources to optimize single or multiple objectives. The complexity and large search space of task scheduling make it an NP-hard problem. This paper presents a brief analysis of existing heuristic and metaheuristic strategies and their application to scheduling in cloud environments, followed by a comparative study of a few metaheuristic algorithms. Heuristic algorithms cannot produce an exact optimal solution in an acceptable time. To address this problem, metaheuristic algorithms based on swarm intelligence and bio-inspired techniques, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and Ant Colony Optimization (ACO), are a good choice for finding near-optimal solutions. These have been implemented in cloud scenarios and their performance has been compared on the parameters makespan, average resource usage, and average response time. The PSO algorithm is found to outperform ACO and GA on these optimization metrics under various test conditions in the cloud environment.

Keywords Cloud computing · Scheduling · Metaheuristics · Particle swarm optimization · Ant colony optimization · Genetic algorithm · QoS parameters
J. Chauhan (B) · T. Alam Department of Computer Science & Engineering and Information Technology, Jaypee Institute of Information Technology, Noida, Uttar Pradesh 201309, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_1
1 Introduction

CC offers a standard platform for cheap and convenient hosting and delivery of computing resources as a utility on demand through the Internet [1]. Cloud providers rent out physical and logical computing resources on demand from their large data centers to different cloud users with dynamic needs on a pay-per-use basis [2]. Cloud services are broadly classified into three kinds: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). IaaS is a substantial and fast-growing field that provides maximum benefit to small and medium-sized organizations [3]. Alongside these benefits, there are major issues and challenges that need to be addressed in CC, such as automated resource provisioning, interoperability, virtualization, privacy and security, data management, load balancing, network management, application programming interfaces (APIs), and many more [3, 4]. Virtualization is the primary notion underlying CC technology; it makes possible the isolated execution of several cloud users' tasks at the same time using a software layer termed the hypervisor or VM monitor [5]. Cloud users request virtualized resources by specifying a set of resource instances at any instant to run their tasks. It is the cloud provider's responsibility to allocate resources efficiently and effectively to the given set of tasks at any instant without delay, which is called resource management. Resource management includes challenges regarding resource allocation, resource mapping and modeling, resource adaptation, resource discovery, provisioning, and scheduling of resources. Both under-provisioning and over-provisioning of resources must be avoided, as cloud services and resources are shared among various cloud clients who use them on a subscription basis [3]. The main aim of cloud providers is to maximize their profit and revenue, which depends on the high performance of the cloud. Hence, cloud providers have to allocate resources efficiently to save energy, improve resource utilization, and manage bandwidth effectively. Cloud users, on the other hand, expect the simplest interface with assured Quality of Service (QoS), minimum expense, high throughput, and quick response time [2]. Cloud providers can achieve the objective of maximum resource usage by minimizing the makespan, task transfer time, task execution time, energy usage, costs, etc. Cloud users can achieve the objective of reducing expenses and satisfying QoS by minimizing the average response time. An efficient and well-managed scheduling mechanism is therefore needed to schedule the cloudlets so as to attain maximum resource usage. Such a scheme is achieved through the appropriate mapping of tasks to the required resources, called task scheduling. A scheduling problem thus consists of several cloud consumers' tasks that need to be scheduled on the existing VMs, subject to a few constraints, to optimize an objective function. The goal is to construct a schedule specifying which task will be allocated to which resource [6]. The scheduling methods can be categorized into three classes: resource-based scheduling, dependent task-based or workflow-based scheduling, and independent task scheduling. The tasks are scheduled independently of each other
in independent task scheduling, whereas the tasks are bound to each other via interdependencies in workflow-based scheduling. Task scheduling methods can be centralized or distributed. There is only one scheduler for mapping tasks in the centralized method, whereas the scheduling decisions are decentralized among all available VMs in distributed scheduling. Job scheduling can also be categorized as static or dynamic. In static scheduling, every task is assumed to arrive at the same time, so all the tasks and VMs are mapped and scheduled based on a priori information. In dynamic scheduling, there is no prior information about the tasks' arrival, execution times, or the VMs, so all scheduling decisions, such as resource allocation to incoming cloudlets and execution-time estimation, are made in real time. Cloud tasks can be handled immediately when they arrive, called immediate mode, or collected in a batch and then scheduled as a whole, called batch mode [7, 8]. The traditional exhaustive and deterministic scheduling strategies are simple and easy to understand and implement but give no guarantee of reaching the optimal solution in an acceptable amount of time [9–11]. Traditional heuristic algorithms suffer from drawbacks such as getting trapped in local optima, slow convergence, additional computational time, complex operators, and being framed only for binary or real search domains; they are not suitable for complex scientific optimization problems with large solution spaces. To solve this problem, metaheuristic algorithms based on swarm intelligence and bio-inspired techniques, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and the Genetic Algorithm (GA), are a good choice for finding near-optimal solutions [6, 7, 9]. Recently, various task scheduling approaches have been proposed for cloud computing environments, but despite that, no comprehensive performance study has been carried out to compare the existing task scheduling algorithms. This paper presents a comparative study of existing metaheuristic algorithms and their application to scheduling in cloud environments. The three algorithms above have been implemented to run in cloud scenarios and their performance has been compared on the parameters makespan, average resource usage, and average response time. The experimental results show that PSO outperforms the ACO and GA algorithms in optimizing the objective function.
1.1 Contributions

The following are the major contributions made by this work.
• A system model is presented, including a task model and a virtual machine model.
• It demonstrates an adaptive allocation of tasks to virtual machines that dynamically adjusts task execution time.
• It proposes a PSO-based model for cloud computing and examines the inverse relationship between makespan and average resource utilization.
• It investigates the impact of several scenarios in a heterogeneous cloud system on the makespan, response time, and average resource utilization of the system.
• It performs a comparative study of the existing metaheuristic techniques PSO, ACO, and GA and finds that PSO outperforms GA and ACO.
2 Literature Review

The authors of [1, 4] gave an extensive analysis of cloud computing, emphasizing its key models, architectural principles, state-of-the-art implementations, and advantages along with research challenges. Serving the available cloud resources to cloud users, termed scheduling, is the main theme in cloud resource management research, primarily in its task scheduling part [2, 12]. The global research community focusing on cloud computing has developed an increasing interest in the resource scheduling issue, and the categorization of resource allocation methods has been discussed in [2, 12]. Researchers have proposed various heuristic algorithms for independent task scheduling, such as Min–Min, Max–Min, round-robin, First Come First Serve, and many more, to overcome the drawbacks of the traditional exhaustive and deterministic strategies [13–18]. The authors of [13] compared the performance of various heuristic approaches such as Min–Min, Max–Min, and Duplex based on the metrics Minimum Execution Time (MET) and Minimum Completion Time (MCT). Extensive reviews of dependent-job strategies modeled with Directed Acyclic Graphs (DAGs) for the task scheduling problem are described in [5, 19], and scheduling strategies for dependent jobs have been presented in [20–22]. Due to its complexity and large search space, researchers classify task scheduling as an NP-hard problem. Heuristic strategies generally suffer from slow convergence, their solutions may be stuck in local optima, and it is difficult for them to find the exact solution. Thereby, to improve solution quality and computing time, metaheuristic techniques have gained vast attention over the past many years for NP-hard problems. Metaheuristic approaches provide near-optimal solutions within an acceptable timespan and make task scheduling algorithms more effective and efficient. Several reviews have been published on metaheuristic techniques adopted for task scheduling in distributed environments, i.e., cloud, cluster, and grid environments, including ACO, PSO, GA, the League Championship Algorithm (LCA), and the BAT algorithm [6, 11, 23]. In this direction, Tsai and Rodrigues [11] gave an extensive review of the literature on metaheuristic techniques for cloud task scheduling and presented the major issues and challenges faced by metaheuristic algorithms. Researchers have also studied and analyzed the performance of metaheuristic techniques in cloud systems [13, 24]. ACO algorithms for independent task scheduling were proposed in [25, 26] to optimize QoS parameters in cloud computing. Various GA-based algorithms and their modifications have been proposed to optimize QoS parameters for task scheduling in cloud systems [27, 28] and shown to outperform traditional PSO and GA algorithms [29]. Researchers found that
PSO provides fast task scheduling and better solution quality than the existing heuristics and other metaheuristics in grid, homogeneously distributed, and cloud computing environments [23, 30–33]. Researchers have also proposed various modified forms of the PSO algorithm which are found to outperform standard PSO and other metaheuristics, as discussed in [34–36]. Although all the compared algorithms show satisfactory results in the simulation outcomes, the modified PSO algorithms perform much better than the compared algorithms in cloud computing, and their performance is further improved by using the load balancing techniques proposed in [37, 38], which optimize QoS parameters such as makespan, execution time, resource utilization, cost, transmission time, and round-trip time while balancing the load between cloudlets and VMs.
3 Scheduling Approaches

3.1 Heuristic Approaches

• First Come First Serve (FCFS) Algorithm: Resources are assigned to the tasks according to their order of arrival. The earlier a task arrives, the earlier it gets the resources, and it releases the resource after completing its execution [24, 39].
• Round-Robin Algorithm: The tasks are assigned the resources in an FCFS manner, but each gets the resource only for a small time quantum. The resource is pre-empted when the allotted time slot expires and is given to the next waiting task in the ready queue. The pre-empted task is directed to wait at the tail of the ready queue if its execution is not complete [39].
• Min–Min: The notion of the Min–Min algorithm is to select first the shorter job having the Minimum Completion Time (MCT) from the given task set and then allocate the selected shortest task to the resource giving the minimum expected completion time. The algorithm computes the expected completion time $C_{ij}$ of the $i$th task from the cloudlet set $T = \{t_1, t_2, t_3, \ldots, t_n\}$ on the $j$th resource from the resource set $R = \{r_1, r_2, r_3, \ldots, r_m\}$ using Eq. (1):

$$C_{ij} = E_{ij} + re_j. \quad (1)$$
Here, $re_j$ denotes the ready (preparation) time of resource $r_j$ and $E_{ij}$ denotes the time taken by the $i$th task to execute on the $j$th resource. The expected completion times of all tasks are calculated using Eq. (1); the task with the shortest expected completion time is then selected, mapped to the respective resource, and removed from the task set. This step is repeated for the remaining tasks until all tasks have been mapped to resources [17]; a small sketch of this procedure is given after the list of heuristics below.
• Max–Min: This algorithm prioritizes the longer tasks, those with the maximum MCT, over the shorter ones: it first selects the longer tasks from the given task set for resource assignment. It is proved to be superior to the Min–Min algorithm when the number of shorter tasks is greater than the number of longer tasks [17].
• RASA (Resource Awareness Scheduling Algorithm): The Max–Min and Min–Min approaches can be applied alternately to combine their benefits and overcome their drawbacks, resulting in an efficient hybrid scheduling scheme known as RASA.
• Best Fit: This scheduling policy assigns resources to the job that requires the maximum number of resources from the given task set. When VMs require multiple resources of different types, one kind of resource can be taken as a "reference resource" and the best fit chosen according to that reference resource.

The traditional heuristic algorithms show weaknesses such as getting trapped in local optima, slow convergence, additional computational time, complex operators, and being framed only for binary or real search domains. Hence, heuristic algorithms are not suitable for complex scientific optimization problems with large solution spaces. This motivates researchers to enhance the heuristic approaches to overcome their drawbacks, leading to metaheuristic algorithms.
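As an illustration of the list-scheduling heuristics above, the following is a minimal sketch of Min–Min for independent tasks. It assumes only that the expected execution time $E_{ij}$ and the ready time $re_j$ of each resource are known, as in Eq. (1); the function and variable names are illustrative, not taken from the paper. Switching the outer selection from the smallest to the largest minimum completion time yields the Max–Min variant.

```python
def min_min_schedule(exec_time, ready_time):
    """Min-Min heuristic for independent tasks.

    exec_time[i][j] : expected execution time E_ij of task i on resource j
    ready_time[j]   : time re_j at which resource j becomes free
    Returns a dict mapping task index -> resource index.
    """
    unscheduled = set(range(len(exec_time)))
    ready = list(ready_time)          # copy; updated as tasks are assigned
    assignment = {}

    while unscheduled:
        best = None                   # (completion_time, task, resource)
        for i in unscheduled:
            # Eq. (1): C_ij = E_ij + re_j, minimised over resources j
            j = min(range(len(ready)), key=lambda r: exec_time[i][r] + ready[r])
            c_ij = exec_time[i][j] + ready[j]
            if best is None or c_ij < best[0]:
                best = (c_ij, i, j)
        c_ij, i, j = best             # shortest task first (Max-Min would pick the largest)
        assignment[i] = j
        ready[j] = c_ij               # resource j stays busy until the task finishes
        unscheduled.remove(i)
    return assignment


# Example: 4 tasks on 2 heterogeneous VMs, both initially idle
E = [[4, 8], [3, 6], [10, 5], [2, 4]]
print(min_min_schedule(E, ready_time=[0, 0]))
```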
3.2 Metaheuristic Approaches

Metaheuristic algorithms also use iterative techniques, but they find good solutions in an acceptable amount of time compared to the traditional exhaustive and deterministic heuristic strategies. However, no single metaheuristic algorithm is applicable to all kinds of scheduling problems; they show varying performance on different problems. There is no algorithm that can give an optimal solution in polynomial time for optimization problems with a large domain and high complexity, so researchers have to settle for suboptimal solutions [11]. These algorithms improve computing time and solution quality. Generally, metaheuristic approaches can be divided into two broad classes: bio-inspired (BI) and swarm intelligence (SI)-inspired techniques. Ideas from nature can be carried over into computing to solve several real-life optimization problems. Some nature-inspired metaheuristic algorithms commonly used for scheduling in cloud computing include Memetic Algorithms (MAs), the Genetic Algorithm (GA), the Imperialist Competitive Algorithm (ICA), and the Lion Algorithm (LA). Further, many metaheuristic algorithms have evolved from the social behavior of animals such as wolves and lions and from the behavior of birds and insects such as ants and honey bees; their way of finding a food source in near-optimal time is the main inspiration. Approaches such as ACO, PSO, the Honeybee algorithm, and the Bat algorithm have been inspired by this swarm behavior. In this paper, the authors have simulated the working of GA, ACO, and PSO in the cloud environment and compared their performance in terms of response time, makespan, and average resource utilization.
• Genetic Algorithm (GA): The concept of the GA was first given by Holland in 1975, and it has proved its effectiveness on complex and large search problems. GA is a probabilistic, population-based, evolutionary optimization technique motivated by the natural evolution of chromosomes, in which the notion of survival of the fittest is used: recombination of the chromosomes provides new and better solutions through genetic crossover, mutation, and inversion [40, 41].
• Ant Colony Optimization (ACO): ACO is used in computer science and operations research for solving complex combinatorial optimization problems. Dorigo originally introduced this novel ant system approach in his Ph.D. thesis in 1992. Since then, various ACO algorithms have been proposed which share almost the same idea. The prime idea of ACO is motivated by the foraging behavior of real ants, which locate the shortest path from their colonies to a food source [42, 43].
• Particle Swarm Optimization (PSO): PSO is regarded as a powerful optimization and computational technique for obtaining near-optimal solutions to multimodal continuous optimization problems. PSO is a swarm-intelligent, evolutionary, population-based metaheuristic developed in 1995 by Kennedy and Eberhart to perform global search. Its idea was originally motivated by the social behavior and movement of particles such as birds in flocks and fish in schools [23, 24].
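To make the PSO description concrete, the following is a minimal sketch of how PSO can be applied to the task-to-VM assignment problem studied later in this paper: each particle encodes one candidate assignment of the n tasks to the m VMs, and the fitness is the makespan to be minimized. This is not the authors' implementation; the discrete rounding of continuous positions to VM indices, the parameter values, and all names are illustrative assumptions.

```python
import random

def pso_assign(exec_time, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Sketch of PSO for task-to-VM assignment (fitness = makespan)."""
    n_tasks, n_vms = len(exec_time), len(exec_time[0])

    def makespan(assign):
        load = [0.0] * n_vms
        for t, v in enumerate(assign):
            load[v] += exec_time[t][v]
        return max(load)

    # continuous positions in [0, n_vms); flooring gives a discrete assignment
    pos = [[random.uniform(0, n_vms) for _ in range(n_tasks)] for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    decode = lambda x: [min(int(xi), n_vms - 1) for xi in x]

    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p)) for p in pos]
    g = pbest_fit.index(min(pbest_fit))
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for k in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                # clamp the position to the valid range of VM indices
                pos[k][d] = min(max(pos[k][d] + vel[k][d], 0), n_vms - 1e-9)
            fit = makespan(decode(pos[k]))
            if fit < pbest_fit[k]:
                pbest[k], pbest_fit[k] = pos[k][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = pos[k][:], fit
    return decode(gbest), gbest_fit
```

The same skeleton can host GA or ACO by replacing the velocity and position update with crossover and mutation, or with pheromone-guided solution construction, respectively.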
4 Problem Description and QoS Parameters

The Task Assignment Problem (TAP) can be described as follows. A set of tasks or cloudlets is represented by $T = \{t_1, t_2, t_3, \ldots, t_n\}$, where $n$ is the total number of independent tasks in a batch; the tasks differ in length. The available VMs are represented by the set $VM = \{VM_1, VM_2, VM_3, \ldots, VM_m\}$, where $m$ is the total number of available VMs; the VMs differ in their MIPS ratings. This implies that a task executed on different machines has different execution times and execution costs. The number of cloudlets is always greater than the number of VMs. The processing time of cloudlet $T_i$ on $VM_j$ is denoted by $PT_{ij}$ and the completion time of $VM_j$ by $CT_j$. The finishing time and submission time of cloudlet $T_i$ are denoted by $FT_i$ and $SubT_i$, respectively; the response time of the $i$th task is denoted by $RTT_i$ and the average response time by $AvRT$. The objectives of minimizing the overall makespan and average response time and maximizing the average resource utilization (LBR) can be described by Eqs. (2), (3), and (5) [38, 44]:

$$\text{Makespan} = \max\{CT_j \mid j = 1, 2, 3, \ldots, m\}, \quad (2)$$

$$\text{Average Response Time (AvRT)} = \frac{\sum_{i=1}^{n} \left( FT_{ij} - SubT_{ij} \right)}{n}, \quad (3)$$

$$\text{Utilization}_{VM_j} = \frac{\sum_{i=1}^{n} PT_{ij}}{\text{makespan}}, \quad (4)$$

$$\text{Average Utilization} = \frac{\sum_{j=1}^{m} \text{Utilization}_{VM_j}}{m}. \quad (5)$$

Each task in $T$ is bounded by $T_{\max}$ and $T_{\min}$, i.e., $T_{\min} \le T_i \le T_{\max}$, and each VM in the VM set is bounded by $VM_{\max}$ and $VM_{\min}$, i.e., $VM_{\min} \le V_j \le VM_{\max}$ [45]. The VMs are assumed to be available at all times. Tasks cannot be interrupted or pre-empted while being processed on a VM. Each VM can process only one cloudlet at a time, and a cloudlet cannot run on more than one VM at a time. The decision variable $X_{ij}$ is 1 when cloudlet $i$ is allocated to machine $j$ and 0 otherwise. Two basic conditions are imposed to satisfy the above constraints; condition (6) ensures that each task is assigned to exactly one VM [24]:

$$\sum_{j=1}^{m} X_{ij} = 1 \quad \forall i \in T, \quad (6)$$

$$X_{ij} \in \{0, 1\} \quad \forall j \in M,\; i \in T. \quad (7)$$
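As a concrete reading of Eqs. (2)–(5), the short sketch below computes the makespan, average response time, and average resource utilization from a given assignment of cloudlets to VMs. It uses a simplified serial model (each VM runs its assigned cloudlets back to back, and by default all cloudlets are submitted at time zero); the function and variable names are illustrative assumptions, not part of the paper.

```python
def qos_metrics(exec_time, assignment, submit=None):
    """Compute makespan, AvRT and average utilization (Eqs. (2)-(5)).

    exec_time[i][j] : processing time PT_ij of cloudlet i on VM j
    assignment[i]   : index of the VM that cloudlet i is mapped to
    submit[i]       : submission time SubT_i (defaults to 0 for a batch)
    """
    n = len(assignment)
    m = len(exec_time[0])
    submit = submit or [0.0] * n

    vm_clock = [0.0] * m          # completion time CT_j of each VM
    busy = [0.0] * m              # total busy time of each VM
    finish = [0.0] * n            # finishing time FT_i of each cloudlet

    for i, j in enumerate(assignment):           # cloudlets run serially per VM
        start = max(vm_clock[j], submit[i])
        finish[i] = start + exec_time[i][j]
        busy[j] += exec_time[i][j]
        vm_clock[j] = finish[i]

    makespan = max(vm_clock)                                        # Eq. (2)
    avg_response = sum(f - s for f, s in zip(finish, submit)) / n   # Eq. (3)
    utilization = [b / makespan for b in busy]                      # Eq. (4)
    avg_utilization = sum(utilization) / m                          # Eq. (5)
    return makespan, avg_response, avg_utilization
```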
5 Comparative Analysis of PSO, GA, and ACO

Task scheduling aims to map the cloudlets to the available VMs appropriately so that computing resources are utilized efficiently and cloud users' expenses are minimized. The aim here is to find the metaheuristic approach for task scheduling that minimizes the makespan and average response time for cloud users and maximizes the average resource utilization for cloud providers in a highly distributed and dynamic multiprocessing environment, i.e., the cloud computing environment. The authors have performed various experiments, increasing the number of cloudlets in a heterogeneous system, to carry out the comparative analysis of the existing metaheuristic algorithms PSO, ACO, and GA on the task scheduling problem under the following parameter settings for VMs and cloudlets. Ten datacenters, each with two hosts and 50 VMs, are created in the experiment, and the cloudlet count is varied from 100 to 1000 in the simulation environment. The task length is taken in the range of 1000–20,000 Million Instructions (MIs). The cloudlets are assigned to heterogeneous VMs whose MIPS ratings vary between 500 and 2000 and whose bandwidth varies between 500 and 1000. The stopping criterion is set to 100 iterations. Ten experimental runs of 100 iterations each are carried out for the task range 100–1000, and the average of the optimization parameter values is reported; a sketch of a workload generator matching this setup is given below.
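A minimal sketch of how such a heterogeneous workload and VM set could be generated for one simulation run follows. The ranges match the setup described above (task lengths of 1000–20,000 MIs, VM MIPS of 500–2000, bandwidth of 500–1000); the uniform random sampling and all names are assumptions for illustration, since the paper does not state how the values are drawn.

```python
import random

def generate_workload(n_cloudlets, n_vms=50, seed=None):
    """Generate task lengths (MIs) and VM capacities for one simulation run."""
    rng = random.Random(seed)
    task_len_mi = [rng.randint(1_000, 20_000) for _ in range(n_cloudlets)]
    vm_mips = [rng.randint(500, 2_000) for _ in range(n_vms)]
    vm_bw = [rng.randint(500, 1_000) for _ in range(n_vms)]
    # Estimated execution time of cloudlet i on VM j: length / MIPS
    exec_time = [[mi / mips for mips in vm_mips] for mi in task_len_mi]
    return exec_time, vm_mips, vm_bw

# Example: one scenario with 100 cloudlets on 50 heterogeneous VMs
exec_time, vm_mips, vm_bw = generate_workload(100, seed=1)
```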
Fig. 1 Makespan comparison
The algorithms are compared on the following parameters: makespan, average response time, and average resource utilization. The average over ten repetitions is taken to obtain the average makespan for PSO, ACO, and GA, as shown in Fig. 1. The PSO algorithm shows a lower makespan than ACO and GA: it takes less time to execute a given task set on the available VMs, which indicates that it outperforms them in minimizing the makespan. Cloud users also wish for a quick response from the cloud system to satisfy their QoS requirements. The evaluation of average response time for the PSO, ACO, and GA algorithms, shown in Fig. 2, indicates that PSO takes less time to respond than ACO and GA. The average resource utilization is calculated using Eq. (5); it is found that PSO uses resources more efficiently and effectively, in line with the cloud providers' desire to gain more profit and revenue from cloud computing. The comparison of average resource utilization is shown in Fig. 3. Based on the simulation outcomes, it is clearly visible that some scheduling algorithms are much more favorable than others for adoption in cloud computing, and that PSO outperforms ACO and GA on the optimization metrics makespan, average response time, and LBR.
Fig. 2 Average response time comparison

Fig. 3 Average resource utilization comparison

6 Conclusion and Future Work

In this paper, a brief analysis of the existing heuristic and metaheuristic approaches for task scheduling has been presented. As the task scheduling problem is NP-hard in nature, and heuristic approaches suffer from slow convergence and entrapment in local optima, metaheuristic approaches have gained popularity over heuristic ones. This paper mainly focuses on a comparative analysis of metaheuristic approaches for the task scheduling problem to optimize QoS parameters. The metaheuristic algorithms PSO, ACO, and GA are evaluated using the CloudSim simulation tool on the optimization metrics makespan and average response time, which matter to cloud users, and average resource utilization, which benefits cloud providers. Simulation results show that PSO
outperforms GA and ACO in terms of makespan, average response time, and average resource utilization when scheduling a batch of independent tasks in a heterogeneous cloud computing environment. No single metaheuristic algorithm performs best on all problems; their performance varies with the complexity of the problem. Researchers regard PSO as an interesting algorithm because of its various advantages over other metaheuristic techniques: it can be written in a few lines of code and implemented with only basic mathematical operators. PSO is capable of escaping from local optima and shows faster convergence than other metaheuristic techniques by sustaining a balance between exploitation and exploration. In most less complex, continuous search-space problems, PSO performs better than ACO and GA in terms of success rate and solution quality, as observed in the task problem considered in this paper; for complex, large search-space problems, GA or ACO may perform better than PSO. PSO is also a fast, robust, and stable algorithm, and its mathematical implementation is easier than that of ACO and GA as it has few parameters to adjust. This may be the reason why PSO outperforms GA and ACO in optimizing the specified QoS parameters. The authors are working on a modified PSO approach that improves other QoS parameters, such as fault tolerance, and reduces the cost involved in CC for scheduling workflow-centered scientific applications in cloud computing.
References

1. Zhang Q, Cheng L, Boutaba R (2010) Cloud computing: state-of-the-art and research challenges. J Internet Serv Appl 1:7–18. https://doi.org/10.1007/s13174-010-0007-6
2. Madni SHH, Latiff MSA, Coulibaly Y, Abdulhamid SM (2017) Recent advancements in resource allocation techniques for cloud computing environment: a systematic review. Cluster Comput 20:2489–2533. https://doi.org/10.1007/s10586-016-0684-4
3. Manvi SS, Krishna Shyam G (2014) Resource management for infrastructure as a service (IaaS) in cloud computing: a survey. J Netw Comput Appl 41:424–440. https://doi.org/10.1016/j.jnca.2013.10.004
4. Ghanam Y, Ferreira J, Maurer F (2012) Emerging issues and challenges in cloud computing—a hybrid approach. J Softw Eng Appl 05:923–937. https://doi.org/10.4236/jsea.2012.531107
5. Masdari M, ValiKardan S, Shahi Z, Azar SI (2016) Towards workflow scheduling in cloud computing: a comprehensive analysis. J Netw Comput Appl 66:64–82. https://doi.org/10.1016/j.jnca.2016.01.018
6. Kalra M, Singh S (2015) A review of metaheuristic scheduling techniques in cloud computing. Egypt Informatics J 16:275–295. https://doi.org/10.1016/j.eij.2015.07.001
7. Alam T, Dubey P, Kumar A (2018) Adaptive threshold based scheduler for batch of independent jobs for cloud computing system. Int J Distrib Syst Technol 9:20–39. https://doi.org/10.4018/IJDST.2018100102
8. Xhafa F, Abraham A (2010) Computational models and heuristic methods for grid scheduling problems. Futur Gener Comput Syst 26:608–621. https://doi.org/10.1016/j.future.2009.11.005
9. Al-Arasi R, Saif A (2020) Task scheduling in cloud computing based on metaheuristic techniques: a review paper. EAI Endorsed Trans Cloud Syst 6:162829. https://doi.org/10.4108/eai.13-7-2018.162829
10. Madni SHH, Latiff MSA, Coulibaly Y, Abdulhamid SM (2016) An appraisal of meta-heuristic resource allocation techniques for IaaS cloud. Indian J Sci Technol 9. https://doi.org/10.17485/ ijst/2016/v9i4/80561 11. Tsai CW, Rodrigues JJPC (2014) Metaheuristic scheduling for cloud: a survey. IEEE Syst J 8:279–291. https://doi.org/10.1109/JSYST.2013.2256731 12. Madni SHH, Latiff MSA, Coulibaly Y, Abdulhamid SM (2016) Resource scheduling for infrastructure as a service (IaaS) in cloud computing: challenges and opportunities. J Netw Comput Appl 68:173–200. https://doi.org/10.1016/j.jnca.2016.04.016 13. Braun TD, Siegel HJ, Beck N et al (2001) A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. J Parallel Distrib Comput 61:810–837. https://doi.org/10.1006/jpdc.2000.1714 14. Thomas A, Krishnalal G, Jagathy Raj VP (2015) Credit based scheduling algorithm in cloud computing environment. Procedia Comput Sci 46:913–920. https://doi.org/10.1016/j.procs. 2015.02.162 15. Elzeki OM, Reshad MZ, Elsoud M (2012) Improved max-min algorithm in cloud computing. Int J Comput Appl 50:22–27.https://doi.org/10.5120/7823-1009 16. Parsa (2009) RASA: a new grid task scheduling algorithm. Int J Digit Content Technol Appl. https://doi.org/10.4156/jdcta.vol3.issue4.10 17. Devipriya S, Ramesh C (2013) Improved max-min heuristic model for task scheduling in cloud. In: Proceedings 2013 international conference on green computing, communication and conservation of energy, ICGCE 2013, pp 883–888.https://doi.org/10.1109/ICGCE. 2013.6823559 18. Maguluri ST, Srikant R, Ying L (2012) Stochastic models of load balancing and scheduling in cloud computing clusters. In: Proceedings—IEEE INFOCOM, pp 702–710.https://doi.org/10. 1109/INFCOM.2012.6195815 19. Kaur S, Bagga P, Hans R, Kaur H (2019) Quality of service (QoS) aware workflow scheduling (WFS) in cloud computing: a systematic review. Arab J Sci Eng 44:2867–2897. https://doi. org/10.1007/s13369-018-3614-3 20. Alam T, Raza Z (2018) Quantum genetic algorithm based scheduler for batch of precedence constrained jobs on heterogeneous computing systems. J Syst Softw 135:126–142. https://doi. org/10.1016/j.jss.2017.10.001 21. Shahid M, Raza Z, Sajid M (2015) Level based batch scheduling strategy with idle slot reduction under DAG constraints for computational grid. J Syst Softw 108:110–133. https://doi.org/10. 1016/j.jss.2015.06.016 22. Zhang Y, Koelbe C, Cooper K (2009) Batch queue resource scheduling for workflow applications. Proceedings—IEEE international conference on cluster computing. https://doi.org/10. 1109/CLUSTR.2009.5289186 23. Attiya I, Zhang X (2017) A simplified particle swarm optimization for job scheduling in cloud computing. Int J Comput Appl. https://doi.org/10.5120/ijca2017913744 24. Mathew T, Sekaran KC, Jose J (2014) Study and analysis of various task scheduling algorithms in the cloud computing environment. In: Proceedings 2014 international conference on advances in computing, communications and informatics, ICACCI 2014, pp 658–664. https:// doi.org/10.1109/ICACCI.2014.6968517 25. Tawfeek M, El-Sisi A, Keshk A, Torkey F (2015) Cloud task scheduling based on ant colony optimization. Int Arab J Inf Technol 12:129–137 26. Srikanth GU, Maheswari VU, Shanthi P, Siromoney A (2012) Tasks scheduling using ant colony optimization. J Comput Sci 8:1314–1320. https://doi.org/10.3844/jcssp.2012.1314.1320 27. Jabreel M. The study of genetic algorithm-based task scheduling for cloud computing 28. 
Safwat A, Fatma A (2016) Genetic-based task scheduling algorithm in cloud computing environment. Int J Adv Comput Sci Appl 7. https://doi.org/10.14569/ijacsa.2016.070471 29. Almezeini N, Hafez A (2017) Task scheduling in cloud computing using lion optimization algorithm. Int J Adv Comput Sci Appl 8:. https://doi.org/10.14569/ijacsa.2017.081110 30. Agarwal M, Srivastava GMS (2019) A PSO algorithm based task scheduling in cloud computing. Int J Appl Metaheuristic Comput 10:1–17. https://doi.org/10.4018/IJAMC.201910 0101
31. Masdari M, Salehi F, Jalali M, Bidaki M (2017) A survey of PSO-based scheduling algorithms in cloud computing. J Netw Syst Manag 25:122–158. https://doi.org/10.1007/s10922-0169385-9 32. Salman A, Ahmad I, Al-Madani S (2002) Particle swarm optimization for task assignment problem. Microprocess Microsyst 26:363–371. https://doi.org/10.1016/S0141-9331(02)000 53-4 33. Zhang L, Chen Y, Yang B (2006) Task scheduling based on PSO algorithm in computational grid. Proc - ISDA 2006 Sixth Int Conf Intell Syst Des Appl 2:696–701. https://doi.org/10.1109/ ISDA.2006.253921 34. Al-Maamari A, Omara FA (2015) Task scheduling using PSO algorithm in cloud computing environments. Int J Grid Distrib Comput 8:245–256. https://doi.org/10.14257/ijgdc.2015.8. 5.24 35. Beegom ASA, Rajasree MS (2019) Integer-PSO: a discrete PSO algorithm for task scheduling in cloud computing systems. Evol Intell 12:227–239. https://doi.org/10.1007/s12065-019-002 16-7 36. Guo L, Zhao S, Shen S, Jiang C (2012) Task scheduling optimization in cloud computing based on heuristic algorithm. J Networks 7:547–553. https://doi.org/10.4304/jnw.7.3.547-553 37. Awad AI, El-Hefnawy NA, Abdel-Kader HM (2015) Enhanced particle swarm optimization for task scheduling in cloud computing environments. Procedia Comput Sci 65:920–929. https:// doi.org/10.1016/j.procs.2015.09.064 38. Ebadifard F, Babamir SM (2018) A PSO-based task scheduling algorithm improved using a load-balancing technique for the cloud computing environment. Concurr Comput 30 39. Salot P (2013) A survey of various scheduling algorithm in cloud computing environment. Int J Res Eng Technol 2(2):131–135 40. Kaur S, Verma A (2012) An efficient approach to genetic algorithm for task scheduling in cloud computing environment. Int J Inf Technol Comput Sci 4:74–79. https://doi.org/10.5815/ijitcs. 2012.10.09 41. Konar D, Sharma K, Sarogi V, Bhattacharyya S (2018) A multi-objective quantum-inspired genetic algorithm (Mo-QIGA) for real-time tasks scheduling in multiprocessor environment. Procedia Comput Sci 131:591–599. https://doi.org/10.1016/j.procs.2018.04.301 42. Gupta A, Garg R (2017) Load balancing based task scheduling with ACO in cloud computing. In: 2017 International conference computing applications ICCA 2017, pp 174–179.https://doi. org/10.1109/COMAPP.2017.8079781 43. Introduction I (2011) Improved ant colony optimization for grid scheduling. 1:596–604 44. Alworafi MA, Dhari A, El-Booz SA et al (2019) An enhanced task scheduling in cloud computing based on hybrid approach. Springer Singapore 45. Alsaidy SA, Abbood AD, Sahib MA (2020) Heuristic initialization of PSO task scheduling algorithm in cloud computing. J King Saud Univ—Comput Inf Sci.https://doi.org/10.1016/j. jksuci.2020.11.002
Impact of Spatial Distribution of Repeated Samples on the Geometry of Hyperplanes Reema Lalit
and Kapil
Abstract Support vector machines (SVMs) and their uses in various scientific domains have been the subject of extensive research in recent years. SVMs are among the most potent and reliable classification and regression algorithms in many application areas. In the proposed work, the impact of the location and of multiple occurrences of support vectors on the SVM is studied by observing the geometrical differences. Data points are generally repeated in the case of imbalanced classes to balance the data; otherwise, results will be biased toward the majority class. Multiple occurrences of the same data points change the behavior and orientation of the hyperplane, and the hyperplane also changes if support vectors are deleted or added. Keywords Support vectors · Support vector machine · Imbalanced class
1 Introduction

One of the most popular techniques for classification problems, such as disease detection [1, 2], text recognition [3], emotion detection [4], and face detection [5], is the support vector machine (SVM). For the underlying optimization problem, SVM provides a globally optimal solution by employing a maximum-margin strategy, and the notion of structural risk minimization is incorporated into SVM. Vapnik presented SVM as a machine learning model for applications including classification and regression.
R. Lalit (B) · Kapil National Institute of Technology, Kurukshetra, India e-mail: [email protected] Kapil e-mail: [email protected] R. Lalit Panipat Institute of Engineering and Technology, Samalkha, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_2
SVM’s outstanding generalization ability, optimal solution, and exclusionary capability have recently piqued the interest of the data mining, pattern recognition, and machine learning communities. SVM has proven to be a potent technique for handling real-world binary classification issues. SVMs have been demonstrated to outperform other supervised learning techniques. Due to their solid theoretical foundations and great generalization capacities, SVMs have become one of the most widely used classification approaches in recent years [6]. The remaining sections are arranged as follows: Sect. 2 provides a thorough theoretical analysis of the support vector machine. Section 3 details the proposed model of the implemented work, experiments, and results obtained, and finally, Sect. 4 presents the conclusion and future directions in this area of research.
2 Review of SVM Classifiers

Support vector machines were created by Vladimir Vapnik in 1979. As seen in Fig. 2, an SVM is a hyperplane that, with the largest feasible margin, separates a set of positive samples from a set of negative samples. In the linear case, the margin is the distance between the hyperplane and the closest positive and negative examples. Various versions of SVM are available, such as the twin SVM, least squares twin SVM, L1-norm-based TSVM, fuzzy SVM, and SVMs for multi-view and multi-class learning. Jayadeva and Chandra [7] proposed the twin support vector machine (TSVM), which produces two non-parallel hyperplanes by solving two smaller quadratic programming problems. Kumar et al. [8] proposed the least squares twin SVM, in which the inequality constraints are replaced by equality constraints. Wang et al. [9] present an L1-norm-based TSVM to increase the robustness of the TSVM model. To reduce the effects of outliers, fuzzy support vector machines came into the picture [10]. Multi-view learning further enhances the generalization of SVM-based models [11]. Richhariya and Tanveer [12] proposed reduced Universum twin support vector learning to address the issue of class imbalance, employing a small rectangular kernel matrix to shorten the computation time of the Universum-based approach. Ganaie and Tanveer [13] take neighborhood information into account by including it in the objective function's weight matrix.
2.1 Support Vector Machine (SVM)

In its linear form, SVM is a hyperplane that distinguishes between sets of positive and negative data samples. Numerous hyperplanes might be used to divide the two classes, but the one that generates the greatest margin is picked. The margin is determined by calculating the distance between the hyperplane and the nearest positive or negative data sample [14]. Let the training data be denoted by $T_d$:

$$T_d = \{(A_1, D_1), (A_2, D_2), \ldots, (A_m, D_m)\}, \qquad (1)$$
where $A_i \in \mathbb{R}^n$ and $D_i \in \{1, -1\}$ is the label of the $i$th observation, $i = 1, 2, \ldots, m$. In linear SVM, the following primal QPP is to be solved:

$$\min_{w,b,\xi} \; \frac{1}{2}\|w\|_2^2 + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.} \quad D(\phi(A)w + eb) \ge e - \xi, \qquad (2)$$
where $\xi$ is the slack variable and $C$ is a penalty parameter. The objective is to identify the best separating hyperplane

$$\phi(A)w + b = 0, \quad \text{where } x \in \mathbb{R}^n. \qquad (3)$$
The dual corresponding to Eq. (2) can be articulated as

$$\max_{\alpha,\mu} \; -\frac{1}{2}\alpha^{t} D\phi(A)\phi(A)^{t} D\alpha + e^{t}\alpha \quad \text{s.t.} \quad Ce \ge \alpha \ge 0, \; D^{t}\alpha = 0, \qquad (4)$$
where $D = \operatorname{diag}(D)$ and $\alpha \ge 0$ are the Lagrangian multipliers. Equation (4) may be rewritten as

$$\min_{\alpha,\mu} \; \frac{1}{2}\alpha^{t} D K(A, A^{t}) D\alpha - e^{t}\alpha \quad \text{s.t.} \quad Ce \ge \alpha \ge 0, \; D^{t}\alpha = 0. \qquad (5)$$
Here, $K(A, A^{t}) = \phi(A)\phi(A)^{t}$ represents the linear kernel function. To find the values of $b$ and $\xi$, we look for the support vectors, i.e., the data points with nonzero $\alpha$, which as per the KKT conditions satisfy:
$$D(\phi(A)_i w + e_i b) + \xi_i - e_i = 0, \qquad (6)$$
where $D$ is a diagonal matrix and $e_i = 1$. Since the numerical solution never yields $\alpha_i$ exactly equal to zero, we treat $|\alpha| \le 10^{-10} = \text{threshold}$ as zero, and the support vectors are the points that lie on the hyperplane or inside the margin. Variables $b$ and $w$ are calculated by Eqs. (7) and (8), respectively. Considering $\xi_i = 0$ and $\mu = 0$,

$$b = D_i - \sum_{j \in sv} \alpha_j D_j \phi(A)_j^{t} \phi(A)_i, \quad \text{s.t. } \alpha \ge \text{threshold and } \alpha \le C - \text{threshold}. \qquad (7)$$

The mean of $b$ over all support vectors is then taken.
$$w = A^{t} D\alpha. \qquad (8)$$
The value of $b$ with support vectors can also be calculated via Eq. (9):

$$(n_+ + n_-)\, b = (n_+ - n_-) - \alpha^{t} D A \sum_{i} x_i. \qquad (9)$$
Here, $n_+$ and $n_-$ are the numbers of support vectors belonging to the positive and negative classes, and $x_i$ denotes the data points of both classes. For a support vector belonging to the positive class, $\xi_i$ is calculated by Eq. (10); for one belonging to the negative class, $\xi_i^*$ is calculated by Eq. (11):

$$\xi_i = 1 - x_i \phi(A)^{t} D\alpha - b, \qquad (10)$$

$$\xi_i^* = 1 + x_i \phi(A)^{t} D\alpha + b. \qquad (11)$$
In Eq. (10), $x_i$ is a positive-class data sample, while in Eq. (11) it is a negative-class data sample.
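The quantities defined in Eqs. (3)–(11) can be computed directly once a linear SVM has been trained. The sketch below does this with scikit-learn on a small synthetic dataset rather than the authors' MATLAB implementation; the data, the value of C, and the variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Small synthetic binary problem standing in for (A, D): samples A and labels D in {+1, -1}.
A = np.vstack([rng.normal(-1.0, 0.8, size=(40, 2)), rng.normal(1.0, 0.8, size=(40, 2))])
D = np.hstack([-np.ones(40), np.ones(40)])

clf = SVC(kernel="linear", C=1.0).fit(A, D)

w = clf.coef_.ravel()                  # hyperplane normal, the role of w = A^t D alpha in Eq. (8)
b = clf.intercept_[0]                  # bias of the separating hyperplane in Eq. (3)
alpha_signed = clf.dual_coef_.ravel()  # D_i * alpha_i for the support vectors only
support_vectors = clf.support_vectors_

# Slack values xi_i = max(0, 1 - D_i (x_i . w + b)), the per-class quantities of Eqs. (10)-(11).
xi = np.maximum(0.0, 1.0 - D * (A @ w + b))

print("support vectors per class:", clf.n_support_)
print("largest slack values:", np.round(np.sort(xi)[-5:], 3))
```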
3 Experimental Setup, Results, and Analysis

In this part, an experiment on an artificially generated dataset is undertaken. The experiments are conducted in MATLAB 2019 on a system with 8 GB RAM, 1 TB storage, and an Intel Core i7 processor with a processing speed of 3.0 GHz. As for the choice of kernel, the linear kernel is used.
3.1 Dataset Used

For the proposed work, a two-cluster, normally distributed dataset with two features X1 and X2 is generated and divided into two classes. The dataset is imbalanced, with 500 and 1000 data samples for the positive and negative classes, respectively, i.e., a 1:2 imbalance as shown in Fig. 1. A few data samples from each class lie in the overlapped region. The dataset dimensions are 1500 × 2. The SVM classifier is implemented on the artificially generated dataset, and the support vectors are calculated and plotted on the classifier as shown in Fig. 2. We then note the geometrical differences after repeating support vectors at different locations. Firstly, the values of ξ and ξ* are divided into ten different bins for the positive and negative classes, respectively. The distribution of the positive and negative classes in the various bins is shown in Figs. 3 and 4. Two bins of size (0 to 0.5) and (0.5 to 1.0) from the values of ξ and ξ* are created, and their data samples are repeated for both the positive and negative classes.
Fig. 1 Artificially generated dataset with normal distribution
Fig. 2 SVM hyperplane with support vectors without repeating the data samples
After repeating the data samples of the positive class, the entire hyperplane shifts in the upward direction, as shown in Figs. 5 and 6. Similarly, when data samples of the negative class are repeated, the entire hyperplane shifts in the downward direction. Most of the ξ and ξ* values obtained are positive, but some are negative because the corresponding points lie very close to the hyperplane.
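A minimal sketch of this setup is given below: it generates a comparable imbalanced 1:2 dataset, fits a linear SVM, and splits the positive-class slack values into the two bins described above. The cluster centres, spreads, and random seed are assumptions, so the exact bin counts will differ from the paper's.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two-feature, normally distributed clusters with a 1:2 class imbalance (500 positive, 1000 negative).
X_pos = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(500, 2))
X_neg = rng.normal(loc=[0.0, 1.0], scale=0.3, size=(1000, 2))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(500), -np.ones(1000)])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

# Slack values of the positive class, then the two bins of width 0.5 used in the experiment.
xi_pos = np.maximum(0.0, 1.0 - (X_pos @ w + b))
bin1 = X_pos[(xi_pos > 0.0) & (xi_pos <= 0.5)]
bin2 = X_pos[(xi_pos > 0.5) & (xi_pos <= 1.0)]
print("positive samples in bin (0, 0.5]:", len(bin1), "and in bin (0.5, 1.0]:", len(bin2))
```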
3.2 Algorithm

Step 1. Define a matrix X of size m × n. Define the vector of ones e and the slack variables ξ and ξ* for the positive and negative classes, respectively.
Fig. 3 Histogram showing the value of ξ for the positive class
Fig. 4 Histogram showing the value of ξ * for the negative class
Step 2. Implement the SVM classifier from Eq. (5), considering C = 1 and threshold = $10^{-10}$.
Step 3. Calculate the support vector data points where α ≥ threshold and plot them.
Step 4. Calculate the values of b and w from Eqs. (7) and (8), respectively.
Step 5. Calculate the values of ξ and ξ* from Eqs. (10) and (11), respectively.
Step 6. Divide the values of ξ and ξ* into bins, repeat the data points of a particular bin n times at a time, and note the geometrical differences in the SVM.
Step 7. Repeat the support vectors at different locations and note the geometrical differences in the SVM (a sketch of this repeat-and-retrain procedure is given after the case list below). The locations at which support vectors are repeated are arranged in four cases.
Case 1: Left-side SVs of the positive class.
Fig. 5 Repeating the values of Bin1 (0–0.5) for positive class samples, 10 times
Fig. 6 Repeating the values of Bin2 (0.5–1.0) positive class samples, 10 times
Case 2: Right-side SVs of the positive class.
Case 3: Left-side SVs of the negative class.
Case 4: Right-side SVs of the negative class.
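A sketch of the repeat-and-retrain procedure for one such case is shown below. The window corresponds to Case 1 from the text; the synthetic data generator is the same assumed one as in the earlier sketch, so the number of points caught in the window will generally differ from the paper's eight.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (500, 2)), rng.normal([0.0, 1.0], 0.3, (1000, 2))])
y = np.hstack([np.ones(500), -np.ones(1000)])

def fit_hyperplane(X, y):
    """Fit a linear SVM and return (w, b) of the separating hyperplane."""
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]

w0, b0 = fit_hyperplane(X, y)

# Case 1 window from the text: positive-class points with X1 in (-0.6, -0.2) and X2 in (0, 0.5).
mask = (y == 1) & (X[:, 0] > -0.6) & (X[:, 0] < -0.2) & (X[:, 1] > 0.0) & (X[:, 1] < 0.5)
X_aug = np.vstack([X, np.repeat(X[mask], 10, axis=0)])   # repeat the selected points 10 times
y_aug = np.hstack([y, np.ones(10 * mask.sum())])

w1, b1 = fit_hyperplane(X_aug, y_aug)
print("repeated", int(mask.sum()), "points 10 times")
print("hyperplane before:", np.round(w0, 3), round(float(b0), 3))
print("hyperplane after :", np.round(w1, 3), round(float(b1), 3))
```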
3.3 Case 1 and Case 3: Repeating the Left-Side Support Vectors Belonging to the Positive and Negative Classes

Consider Fig. 2 for the original data. In this case, data points of the positive class with input features in the range X1(− 0.6 to − 0.2) and X2(0 to 0.5) are repeated. In total, eight data points fall in this range. As these left-side data points of the positive class are repeated, the contour shifts in the upward direction from that particular location, as shown in Fig. 7.
Fig. 7 Repeating the data points with range X1(− 0.6 to − 0.2) and X2(0 to 0.5), 10 times
Similarly, data points of the negative class with input features in the range X1(− 0.6 to 0) and X2(0.4 to 0.6) are repeated. In total, 21 data points fall in this range. As these left-side data points of the negative class are repeated, the contour shifts in the left downward direction from that particular location, as shown in Fig. 9.
3.4 Case 2 and Case 4: Repeating the Right-Side Support Vectors Belonging to the Positive and Negative Classes

Consider Fig. 2 for the original data. In this case, data points of the positive class with input features in the range X1(0.2 to 0.4) and X2(0 to 0.5) are repeated; in total, eight data points fall in this range. As these right-side data points of the positive class are repeated, the contour shifts in the upward direction from that particular location, as shown in Fig. 8. Similar behavior is observed for the negative class: data points with input features in the range X1(0 to 0.4) and X2(0.4 to 0.6) are repeated; in total, 21 data points fall in this range, and the hyperplane shifts in the right downward direction from that particular location, as shown in Fig. 10. All the positive-class and negative-class cases are summarized in Table 1.
Fig. 8 Repeating the data points with range X1(0.2 to 0.4) and X2(0 to 0.5), 10 times
Fig. 9 Repeating the data points with range X1(− 0.6 to 0) and X2(0.4 to 0.6), 10 times
4 Conclusion and Future Directions

In this paper, we presented a novel point of view on the SVM by discussing the impact of the spatial distribution of repeated samples on the geometry of hyperplanes. As seen in the proposed work, by repeating samples at a particular location a specified number of times, the hyperplane can shift its position. This means that the average error can be reduced, which can further reduce the misclassification of data samples. In the future, the spatial distribution of repeated samples can be studied on variants of SVM.
Fig. 10 Repeating the data points with range X1(0 to 0.4) and X2(0.4 to 0.6), 10 times
Table 1 Impact of repeating SVs at different locations of the positive and negative classes on the SVM classifier

| Case | Range to repeat points (X1 feature) | Range to repeat points (X2 feature) | No. of repeated points | Result |
|---|---|---|---|---|
| Case 1 | − 0.6 to − 0.2 | 0 to 0.5 | 8 | The hyperplane moved in the left upward direction as shown in Fig. 7 |
| Case 2 | 0.2 to 0.4 | 0 to 0.5 | 8 | The hyperplane moved in the right upward direction as shown in Fig. 8 |
| Case 3 | − 0.6 to 0 | 0.4 to 0.6 | 21 | The hyperplane moved in the left downward direction as shown in Fig. 9 |
| Case 4 | 0 to 0.4 | 0.4 to 0.6 | 21 | The hyperplane moved in the right downward direction as shown in Fig. 10 |
References 1. Richhariya B, Tanveer M (2018) EEG signal classification using universum support vector machine. Expert Syst Appl 106:169–182. https://doi.org/10.1016/j.eswa.2018.03.053 2. Eke CS, Jammeh E, Li X, Carroll C, Pearson S, Ifeachor E (2021) Early detection of Alzheimer’s disease with blood plasma proteins using support vector machines. IEEE J Biomed Health Inform 25(1):218–226. https://doi.org/10.1109/jbhi.2020.2984355 3. Liu Z, Lv X, Liu K, Shi S (2010) Study on SVM compared with the other text classification methods. In: 2010 Second international workshop on education technology and computer science. https://doi.org/10.1109/etcs.2010.248 4. Sepúlveda A, Castillo F, Palma C, Rodriguez-Fernandez M (2021) Emotion recognition from ECG signals using wavelet scattering and machine learning. Appl Sci 11(11):4945. https://doi. org/10.3390/app11114945 5. Raji ID, Fried G (2021) About face: a survey of facial recognition evaluation. ArXiv: Computer Vision and Pattern Recognition. https://arxiv.org/pdf/2102.00813
6. Ramirez-Padron, R. (2007). A roadmap to svm sequential minimal optimization for classification. Tutorial online. 7. Jayadeva KR, Chandra S (2007) Twin support vector machines for pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910.https://doi.org/10.1109/tpami.2007.1068 8. Arun Kumar M, Gopal M (2009) Least squares twin support vector machines for pattern classification. Expert Syst Appl 36(4):7535–7543. https://doi.org/10.1016/j.eswa.2008.09.066 9. Wang C, Ye Q, Luo P, Ye N, Fu L (2019) Robust capped L1-norm twin support vector machine. Neural Netw 114:47–59. https://doi.org/10.1016/j.neunet.2019.01.016 10. Jiang X, Yi Z, Lv JC (2006) Fuzzy SVM with a new fuzzy membership function. Neural Comput Appl 15(3–4):268–276. https://doi.org/10.1007/s00521-006-0028-z 11. Tang J, Tian Y, Liu X, Li D, Lv J, Kou G (2018) Improved multi-view privileged support vector machine. Neural Netw 106:96–109. https://doi.org/10.1016/j.neunet.2018.06.017 12. Richhariya B, Tanveer M (2020) A reduced universum twin support vector machine for class imbalance learning. Pattern Recogn 102:107150. https://doi.org/10.1016/j.patcog.2019. 107150 13. Ganaie M, Tanveer M (2022) KNN weighted reduced universum twin SVM for class imbalance learning. Knowl-Based Syst 245:108578. https://doi.org/10.1016/j.knosys.2022.108578 14. Platt J (1998) Sequential minimal optimization: a fast algorithm for training support vector machines. Microsoft research technical report, 21. http://recognition.mccme.ru/pub/papers/ SVM/smoTR.pdf
IoT-Based Smart Farming for Sustainable Agriculture Geetan Manchanda, Bhumika Papnai, Aadi Lochab, and Shikha Badhani
Abstract The exponential growth of the population and environmental challenges such as climate change are some of the problems that significantly impact agriculture. The Indian agriculture sector needs an efficient method to improve the growth of food production while using resources sustainably. Emerging technologies like the Internet of Things (IoT) can provide India with a better and more sustainable agriculture sector. In this paper, we first present a glimpse of the role of IoT in agriculture. Then, we analyze and validate mathematically how various agricultural factors on which IoT works affect the productivity of different crops, using available agricultural datasets. Keywords Agriculture · IoT · Irrigation · Sensor · Sustainability · Yield
1 Introduction The agriculture sector is an indispensable sector of every country and becomes even more important, especially for a developing country like India. Agriculture is the primary source of livelihood for nearly 58% of India’s population and contributes about 17% to Gross Value Added (GVA) [1]. India is among the world’s leading producers of rice and wheat in terms of net production volume; agriculture has a vital role in import and export as well. Many industries depend on agriculture as it is the primary source of raw materials like cotton, jute, sugar, tobacco, oils, etc. According to the Department for Promotion of Industry and Internal Trade (DPIIT), a cumulative Foreign Direct Investment (FDI) equity inflow of about US$ 9.08 billion was achieved from April 2000 to 2019 in the agriculture sector alone [2]. A significant contribution toward any country’s growth is derived from agriculture. G. Manchanda · B. Papnai · A. Lochab Department of Mathematics, Maitreyi College, University of Delhi, Delhi, India e-mail: [email protected] S. Badhani (B) Department of Computer Science, Maitreyi College, University of Delhi, Delhi, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_3
With the increasing global population estimated to touch 9.6 billion by 2050, advancement in the agriculture sector is a must to feed the growing population [3]. However, farmers in India still use manual methods for crop monitoring, irrigation, and other activities. These manual methods take time and sometimes cannot detect the exact situation, leading to poor crop yield. Therefore, food security is a crucial issue in India. According to the Food and Agriculture Organization of the UN (FAO), it is estimated that over 189.2 million people go hungry every day in the country [4]. Adopting sustainable farming practices can increase both productivity and reduce ecological harm as it will help produce a greater agricultural output while using less land, water, and energy, ensuring profitability for the farmers. Sustainable agriculture is defined as a system that helps conserve resources and reduces agricultural practices that pose a threat to the environment [5]. The use of innovations like the Internet of Things (IoT) in farming could have the best results against the challenges (like adverse environmental conditions, climate change, increasing expenses, wastage of resources, etc.) in the future [6]. IoT is a system of interrelated networks of physical tools with sensors, software, and other technological equipment that can transfer and collect data with other devices or systems over the internet without requiring human interference [7]. Smart farming with big data and advanced analytics technology includes automation, adding senses and analytics to modern agriculture. The use of technology will not only help provide better yield and less labor effort, but it will also revolutionize agriculture for farmers in India. The potential of IoT in the agricultural sector motivated us to explore the same in this research work. The significant contributions of this paper are as follows: • We first present a review of Internet of Things (IoT) as an intelligent farming solution that has the potential to overcome the problems faced in Indian agriculture and stimulate sustainable agriculture. • Then, we analyze and validate mathematically how the agricultural factors on which IoT works affect the productivity of various crops using available agricultural datasets. To validate the role of IoT in agriculture mathematically, we have used R Studio’s [8] “agridat” package [9]. The rest of the paper is organized as follows. In Sect. 2, we describe the methodology of this work. Then, Sect. 3 presents the results, and Sect. 4 presents the discussion. Lastly, we conclude in Sect. 5.
2 Methodology We started our work by collecting information about the role and need for IoT in Indian agriculture and its applications. Then, we experimented using R Studio’s “agridat” package and selected some of the available datasets to statistically prove the benefits of using IoT devices for sustainable agriculture.
This work was divided into two stages. In the first stage, we analyze the role of IoT in agriculture. In the second stage, we use the “agridat” package available in RStudio to analyze the effect of various factors on crop yield using the available datasets.
2.1 IoT in Indian Agriculture and Its Applications

In this section, we explore how IoT can be beneficial for sustainable agriculture and how it has the potential to overcome various problems in the agricultural sector. Agricultural problems in India: The success of the agricultural sector depends on various factors such as climate, irrigation, soil quality, humidity, seeds, and pesticides. The problems associated with these factors therefore affect agricultural production too. Some of the significant factors are discussed below:
• Climatic conditions: Climate change harms agricultural produce. A rise has been noticed in India's mean temperature, and the frequency of rainfall events has increased in the last three decades. These climatic changes are likely to affect agricultural yield negatively, and these changing circumstances direct us to monitor climatic conditions [10].
• Irrigation: Irrigation is an essential input for agriculture in every country. The yield of a crop depends on how it is watered. In a tropical country like India, where the rainfall pattern is uncertain and irregular, irrigation is the only hope to sustain agriculture. However, over-irrigation has its own ill effects: large areas of land in Punjab and Haryana have become useless due to faulty irrigation that led to salinity, alkalinity, and water-logging [11].
• Soil Quality: Soil quality is one of the most essential components of good crop health. Soil mismanagement and land misuse adversely affect soil health. Farming practices like in-field burning of crop residues, excessive digging or tillage, flood-dependent irrigation, and indiscriminate use of chemicals often lead to degradation of soil health [12]. Degrading soil health shows the dire need to monitor it.
• Humidity: Humidity refers to the amount of water vapor present in the air and is often expressed in terms of Relative Humidity (RH), the percentage of water vapor in the air at a given temperature and pressure. Neither very high nor very low RH leads to high grain yield, and both can further contribute to greater usage of pesticides, which has its own ill effects [13].
• Seeds, Fertilizers, and Pesticides: Seeds, fertilizers, and pesticides constitute the three pillars of modern agriculture, and their main task is to enhance agricultural productivity. Seeds are the most essential input as far as agriculture is concerned. It has been observed that many farmers still use common grain saved from the previous crop as seed and cannot distinguish between common grain and seed; using common grain as seed affects productivity. Judicious and optimal use
of fertilizers is necessary to meet the future demand for food with the increasing population. Based on the study reports of the National Institute of Agricultural Economics and Policy (NIAP), one-third of the major states apply excess nitrogen and two-thirds of them apply nitrogen below the optimum level [14]. There are similar regional imbalances in the use of Potassium (K) and Phosphorus (P). This further stresses the use of modern technology for the right mix of crops. In India, a drop in the crop yield has been found due to pests including weeds, insect pests, diseases, nematodes, and rodents, ranging from 15 to 25% causing a loss of 0.9 to 1.4 lakh crore rupees annually [15]. IoT as a solution: Crop yield is the measure of grains that are produced from a given land of the plot. It is the most important factor in agriculture as it measures the performance of the farmer and depicts in totality the efforts and resources invested in the development of plants on the fields. Increasing crop yield is the main aim of every farmer and one of the common ways to do so is effectively improving crop management which includes preparation of soil, sowing of seeds, the addition of manures and fertilizers, irrigation, protection from weeds, harvesting, and storage. The above management decisions should be used efficiently in reducing losses and improving quality. Using IoT to control and monitor devices at the farm which eventually collects the data from the sowing of seeds to harvesting makes it an easier task to improve the crop yield without wastage of any resource. IoT plays a very important role in smart agriculture; IoT sensors are capable of providing information about agriculture fields. IoT agricultural monitoring system makes use of sensor networks that collects data from different sensors deployed at various nodes and sends it through the wireless protocol. The primary data flow mechanism used by sensors allows them to sense, store, present, evaluate, decide, and control by receiving real-time data feeds on a variety of gadgets, such as smart phones and tablets [16]. The main function of IoT gadgets is live monitoring of environmental data in terms of temperature, moisture, and other types depending on the sensors integrated with it, and then, farmers can implement smart farming by getting live data feeds on various devices like smartphones, tablets, etc. The data generated via sensors can be easily shared and viewed by agriculture consultants via cloud computing technology [17]. Various sensors that are used in the IoT devices [15] for agriculture to gather information are discussed below: • Temperature Sensor: The DS18B20 temperature sensor [18] provides 9-bit to 12bit Celsius and it also has an alarm function with non-volatile user-programmable upper and lower trigger points. The biggest changing range of soil temperature is 0–40° and the optimum average range required for plant growth is 20–30 °C [19]. The DS18B20 has 64-bit serial code which allows multiple DS18B20s to function on the same 1-wire bus. • Soil Moisture Sensor: The soil moisture sensor has two large exposed conductors which function as probes for the sensor, together acting as a variable resistor. When the water level is low in the soil, the conductivity will be low, and thus, the analog voltage will be low and this analog voltage keeps increasing as the
conductivity between the electrodes in the soil changes. In this way, soil moisture is detected by the sensor [17]. • Light Intensity Sensor: All crops react differently and have different physiologies to deal with light intensity. Thus, the farmers need to provide sufficient light of at least 8–10 h per day to have healthy growth. Using smart farming techniques which include the sensor system that controls the light intensity could be a better option as it is time-efficient. There are several types of light sensors that is photoresistors, photodiodes, and phototransistors. They are used for the automated light intensity monitoring process as they separate the substance of light in a growth chamber and increase or decrease the brightness of the light to have an accurate level [20]. The use of IoT in the Indian agricultural sector has been widely promoted by the Government of India as well. The Government of India has introduced new schemes to help Indian farmers in the advancement of Indian agriculture utilizing the concept of smart farming. Some of the Government initiatives and schemes are described below: • Mobile apps The Government of India has launched several mobile applications for farmers which provide information on agriculture activities, free of cost, for the benefit of farmers and other stakeholders [21]. Crop Cutting Experiments (CCE) Agri Mobile App: This app collects crop cutting experiment data and has a special quality as it works on both online and offline modes. Internet is required for only installing this app on mobile and for registration. After this data can be added to the CCE app without internet and when internet connectivity is available, data can be pushed to the server [22]. Kisan Suvidha: This app provides information related to weather (humidity, temperature, etc.), market prices, plant protection techniques, cold stores, godowns, and agro-advisory section which shows messages for farmers in different local languages. This app also directly connects the farmer with the kisan call center where technical graduates answer farmers’ queries [23]. • Agriculture events Following are some of the events and projects organized by the Government of India to promote smart farming. Agri India Hackathon: The Agri India Hackathon is organized by Pusa Krishi, ICAR—Indian Agricultural Research Institute (IARI), Indian Council of Agricultural Research (ICAR) and Department of Agriculture, Cooperation and Farmers’ Welfare, Ministry of Agriculture and Farmers’ Welfare. It is the largest virtual gathering to boost up the advancements in agriculture. Agri India Hackathon discussed precision farming including the application of sensors, WSN, ICT, AI, IoT, and drones. Precision livestock and aquaculture are also a goal of this initiative [24]. SENSAGRI Project for Drone Based Agriculture Technology: SENSAGRI is “SENsor-based Smart AGRIculture” formulated by the Indian Council of Agricultural Research (ICAR) through the Indian Agricultural Research Institute (IARI).
The main objective is to develop an indigenous prototype for a drone-based crop and soil health monitoring system using hyperspectral remote sensing (HRS). It has a feature to smoothly scout over farm fields, gathering precise information and transmitting the data on a real-time basis. It will be an advantage in the farming sector at regional/local scale for assessing land and crop health: extent, type, and severity of damage besides issuing forewarning, post-event management, and settlement of compensation under crop insurance schemes [25].
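To make the monitoring pipeline of Sect. 2.1 concrete, the sketch below shows one sense–decide–report cycle of a hypothetical field node. The sensor-reading functions, the moisture threshold, and the reporting step are placeholders; a real deployment would read the DS18B20 over 1-Wire, read the soil-moisture probe through an ADC, and push the values to a cloud dashboard or mobile app.

```python
import time

# Placeholder sensor-access stubs: on real hardware these would wrap the DS18B20 (1-Wire)
# temperature sensor and an analog soil-moisture probe; the constant values are illustrative only.
def read_temperature_c():
    return 24.5            # stands in for a DS18B20 reading in degrees Celsius

def read_soil_moisture_pct():
    return 18.0            # stands in for a calibrated soil-moisture percentage

SOIL_DRY_THRESHOLD = 25.0  # assumed crop-specific set point, not a value from the paper

def monitoring_cycle():
    """One sense-decide-report cycle of the kind described above."""
    temperature = read_temperature_c()
    moisture = read_soil_moisture_pct()
    irrigate = moisture < SOIL_DRY_THRESHOLD
    # A deployed node would transmit this record over a wireless protocol to the cloud.
    print(f"temp={temperature:.1f} C  moisture={moisture:.1f}%  irrigation={'ON' if irrigate else 'OFF'}")
    return irrigate

if __name__ == "__main__":
    for _ in range(3):     # a real node would loop indefinitely at a much lower sampling rate
        monitoring_cycle()
        time.sleep(1)
```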
2.2 Agridat Package in RStudio RStudio [8] is an open-source statistics software environment that provides a free resource for modern statistics computing. The basic RStudio download includes a range of core tools, but the real strength of RStudio is in the contributed packages that extend and generalize the code language. “Agridat” is a recently contributed package that provides access to real datasets from a large number of published agricultural research papers. The “agridat” datasets are formatted as a data frame [9]. We selected two datasets from the “agridat” package, namely “gregory.cotton” (a factorial experiment of cotton in Sudan) and “gumpertz.pepper” (phytophthora disease incidence in a pepper field) [9], and performed an analysis on these datasets.
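The analysis in this work is carried out in RStudio on the agridat data frames. As a rough illustration of the same summaries outside R, the sketch below assumes the data frame has first been exported to CSV (e.g., with write.csv in R); the file name is an assumption, while the column names follow the agridat documentation quoted in Sect. 3.

```python
import pandas as pd

# Assumes a prior export from R, e.g.:
#   write.csv(agridat::gregory.cotton, "gregory_cotton.csv", row.names = FALSE)
cotton = pd.read_csv("gregory_cotton.csv")

# Mean cotton yield per irrigation level (I1 = light, I2 = medium, I3 = heavy) and per nitrogen level.
print(cotton.groupby("water")["yield"].mean())
print(cotton.groupby("nitrogen")["yield"].mean())
```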
3 Results

In this section, we present and discuss the results of our analysis. We deduced the following results after performing the statistical analysis on the datasets “gregory.cotton” and “gumpertz.pepper” [9].
• The “gregory.cotton” dataset comes from a factorial experiment of cotton conducted in Sudan and includes 144 observations on the following six variables: yield (a numeric vector), year, nitrogen (nitrogen level), date (sowing date), water (irrigation amount), and spacing (spacing between plants). We analyzed the effect of water and nitrogen level on the yield of these crops, and the findings are explained below: (a) Yield and water—Cotton yield is very much dependent on the amount and frequency of irrigation water [26]. The effect on yield was studied at three irrigation levels: I1 = Light, I2 = Medium, and I3 = Heavy. The yield was found to be maximum under heavy irrigation, as depicted in Figs. 1 and 2. (b) Nitrogen level and yield—Nitrogen is a crucial nutrient that plays an important role in the photosynthesis, growth, and yield of cotton crops [27]. The effect on yield was studied at two nitrogen levels: L = none/control and H = 600 rotls/feddan. The yield was found to be maximum under the H nitrogen level, as shown in Figs. 3 and 4.
Fig. 1 Scatter plot of yield and irrigation level Fig. 2 Boxplot of irrigation level and yield
Fig. 3 Boxplot of nitrogen level and yield
• The “gumpertz.pepper” dataset records phytophthora disease incidence in a pepper field and includes 800 observations on the following six variables: field (field factor), row (x-ordinate), quadrant (y-ordinate), disease (presence (Y) or absence (N) of disease), water (soil moisture percent), and leaf (leaf assay count). We analyzed the dependency of the presence of disease in the crop on soil moisture. Presence of disease versus soil moisture level: Soil moisture status has an important role in the incidence of disease in plants [28]. The presence of disease could be detected by checking the soil moisture level, i.e., plants with a high soil moisture percentage show the pattern of presence of disease, as depicted in Fig. 5.
Fig. 4 Scatter plot of yield and nitrogen Fig. 5 Boxplot presence of disease and yield
Although various other factors affect the presence of disease, this can serve as an initial warning sign.
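The soil-moisture comparison behind Fig. 5 can be reproduced with a simple grouped boxplot. The sketch below again assumes the gumpertz.pepper data frame has been exported to CSV beforehand; the file name is an assumption, and the column names (disease, water) are those listed above.

```python
import pandas as pd
import matplotlib.pyplot as plt

pepper = pd.read_csv("gumpertz_pepper.csv")   # assumed CSV export of agridat::gumpertz.pepper

# Soil moisture split by presence/absence of phytophthora, mirroring the boxplot of Fig. 5.
pepper.boxplot(column="water", by="disease")
plt.suptitle("")
plt.title("Soil moisture vs. presence of phytophthora")
plt.ylabel("soil moisture (%)")
plt.savefig("disease_vs_moisture.png", dpi=150)

print(pepper.groupby("disease")["water"].median())
```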
4 Discussion

In this section, we discuss our findings on the merits of IoT for sustainable and advanced agriculture and conclude that IoT is the need of the hour for good crop yield, although it brings some challenges. Key findings from the agridat package datasets: Crop yield depends on irrigation, as depicted in Figs. 1 and 2, and if proper irrigation is not provided, crop production suffers. IoT devices such as soil sensors combined with cloud-based data analytics can monitor the need for water in the soil and thereby allow farmers to determine when they should irrigate their farms. This will not only help conserve water but also prevent over-irrigation, which can affect yield adversely. To minimize losses and increase efficiency in cotton plants, nitrogen (N) fertilizer should be applied as close as possible to the time it will be taken up by the plant, indicating that cotton requires varying amounts of N throughout its growth, as depicted
in Figs. 3 and 4. By using smart devices, we can automate multiple processes across our production cycle, increasing yield efficiency through automation. Example: It will help monitor the plant’s requirements for nutrients in the soil. If any plant was wilted, dead, or had lesions, the phytophthora disease was considered present in the plot, as depicted in Fig. 5. IoT devices help in crop management as they can monitor crop growth and any anomalies to prevent diseases or infestations that could harm the yield effectively. Challenges for IoT in Agriculture Sector: Farmers cannot take full advantage of this technology due to poor Infrastructure. There is a problem with Internet accessibility in farms located in remote areas. In such cases, the monitoring systems these farmers use become unreliable and useless. The machinery used in the implementation of the IoT system is costly. The sensors used in this system are the least expensive, but fitting this system in the agricultural field is too costly.
5 Conclusion

In conclusion, agriculture plays a vital role in developing a country, and introducing smart farming can be helpful for future needs. The Internet of Things (IoT) is an up-and-coming technology that works in different areas to improve time efficiency, crop management, water management, control of disease, etc. We found that IoT minimizes human effort and assists the agricultural sector in achieving sustainable development goals. To validate that IoT affects agricultural yield and can help achieve sustainability in agriculture, we analyzed various agricultural datasets and validated the use of IoT in agriculture. The validation methodology presented in this work can also aid in identifying new IoT-powered solutions by studying the effect of various factors on agriculture through the available datasets. Acknowledgements This research follows from the project work done as part of the Summer Internship Programme (SIP) 2020–21 organized by the Centre for Research, Maitreyi College, University of Delhi.
References 1. Annual Report 2020. ICAR, Government of India, Ministry of Agriculture & Farmers Welfare. https://icar.gov.in/sites/default/files/ICAR-AR-2020-English.pdf. Last accessed 12 Apr 2022 2. The emerging scope of agri-tech in India. https://www.investindia.gov.in/team-india-blogs/eme rging-scope-agri-tech-india. Last accessed 12 Apr 2022 3. Balakrishna G, Nageshwara Rao M (2019) Study report on using IoT agriculture farm monitoring. Lect Notes Networks Syst 74:483–491. https://doi.org/10.1007/978-981-13-7082-3_ 55 4. IFBN: hunger in India. https://www.indiafoodbanking.org/hunger. Last accessed 12 Apr 2022
5. D’souza G, Cyphers D, Phipps T (1993) Factors affecting the adoption of sustainable agricultural practices. Agric Resour Econ Rev 22:159–165. https://doi.org/10.1017/s10682805000 04743 6. Salecha M (2022) Smart farming: IoT in agriculture. https://analyticsindiamag.com/smart-far ming-iot-agriculture/. Last accessed 12 Apr 2022 7. Ministry of Electronic and Information Technology: IoT Policy Document. https://meity.gov. in/sites/upload_files/dit/files/Draft-IoT-Policy%281%29.pdf. Last accessed 12 Apr 2022 8. RStudio: RStudio: Integrated development environment for R. www.rstudio.com. Last accessed 12 Apr 2022 9. Wright K (2022) “agridat”: agricultural datasets. R package version 1.20. https://cran.r-project. org/package=agridat. Last accessed 12 Apr 2022 10. Effect of climate change on agriculture. Press Information Bureau Government of India Ministry of Agriculture & Farmers Welfare. https://pib.gov.in/Pressreleaseshare.aspx?PRID= 1696468. Last accessed 12 Apr 2022 11. Krar P. Parts of Haryana have salty groundwater and rains add to the salt content. https://economictimes.indiatimes.com/news/economy/agriculture/parts-of-haryanahave-salty-groundwater-and-rains-add-to-the-salt-content/articleshow/71342070.cms 12. Soil health is degraded in most of regions of India. https://www.livemint.com/news/india/soil-health-is-degraded-in-most-regions-of-india-11595225689494.html. last accessed 12 Apr 2022 13. Agrometeorology: relative humidity and plant growth. https://agritech.tnau.ac.in/agriculture/ agri_agrometeorology_relativehumidity.html. Last accessed 12 Apr 2022 14. Raising agricultural productivity and making farming remunerative for farmers. https://www. niti.gov.in/sites/default/files/2019-08/RaisingAgriculturalProductivityandMakingFarmingR emunerativeforFarmers.pdf. Last accessed 12 Apr 2022 15. Vennila S, Lokare R, Singh N, Ghadge SM, Chattopadhyay C (2022) Crop pest surveillance and advisory project of Maharashtra. https://farmer.gov.in/imagedefault/handbooks/Boo KLet/MAHARASHTRA/20160725144307_CROPSAP_Booklet_for_web.pdf. Last accessed 12 Apr 2022 16. Meera SN, Kathiresan C (2022) Internet of Things (IoT) in agriculture industries. https:/ /aphrdi.ap.gov.in/documents/Trainings@APHRDI/2017/8_aug/IOT/ShaikNMeera1.pdf. Last accessed 12 Apr 2022 17. Nayyar A, Puri V (2017) Smart farming: IoT based smart sensors agriculture stick for live temperature and moisture monitoring using Arduino, cloud computing & solar technology. In: Communication and computing systems—proceedings of the international conference on communication and computing systems, ICCCS 2016, pp 673–680. https://doi.org/10.1201/ 9781315364094-121 18. DS18B20+T&R. https://in.element14.com/maxim-integrated-products/ds18b20-t-r/temper ature-sensor-0-5deg-c-to/dp/2515605. Last accessed 12 Apr 2022 19. Aniley AA, Kumar N, Kumar A (2017) Soil temperature sensors in agriculture and the role of nanomaterials in temperature sensors preparation. Int J Eng Manuf Sci 7:2249–3115 20. Lakhiar IA, Jianmin G, Syed TN, Chandio FA, Buttar NA, Qureshi WA (2018) Monitoring and control systems in agriculture using intelligent sensor techniques: a review of the aeroponic system. J Sensors 2018. https://doi.org/10.1155/2018/8672769 21. Mobile apps empowering farmers. https://www.manage.gov.in/publications/edigest/dec2017. pdf. Last accessed 12 Apr 2022 22. Crop cutting experiment. http://krishi.maharashtra.gov.in/Site/Upload/Pdf/CCE_App_Tut orial_Primary_Worker_Maharashtra.pdf. Last accessed 12 Apr 2022 23. Kisan Suvidha. 
http://mkisan.gov.in/aboutmkisan.aspx. Last accessed 12 Apr 2022 24. Agri India Hackathon. https://innovateindia.mygov.in/agriindiahackathon/. Last accessed 12 Apr 2022 25. Agricultural situation in India. https://eands.dacnet.nic.in/PDF/August2016.pdf. Last accessed 12 Apr 2022
26. Hunsaker DJ, Clemmens AJ, Fangmeier DD (1998) Cotton response to high frequency surface irrigation. Agric Water Manag 37:55–74. https://doi.org/10.1016/S0378-3774(98)00036-5 27. Nitrogen fertility and abiotic stresses management in cotton crop: a review 28. Bowers JH (1990) Effect of soil-water matric potential and periodic flooding on mortality of pepper caused by Phytophthora capsici. Phytopathology 80:1447. https://doi.org/10.1094/ phyto-80-1447
ELM-Based Liver Disease Prediction Model Charu Agarwal, Geetika Singh, and Anurag Mishra
Abstract The liver is an important part of the digestive system. Unfortunately, an unhealthy diet affects the liver, and liver disease kills more than 2 million people worldwide every year. With so many people affected, it is important to develop computational disease prediction models. In the present work, we develop a liver disease prediction model based on the ELM classifier to classify whether or not a patient is suffering from liver disease. The dataset used in the work is a standard, freely available dataset known as the Indian liver patient dataset (ILPD). The computational results show that the proposed model outperforms other existing models. Keywords Liver disease · Extreme learning machine · Activation function · Accuracy score · Feedforward neural network
1 Introduction

The Greek word ‘Hepar’ means liver, which is the largest gland present in the human body. It is a cone-like structure present on top of the stomach, protected by the rib cage. Being a vital digestive organ, it is necessary to maintain a healthy liver; a healthy liver is key to a healthy body. Unfortunately, changes in lifestyle and dietary patterns that involve the intake of junk and canned food tend to impact the liver, causing it to lose its ability to work efficiently. C. Agarwal (B) Ajay Kumar Garg Engineering College, Dr. A.P.J. Abdul Kalam Technical University, Ghaziabad, Uttar Pradesh, India e-mail: [email protected] G. Singh KIET Group of Institutions, Dr. A.P.J. Abdul Kalam Technical University, Ghaziabad, Uttar Pradesh, India A. Mishra Deendayal Upadhyay College, University of Delhi, Delhi, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_4
Liver diseases can be classified into four stages: inflammation, fibrosis, cirrhosis, and liver failure [1]. Inflammation, also known as hepatitis, is the initial stage, in which the tissue of the organ tends to swell. Hepatitis is a viral infection of five types, i.e., A, B, C, D, and E. The second stage of liver disease is fibrosis, in which mild scarring of tissues starts to appear. The third stage is cirrhosis, the late-stage scarring of liver tissues, which is permanent. Cirrhosis can be further classified into compensated and decompensated stages: compensated cirrhosis is asymptomatic, while decompensated cirrhosis is symptomatic and the liver is unable to function well. Liver failure, the last stage, is life-threatening, and the only available treatment is an expensive liver transplant. Rapid recognition of liver illness is beneficial to a person's ability to live a healthy life. A Liver Function Test (LFT) and imaging can be done to diagnose the disease: a blood sample is collected and a report is generated after analyzing the sample, which includes parameters like SGPT, SGOT, total bilirubin, etc. Based on these parameters, the hepatologist prescribes medication and precautionary measures which can treat the individual. In India, liver disease is the tenth leading cause of death, and it is also a major cause of death in the USA. It was found that approximately two million people die due to one or another liver illness [2]. From the above facts, it can be inferred that manual analysis by hepatologists would be a tedious and difficult task; manual analysis is also error prone. To help the medical community, we can build fully automated analytical systems using a variety of advanced technologies to deliver efficient and accurate results. Various machine learning algorithms can be used to develop such models, and many researchers have worked on their development, as outlined below. Geetha and Arunachalam [3] used the ILPD dataset to evaluate the effectiveness of the SVM and LR algorithms in diagnosing liver disease. The authors examined SVM and LR for accuracy, precision, sensitivity, and specificity and found that SVM achieved higher accuracy (75.04%) than LR (73.23%). Various researchers examined different machine learning classification algorithms for liver disease prediction [4–10]. They applied algorithms such as SVM, LR, KNN, RF, and DT and computed the values of classification accuracy. After a thorough study of the literature, it is clear that there is a need to develop computer-based models that can more accurately predict liver disease with less computational effort. In this paper, we use the extreme learning machine (ELM) as a classifier to propose a more computationally efficient and accurate machine learning-based model for liver disease prediction. ELM is a fast single-layer feedforward neural network with good generalization capabilities. ELM has already been used successfully for various other classification tasks such as ECG classification [11], fingerprint recognition [12], watermarking [13], and face mask detection [14]. The proposed model is trained and tested on the ILPD dataset. We also examined ELM performance using different activation functions, because activation functions help the network model complex data, taking the input from a previous layer and transforming it into a format that is used as input to the next layer. ELM applies a nonlinear activation function to the hidden layer and then solves a linear system.
The activation functions used in this model are Sigmoid, Relu, Leaky_Relu, Tanh, Sin, Tanhre, and Swish, with different numbers of hidden neurons (8, 16, 32, 64, 128, 512, and 1024). The main contribution of the current experimental work is to establish the possibility of detecting liver diseases using the ELM algorithm. This paper is organized as follows. Section 2 presents the mathematical formulation of ELM and its activation functions. Section 3 presents details of the ILPD dataset. Section 4 presents the proposed methodology. Section 5 presents the results and discussion. Finally, Sect. 6 concludes the work.
2 Extreme Learning Machine

The Extreme Learning Machine, abbreviated as ELM, is a single hidden-layer feedforward neural network [15, 16]. It learns faster than gradient-based algorithms because the weights between the hidden and output layers are the only parameters that need to be learned: the input weights are assigned randomly, and the output weights are obtained analytically, so ELM learns without iterations and converges much faster than standard algorithms. According to the universal approximation capability of ELM, a single hidden-layer feedforward network with a finite number of neurons can approximate any continuous function on a compact subset under a loose assumption on the activation function.
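Section 2 can be made concrete with a few lines of linear algebra: the input-to-hidden weights are drawn at random and left fixed, the hidden activations H are computed, and the output weights are obtained in closed form as the Moore–Penrose pseudo-inverse solution of Hβ = T. The sketch below is a minimal NumPy illustration on toy data, not the authors' implementation; the hidden-layer size and the tanh activation are arbitrary choices.

```python
import numpy as np

def elm_train(X, T, n_hidden, activation=np.tanh, seed=0):
    """Train a basic ELM: random input weights, analytic output weights beta = pinv(H) @ T."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input-to-hidden weights, never updated
    b = rng.normal(size=n_hidden)                 # hidden biases, never updated
    H = activation(X @ W + b)                     # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # Moore-Penrose solution of H beta = T
    return W, b, beta

def elm_predict(X, W, b, beta, activation=np.tanh):
    return activation(X @ W + b) @ beta

# Tiny illustrative run on random data (not the ILPD dataset).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy binary target
W, b, beta = elm_train(X, T, n_hidden=32)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
print("training accuracy on toy data:", (pred == T).mean())
```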
3 Dataset For our study, we have used Indian Liver Patient Dataset (ILPD) [17] from the Kaggle repository as this is the only dataset that is available freely for the research fraternity. This dataset has 11 attributes. This is a standard dataset which is used for liver disease prediction by many researchers [18, 19]. This dataset contains 583 records of which 167 were records of healthy patients and 416 were suffering from liver disease.
4 Proposed Methodology
The methodology presented in this study uses an extreme learning machine as a classifier to identify whether patients have liver disease based on their data. The preprocessed data are divided into a training set and a test set. The ELM algorithm is then trained using the training dataset to make predictions. Figure 1 depicts the framework of the proposed methodology for detecting liver disease. Finally, the performance of the model is analyzed.
Fig. 1 Framework of the proposed methodology
The steps involved in the proposed framework are as follows:

Step 1. Reading and Uploading of Dataset: The ILPD dataset described in Sect. 3 is collected from the data repository and provided as input to the system.

Step 2. Preprocessing of Dataset:
(a) Null Check: Attributes with missing values are handled, either by removing those records or by replacing them with the mean value. Four null values were found in the ILPD dataset and were replaced with the mean value, which is 0.94.
(b) Duplicate Check: Duplicate records are handled. Thirteen duplicate records existed in the ILPD dataset, which were dropped to obtain a clean dataset.
(c) Feature Scaling: Categorical values are handled in this step. The attribute 'gender' consists of the categorical values 'Female' and 'Male', which were replaced by 0 and 1, respectively.

Step 3. Splitting of Dataset: The dataset is split into training and test data in a certain ratio. In our study, we used two splits: 80:20, where 80% of the data is used to train the ELM and 20% is used for testing, and 70:30, where 70% of the data is used for training and 30% for testing (see the illustrative code sketch after this list).

Step 4. Model Building: The algorithm used for building the proposed model is the Extreme Learning Machine (ELM). The ELM is first trained with the training data and then tested with the test data.

Step 5. Performance Analysis: We analyzed the performance based on accuracy score, precision, recall, F1-score [20], and the training time of the algorithm.
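The preprocessing and splitting steps above can be expressed compactly as in the following sketch. The column names ('Gender', 'A/G Ratio', 'Target') and file name are illustrative assumptions; the actual names in the downloaded ILPD file may differ.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Step 1: read the ILPD dataset (file name assumed for illustration).
df = pd.read_csv("indian_liver_patient.csv")

# Step 2(a): replace missing values with the column mean.
df["A/G Ratio"] = df["A/G Ratio"].fillna(df["A/G Ratio"].mean())

# Step 2(b): drop duplicate records.
df = df.drop_duplicates()

# Step 2(c): encode the categorical 'Gender' attribute as 0/1.
df["Gender"] = df["Gender"].map({"Female": 0, "Male": 1})

# Step 3: split features and labels, then create an 80:20 train-test split.
X, y = df.drop(columns=["Target"]).values, df["Target"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y)
```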
5 Result and Discussion
We compile the performance parameter values for different numbers of hidden neurons and activation functions. The present experiment is carried out in three categories, as described below:
5.1 Performance Analysis of ELM Using 80:20 Training–Testing Ratio In the first experiment, we considered 80% data for training the ELM and 20% data for testing the ELM. Figure 2 depicts the graphical representation of accuracies of various activation functions with respect to different numbers of hidden neurons. A Relu activation function with 32 neurons and TanHRe with 128 neurons give a maximum accuracy of 77.77%. Then, 77.19% accuracy is obtained using 512, 512, and 32 neurons from Sigmoid, Leaky_Relu, and Swish, respectively. The precision score, recall, and F1-score for all the activation functions for different hidden neurons for the 80:20 data split are shown in Tables 1, 2, and 3, respectively. We also analyzed the model based on training time taken by ELM using different activation functions and concluded that training time is less than 1 s for all the cases for an 80:20 data split.
Fig. 2 Depicts the accuracy of 80:20 data split
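The reported scores and training times can be obtained with a short evaluation harness such as the one below. This is an assumed sketch built around the ELM class and the data split from the earlier code sketches, not the authors' original script; the Relu activation shown here is one example choice.

```python
import time
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

relu = lambda x: np.maximum(x, 0.0)  # example activation function

start = time.perf_counter()
model = ELM(n_hidden=32, activation=relu).fit(X_train, y_train)
train_time = time.perf_counter() - start

y_pred = model.predict(X_test)
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("Training time (s):", round(train_time, 4))
```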
Table 1 Precision for 80:20 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 1     | 0.57 | 0.6  | 0.5  | 0.43 | 0.57 | 0.6
16   | 0.5   | 0.63 | 0.55 | 0.54 | 0.45 | 0.66 | 0.6
32   | 0.75  | 0.7  | 0.71 | 0.72 | 0.54 | 0.66 | 0.66
64   | 0.83  | 0.66 | 0.64 | 0.63 | 0.60 | 0.68 | 0.59
128  | 0.85  | 0.6  | 0.65 | 0.56 | 0.51 | 0.68 | 0.6
512  | 0.733 | 0.53 | 0.62 | 0.47 | 0.44 | 0.53 | 0.57
1024 | 0.66  | 0.48 | 0.52 | 0.45 | 0.42 | 0.5  | 0.57
Table 2 Recall for 80:20 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 0.21 | 0.08 | 0.06 | 0.45 | 0.41 | 0.23 | 0.06
16   | 0.1  | 0.26 | 0.21 | 0.26 | 0.30 | 0.21 | 0.13
32   | 0.06 | 0.3  | 0.21 | 0.17 | 0.28 | 0.26 | 0.3
64   | 0.10 | 0.30 | 0.23 | 0.30 | 0.36 | 0.23 | 0.28
128  | 0.13 | 0.26 | 0.28 | 0.28 | 0.41 | 0.32 | 0.26
512  | 0.23 | 0.30 | 0.39 | 0.39 | 0.34 | 0.34 | 0.23
1024 | 0.26 | 0.43 | 0.45 | 0.34 | 0.36 | 0.34 | 0.41
Table 3 F1-score for 80:20 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 0.04 | 0.15 | 0.11 | 0.47 | 0.42 | 0.33 | 0.11
16   | 0.17 | 0.36 | 0.31 | 0.35 | 0.36 | 0.32 | 0.21
32   | 0.12 | 0.42 | 0.33 | 0.28 | 0.37 | 0.37 | 0.41
64   | 0.19 | 0.41 | 0.34 | 0.41 | 0.45 | 0.35 | 0.38
128  | 0.22 | 0.36 | 0.39 | 0.37 | 0.45 | 0.44 | 0.36
512  | 0.36 | 0.38 | 0.48 | 0.42 | 0.3  | 0.42 | 0.33
1024 | 0.37 | 0.45 | 0.48 | 0.39 | 0.39 | 0.41 | 0.48
5.2 Performance Analysis of ELM Using 70:30 Training–Testing Ratio In the second experiment, we considered 70% data for training the ELM and 30% data for testing the ELM. Figure 3 depicts the graphical representation of accuracies of various activation functions with respect to different numbers of hidden neurons. The highest accuracy for the 70:30 data split is 78.36% shown by the sigmoid function for 32 hidden neurons. The precision score, recall, and F1-score for all the
Fig. 3 Depicts the accuracy of the 70:30 data split
Table 4 Precision for 70:30 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 1    | 0.4  | 0.5  | 0.33  | 0.21 | 0.42 | 0.37
16   | 0.57 | 0.5  | 0.45 | 0.38  | 0.42 | 0.42 | 0.41
32   | 0.63 | 0.55 | 0.47 | 0.42  | 0.42 | 0.5  | 0.55
64   | 0.53 | 0.47 | 0.42 | 0.48  | 0.35 | 0.46 | 0.48
128  | 0.44 | 0.44 | 0.46 | 0.42  | 0.41 | 0.43 | 0.5
512  | 0.47 | 0.52 | 0.46 | 0.41  | 0.4  | 0.48 | 0.5
1024 | 0.52 | 0.45 | 0.44 | 0.425 | 0.38 | 0.45 | 0.47
activation functions for different hidden neurons for the 70:30 data split are tabulated in Tables 4, 5, and 6, respectively. For the 70:30 data split, training time was also analyzed and concluded that training time is less than 1 s for all the cases.
5.3 Comparison with Other Published Works Done in the Same Domain Table 7 showcases the comparison of our experimental results with the results of work done by other authors based on accuracy.
Table 5 Recall for 70:30 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 0.05 | 0.1  | 0.15 | 0.17 | 0.15 | 0.15 | 0.22
16   | 1    | 0.2  | 0.12 | 0.25 | 0.3  | 0.27 | 0.175
32   | 0.17 | 0.27 | 0.25 | 0.3  | 0.27 | 0.32 | 0.25
64   | 0.2  | 0.27 | 0.3  | 0.32 | 0.25 | 0.3  | 0.32
128  | 0.2  | 0.27 | 0.37 | 0.37 | 0.35 | 0.35 | 0.35
512  | 0.22 | 0.5  | 0.5  | 0.36 | 0.37 | 0.45 | 0.45
1024 | 0.3  | 0.45 | 0.47 | 0.42 | 0.42 | 0.55 | 0.52
Table 6 F1-score for 70:30 data split

Hidden neurons | Sigmoid | Relu | Leaky Relu | tanh | sin | TanHRe | Swish
8    | 0.09 | 0.16 | 0.23 | 0.22 | 0.17 | 0.22 | 0.28
16   | 0.17 | 0.28 | 0.19 | 0.3  | 0.35 | 0.33 | 0.24
32   | 0.27 | 0.36 | 0.32 | 0.35 | 0.33 | 0.39 | 0.34
64   | 0.29 | 0.34 | 0.35 | 0.38 | 0.29 | 0.38 | 0.38
128  | 0.27 | 0.33 | 0.41 | 0.33 | 0.37 | 0.42 | 0.41
512  | 0.3  | 0.51 | 0.48 | 0.39 | 0.38 | 0.46 | 0.47
1024 | 0.38 | 0.45 | 0.45 | 0.42 | 0.4  | 0.5  | 0.5
Table 7 Accuracy values of different methods executed on ILPD dataset

References | ML technique | Accuracy (%)
[3]  | LR, SVM | 75.04
[4]  | LR, SVM, KNN | 73.97
[5]  | J48, LMT, RF, RT, REPTree, DS, Hoeffding Tree | 70.67
[6]  | K-Nearest Neighbor, K-Means, Naive Bayes, C5.0, Random Forest | 75.19
[7]  | PSO, DT | 69.6
[8]  | SVM and BPN | 73.2
[9]  | DT and NB | 67.01
[10] | K-Nearest Neighbor + Random Forest + Logistic Regression | 77.58
Proposed methodology | ELM | 78.36
From the table above, we can see that the proposed methodology based on the ELM classifier achieves an accuracy of 78.36%. This is the highest accuracy achieved compared to other published studies in the same field. Therefore, we can conclude that liver disease detection models designed with ELM as a classifier are well suited for prediction and can be used in the healthcare field. The current work can be further extended by applying the proposed model to other datasets as well.
6 Conclusion
As we know, early detection of liver disease can help a person live longer. Manual analysis of liver disease is a laborious task, so medical departments can use machine learning models to predict liver disease. In this paper, we used the Extreme Learning Machine (ELM) classifier to build a liver disease prediction model. It is a fast learning algorithm compared to other neural networks because its output weights are computed analytically in a single feedforward pass rather than learned through iterative backpropagation. ELM is generally preferred over other methods for AI-related challenges due to its high speed, good generalization, and ease of implementation. Our work comprises training the ELM classifier using various activation functions and analyzing the performance of the model with different numbers of neurons on the ILPD dataset, first with an 80:20 training:testing data ratio, then with a 70:30 ratio, and lastly comparing with other authors' work. We conclude that the highest accuracy, 78.36%, was obtained by the sigmoid activation function with 32 neurons for a data split of 70:30.
References 1. https://www.healthline.com/health/liver-failure-stages 2. Asrani SK, Devarbhavi H, Eaton J, Kamath PS (2019) Burden of liver diseases in the world. J Hepatol 70(1):151–171 3. Geetha C, Arunachalam A (2021) Evaluation based approaches for liver disease prediction using machine learning algorithms. In: International conference on computer communication and ınformatics (ICCCI), pp 1–4 4. Thirunavukkarasu K, Singh AS, Irfan M, Chowdhury A (2018) Prediction of liver disease using classification algorithms. In: 4th International conference on computing communication and automation (ICCCA), pp 1–3 5. Nahar N, Ara F (2018) Liver disease prediction by using different decision tree techniques. Int J Data Min Knowl Manage Process:1–9 6. Kumar S, Katyal S (2018) Effective analysis and diagnosis of liver disorder by data mining. In: International conference on ınventive research in computing applications (ICIRCA), pp 1047–1051 7. Hashem S et al (2018) Comparison of machine learning approaches for prediction of advanced liver fibrosis in chronic Hepatitis C patients. IEEE/ACM Trans Comput Biol Bioinf 15(3):861– 868 8. Sontakke S, Lohokare J, Dani R (2017) Diagnosis of liver diseases using machine learning. In: International conference on emerging trends & ınnovation in ICT (ICEI), pp 129–133
9. Alfisahrin SNN, Mantoro T (2013) Data mining techniques for optimization of liver disease classification. In: International conference on advanced computer science applications and technologies, pp 379–384 10. Gogi VJ, Vijayalakshmi MN (2018) Prognosis of liver disease: using machine learning algorithms. In: International conference on recent ınnovations in electrical, electronics & communication engineering (ICRIEECE), pp 875–879 11. Yang J, Xie S, Yoon S, Park D, Fang Z, Yang S (2013) Fingerprint matching based on extreme learning machine. Neural Comput Appl:435–445 12. Kim J, Shin H, Shin K, Lee M (2009) Robust algorithm for arrhythmia classification in ECG using extreme learning machine. BioMedical Engineering OnLine 13. Mishra A, Agarwal C, Chetty G (2018) Lifting wavelet transform based fast watermarking of video summaries using extreme learning machine. In: 2018 International joint conference on neural networks (IJCNN), Rio de Janeiro, Brazil, pp 1–7. https://doi.org/10.1109/IJCNN.2018. 8489305 14. Agarwal C, Itondia P, Mishra A (2023) A novel DCNN-ELM hybrid framework for face mask detection. Intell Syst Appl 17:200175, ISSN 2667-3053https://doi.org/10.1016/j.iswa.2022. 200175 15. Zhang R, Lan Y, Huang G-B, Xu Z-B (2012) Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Trans Neural Netw Learn Syst 23(2):365– 371 16. Guang-Bin H, Qin-Yu Z, Chee-Kheong S (2006) Extreme learning machine: Theory and applications. Neurocomputing:489–501 17. Moody GB, Mark RG (2001) The impact of the MIT-BIH arrhythmia database. IEEE Eng in Med and Biol 20(3):45–50 18. Singh G, Agarwal C (2023) Prediction and analysis of liver disease using extreme learning machine. In: Shakya S, Du KL, Ntalianis K (eds) Sentiment analysis and deep learning. Advances in ıntelligent systems and computing, vol 1432. Springer, Singapore. https://doi. org/10.1007/978-981-19-5443-6_52 19. Singh G, Agarwal C, Gupta S (2022) Detection of liver disease using machine learning techniques: a systematic survey. https://doi.org/10.1007/978-3-031-07012-9_4 20. Grandini M, Bagli E, Visani G (2020) Metrics for multi-class classification: an overview. ArXiv abs/2008.05756
Intercompatibility of IoT Devices Using Matter: Next-Generation IoT Connectivity Protocol Sharat Singh
Abstract The market for IoT devices is massive, with hundreds of companies developing a variety of IoT devices; however, due to different methods and software technologies used to develop these products, all of these devices do not necessarily work together in a seamless manner. The Connectivity Standard Alliance (CSA) came up with the concept and created “Matter,” an open standard for all Internet of Things (IoT) devices. This serves as a universal connectivity standard, making it easier to use and manage IoT devices. This paper examines the implementation, necessity, and impact of this new protocol as the next generation of IoT connectivity protocol. Keywords Matter · Thread · Open standard protocol · Internet of Things
1 Introduction The term “Internet-of-Things” (IoT) is frequently used in today’s world because it is at the heart of the most recent technological advancements and developments toward Industry 4.0, with an estimated compound annual growth rate (CAGR) of 10.53% between 2022 and 2027 [1]. Kevin Ashton coined the term IoT in 1999, and the IoT ecosystem has grown exponentially since then, with use cases including smart appliances in homes (sensors, automation) and industries (logistics chains, robotic automation, augmenting human labor) [2]. With so many firms and companies having a share in this huge market, the usage of these devices is dependent on the services and software that the developing companies provide, which can be in the form of Desktop Applications or Smartphone Apps in a variety of operating systems like Android, iOS, etc. All of these devices operate on their own instruction sets and protocols, making interoperability and compatibility of IoT devices from different companies very difficult. S. Singh (B) Department of Electronics, Deen Dayal Upadhyaya College, University of Delhi, New Delhi 110078, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_5
1.1 Current Alternatives
Right now, these are the approaches available for making different, natively incompatible devices work together:
• Home automation hubs: One popular approach is to use a central hub that connects to all of your smart devices and allows you to control them from a single app or interface [3]. Examples of popular home automation hubs include Amazon Echo, Google Home, and Apple HomeKit.
• APIs: Many smart devices come with APIs that allow developers to interact with them programmatically. This means that one can use code to connect different devices together and create custom automation. For example, you could use an API to connect a smart thermostat to a smart lighting system, so that when the temperature drops below a certain level, the lights automatically turn on.
• If–This–Then–That (IFTTT): IFTTT is a web-based service that allows us to create custom “applets” that connect different devices and services together [4]. For example, you could create an applet that automatically turns off your smart lights when you leave your home, as determined by your phone’s location.
• Zigbee and Z-Wave: Zigbee and Z-Wave are wireless communication protocols specifically designed for home automation [5, 6]. These protocols allow devices to communicate with one another, making it possible to create complex automation and control all devices from a central hub.
2 Need for Matter
2.1 Smart Home Ecosystems
The standard for managing and controlling Smart Home IoT devices is smart home ecosystems such as Apple HomeKit, Google Home, Amazon Alexa, and Samsung SmartThings. These ecosystems connect, consolidate, group, and manage smart devices with great ease, thanks to agreements with manufacturers on a common protocol and the development guides provided by these ecosystems. Each ecosystem needs its own application, and every smart device needs a device application of its own; these device applications are built on the respective ecosystems for seamless connectivity and management. As a result, the same device may have to be controlled through different applications depending on the ecosystem, and a device built for one ecosystem cannot be detected by or connected to another ecosystem.
Fig. 1 Setup of different smart home ecosystems with their own set compatible smart devices
2.2 Consumer and Manufacturer Standpoint Manufacturers of smart appliances are required to design their products to integrate with the smart home ecosystem. This means that development for various platforms must be done concurrently in order to support multiple ecosystems, which costs money, time, and human resources. Customers are also perplexed about the ecosystems that different smart appliance options support. Customers are limited to the selection of products designed for the specific smart home ecosystem if they already own devices with that ecosystem. This is a significant setback for the market for smart devices and the driving force behind the demand for inter-compatible devices (Fig. 1).
2.3 Security
Security is a critical concern in IoT because these devices often handle sensitive data and may be used to control critical systems. There are a number of important factors to take into account when securing IoT systems. Because there is no local, secure network, using the cloud to store data from IoT devices and to perform smart analysis and value-added services is unavoidable, which raises data security concerns: IoT devices frequently gather and transmit private data, such as identifying information or system control information, and the transmission and storage of this data must be protected.
3 What is Matter
The Matter Open Standard (formerly known as Project CHIP, Connected Home over IP) is an open, royalty-free networking protocol designed for low-power, low-bandwidth devices in the Internet of Things (IoT) [7]. It is based on the Internet Protocol (IPv6) and is designed to be simple, secure, and scalable, enabling devices to connect and communicate with each other and with the cloud. By using a mesh networking architecture, the Matter Open Standard eliminates the need for a central hub or server and enables direct device-to-device communication. Greater reliability is made possible because devices can still communicate even if some of them are offline or out of range. The protocol is intended to be low power and low bandwidth, making it suitable for use in battery-powered devices and devices with constrained resources. In addition to its technical capabilities, the Matter Open Standard is intended to be open and interoperable, enabling seamless communication between devices made by various manufacturers. It is also supported by a sizable and expanding ecosystem of businesses and developers, ensuring that it will continue to develop and advance over time.
3.1 Architecture
Matter is not a network protocol like WiFi, ZigBee, or Thread; it is an application layer that acts as a standard. The application layer was developed in accordance with Matter guidelines by utilizing previously successful technologies from Google Home, Apple HomeKit, and the Connectivity Standard Alliance (formerly known as the Zigbee Alliance) (Fig. 2). It is constructed on the IPv6 architecture and currently supports only WiFi, Thread, and Ethernet, with Thread aimed at low-power, resource-constrained IoT devices like sensors, locks, etc. [9], whereas WiFi is best for high-bandwidth, actively powered smart appliances like cameras, smart hubs, etc. All devices must onboard this application layer to be Matter-certified.
Fig. 2 Brief architecture overview for Matter [8]
3.2 Security
Matter is an open standard for the Internet of Things (IoT) that aims to make it easier to connect and control smart devices. It is designed to be secure, scalable, and, most importantly, to communicate locally. This means that Matter-certified smart devices do not need to upload or share any device data to the cloud, except for add-ons or manufacturer-provided features that make cloud use necessary. This functionality is similar to that of Apple HomeKit. One of the key security features of Matter is the use of secure communication protocols. The standard defines a set of mandatory and optional security protocols that devices must implement in order to be Matter-compliant. These protocols include transport layer security (TLS) for encryption, as well as secure key exchange and device authentication mechanisms. Another important aspect of Matter’s security is the concept of a “security domain,” which is a group of devices that share the same security credentials and are trusted to communicate with one another. This allows for secure communication between devices within a security domain while preventing unauthorized access from outside the domain. Matter also includes a mechanism for device provisioning, which is the process of securely onboarding new devices to a network. This includes securely provisioning the device’s initial credentials using Bluetooth Low Energy (BLE) [10], as well as any subsequent updates to those credentials. Another feature of the standard is the way it implements automatic software and firmware updates for devices, which can help prevent device vulnerabilities from being exploited.
Overall, Matter aims to provide a robust and secure foundation for IoT devices to communicate and interact with one another. However, it is important to note that security is a continuous process, and implementing the standard alone is not enough. Good security practice includes regular software updates, monitoring of the device, and an incident response plan.
4 Thread for Matter
Thread is a fundamental component of Matter because Matter necessitates the use of a PAN (personal area network) for local data transmission; the PAN is connected to the Internet through a gateway [11]. Thread is the first industry-wide adoption of an IPv6-based, low-power-consuming mesh network, with fundamentals such as multiple embedded network layers, a defined stack from the physical layer through the network layer, and scalability to hundreds of embedded devices (up to 32 Routers) [12]. In the context of Matter, Thread is used as the network layer, while the Matter standard is used as the application layer. This means that Thread is responsible for creating and maintaining the network, while Matter defines how devices can interact with one another over that network. Thread networks are composed of Thread Routers [13], which are devices that have the capability to forward messages to other devices in the network. Thread Routers are responsible for creating and maintaining the network topology, as well as providing security and encryption for the network. Each Thread Router also acts as a “border router” that allows devices on the Thread network to communicate with the wider Internet. In total, there are seven types of devices within the Thread IoT technology:
• Border Routers: Border Routers play a crucial role in Thread networks as they serve as the gateway between the Thread network and other IP-based networks such as the Internet. They are responsible for handling routing between the Thread network and other IP networks and also provide security and network management functions. Border Routers are typically more powerful than other devices in the network, as they need to handle large amounts of data and maintain a secure connection to the Internet.
• End Devices: End Devices are the simplest and most resource-constrained nodes in a Thread network. They are typically low-power devices such as sensors, actuators, and control devices. End Devices communicate with the rest of the network through Routers and are designed to be highly power-efficient and simple to use.
• Routers: Routers provide local network routing and forwarding functions and help extend the reach of the network. They are responsible for forwarding data between End Devices and Border Routers and also help maintain the network topology by exchanging information with other Routers. Routers are typically more powerful than End Devices, but less powerful than Border Routers.
• Commissioning Devices: Commissioning Devices are devices that help add new devices to the network and manage network security. They are typically used to securely provision new End Devices with network keys and also to manage the security of the network.
• Sleepy End Devices: Sleepy End Devices are low-power End Devices that spend most of their time in a low-power sleep state and wake up periodically to communicate with the network. They are designed to be highly power-efficient and are often used in battery-powered applications where long battery life is critical.
• Intermediate Devices: Intermediate Devices are devices that support the Thread protocol but do not fully implement all its features. They are typically used to provide additional functionality or to act as bridges between different types of networks.
• Network Co-processor (NCP): An NCP is a device that provides an IP interface to a Thread network, enabling other devices to communicate with the network without implementing the Thread protocol themselves. NCPs are typically used in more complex systems where the main processor does not have the resources to handle the Thread protocol directly. They provide a convenient way to add Thread capability to existing devices and can also help to reduce the power consumption of the main processor by offloading some of the network processing.

Figure 3 [14] shows a brief working of a Thread-based mesh network using the most prominent types of devices that are found in a common smart home environment. Thread also supports over-the-air updates, which enable devices to update their firmware or software automatically and can improve the security and stability of the network. When a device joins a Thread network, it first goes through a process called “commissioning” to establish secure communication with the Thread Routers. Once a device has been commissioned, it can participate in the network and communicate with other devices on the network. To sum up, Matter and Thread work together to provide a secure and reliable networking solution for IoT devices. The Matter standard provides a set of rules for how devices can interact with one another, while Thread provides the underlying communication infrastructure to make those interactions possible.
5 Outcome Matter provides a solution for IoT devices to increase the compatibility of smart appliances belonging to multiple ecosystems and also simplifies development for manufacturers. This also means that a company has to develop and focus on one common “standard.” Additionally, the market for Matter-certified IoT devices will grow, which will reduce user confusion because all devices will be interoperable and able to be controlled by a single common application regardless of the supported ecosystem.
Fig. 3 Thread network topology
As can be seen from Fig. 4, all smart devices are compatible with any smart home ecosystem, thus making all devices work seamlessly together over a suite of software applications.
5.1 Moving Forward The Matter IoT Protocol, formerly known as Project CHIP (Connected Home over IP), has the potential to bring a new level of interoperability and security to the Internet of Things (IoT) industry. The launch of Matter is considered to be a huge step forward for IoT, especially in the consumer market. Cross-platform smart device integration and interoperability will be possible, making the selection of smart devices easier and more convenient for IoT customers all over the world, and will increase the range of features that can be integrated on a smart device. Matter will also enable the creation of smarter and more efficient ecosystems in a much larger geographic region, like a Smart Security manager for Residential Societies, with smart smoke, gas, and motion sensors spread all across using thread, and
Fig. 4 Setup of different smart home ecosystems with Matter-certified smart devices
actively powered devices like cameras and security gates on WiFi/Ethernet, all integrated for residents. Here are a few possible future implementations of the Matter IoT protocol:
• Smart Homes: The Matter IoT protocol could become the backbone for smart homes, enabling seamless integration of different smart devices from different manufacturers. This would allow homeowners to easily control and automate their homes, regardless of the brand of their devices.
• Industrial IoT: The integration of industrial equipment and machinery from various manufacturers could be made possible by the use of Matter in industrial settings. As a result, industrial systems would operate more effectively and safely overall.
• Health care: Matter could be used in health care, providing a secure and reliable way for different medical devices and sensors to communicate with each other. This would improve patient care and make it easier for healthcare professionals to access and analyze patient data.
• Automotive: The automotive sector could use the Matter IoT protocol to enable the integration of various in-car devices from various manufacturers. The overall driving experience would be enhanced, and drivers’ access to and control over vehicle data would become simpler and remotely accessible as well.
• Agricultural IoT: The integration of various sensors and devices for crop management and livestock monitoring could be made possible by the use of Matter in agriculture. This would increase agriculture’s productivity and sustainability
while also assisting farmers in making better decisions with their devices and smart system environments. These are just a few examples of the potential future implementations of the Matter IoT protocol. With its focus on security, interoperability, and open standards, the Matter IoT protocol has the potential to revolutionize the IoT industry and bring new levels of efficiency and convenience to people’s lives. Acknowledgements The author would like to thank Anshuman Singh (Roll Number: 20HEL2111) from the Department of Electronics, Deen Dayal Upadhyaya College, University of Delhi, for his contribution to designing the figures presented in this paper.
References 1. Internet of things (IoT) market growth, trends, covid19 impact, and forecasts (2022 2027). Available: https://www.mordorintelligence.com/industry-reports/internet-of-things-iotmarket. Accessed 15 Dec 2022 2. Aston K. That ’internet of things’ thing. Available: http://www.rfidjournal.com/articles/view? 4986. Accessed 15 Dec 2022 3. Setz B, Graef S, Ivanova D, Tiessen A, Aiello M. A comparison of opensource home automation systems. https://doi.org/10.1109/ACCESS.2021.3136025 4. Ovadia S. Automate the internet with “if this then that” (IFTTT). https://doi.org/10.1080/016 39269.2014.964593 5. Ergen SC (2004) ZigBee/IEEE 802.15. Available: https://pages.cs.wisc.edu/~suman/courses/ 707/papers/zigbee.pdf. Accessed 10 Jan 2023 6. Unwala I, Taqvi Z, Lu J (2018, April) IoT security: ZWave and thread. In: 2018 IEEE green technologies conference (GreenTech). IEEE, pp 176–182. https://doi.org/10.1109/GreenTech. 2018.00040 7. Thread applications. Available https://www.threadgroup.org/BUILT-FOR-IOT/Smart-Home# Application. Accessed 10 Jan 2023 8. Matter security and privacy fundamentals connectivity standards alliance documentation. Available: https://csa-iot.org/wp-content/uploads/2022/03/Matter_Security_and_Pri vacy_WP_March-2022.pdf. Accessed 10 Jan 2023 9. What is thread? Available https://www.threadgroup.org/What-is-Thread/Thread-Benefits. Accessed 10 Jan 2023 10. How thread can work seamlessly with Bluetooth for commissioning and operation. Available: https://www.threadgroup.org/news-events/blog/ID/196. Accessed 10 Jan 2023 11. Unwala I, Taqvi Z (2018) Thread: an IoT protocol. In: IEEE green technologies conference. https://doi.org/10.1109/GreenTech.2018.00037 12. Kim HS, Kumar S, Culler DE. Thread/OpenThread: a compromise in low power wireless multihop network architecture for the Internet of Things. In: Future internet. https://doi.org/ 10.1109/MCOM.2019.1800788 13. Gregersen C. An expert guide to the thread and matter protocols in IoT. Available: https:// nabto.com/matter-thread-protocols-iot/. Accessed 10 Jan 2023 14. Thread in homes. Available https://www.threadgroup.org/BUILT-FOR-IOT/Smart-Home. Accessed on 4 Feb 2023
Role of Node Centrality for Information Dissemination in Delhi Metro Network Kirti Jain, Harsh Bamotra, Sakshi Garg, Sharanjit Kaur, and Gunjan Rani
Abstract The topological structure of the network and the role of node centrality need to be investigated and evaluated for information dissemination related to transport services and their functionalities, to promote government policies and products, etc., for vast reachability. Additionally, it is important to study the network reliability in case of extreme situations like halting, power failure, overcrowding at a node, etc. This paper aims to investigate the role of three standard measures of node centrality for information dissemination in the Delhi Metro Network. We simulate the process of information diffusion utilizing the Susceptible-Infected-Removal (SIR) spread model through seed nodes identified as vital metro stations. Not only are the identified central stations ideal for advertising products and disseminating vital information, but they can also lead to chaotic situations that disrupt the metro’s functionality. Keywords Information diffusion · Delhi Metro Network · Social networks · Node centrality
1 Introduction With the rapid development of the urban metro network, especially in Delhi, the spatial structure of the metro network has evolved gradually from a simple line crossing and grid to complex patterns [1]. To accommodate the increasing population and congestion, metro infrastructure requires maintenance and extension [2]. The metro network is not only a solution to traffic congestion but also a potent medium for information dissemination to popularize vital government policies, products, K. Jain Department of Computer Science, University of Delhi, New Delhi, Delhi, India e-mail: [email protected] H. Bamotra · S. Garg · S. Kaur · G. Rani (B) Acharya Narendra Dev College, University of Delhi, New Delhi, Delhi, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_6
schemes, etc., among a vast fraction of the population through commuters [3]. Metro systems, even though useful for spreading information, are also prone to power failures, natural disasters, accidents, and malicious attacks, which entail appropriate measures to guarantee their safety and reliability. Complex network theory has recently gained popularity in ecology, finance, social networks, social sciences, transport systems, etc., for its capabilities to model complex relationships [4]. We make use of network theory to model the complex network of the Delhi metro, in which a node represents a metro station, and an edge between two nodes marks a direct route between two stations. Commuters make use of the metro to reach different destinations and meet people along the way. They propagate the perceived information along the way and fuel the information depending upon the topology of the network and the positioning of the nodes (stations) within a network [1]. Information prevalence depends on the source nodes, which need to be carefully selected. Several studies have been conducted to identify the central nodes that maximize the spread of information [5–7]. Quantifying node importance through centrality provides a means to rank nodes based on their significance and influence on others [8]. There exist well-known measures such as degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality that identify central nodes considering the topological structure of the network [9, 10]. Existing studies on the metro network focus on its topological characteristics and its evolution [4, 11]. Recently, Kanwar et al. carried out a complex network-based comparative analysis between the operational Delhi Metro Network (DMop) and the extended Delhi Metro Network (DMext). They found similar degree distributions for both networks [12]. Although an increase in local connectivity in DMext seems efficient for tackling congestion and managing higher transport loads, it comes at the cost of increased vulnerability. Another case study on the metro transit system of Shenzen City explored the effectiveness of a node using an entropy-based multimeasure metric [13]. Motivated by this, we investigate information diffusion in the Delhi Metro Network using different centrality measures to understand the pace of information spread and its disruption under the failure of central nodes.
1.1 Our Contributions We construct a simple, undirected, and unweighted network for the Delhi metro system (referred to as DMN) to investigate the following: (i) Determine the network model for the constructed DMN based on its topological properties (Sect. 3.1). (ii) Identify and investigate important metro stations pivotal for information dissemination using three centrality measures (Sects. 4.1 and 4.2). (iii) Report effective centrality measure for the transport network (Sect. 4.3).
(iv) Empirically demonstrates the vulnerability of metro network functionality under the failure of central stations (Sect. 4.4). Organization of the paper: Section 2 briefly describes the topological characteristics and centrality measures. Section 3 provides the methodology that covers the construction of DMN and information diffusion based on the SIR model. We detail experiments and their results in Sect. 4.
2 Characteristics of the Network We briefly explain network characteristics in two parts: fundamental structural properties and network centrality measures for node ranking [9].
2.1 Structural Properties
(i) Degree Distribution: The degree distribution P_k of a network is defined as the fraction of nodes with degree k. Thus, if there are n nodes in a network and n_k of them have degree k, we have P_k = n_k / n.
(ii) Average Degree: It is a global property of the network and is computed as ⟨k⟩ = (1/n) Σ_i k_i = 2m/n, where k_i, m, and n denote the degree of node i, the total number of edges, and the number of nodes in the network, respectively.
(iii) Density: It is defined as the ratio of the number of edges m to the maximum possible number of edges in a network of order n and is computed as d = 2m / (n(n − 1)).
(iv) Average Shortest Path Length: It is defined as the average number of steps along the shortest paths over all possible pairs of network nodes, given by L̄ = (2 / (n(n − 1))) Σ_{1≤i<j≤n} d_ij, where d_ij is the shortest distance between nodes i and j.
(v) Average Clustering Coefficient: The clustering coefficient of a node i with degree k_i captures the extent to which its k_i neighbors are linked to each other and is defined as f_i = 2m_i / (k_i(k_i − 1)), where m_i represents the number of links among the neighbors of node i. The average clustering coefficient of a graph G is simply the average of the clustering coefficients over all the nodes.
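These structural properties can be computed directly with NetworkX, the library used later for the experiments. The following is a small illustrative helper for an arbitrary graph, not the authors' exact analysis script:

```python
import networkx as nx

def structural_properties(g: nx.Graph) -> dict:
    """Compute the basic topological metrics listed above for a graph g."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    degrees = [d for _, d in g.degree()]
    return {
        "nodes": n,
        "edges": m,
        "density": nx.density(g),                      # 2m / (n(n-1))
        "average_degree": 2 * m / n,
        "min_max_degree": (min(degrees), max(degrees)),
        "avg_clustering": nx.average_clustering(g),
        "aspl": nx.average_shortest_path_length(g),    # requires a connected graph
        "radius": nx.radius(g),
        "diameter": nx.diameter(g),
    }
```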
2.2 Centrality Metrics
The centrality measure captures the significance (importance) of nodes considering their topological placement in the network. In this section, we give a brief overview of three standard measures for computing centrality scores:
(i) Betweenness Centrality (BC): It captures the importance of a node i based on the count of its occurrences in the shortest paths between all pairs of nodes and is computed as BC_i = Σ_{h≠i≠j} σ_hj(i) / σ_hj. Here, σ_hj represents the total number of shortest paths between nodes h and j, and σ_hj(i) means the number of those paths that pass through node i.
(ii) Closeness Centrality (CC): This metric defines the importance of a node considering its closeness to all other nodes. Let d_ij be the length of the shortest path between nodes i and j. The closeness centrality of a node i is inversely proportional to its average distance from other nodes, i.e., CC_i = n / Σ_j d_ij.
(iii) Eigenvector Centrality (EVC): This metric considers the neighbors’ importance while computing the relevance of a node. If a node is connected to highly important nodes, it will have a higher EVC score than a node connected to less important nodes. The relative eigenvector centrality score EVC_i of a vertex i is computed as EVC_i = (1/λ) Σ_{j∈M(i)} EVC_j = (1/λ) Σ_{j∈G} a_ij EVC_j, where M(i) is the set of all neighbors of node i, λ is a constant, and a_ij = 1 if nodes i and j are connected, else 0. This equation is defined recursively to incorporate the importance of the vertices to which a node is connected.
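A minimal NetworkX-based sketch for ranking stations by these three measures is shown below (illustrative only; the graph construction and exact ranking code used for the experiments may differ):

```python
import networkx as nx

def top_k_stations(g: nx.Graph, k: int = 5) -> dict:
    """Return the k highest-ranked nodes under BC, CC, and EVC."""
    measures = {
        "BC": nx.betweenness_centrality(g),
        "CC": nx.closeness_centrality(g),
        "EVC": nx.eigenvector_centrality(g, max_iter=1000),
    }
    return {
        name: sorted(scores, key=scores.get, reverse=True)[:k]
        for name, scores in measures.items()
    }
```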
3 Methodology 3.1 Construction of Delhi Metro Network The Delhi metro rail consists of 10 color-coded lines serving 230 stations in its current operational state.1 It is by far the largest and busiest metro rail system in India and the second oldest after the Kolkata Metro. The station and track lines are the basic elements of the metro network, and the stations are connected via tracks, resulting in a complex network. The Delhi Metro Network (DMN) is modeled as an undirected and unweighted network and is represented as G = (V, E), where V is a set of nodes (metro stations), and E is a set of edges (metro tracks) connecting two successive metro stations. DMN is a sparse network with 244 lines between 230 stations whose visualization is shown in Fig. 1 plotted using the PyViz tool.2 Note that the exact placement and 1
https://www.delhimetrorail.com/: Note that we omit NOIDA-Greater NOIDA Aqua Line and Rapid Metro Gurugram. 2 https://pyviz.org/: Python data visualization tool.
Fig. 1 Graph representing the Delhi Metro Network (DMN) using PyViz

Table 1 Topological properties of DMN

Network metrics | Value
Number of stations | 230
Number of links | 244
Network density | 0.0092
Average degree | 2.12
Minimum and maximum degree | 1, 5
Average clustering coefficient | 0.0
ASPL | 16.856
Radius and diameter | 27, 52

ASPL—average shortest path length
positioning of the stations are not taken into account. Basic topological features of DMN are given in Table 1 along with its degree distribution plot in Fig. 2. The average clustering coefficient of DMN is 0.0, which affirms the fact that neighboring stations are not connected. A high average shortest path between nodes, a smaller clustering coefficient, and a straight line on the log-log scale of the truncated degree distribution make the constructed network a candidate for the scale-free network over random and small-world networks.
Fig. 2 Degree distribution of DMN
3.2 Information Diffusion Model
Researchers have investigated information diffusion using basic epidemic spread models [14]. In this work, we adopt the classical Susceptible-Infected-Recovered (SIR) model for simulating information diffusion on DMN. At any time, the observed entity may be in one of the following states: susceptible, infected, or recovered, where the susceptible state indicates the initial state with no information received, open to receive information and communicate it further; the infected state implies information received to spread it further; and the recovered state indicates a permanently neutralized (immune) state. The susceptible state changes to the infected state with probability β (transmission probability) when the susceptible encounters a communicator (infected state). On the other hand, a communicator may stop spreading the information with probability γ, which corresponds to the recovered state in the SIR model.
4 Results
In this section, we describe the experiment settings and observations covering top-ranking stations (nodes), the analysis of information diffusion through prominent nodes, and finally, changes in the process of information diffusion after the removal of significant nodes. We create the network using the NetworkX library and implement the SIR model using the NdLib package in Python (64 bits, version 3.7.2). Programs are executed on an Intel(R) Core(TM) i7 CPU @1.80GHz with 16GB RAM. In the rest of the paper, unless otherwise specified, transmission probability β = 0.5 and recovery rate γ = 0.1 are used for all the diffusion simulations (Sect. 3.2). The reported results are averaged over 20 simulations.
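The diffusion simulation can be set up with NDlib roughly as follows. This is a hedged sketch: the parameter names follow NDlib's SIR model configuration, and the seed station is assumed to be supplied as a node of the constructed DMN graph.

```python
import networkx as nx
import ndlib.models.ModelConfig as mc
import ndlib.models.epidemics as ep

def simulate_sir(g: nx.Graph, seed_node, beta=0.5, gamma=0.1, steps=50):
    """Run one SIR diffusion from a single seed station and return the iterations."""
    model = ep.SIRModel(g)
    cfg = mc.Configuration()
    cfg.add_model_parameter("beta", beta)    # transmission probability
    cfg.add_model_parameter("gamma", gamma)  # recovery (stop-spreading) rate
    cfg.add_model_initial_configuration("Infected", [seed_node])
    model.set_initial_status(cfg)
    return model.iteration_bunch(steps)
```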
4.1 Identifying Prominent Metro Stations In complex networks, centrality is a useful tool to measure the importance of nodes. We identify the top-5 central stations (Table 2) in the Delhi Metro Network (DMN) using three well-known centrality measures, namely betweenness centrality (BC), closeness centrality (CC), and eigenvector centrality (EVC). We refrain from using degree centrality as there is not much variation in the degree of the nodes. There is only one station of degree 5, and the majority of stations are connected to only two other stations. It is clear from Table 2 that Kashmere Gate station plays an important role in the metro network as an interchange station based on betweenness centrality. Its prominent position as a junction for red, yellow, and violet lines is validated by its top-most rank by BC. It is proved in the literature that stations located in the center of the network tend to have a higher rank because of their appearance on most of the shortest paths between the rest of the stations [15]. Additionally, CC and EVC measures identify Rajiv Chowk station as a central station, affirming its importance as an interchange station connecting yellow and blue lines. Note that both New Delhi and Patel Chowk are ranked 3rd by the CC metric. Centrality-based analysis implies that both Kashmere Gate and Rajiv Chowk stations are crucial in the metro network. Considering node centrality while installing new resources or maintenance of the infrastructure can be valuable and assist in the fault-free working of metro lines.
4.2 Role of the Central Most Station on Information Dissemination Since metro stations are interdependent, information originating from one station reaches the whole system due to a cascading effect. Identifying a source node using centrality is based on the premise that a central station transmits information faster to other stations because of its position in the network. In this experiment, we system-
Table 2 Top-5 metro stations in DMN

Rank | BC | CC | EVC
1 | Kashmere Gate | Rajiv Chowk | Rajiv Chowk
2 | INA | INA | Mandi House
3 | Punjabi Bagh West | New Delhi | Central Secretariat
4 | Lajpat Nagar | Patel Chowk, Central Secretariat | Barakhamba Road
5 | Rajouri Garden | Mandi House | Patel Chowk

BC—betweenness centrality, CC—closeness centrality, and EVC—eigenvector centrality
atically assess seed (source) node selection with respect to its impact on the speed and reach of information diffusion. Figure 3 shows the number of stations informed when the diffusion process begins at the most central station (Rank 1) as the seed node based on BC, CC, and EVC (Table 2). Note that Rajiv Chowk is the most central station identified by CC and EVC, and Kashmere Gate is reported as a top-rank node in BC. We also vary diffusion parameters .β = {0.5, 0.8} and .γ = {0.1, 0.3} to understand the variation in information spread originating from the same seed node. It is vindicated from Fig. 3 that information dissemination starting at Rajiv Chowk leads to the highest final reach with the maximum number of informed stations. In contrast, the spread from Kashmere Gate leads to a relatively lower reach and fewer informed stations. However, this observation is also true for varying diffusion parameters (.β and .γ ) with distinct numbers of informed stations. A high value of .β results in higher information dissipation among stations, whereas a high value of .γ eliminates the received information from the station, resulting in fewer informed stations. The inset table in Fig. 3 shows the total number of stations informed for two central stations: Kashmere Gate and Rajiv Chowk, for varying diffusion parameters. We also diffused information from the ranked fifth node and compared the maximum number of informed stations (M) with that of the ranked one station to affirm the importance of the latter (Fig. 4). Higher spread from the top-ranked nodes compared to the lower-ranked stations, viz. Rajouri Garden (BC), Mandi House (CC), and Patel Chowk (EVC) endorse the said claim. It is revealed that diffusion from stations with a high CC and EVC score significantly contributes to the spread of information. Moreover, the speed of diffusion and recovery from information are two opposing forces that are decisive for information
Role of Node Centrality for Information Dissemination …
67
Fig. 4 Most informed stations (M) at a time for ranked first (R1) versus fifth (R5) stations
spread. High information dispersion among stations with low eradication accelerates the spread. Hence, identifying the top-central node is crucial for maintaining the overall prevalence of the information.
4.3 Identifying Efficient Centrality Measure by Regression Analysis Since the centrality of the seed node is a key factor in how information spreads throughout a network, we look into the relationship between the centrality score and the number of stations that have information. Information is diffused from each station with a centrality score (CS), and the maximum (M) and total number (T) of stations informed are noted. The top row in Fig. 5 shows a relationship between M and CS. The bottom row displays the dependency between T and CS, with each subfigure corresponding to a centrality metric. It is clear from the scatter plots that both M and T escalate with an increase in centrality score. We also plotted a regression line to analyze the relationship and observed that the CC score has a strong linear relationship with both M and T (. R 2 = 0.83 and . R 2 = 0.77 respectively). The stated observations are in tandem with earlier results showing that central nodes promote the spread of information and closeness centrality is an effective measure for information propagation.
68
K. Jain et al.
Fig. 5 Relationship of centrality scores (BC, CC, and EVC) with the maximum stations informed (M) and the total number of stations informed (T)
4.4 Connectivity Analysis If a station becomes dysfunctional, its load cascades through the network. Hence, it is important to study the effect of station failure in this scenario. We removed the most prominent station corresponding to each BC, CC, and EVC sequentially from the network and recomputed the topological characteristics of the updated network and top-ranked nodes by three centrality measures (Table 3). Diffusion spread is simulated from the newly identified central node in the DMN, and comparative results for M cases are shown for three centrality metrics in Fig. 6. Results show that the number of informed stations (M) is reduced in the absence of the top-ranked station for all centrality measures and assert its importance in maintaining maximum functionality within the network.
Table 3 Properties of the updated DMN after removing the top-ranked station in the original DMN Network metrics
BC
CC, EVC
Max degree ASPL Radius Diameter New top-ranked station
4 18.62 28 54 INA
5 17.56 27 52 INA, Kashmere Gate
Role of Node Centrality for Information Dissemination …
69
Fig. 6 Maximum number of stations informed (M) before and after the removal of first ranked stations
5 Conclusion This paper studies the role of node centrality in information dissemination through the Delhi Metro Network. The network is sparse, with an average degree of two nodes, and has almost no clustering coefficient, reflecting no connectivity within the neighbors of a node. Three well-known centrality measures, namely betweenness centrality, closeness centrality, and eigenvector centrality, are used to deduce prominent stations. Rajiv Chowk and Kashmere Gate are identified as prominent interchange stations and are crucial for the information dissemination required for the proper functioning of the network. Acknowledgements This work was supported by the DBT Star College Scheme at Acharya Narendra Dev College, DU.
References 1. Kandhway K, Kuri J (2016) Using node centrality and optimal control to maximize information diffusion in social networks. IEEE Trans Syst, Man, Cybern: Syst 47(7):1099–1110 2. Frutos Bernal E, Martín del Rey A, Galindo Villardón P (2020) Analysis of Madrid metro network: from structural to hj-biplot perspective. Appl Sci 10(16):5689 3. Yadav S, Rawal G (2016) The novel concept of creating awareness about tuberculosis at the metro stations. The Pan Afr Med J 23 4. Chen S, Zhuang D, Zhang H (2018) Urban metro network topology evolution and evaluation modelling based on complex network theory: a case study of Guangzhou, China. MATEC web Conf 232:01034
70
K. Jain et al.
5. Alshahrani M, Fuxi Z, Sameh A, Mekouar S, Huang S (2020) Efficient algorithms based on centrality measures for identification of top-k influential users in social networks. Inf Sci 527:88–107 6. Ilyas MU, Radha H (2011) Identifying influential nodes in online social networks using principal component centrality. In: 2011 IEEE international conference on communications (ICC). IEEE, pp 1–5 7. Madotto A, Liu J (2016) Super-spreader identification using metacentrality. Sci Rep 6(1):1–10 8. Kaur S, Gupta A, Saxena R (2021) Identifying central nodes in directed and weighted networks. IJACSA 12(8):1421–1443 9. Zaki MJ, Meira W Jr, Meira W (2014) Data mining and analysis: fundamental concepts and algorithms. Cambridge University Press 10. Das K, Samanta S, Pal M (2018) Study on centrality measures in social networks: a survey. Soc Netw Anal Min 8:1–11 11. Wu XT, Tse CK, Dong HR, Ho IWH, Lau FCM (2016) A network analysis of world’s metro systems. In: International symposium on nonlinear theory and its applications (NOLTA), Yugawara, Japan, November, pp 27–30 12. Kanwar K, Kumar H, Kaushal S (2019) Complex network based comparative analysis of Delhi metro network and its extension. Phys A: Stat Mech Its Appl 526:120991 13. Du Z, Tang J, Qi Y, Wang Y, Han C, Yang Y (2020) Identifying critical nodes in metro network considering topological potential: a case study in Shenzhen city-China. Phys A: Stat Mech Its Appl 539:122926 14. Li M, Wang X, Gao K, Zhang S (2017) A survey on information diffusion in online social networks: models and methods. Information 8(4) 15. Derrible S (2012) Network centrality of metro systems. PLOS ONE 7(7):1–10
Biometric Iris Recognition System’s Software and Hardware Implementation Using LabVIEW Tool Rajesh Maharudra Patil, B. G. Nagaraja, M. R. Prasad, T. C. Manjunath, and Ravi Rayappa
Abstract This study employs LabVIEW-generated block diagrams to demonstrate the software implementation of an automatic biometric iris recognition system in unconstrained contexts. The paper defines the fundamental building blocks used in the proposed approaches, as well as the different steps involved in each contribution’s design process. Three categories of iris recognition procedures are carried out, using various feature extraction, classification, and matching techniques. Moreover, the proposed methodology is compared with existing works, showing better accuracy, lower error rates, and superior performance. The software implementation is carried out using MATLAB, and the codes are created as .vi files in the LabVIEW environment. The simulation results are analyzed and commented upon for each contributor. Finally, general conclusions are drawn after careful examination of all three contributions. The study also proposes a hardware implementation using a microcontroller, which has yielded excellent results. Keywords Biometrics · Iris · Authentication · Recognition · Segmentation · Images · Preprocessing · Algorithms · GUIs · H/W · S/W · Database R. M. Patil (B) Electrical Engineering Department, SKNS College of Engineering Korti, Affiliated to Solapur University, Pandharpur, Maharashtra, India e-mail: [email protected] B. G. Nagaraja Electronics and Communication Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India M. R. Prasad Computer Science and Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India T. C. Manjunath Electronics and Communication Engineering Department, Dayananda Sagar College of Engineering, Bengaluru, India R. Rayappa Electronics and Communication Engineering, Jain Institute of Technology, Davanagere, Karnataka, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_7
1 Introduction Biometric iris recognition is an advanced technology that has gained significant attention in recent years. It is a technique used to identify individuals based on the unique patterns found in their iris, which is the colored part of the eye. Iris recognition is a highly accurate method of identification that is widely used in various applications, such as access control systems, forensic analysis, and border security. The iris of the eye is a complex and highly structured tissue that contains a large number of unique features that can be used to distinguish one individual from another. These features include the shape and texture of the iris, as well as the patterns of the iris muscles and blood vessels. The development of biometric iris recognition systems has been driven by the need for secure and reliable methods of identification. The system works by capturing an image of the iris and then extracting the unique features to create a template that can be stored and used for future identification. In recent years, the use of biometric iris recognition has become more prevalent due to advancements in technology and the need for increased security measures. The technology has been integrated into various systems, including smartphones, ATMs, and airport security checkpoints. The purpose of this paper is to demonstrate the software implementation of an automatic biometric iris recognition system in unrestricted contexts. The study uses LabVIEW-generated block diagrams and proposes various feature extraction, classification, and matching techniques to achieve higher accuracy and better performance. The paper also proposes a hardware implementation using a microcontroller, which has yielded excellent results. A biometric iris recognition system typically consists of several phases, including image acquisition, preprocessing, feature extraction, matching, and decision making. The image acquisition phase involves capturing an image of the iris using a specialized camera. The camera must be capable of capturing high-quality images of the iris, which is a complex and highly structured tissue. The preprocessing phase involves enhancing the captured image to improve the quality of the iris image. This phase typically includes noise reduction, segmentation, and normalization techniques to remove any noise or unwanted elements from the image and to standardize the image for further processing. In the feature extraction phase, unique features of the iris are identified and extracted from the preprocessed image. These features include the shape and texture of the iris, as well as the patterns of the iris muscles and blood vessels. The matching phase involves comparing the extracted features with previously stored templates to determine if a match exists. The templates are typically stored in a database and are created during the enrollment process, where a user’s iris image is captured and their features are extracted and stored for future identification. Finally, the decision-making phase involves making a decision based on the match score obtained during the matching phase. If the match score is above a certain threshold, the system identifies the person, and if it is below the threshold, the system rejects the identification. Overall, the biometric iris recognition system
involves complex algorithms and techniques to accurately identify individuals based on the unique features of their iris. The system has found widespread use in various applications, including access control systems, forensic analysis, and border security. The literature on iris biometric recognition has shown that this technology has great potential for secure authentication purposes. Iris recognition systems are able to provide high accuracy rates and low false acceptance rates, making them suitable for a variety of applications such as border control, banking, and access control. However, the literature also highlights the importance of addressing certain limitations and challenges associated with iris recognition, such as the impact of varying environmental conditions and the need for robust feature extraction techniques. Several studies have proposed innovative solutions to overcome these challenges, including the use of advanced image processing techniques and machine learning algorithms. Overall, the literature suggests that iris biometric recognition technology is a promising field for future research and development. The article is organized into several sections that cover different aspects of the development of an iris recognition system. Sect. 2 describes the methodology used in the research, followed by Sect. 3 on the drawbacks of earlier research in the field. Sect. 4 discusses image analysis and the algorithmic steps involved in image recognition. Section 5 focuses on iris recognition procedures using DIP summaries. Sect. 6 provides an overview of the LabVIEW concepts used in the development of the system. The proposed contributions are discussed in Sect. 7, and the article concludes with Sect. 8, which presents the conclusions drawn from the research.
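To make the matching and decision-making phases described above concrete, the following is a minimal, hypothetical MATLAB sketch of a template comparison based on the normalized Hamming distance; the template length, the simulated bit noise, and the decision threshold are illustrative assumptions and not values taken from the proposed system.

% Minimal, hypothetical sketch of the matching and decision phases: two binary
% iris templates are compared with the normalised Hamming distance and the
% identity is accepted if the distance is below a threshold. Template length,
% simulated bit noise and threshold are illustrative assumptions only.
templateLen = 2048;                              % assumed bits per template
enrolled = rand(1, templateLen) > 0.5;           % stored template (stand-in data)
probe    = enrolled;
noisy    = rand(1, templateLen) < 0.05;          % simulate 5% bit disagreement
probe(noisy) = ~probe(noisy);                    % probe template from a new capture
hd = sum(xor(enrolled, probe)) / templateLen;    % normalised Hamming distance
threshold = 0.32;                                % assumed decision threshold
if hd < threshold
    disp('Match: subject identified');
else
    disp('No match: identification rejected');
end

In a real enrollment and identification cycle, the two templates would of course come from the feature extraction stage rather than from random stand-in data.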
2 Methodology The generalized DFD for the iris detection using unconstrained environment for the three proposed methodologies is shown in Fig. 1. In recent years, iris recognition systems have achieved impressive recognition rates in controlled environments. The study and implementation of iris recognition technologies have been the focus of various research communities for the past 50 years. However, most earlier research on iris recognition was limited to clear and well-captured iris images, and the system’s effectiveness was thought to be highly dependent on image quality. Images with lower quality and resolution, captured from a distance, or those that contain dynamic motion can significantly reduce the performance of iris recognition systems that are already limited in scope. A non-ideal iris image is one that suffers from issues such as poor acquisition angles, occlusions, pupil dilations, image blurriness, and low contrast. Most research has been confined to restricted environments, but we have focused on addressing this gap by studying the iris recognition problem in unrestricted settings [1–5].
Fig. 1 Generalized DFD for the iris detection using unconstrained environment for the three proposed methodologies
3 Drawbacks of the Research Works Done by Earlier Researchers Numerous attempts have been made to create iris biometric recognition systems for secure authentication. Many researchers have focused on developing recognition systems in constrained environments, where the camera must be aimed directly at the subject’s eye, the subject must look directly into the camera, there must be no parallax, the subject’s eyes must be open for iris capturing, and adequate lighting must be present [6–10]. However, only a few have developed iris identification systems in unrestricted settings, which is the focus of the proposed work presented in this article. While these algorithms are effective in constrained situations, they may not perform well in unconstrained environments. It is important to note that for unconstrained situations, certain limitations must be considered for the system to function properly and accurately. We have previously covered this topic in earlier articles [11–15].
4 Image Analysis and the Image Recognition Algorithmic Steps
The standard procedure for the enrollment segment starts with the acquisition of an image of the iris from a high-resolution iris camera, followed by the identification of the region of interest (ROI) from the entire image of the subject's face. It should be taken into consideration that the region of interest includes only the iris portion, and that this single portion of the iris should be taken into account for detection purposes [6–10]. The preprocessed original image is used for analysis and to improve system performance. The preprocessed image is next subjected to a normalization approach to reduce noise and improve the effectiveness of the recognition rule sets, carried out along with the enhancement of the iris portion. The next stage is the identification of the subject: once the iris features have been retrieved, a feature vector is saved and compared with the vectors in the collected iris database. The contributed works, which employ the ideas of image preprocessing, edge detection, segmentation, normalization, feature extraction, and classification for the identification of a human iris, are given in Fig. 1 [11–15].
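As a hedged illustration of the normalization step mentioned above, the sketch below unwraps an assumed annular iris region into a fixed-size polar strip using the widely used rubber-sheet idea; the file name, centre coordinates, and radii are placeholders standing in for values that a real segmentation stage would supply, and this is not the exact normalization used in the proposed contributions.

% Hedged sketch of the normalisation idea (rubber-sheet unwrapping): the annular
% iris region between an assumed pupil boundary and iris boundary is resampled
% into a fixed-size polar strip. File name, centre and radii are placeholders
% for values a real segmentation stage would supply.
I = im2double(imread('eye.jpg'));       % hypothetical captured eye image
if size(I,3) == 3, I = rgb2gray(I); end
cx = 160;  cy = 120;                    % assumed common centre of pupil and iris
rPupil = 30;  rIris = 90;               % assumed boundary radii in pixels
nR = 64;  nTheta = 256;                 % resolution of the normalised strip
theta = linspace(0, 2*pi, nTheta);
rho   = linspace(0, 1, nR)';
r     = rPupil + rho .* (rIris - rPupil);     % radial samples across the annulus
X = cx + r * cos(theta);                      % nR x nTheta sampling grid (x)
Y = cy + r * sin(theta);                      % nR x nTheta sampling grid (y)
normalised = interp2(I, X, Y, 'linear', 0);   % unwrapped iris strip
imshow(normalised, []);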
5 Recognition of Iris Procedures Utilizing DIP Summaries
Currently, a complete iris scan of a human eye is performed using an iris recognition system (IRS) consisting of multiple block sets. Each block has a specific function and is used in our research [5–10]. The procedural aspects of the digital image processing
Fig. 2 Procedures utilized in the DIP processing
Fig. 3 Functional block diagram of the proposed research work
(DIP) are shown in Fig. 2, while Fig. 3 depicts the functional data flow diagram (DFD) of the approach presented in this paper.
6 Overview of the LabVIEW Concepts In this section, we will discuss the contributions made to the iris recognition process using the LabVIEW tool, which is a product of National Instruments NI®. LabVIEW is a programming language and development environment that offers an interactive environment for designing and solving problems in various application-dependent tasks. It includes a workspace, a command window, a primary program editor’s window, and a location for program files, as shown in Fig. 2. Additionally, LabVIEW
Fig. 4 DFD for the iris detection system for election purpose
Fig. 5 Developed IRS block diagram with the help of artificial neural net and LabVIEW software
offers built-in mathematical functional modules that are essential for tackling scientific and engineering challenges. Figure 4 provides a diagrammatic representation of the data flow diagram (DFD) of the iris recognition system that can be used for election purposes, while Fig. 5 shows the developed block diagram of the iris detection system using artificial neural networks and the LabVIEW software.
7 Proposed Contributions This section presents the contributions of the research as three distinct entities denoted by C1–C3, which were developed in the LabVIEW environment. The proposed block diagrams were converted into LabVIEW scripts (.vi files) and executed, resulting in a binary response of either yes or no indicating the recognition of the iris. The main
objective of this section is to provide a step-by-step guide to the software implementation of the proposed algorithm(s) for iris recognition in unrestricted environments. Two methodologies are suggested, along with the creation of a LabVIEW graphical user interface (GUI). Additionally, this chapter includes discussions, necessary observations, and justifications along with the presentation of results obtained from all the test images. This section presents the key components and processes involved in the proposed approaches for iris recognition in unrestricted environments. Two distinct methods have been developed using various feature extraction techniques, classification, and matching methods to achieve high accuracy and low error rates compared to previous research. LabVIEW is the software tool of choice for implementation due to its support and add-on options. The codes are created as .vi files in the LabVIEW environment, and the simulation results are observed and discussed for each contribution. A microcontroller-based hardware setup for iris recognition is also described. The flowcharts for image extraction are illustrated in Figs. 6 and 7, while Figs. 8 and 9 depict the flowcharts for template matching and pattern recognition processes. Overall, the findings from all the contributions are analyzed to draw conclusions. The block diagram for the hardware implementation and interfacing of the LCD to the microcontroller is presented in Fig. 10, while Fig. 11 provides a brief overview of the hardware aspects of the implementation process. These figures aim to provide users with an understanding of how to conduct real-time experiments.
Fig. 6 Flowchart for image extraction (Part-1)
An automated graphical user interface has been developed for iris biometric recognition using LabVIEW software. It combines various image processing concepts such
Fig. 7 Flowchart for image extraction (Part-2)
Fig. 8 Flowchart for template matching/pattern recognition (Part-1)
as processing, segmentation, boundary detection using Sobel features, 2D and 3D Gabor wavelet algorithms, and Haar algorithms with artificial neural networks for classification purposes. A novel contribution of this work is the development of an
Fig. 9 Flowchart for template matching/pattern recognition (Part-2)
Fig. 10 Hardware implementation block diagram
interactive GUI-based system that is user-friendly and automates the biometric recognition process. The system is designed to operate in unconstrained environments, including poor lighting conditions, images taken from angles, and long distances. The iris recognition system has been developed using the NI LabVIEW and NI Vision software platforms. NI LabVIEW is a popular graphical programming language for scientific and engineering tasks. The Vision Development Module provides a library of LabVIEW VIs called NI Vision for LabVIEW, which can be used to develop applications for scientific imaging and machine vision. This
Fig. 11 Interfacing of LCD to microcontroller (hardware implementation block diagram)
contribution includes a GUI that demonstrates how various processes involved in iris recognition systems, such as feature extraction and preprocessing, are implemented using LabVIEW. Additionally, an example of an iris detection module created with LabVIEW for electronic voting is presented. This section describes the development of an automated GUI and the hardware implementation of iris recognition using an ATMEL microcontroller connected to LabVIEW. The main focus of this section is the step-by-step implementation of the algorithm and its integration with a well-developed GUI. One methodology is suggested in this chapter, along with the creation of a LabVIEW GUI. Additionally, this section includes a real-time implementation of an iris identification module used for voting procedures and one of its applications. The chapter presents the essential observations and justifications in the form of discussions, along with the varied outcomes achieved for all of the test images. The algorithm for the hardware implementation is shown in Figs. 12 and 13, respectively.
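Purely as an illustration of how the recognition decision computed on the PC could be handed to the interfaced microcontroller and LCD, the following MATLAB sketch writes a result string over a serial link; the COM port, baud rate, message strings, and score values are assumptions, and the actual interfacing in this work is done through LabVIEW as described above.

% Purely illustrative sketch of handing the recognition decision to the
% microcontroller/LCD over a serial link; the COM port, baud rate, message
% strings and score values are assumptions, and the actual interfacing in this
% work is done through LabVIEW as described in the text.
matchScore = 0.82;  threshold = 0.75;          % placeholder values from the matcher
device = serialport("COM3", 9600);             % assumed serial link to the board
if matchScore > threshold
    writeline(device, "IRIS MATCH");           % the microcontroller shows this on the LCD
else
    writeline(device, "NO MATCH");
end
clear device                                   % release the serial port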
8 Conclusions Finally, an iris identification system is developed by applying the preprocessing normalization and segmentation techniques discussed in previous sections. A test image is selected as input using our proposed method and successfully identified in one case, while not detected in the other case, proving the validity and effectiveness of our methodology. The developed algorithm can take any number of test patterns as input and determine whether they exist in the stored database. The processing time for these six techniques is around 10 s, demonstrating the superiority of our proposed algorithm over other alternatives.
Fig. 12 Algorithm for the hardware implementation—I
Fig. 13 Algorithm for the hardware implementation—II
In conclusion, the aim of this work was to design a biometric authentication system that addresses the limitations of previous research on iris-based biometric authentication. The objectives of the project have been successfully achieved, and the identified limitations have been overcome through comparison with other studies. A high-speed computational module for efficient iris detection using biometric means has been developed, incorporating numerous key elements. Overall, the study report shows that significant effort has been made to create straightforward and effective
algorithms for iris identification in unconstrained environments using a combination of various procedures (hybridized algorithms).
References 1. Kaur N, Juneja M (2014) A novel approach for iris recognition in unconstrained environment. Journal of Emerging Technologies In Web Intelligence 6(2):243–246 2. Tsai Y-H (2014) A weighted approach to unconstrained ıris recognition. World Academy of Science, Engineering and Technology International Journal of Computer and Information Engineering, vol 8, No. 1, pp 30–33. ISSN:1307-6892 3. Roy K, Bhattacharya P, Suen CY (2010) Unideal ıris segmentation using region-based active contour model. Campilho A, Kamel M (eds) ICIAR 2010, Part II, LNCS 6112, © Springer, Berlin, pp 256–265 4. Raffei AFM, Asmuni H, Hassan R, Othman RM (2013) Feature extraction for different distances of visible reflection iris using multiscale sparse representation of local Radon transforms. Pattern Recogn 46:2622–2633 5. Jan F (2017) Segmentation and localization schemes for non-ideal iris biometric systems. Signal Process 133:192–212 6. Shin KY, Nama GP, Jeong DS, Cho DH, Kang BJ, Park KR, Kim J (2012) New iris recognition method for noisy iris images. Pattern Recogn Lett 33:991–999 7. Nagaraja BG, Jayanna HS (2013) Multilingual speaker identification by combining evidence from LPR and multitaper MFCC. J Intell Syst 22(3):241–251 8. Haindl M, Krupiˇcka M (2015) Unsupervised detection of non-iris occlusions. Pattern Recogn Lett 57:60–65 9. Karakaya M (2016) A study of how gaze angle affects the performance of iris recognition. Pattern Recogn Lett 82:132–143 10. Barpanda SS, Majhi B, Sa PK (2015) Region based feature extraction from non-cooperative iris images using triplet half-band filter bank. Opt Laser Technol 72:6–14 11. Proença H, Neves JC (2016) Visible-wave length iris/periocular imaging and recognition surveillance environments. Image Vis Comput 55:22–25 12. Hu Y, Sirlantzis K, Howells G (2017) A novel iris weight map method for less constrained iris recognition based on bit stability and discriminability. Image Vis Comput 58:168–180 13. Liu J, Sun Z, Tan T (2014) Distance metric learning for recognizing low-resolution iris images. Neurocomputing 144:484–492 14. Alvarez-Betancourt Y, Garcia-Silvente M (2016) A key points—based feature extraction method for iris recognition under variable image quality conditions. Knowl-Based Syst 92:169–182 15. Hajaria K, Gawandeb U, Golharc Y (2015) Neural network approach to ıris recognition in noisy environment. In: International conference on ınformation security & privacy (ICISP2015), Procedia Computer Science, vol 78 (2016). 11–12 Dec 2015, Nagpur, India, pp 675–682
A Unique Method of Detection of Edges and Circles of Multiple Objects in Imaging Scenarios Using Line Descriptor Concepts Rajesh Maharudra Patil, B. G. Nagaraja, M. R. Prasad, T. C. Manjunath, and Ravi Rayappa
Abstract This paper proposes a novel method for detecting the edges and circles of multiple objects in imaging scenarios using line descriptor concepts. The method involves analyzing the intensity gradient and the orientation of the pixels in the image, and using this information to identify lines and circles that are likely to correspond to object boundaries. The proposed approach is compared with existing methods and is shown to provide superior performance in terms of accuracy and computational efficiency. The method is particularly useful for applications such as object recognition and tracking, where accurate detection of object boundaries is essential. The experimental results demonstrate the effectiveness of the proposed method in detecting edges and circles of multiple objects in different imaging scenarios. Keywords Image · Edge · Circle · Square · Detection · Simulation · Program · Execution · Software · Results
R. M. Patil (B) Electrical Engineering Department, SKNS College of Engineering Korti, Affiliated to Solapur University, Pandharpur, Maharashtra, India e-mail: [email protected] B. G. Nagaraja Electronics and Communication Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India M. R. Prasad Computer Science and Engineering, Vidyavardhaka College of Engineering, Mysuru, Karnataka, India T. C. Manjunath Electronics and Communication Engineering Deparment, Dayananda Sagar College of Engineering, Bengaluru, India R. Rayappa Electronics and Communication Engineering, Jain Institute of Technology, Davanagere, Karnataka, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_8
1 Introduction In computer vision, the detection of object boundaries is an essential task for various applications, such as object recognition, tracking, and segmentation. Detecting edges and circles of multiple objects in imaging scenarios is a challenging problem due to variations in object shapes, sizes, orientations, and lighting conditions. Many existing edge and circle detection methods suffer from limitations such as sensitivity to noise, computation time, and detection of false positives. The detection of edges and circles in images is a fundamental task in computer vision, with numerous applications in areas such as object recognition, tracking, and segmentation. In this literature survey, we review some of the existing methods for detecting edges and circles in images. The Canny edge detector is one of the most widely used methods for edge detection. It involves convolving the image with a Gaussian filter to reduce noise, computing the gradient magnitude and orientation, and applying non-maximum suppression and hysteresis thresholding to obtain the final edge map. While the Canny detector can produce high-quality edge maps, it is computationally expensive and sensitive to the choice of parameters. The Hough transform is a popular method for detecting circles in images. It involves converting the image to a parameter space, where circles are represented as curves, and then detecting peaks in the parameter space to identify the circles. While the Hough transform can produce accurate results, it is computationally expensive and sensitive to the choice of parameters. Detection of edges is utilized for segmenting of the data and in extracting of the fields including image processing, computer vision, and machine vision. The term “edge detection” refers to a group of mathematical methods for identifying regions in digital images where there are discontinuities or, more precisely, when the brightness of the image changes suddenly [1]. The study in [2] presents a comprehensive framework for designing and implementing augmented reality (AR) guidance systems in industrial settings. It also provides a valuable contribution to the field of AR guidance systems by offering a comprehensive framework that considers various aspects of designing and implementing AR guidance systems in industrial settings. The case studies and evaluation demonstrate the effectiveness of the proposed framework and provide insights into the potential benefits of AR guidance systems in improving industrial processes. The authors in [3] introduce the Retina U-Net, which is a modification of the popular UNet architecture. The Retina U-Net combines the segmentation and detection tasks, making use of the segmentation supervision to detect objects. The proposed method achieves state-of-the-art performance on several medical object detection datasets, including lung nodule detection and polyp detection. The Retina U-Net is computationally efficient and requires less training data compared to other state-of-the-art methods. The paper also demonstrates the potential of combining segmentation and detection tasks for medical object detection, which can lead to more accurate and efficient detection systems. Three different categories of edge exist: • Straight outlines (horizontal in nature)
• Edges that are vertical
• Edges that are perpendicular
Edge detection is a method for dividing an image into discontinuous regions. It is a typical strategy in digital picture processing, for example in image morphology, feature extraction, and pattern recognition. Edge detection can be used to determine if a feature's gray level has significantly changed in an image. Such a transition marks the end of one area of the picture and the beginning of another. This lessens the quantity of information (data) in any type of digital image while preserving its structural characteristics. Typical edges of an object in an image are shown in Figs. 2, 3 and 4, respectively, while the types of operators that could be used are shown graphically in Fig. 1 [4] (Fig. 5).
Fig. 1 Types of operators that could be used for the edge detection process
Fig. 2 Typical edge of an object in an image—1
Fig. 3 Typical edge of an object in an image—2
Fig. 4 Typical edge of an object in an image—3
To keep track of significant occurrences and changes in the world's characteristics, sharp changes in image brightness must be detected. It may be demonstrated that, under very broad assumptions about the image formation model, discontinuities in picture brightness are likely to correspond to the following [5]:
• Discontinuities in depth,
• Discontinuities in the surface's orientation,
• Changes in material properties, and
• Illumination changes in the particular scene of an image.
Fig. 5 Flow chart or the DFD utilized for the edge detection process of an object in a picture
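As a hedged illustration of the classical detectors surveyed in the introduction, the following MATLAB sketch applies the Canny edge detector and a circular Hough transform (imfindcircles) to a test image; the image file and the radius range are placeholders, and this is not the circle detection algorithm proposed later in this paper.

% Hedged MATLAB illustration of the classical detectors surveyed above (not the
% method proposed later in this paper): Canny edge detection followed by a
% circular Hough transform. The demo image and radius range are placeholders.
I = imread('coins.png');                        % placeholder test image
if size(I,3) == 3, I = rgb2gray(I); end
BW = edge(I, 'canny');                          % smoothing, gradient, NMS, hysteresis
[centers, radii] = imfindcircles(I, [20 40]);   % circular Hough transform
figure, imshow(BW)                              % the Canny edge map
figure, imshow(I), hold on
viscircles(centers, radii, 'Color', 'r');       % overlay the detected circles
hold off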
2 Block-Diagrams/Flow-Charts
The typical edges of an object in a gray scale image, stored in the PC's memory, are shown in Figs. 2, 3 and 4, respectively, whereas the types of operators that could be utilized for the detection of edges of objects in an image are shown in Fig. 1 [6]. Viewpoint-dependent or viewpoint-independent edges can be retrieved from a two-dimensional picture of a three-dimensional scene. The fundamental qualities of three-dimensional objects, such as surface marks and shape, are often reflected by a viewpoint-independent edge. A viewpoint-dependent edge, which changes as the point of view changes, frequently reflects the scene's geometric features, for example the occlusion of objects one above the other (hidden objects) [7]. The line separating a red block from a yellow block, for example, is a typical edge. A line, on the other hand, could be a small number of image pixels of a different color on an otherwise constant background (as can be retrieved by a ridge detector). As a result, there may be an edge on either side of a line in the majority of the cases considered. The edges obtained from natural photos are rarely perfect step edges. Actually, they are typically affected by a few important factors, such as [8]
• Focal blurs caused by the finite depth of field and the finite point spread function.
• Penumbral blurs caused by the shadowing effects created by the light sources.
• Shading at a smooth object.
Several researchers have adopted a Gaussian-smoothed step edge (an error function profile) as the simplest extension of the idealized step-edge model for describing the effects of edge blur in most science and engineering applications. Accordingly, a one-dimensional image can be modeled mathematically as [9]
f(x) = \frac{I_r - I_l}{2}\left(\operatorname{erf}\left(\frac{x}{\sqrt{2}\,\sigma}\right) + 1\right) + I_l
An important parameter, the scale value σ, determines the blur scale of the edge. To avoid destroying the image's actual edges, this scale parameter should ideally be adapted to the quality of the image [10].
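The following short MATLAB sketch simply plots the smoothed-edge model above for a few assumed intensity levels and blur scales, illustrating how a larger σ produces a more gradual (more blurred) edge; all numeric values are illustrative only.

% Illustrative plot of the smoothed-edge model for several blur scales; the
% intensity levels Il, Ir and the sigma values are placeholders.
Il = 0.2;  Ir = 0.8;                     % assumed left and right intensity levels
x  = linspace(-10, 10, 401);
figure; hold on
for sigma = [0.5 1 2 4]
    f = (Ir - Il)/2 .* (erf(x ./ (sqrt(2)*sigma)) + 1) + Il;   % the edge model
    plot(x, f);                          % larger sigma -> more gradual transition
end
hold off
xlabel('x'); ylabel('f(x)');
legend('\sigma = 0.5', '\sigma = 1', '\sigma = 2', '\sigma = 4');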
3 Chain Coding Process
In this section, we present the concepts that are used in the detection of circles using digital image processing fundamentals: a three-stage procedure for determining the location, circumference, and radius of circles [11]. Some of the notions from the Canny edge detection operator are used in the algorithm that incorporates this procedure. We consider the input image to be a noise-free gray scale image, with no random variations in intensity. The block diagram of the suggested algorithm, which is a three-stage process for detecting the location of circles, is discussed in the next section [12].
4 Approaches
The block-diagrammatic approach shown in the pictorial representation gives the concept of the proposed methodology. The preliminary steps of the process are convolution and non-maximum suppression. The input to these blocks is a noise-free gray scale image. Vertical and horizontal convolution kernels are utilized to get the vertical and horizontal gradient components for every pixel in this block. For every pixel, this value is approximated to one of the 8 orientations shown in Table 1. The image's edge strength is calculated using the vertical and horizontal convolution kernels. The resultant pixel direction and edge strength are utilized as input to the non-maximum suppression to obtain the thin edges outlined before [13].
Table 1 A central pixel and its 8 surrounding neighbors
3   2   1
4   pn  0
5   6   7
In the second stage, the thin edges determined in the first stage are processed further: the arcs that satisfy the conditions of being a component of a circle are contour traced using the pixel direction of each pixel. In this stage, any spurious points or edge pixels that do not meet the criteria for being a component of a circle are discarded. As a result, the arcs that have a higher chance of being part of a circle are kept [14].
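As a hedged sketch of the kind of direction quantization used in the first stage described above (simple central-difference kernels are used here, not necessarily the authors' exact kernels), the following MATLAB lines compute gradient components by convolution and approximate each pixel's direction to one of the 8 orientations of Table 1.

% Hedged sketch of first-stage direction quantisation: every pixel's gradient
% direction is approximated to one of the 8 orientations of Table 1, with
% 0 = right and codes increasing counter-clockwise. The kernels and the test
% image are illustrative assumptions.
I  = im2double(imread('coins.png'));            % placeholder noise-free gray image
gx = imfilter(I, [-1 0 1],  'replicate');       % horizontal convolution kernel
gy = imfilter(I, [-1 0 1]', 'replicate');       % vertical convolution kernel
strength  = hypot(gx, gy);                      % edge strength at every pixel
direction = mod(round(atan2(-gy, gx) / (pi/4)), 8);   % quantised direction, 0..7
% (-gy because image rows grow downward); strength and direction would then
% feed the non-maximum suppression stage described above.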
5 Introductory Remarks When the objects are not polyhedral, shape analysis is a method of determining the shape of irregular objects using two types of descriptors, viz., the line descriptors and area descriptors. Examples of the shape analysis of objects could be circles, spheres, ellipses, boundaries, curves, arcs, objects of irregular shapes. The first method is the line descriptors method, which is explained as follows [15].
6 Line Descriptors
The line descriptor method is the first way of performing shape analysis and is used for finding the length, in pixels, of the boundary or curve of a regular or irregular object; it uses an encoding scheme called chain coding. The chain coding process is illustrated by the example in Table 2 [16]. Chain coding is a technique for finding the length of a closed or open curve in pixels by representing the curve as a sequence of chain codes a ∈ Rn (n being the length of the curve in pixels) and is a relative representation.
Table 2 A numerical example of the chain coding process
0  0  0  0  1  1  0  0
0  0  1  1  0  0  1  0
0  1  0  0  0  0  1  0
0  1  0  0  0  0  1. 0
0  0  1  0  0  1  0  0
0  0  0  1  1  0  0  0
• p is a pixel on the boundary of an object.
• Start from the right-most pixel, marked with a dot '.'.
• Write down the relative position of the next adjacent neighboring pixel forming the boundary/curve.
• Repeat the process till you reach the starting position (for all the pixels on the curve).
• The vector thus formed is called the chain code of the curve.
• |C(a)| gives the length of the curve in pixels.
• a = [2, 2, 3, 4, 5, 4, 5, 6, 7, 7, 0, 1, 1]T
• |C(a)| = 13 pixels.
The developed chain coding process (CCP) is invariant to translations and variant to rotations. Since chain coding is a relative representation, the chain code does not change for a curve of identical shape at a different location, translated by some amount. For rotations, the length of the curve will be the same, but the chain code of the curve will be different since the starting point changes. If the number of transitions equals the number of pixels, the curve is called a closed curve; if the number of transitions is one less than the number of pixels, i.e., nT = np − 1, it is an open curve. Uses of chain coding include signature verification in banks and character recognition. A numerical example is shown in Table 2, whereas the central pixel surrounded by its 8 neighbors is shown in Table 1; a short sketch that interprets this chain code is given below. The three stages of the developed circle detection algorithmic approach are then presented in the form of a program in Sect. 7 [17].
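The sketch below interprets the chain code listed above using the neighbor numbering of Table 1 (0 = right, codes increasing counter-clockwise) and checks that the traced boundary returns to its starting pixel; the starting coordinates are taken from the dotted entry of Table 2 and are otherwise illustrative.

% Sketch interpreting the example chain code with the Table 1 numbering
% (0 = right, codes increasing counter-clockwise); the starting coordinates are
% taken from the dotted entry of Table 2 and are otherwise illustrative.
a  = [2 2 3 4 5 4 5 6 7 7 0 1 1];        % chain code of the example curve
dr = [0 -1 -1 -1  0  1 1 1];             % row step for codes 0..7
dc = [1  1  0 -1 -1 -1 0 1];             % column step for codes 0..7
p    = [4 7];                            % (row, col) of the dotted starting pixel
path = zeros(numel(a) + 1, 2);
path(1,:) = p;
for k = 1:numel(a)
    p = p + [dr(a(k)+1), dc(a(k)+1)];    % one chain-code step along the boundary
    path(k+1,:) = p;
end
fprintf('|C(a)| = %d pixels, closed curve: %d\n', numel(a), isequal(path(1,:), path(end,:)));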
7 Program/Algorithm Developed
Program:
%% Read Image
Inputimage = imread('a.jpg');
%% Show the image
figure(1)
imshow(Inputimage);
title('I/P THE IMAGE CONSIDERING THE NOISEs')
%% Convert to gray scale
if size(Inputimage,3) == 3   % RGB image
    Inputimage = rgb2gray(Inputimage);
end
%% Convert to binary image
threshold = graythresh(Inputimage);
Inputimage = ~im2bw(Inputimage, threshold);
%% Remove all objects containing fewer than 30 pixels
Inputimage = bwareaopen(Inputimage, 30);
pause(1);
%% Label connected components
[L, Ne] = bwlabel(Inputimage);
propied = regionprops(L, 'BoundingBox');
imshow(~Inputimage);
hold on
for n = 1:size(propied,1)
    rectangle('Position', propied(n).BoundingBox, 'EdgeColor', 'g', 'LineWidth', 2)
end
hold off
pause(1);
%% Objects extraction
figure
for n = 1:Ne
    [r, c] = find(L == n);
    n1 = Inputimage(min(r):max(r), min(c):max(c));
    imshow(~n1);
    pause(0.5)
end
Output:
The outputs of the program for one, two, three, four, and five fingers of a human hand are shown in Figs. 6, 7, 8, 9 and 10, respectively.
Fig. 6 Output—1 of a single finger’s detected edge of a human hand
Fig. 7 Output—2 of two finger’s detected edge of a human hand
Fig. 8 Output—3 of three finger's detected edge of a human hand
Fig. 9 Output—4 of four finger’s detected edge of a human hand [16]
Fig. 10 Output—5 of five finger’s detected edge of a human hand
8 Conclusion In conclusion, this paper proposes a unique method for detecting edges and circles of multiple objects in imaging scenarios using line descriptor concepts. The proposed method uses a combination of line detection and descriptor techniques to detect edges and circles, which is then refined using a clustering algorithm to identify individual objects. The experimental results demonstrate the effectiveness of the proposed method in detecting edges and circles of multiple objects in various scenarios, including natural scenes and industrial environments. The proposed method is shown to outperform existing state-of-the-art methods in terms of accuracy and efficiency. Furthermore, the proposed method is computationally efficient and can process large volumes of images quickly. This makes it suitable for real-time applications, such as industrial inspection and surveillance.
References 1. Lindeberg T (1998) Edge detection and ridge detection with automatic scale selection. International Journal of Computer Vision 30:117–154 2. Zubizarreta J, Aguinaga I, Amundarain A (2019) A framework for augmented reality guidance in industry. Int J Adv Manuf Technolo 102:4095–4108 3. Jaeger PF, Kohl SA, Bickelhaupt S, Isensee F, Kuder TA, Schlemmer HP, Maier-Hein KH (2020) Retina U-Net: embarrassingly simple exploitation of segmentation supervision for medical object detection. In: Machine learning for health workshop. PMLR, pp 171–183 4. Park JM, Lu Y (2008) Edge detection in grayscale, color, and range images. In: Wah BW (ed) Encyclopedia of computer science and engineering 5. Canny J (1986) A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8:679–714 6. Haralick R (1984) Digital step edges from zero crossing of second directional derivatives. IEEE Transactions on Pattern Analysis and Machine Intelligence 6(1):58–68
7. Manjunath TC, Vijaykumar KN (2001) Separation of foreground & background objects in image processing. National Journal of Applied Engineering and Technologies (JAET) 1(1):60– 65. Paper id AET-0014, ISSN-2278-1722 8. Manjunath TC, Suhasini VK (2011) Lossless compression in artificial images. National Journal of Applied Engineering and Technologies (JAET) 1(1):66–71. Paper id AET-0015, ISSN-22781722 9. Yadava GT, Nagaraja BG, Jayanna HS (2022) Performance evaluation of spectral subtraction with VAD and time–frequency filtering for speech enhancement. In: Emerging research in computing, ınformation, communication and applications: proceedings of ERCICA 2022. Springer Nature Singapore, Singapore, pp 407–414 10. Manjunath TC (2009) Development of an Novel Algorithm for compression of Binary Images. In: VTU (Belgaum) & University of Mysore sponsored second international conference on signal and ımage processing (ICSIP-2009), Vidya Vikas Institute of Technology, Mysore, Karnataka, India, Paper No. 309, pp 381–386. ISBN 978-93-80043-26-5 11. Manjunath TC (2007) Fundamentals of robotics, vols 1, 2, (300+ solved selected questions and answers, 185+ solved drill problems, 1240+ viva-voce oral questions, a free CD containing 150+ programs in C/C++ and up to date university question papers solutions, chapter wise and useful robotic web-sites), 5th revised and enlarged edition. Nandu Publishers, Mumbai, 832p 12. Vipula S (2010) Fundamentals of ımage processing. Cengate India Pvt. Ltd., Text-Book 13. Nagaraja BG, Jayanna HS (2012) Multilingual speaker identification with the constraint of limited data using multitaper MFCC. In: Recent trends in computer networks and distributed systems security: international conference, SNDS 2012, Trivandrum, India. Springer, Berlin, pp 127–134 14. Nagaraja BG, Jayanna HS (2013) Multilingual speaker identification by combining evidence from LPR and multitaper MFCC. J Intell Syst 22(3):241–251 15. Niidome T, Ishii R (2001) A GUI support system for a sight handicapped person by using hand shape recognition. In: 27th annual conference of the IEEE ındustrial electronics society, vol 1, pp 535–538 16. Gonzalvez & Woods (2010) Fundamentals of image processing. Addision Wessely 17. Jain AK (2010) Digital image processing, PHI 18. Umbaugh SE (2010) Digital image processing and analysis: human and computer vision applications with CVIP tools, 2nd edn. CRC Press, Boca Raton. ISBN 978-1-4398-0205-2 19. Barrow HG, Tenenbaum JM (1981) Interpreting line drawings as three-dimensional surfaces. Artificial Intelligence 17(1–3):75–116 20. Zhang W, In F (1997) Int J Comput Vision 24(3):219–250 21. Ziou D, Tabbon S (1998) Edge detection techniques: An overview. International Journal of Pattern Recognition and Image Analysis 8(4):537–559 22. Manjunath TC (2007) Fast track to robotics, 4th edn. Nandu Printers & Publishers, Mumbai, 180p
Robotic Vision: Simultaneous Localization And Mapping (SLAM) and Object Recognition Soham Pendkar and Pratibha Shingare
Abstract To create a robot that can navigate its environment, a map of that environment is needed. SLAM is the problem of building and updating a map of an unknown environment while simultaneously tracking the robot's location within it. LiDAR SLAM is a type of SLAM based on light detection and ranging, a common remote sensing method used to determine the precise distance between an object and a sensor, which helps to draw the map more accurately. A pulsed laser is used in LiDAR to determine an object's varying distance. The scanner estimates the range by receiving and recording the time delay between the transmission and receipt of the laser pulse. The position of the system connected with the LiDAR sensor is also obtained through GPS. We propose a front-end agnostic LiDAR system and provide a variety of qualitative results. In addition to SLAM, we also introduce YOLO v4 (You Only Look Once), a new approach to detecting multiple objects in real time in a single frame. The whole image frame is processed by a single neural network in YOLO, which divides the image into regions and uses the probability of each region to forecast the bounding boxes. This article introduces YOLO and modifies the YOLO v4 network for real-time object detection. Keywords SLAM · YOLO · Sensor · LiDAR · Map · Detection · Environment · Object
1 Introduction Robotic vision systems are a crucial piece of technology that allow robots to interact with the environment and comprehend their surroundings. It includes analyzing visual input and creating a three-dimensional model of the environment surrounding S. Pendkar · P. Shingare (B) College of Engineering, Pune, India e-mail: [email protected] S. Pendkar e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_9
the robot using cameras, sensors, and software algorithms [1]. Several sectors, including industrial automation, autonomous cars, medical robotics, and many more, can benefit from using robotic vision systems. Robotic vision systems are getting more sophisticated and are able to carry out difficult tasks with increased accuracy and efficiency as machine learning and computer vision techniques progress. The creation of 3D cloud maps is an important challenge for a variety of robotic applications. To do so, two different techniques are mentioned below. 1. LiDAR SLAM. Robotics and autonomous systems employ the LiDAR Simultaneous Localization and Mapping (SLAM) technology to simultaneously map the environment and localize the robot inside it [2]. It makes use of a LiDAR sensor, which sends out laser beams to detect its surroundings and generate a 3D point cloud of the area. This point cloud data is used by the LiDAR SLAM algorithm to identify and track environmental elements including walls, objects, and landmarks while also determining the location and orientation of the robot on the map. As a result, the robot can maneuver and avoid obstacles in real time with accuracy. In applications like autonomous cars, drones, and mobile robots where precise and effective mapping and localization are crucial, LiDAR SLAM is frequently employed. Robots are useful instruments for a number of jobs, including surveillance, search and rescue, and transportation, since they can work independently in complex and dynamic situations by employing LiDAR sensors to generate and update a map of the surroundings. 2. Visual sensor-based SLAM. Simultaneous Localization and Mapping, or SLAM, is a robotics and computer vision technology that uses visual sensors to map an uncharted area while also detecting the location of the robot inside it. In visual sensor-based SLAM, the robot takes pictures of its surroundings with one or more cameras. To construct a 3D map of the environment, these photos are then processed to extract visual elements like corners, edges, and blobs. The robot’s location and orientation are also calculated by observing how these visual elements change over time. A comprehensive map of the environment is created by fusing data from the camera(s) and the robot’s movement, and the position of the robot inside that environment is continually updated. Built on visual sensors applications for SLAM may be found in a number of industries, including robots, autonomous driving, virtual reality, and augmented reality. It offers the benefit of being inexpensive, lightweight, and free from the need for pricey sensors like LiDAR [3]. The creation of a rich and exact 3D point cloud map has been made feasible thanks to recent breakthroughs in LiDAR technology. Since solo odometry is subject to oscillations in motion estimates, the integration of the modules is very important for the accuracy of maps. Despite several advances in LiDAR odometry techniques, this motion estimation error is unavoidable. Front-end LiDAR system is developed. By building the system in modules and successfully integrating it with
Fig. 1 General architecture of SLAM
Scan Context++ and a range of open-source LiDAR odometry techniques, we aimed to establish a comprehensive system that produces accurate point cloud maps. SLAM and object identification can operate on sparse point clouds. In the proposed technique, ORB-SLAM employs the current and previous video frames from the monocular vision sensor to calculate the observer's location and the point cloud representing the objects in the environment. The current frame is used by deep neural networks to identify and recognize objects. SLAM is a fundamental robotics problem and offers a wide range of applications, including deep space exploration, indoor localization, and navigation in vast settings. Deep neural networks have been used to improve the performance of SLAM technology in recent years. In general, researchers build picture embeddings using DNN- and CNN-based image descriptors, and CNNs may be used to map raw front-camera pixels directly. Additionally, it has been demonstrated that using SLAM and an object recognition (OR) system simultaneously can increase object recognition performance by providing OR with extra information on the spatial placement of the identified item, even in systems that use monocular visual sensors. The SLAM approach combines two functions that are inextricably linked, as shown in Fig. 1. The front-end recognizes and tracks the characteristics of the images obtained from the moving robot. The back-end interprets these characteristics and landmark observations, providing estimates for both the landmarks' and the robot's positions with respect to the chosen frame of reference.
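As a small, hedged illustration of the front-end geometry involved in building such point cloud maps, the MATLAB sketch below converts one 2D LiDAR scan (ranges and bearings) into Cartesian points expressed in the map frame for an assumed robot pose; the scan values and pose are placeholders, and this is not the SC-LiDAR-SLAM pipeline itself.

% Hedged geometric sketch (not the SC-LiDAR-SLAM pipeline itself): one 2D LiDAR
% scan, given as ranges and bearings, is converted into Cartesian points in the
% map frame using an assumed robot pose. All numbers are placeholders.
ranges = [2.10 2.00 1.90 1.85 1.92];         % metres
angles = deg2rad([-10 -5 0 5 10]);           % beam bearings relative to the sensor
pose   = [1.0, 0.5, pi/6];                   % assumed robot pose [x, y, heading]
xs = ranges .* cos(angles);                  % scan points in the sensor frame
ys = ranges .* sin(angles);
R  = [cos(pose(3)) -sin(pose(3)); sin(pose(3)) cos(pose(3))];   % 2D rotation
pts = R * [xs; ys] + pose(1:2)';             % rotate and translate into the map frame
plot(pts(1,:), pts(2,:), '.');  axis equal   % accumulated over many scans, this forms the map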
2 Literature Survey
Here we look to provide a complete picture of robotics, the development of research, and cutting-edge techniques in this sector to enhance the localization of autonomous robots. The main goal is to assess the methods and algorithms that enable robots to move freely and securely across challenging settings. Due to these limitations, we concentrated our study efforts on one issue: evaluating a straightforward, quick, and lightweight SLAM approach that can reduce localization mistakes. A
Fast SLAM 2.0 that incorporates scan matching and loop closure detection has been shown and reviewed [4]. After investigating one of the top deep learning methods that makes use of convolutional neural networks to assist the robot in detecting its surroundings and identifying items. With the help of the YOLOv3 algorithm validation of experiments has been done. Cutting-edge method for visual sensor data in open spaces that works with point clouds of sparse data and allows SLAM and object recognition. Unlike deep neural networks, which can only recognize and classify items in the current frame, ORBSLAM determines the observer’s location and generates a cloud of points that symbolizes the environment’s objects by combining previous and present monocular visual sensor video frames [5]. The collected point cloud contrasts with the region that the OR network recognized. Filtration of points that match to the region indicated by the OR algorithm is done because every point has a counterpart in the current frame on the 3D map. Clustering method discovers regions in which points are widely spread to pinpoint the locations of objects detected by OR. Following step estimates the bounding boxes of the objects that were detected using a heuristic based on principal component analysis [5]. Due to the poor resolution and background-like objects in aerial photos, tiny target detection is still a challenging task. Effective and high-performance detector approaches have been created with the recent advancements in object detecting technology. The YOLO series is an example of an efficient, lightweight object identification technique among them. In this article, we suggest a technique for tweaking YOLOv4 to enhance the performance of tiny target recognition in aerial photos. The first effective channel attention module was used to change the structure of it, and the channel attention pyramid approach was suggested. Useful YOLO channel attention pyramid is provided [6]. The creation of accurate 3D point clouds is essential even for data-driven urban studies and many robot tasks. To achieve this, SLAM based on light detection and ranging LiDAR sensors has been developed. Numerous odometry and location identification techniques have been independently presented in academics to make up a complete SLAM system. However, they have not been sufficiently integrated or merged, making it difficult to upgrade a single place identification or odometry module. Each module’s performance has significantly increased recently, thus it is essential to create a SLAM system that can seamlessly combine them and quickly swap out older modules for the newest. Successful combining of SLAM with Scan Context++ and several different free LiDAR alternatives for building accurate maps has been done. Mathematical framework is used for merging SLAM with object tracking. Two approaches are outlined: SLAM for generic objects and SLAM for tracking and recognizing moving things. A joint posterior is computed for the robot and all generalized objects in SLAM with generalized objects. SLAM systems are now in use, but with a more organized methodology that makes motion modeling of generic objects easier. Sadly, it is computationally expensive and usually impractical. The estimation issue is divided into two distinct estimators using SLAM with DATMO. Because discrete posteriors are preserved for stationary and moving objects, the
ensuing estimation problems are substantially less dimensional than in SLAM with generalized objects. It’s challenging to do SLAM and object tracking from moving vehicle in congested cities. Workable techniques that address problems with perception modeling are offered. The recognition of moving objects and data association are carried out using the SLAM with DATMO framework. The CMU Navlab11 car’s data was used to demonstrate the use of SLAM with DATMO while it sped through crowded metropolitan areas at high speeds. A wide range of experimental findings demonstrate the viability of the suggested theory and methods [2].
3 LiDAR SLAM
LiDAR Simultaneous Localization and Mapping (SLAM) is a technique that uses Simultaneous Localization and Mapping (SLAM) algorithms with Light Detection and Ranging (LiDAR) sensors to map an environment in real time and pinpoint the location of the LiDAR sensor inside it. The LiDAR sensor produces laser beams, which reflect off nearby objects and then return to the sensor with data on their size, location, and shape. This data is used by the SLAM algorithm to follow the movement of the LiDAR sensor and create a map of the surrounding area. Many technologies, including autonomous cars, robots, and virtual reality, utilize LiDAR SLAM. LiDAR is a popular remote sensing method for pinpointing an object's distance from a sensor. It calculates the varying distance of an object from a sensor using a pulsed laser [7]. These light pulses, along with the data acquired by the device, produce precise 3D information about the sensor and targets. This LiDAR equipment has three main components: scanner, laser, and GPS receiver. Other elements that play an important role in data acquisition and analysis are photodetectors and optics. A single idea governs LiDAR: calculate the time it takes for laser light to return to the LiDAR source after shining it at an item, as shown in Fig. 2. Since light travels at a speed of around 186,000 miles per second, the technique of measuring distances with LiDAR is extremely quick. The following is the formula that analysts use to compute an object's exact distance:
Distance to object = (speed of light × flight time)/2.
LiDAR is a type of active remote sensing technology. This implies that the system creates its own energy, in the form of light from a rapid-fire laser, which is used to measure the distance to an item precisely. A LiDAR sensor is made up of three primary parts:
1. Laser: Sends and receives pulses.
2. Scanner: Receives and records the time difference between light pulse transmission and reception to compute the range.
3. Dedicated GPS: Assists the LiDAR sensor in determining the system's position.
It operates by first illuminating the target with laser light and then using a sensor to catch the light that is reflected, where the distance to the item is inferred using the speed of light to determine the distance traveled correctly, like a "time of flight"
process. Furthermore, the differences in return time and wavelength of the laser are utilized to create exact digital 3D representations and surface details of the target, as well as to visually map its distinct properties [8]. As seen, LiDAR technology can generate accurate and precise information about road structure and identify obstacles to avoid collisions. LiDAR is related to radar and sonar, which are radio-wave and sound-based technologies. LiDAR, however, is more exact than these, since they can only map the location of a distant object, whereas LiDAR can build precise digital 3D representations. This qualifies it for a variety of close-range applications, including driverless cars.
Fig. 2 Working flow of SLAM
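The time-of-flight relation quoted above can be checked numerically with the following minimal MATLAB sketch; the round-trip time is a placeholder chosen so that the result is close to the 10 m range of the sensor used later in this work.

% Minimal numeric check of the time-of-flight relation:
% distance to object = (speed of light x flight time) / 2.
c = 299792458;          % speed of light in m/s
t = 66.7e-9;            % assumed round-trip flight time in seconds (placeholder)
d = c * t / 2;          % one-way distance in metres (about 10 m here)
fprintf('Distance to object: %.2f m\n', d);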
4 YOLO_v4
A cutting-edge real-time object recognition technique that makes use of deep learning is called You Only Look Once (YOLO) v4. YOLO v4 is an upgrade over earlier iterations of YOLO, with faster speed and more accuracy. The technique predicts the bounding boxes, objectness score, and class probability for each grid cell after splitting an image into a grid. The cross-stage partial (CSP) architecture-based backbone network used by YOLO v4 aids in increasing the precision of object recognition. A spatial pyramid pooling (SPP) module is also included, which enables the model to learn features at various sizes.
To further improve model optimization, YOLO v4 employs a variety of loss functions, such as focal loss, binary cross-entropy loss, and IoU loss. The model's performance is further enhanced by several additional methods, including DropBlock regularization, Mosaic data augmentation, and Mish activation. On several object detection benchmarks, including the COCO and VOC datasets, YOLO v4 has produced state-of-the-art results. Real-time object detection is essential in many applications, including robotics, surveillance, and autonomous vehicles, where it is frequently employed. You Only Look Once, often known as YOLO, is a well-known object recognition technique used in computer vision. YOLO applies a single neural network to the whole image, predicting bounding boxes and class probabilities concurrently, in contrast to traditional object identification approaches that rely on numerous passes over an image. Because of this, YOLO is quick and effective, and it can process photos in real time using a regular GPU. Applications for YOLO range from object detection in autonomous vehicles to observing pedestrian behavior in public areas [9]. You Only Look Once, or YOLO, is a modern object detection technology which operates in real time. It is an object identification system that can instantly identify several objects in a single image. YOLO has evolved into several variations over time, including YOLOv2, YOLOv3, and YOLOv4. YOLO has a fundamentally different approach to detection than past methods. To process the full picture, it employs just one neural network. The image is divided into regions by this network, which also predicts bounding boxes and probabilities for each region; the bounding boxes are weighted by the predicted probabilities. The core idea of YOLO is depicted in Fig. 3. Each grid cell in the input picture must forecast the object whose centre falls in that grid cell. The input picture is divided into an S × S grid of cells. Each grid cell predicts B bounding boxes and their confidence scores. These confidence ratings represent the model's evaluation of the box's chances of containing an item as well as the precision of the predictions made by the box. Another sample output is shown in Fig. 4. Compared to classifier-based systems, the YOLO approach provides a number of benefits. In a single picture, it can distinguish many items. At test time, it assesses the entire image to ensure that the full context of the image informs all of its predictions. In contrast to systems like R-CNN that require numerous evaluations for a single picture, it makes its predictions with a single network evaluation. It is therefore 1000 times faster than R-CNN and 100 times faster than Fast R-CNN. The YOLO design maintains a high degree of average accuracy while enabling real-time speed and end-to-end training.
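As a hedged numeric illustration of the S × S grid idea described above, the MATLAB sketch below assigns an object's centre to the grid cell responsible for predicting it and computes the normalized offsets inside that cell; the grid size, network input size, and box centre are assumptions (the original YOLO used S = 7, while YOLOv4 predicts on several grid scales).

% Hypothetical numeric sketch of the S x S grid assignment: the grid cell that
% contains an object's centre is responsible for predicting it. Grid size,
% network input size and the box centre are assumptions.
S = 7;                                   % assumed grid size
imgW = 416;  imgH = 416;                 % assumed network input resolution
centre = [250, 130];                     % assumed object centre (x, y) in pixels
cellW = imgW / S;  cellH = imgH / S;
col = floor(centre(1) / cellW) + 1;      % 1-based grid column containing the centre
row = floor(centre(2) / cellH) + 1;      % 1-based grid row containing the centre
tx  = centre(1)/cellW - (col - 1);       % centre offset inside the cell, in [0, 1]
ty  = centre(2)/cellH - (row - 1);
fprintf('Centre lies in cell (row %d, col %d) with offsets (%.2f, %.2f)\n', row, col, tx, ty);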
Fig. 3 The YOLO_v4 model
Fig. 4 Working of YOLO_v4
5 Hardware Used In this study, a personal computer was chosen. It features an Intel Core(R) i7-10510U CPU clocked at 1.80 GHz (up to 2.30 GHz), 8 GB of DDR4 RAM, and an NVIDIA GeForce MX350 graphics card with 4 GB of GDDR5 memory to expedite CNN training. For YOLO operations, the Mi Webcam HD 720p camera was used as the camera sensor. The SmartFly Info LIDAR-053 EAI YDLIDAR X4 laser radar sensor module, with a range of 10 m and a scanning frequency of 5 kHz, is utilized as the LiDAR (Fig. 5).
Fig. 5 LiDAR sensor and HD 720p webcam
6 Results LiDAR SC-PGO is the fundamental core of SC-LiDAR-SLAM. Open-source LiDAR odometry methods have been combined and integrated with SC-PGO. For convenience of use, the entire LiDAR SLAM system is available through the repository. The SLAM output for an example scene is shown in Fig. 6, providing a real-time 3D map of the environment in a dedicated graphical interface. YOLO_v4 achieves cutting-edge results in real-time object detection and is capable of running at 60 FPS on the GPU. The model is trained to detect 81 different object classes. A real-time object detection situation is depicted in Fig. 7, where it detects 3 different objects simultaneously.
Fig. 6 LiDAR implementation output stages
Fig. 6 (continued)
Fig. 7 Object detected using YOLO_v4
7 Conclusion A front-end agnostic LiDAR SLAM system has been developed, and it gave good qualitative results. Thanks to the modular architecture and the excellent loop-closing features of Scan Context++, easy interchange between several LiDAR (or even radar) odometry methods is possible, and exact point cloud maps were successfully built [10]. In subsequent work, we will provide several quantitative evaluations of the performance of the recommended LiDAR system. The YOLO series is also established in this study, and the YOLOv4 network is enhanced to better recognize indoor micro targets [11].
References
1. https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
2. Wang C-C, Thorpe C, Thrun S, Hebert M, Durrant-Whyte H (2007) Simultaneous localization, mapping and moving object tracking. The International Journal of Robotics Research 26(9):889–916
3. Maolanon P, Sukvichai K, Chayopitak N, Takahashi A (2019) Indoor room identify and mapping with virtual based SLAM using furnitures and household objects relationship based on CNNs. In: 2019 10th international conference of information and communication technology for embedded systems (IC-ICTES). IEEE, pp 1–6
4. Chehri A, Zarai A, Zimmermann A, Saadane R (2021) 2D autonomous robot localization using fast SLAM 2.0 and YOLO in long corridors. In: International conference on human-centered intelligent systems. Springer, Singapore, pp 199–208
5. Mazurek P, Hachaj T (2021) SLAM-OR: simultaneous localization, mapping and object recognition using video sensors data in open environments from the sparse points cloud. Sensors 21(14):4734
6. Kim M, Jeong J, Kim S (2021) ECAP-YOLO: efficient channel attention pyramid YOLO for small object detection in aerial image. Remote Sensing 13(23):4851
7. Chan S-H, Wu P-T, Fu L-C (2018) Robust 2D indoor localization through laser SLAM and visual SLAM fusion. In: 2018 IEEE international conference on systems, man, and cybernetics (SMC). IEEE, pp 1263–1268
8. Bavle H, De La Puente P, How JP, Campoy P (2020) VPS-SLAM: visual planar semantic SLAM for aerial robotic systems. IEEE Access 8:60704–60718
9. Garcia-Rodriguez J (ed) (2020) Robotic vision: technologies for machine learning and vision applications. IGI Global
10. Kim G, Yun S, Kim J, Kim A (2022) SC-LiDAR-SLAM: a front-end agnostic versatile LiDAR SLAM system. In: 2022 international conference on electronics, information, and communication (ICEIC). IEEE, pp 1–6
11. Cai Y, Alber F, Hackett S (2020) Path markup language for indoor navigation. In: International conference on computational science. Springer, Cham, pp 340–352
Optimum Value of Cyclic Prefix (CP) to Reduce Bit Error Rate (BER) in OFDM Mahesh Gawande and Yogita Kapse
Abstract Orthogonal frequency division multiplexing (OFDM) is one of the multi-carrier transmission systems used in wireless communication networks. Reducing the bit error rate (BER) of OFDM is a constant goal. Using the cyclic prefix (CP), we can reduce the bit error rate, but increasing the value of the cyclic prefix wastes part of the transmitted data. Therefore, in this paper, we discuss the optimum value of the cyclic prefix to reduce the bit error rate. When an OFDM signal is sent over a dispersive channel, a cyclic prefix is used to prevent inter-symbol and inter-carrier interference; an OFDM signal becomes dispersed when delivered over such a channel. OFDM is crucial in 4G communication systems such as fixed Wi-Fi, fixed WiMAX, mobile WiMAX, and long-term evolution (LTE). The impact of changing design parameters (i.e., the cyclic prefix) on OFDM systems is studied by simulating multiple OFDM systems with various modulation schemes in MATLAB coding/Simulink. Keywords Cyclic prefix (CP) · Bit error rate (BER) · Inter-symbol interference (ISI) · Inter-carrier interference (ICI)
1 Introduction The orthogonal frequency division multiplexing (OFDM) technique uses many narrow-band sub-channels to carry high data rates. These sub-channels are orthogonal to one another; they are narrow bands with close spacing. OFDM is being deployed because of its capacity to handle multipath interference at the receiver. The idea behind OFDM is to split a high-rate data stream into smaller streams that are sent on a set of subcarriers at the same time.
M. Gawande (B) · Y. Kapse Electronics and Telecommunication, College of Engineering Pune, Pune, India e-mail: [email protected]
Y. Kapse e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_10
Any type of digital modulation can be used as the modulation scheme, the most prevalent being BPSK, QPSK, and 64-QAM [1]. The signal to be broadcast is the sum of the outputs of all the modulators. Compared with alternative modulation techniques, the OFDM system with channel coding and BPSK modulation produces the lowest BER value; as a result, a BPSK-based OFDM system using the FFT and a cyclic code gives the lowest BER. The OFDM system divides the spectrum into many orthogonal carriers, each modulated by a low-rate data stream. If the carrier-spacing overhead is removed, it becomes possible to utilize the spectrum more efficiently than with frequency division multiple access. Because each carrier's bandwidth is limited, it has a low symbol rate and, as a result, a high tolerance for multipath delay spread [2]. The delay spread must be very large to cause significant inter-symbol interference; nevertheless, inter-symbol interference is a significant issue when evaluating the signal's performance during the various phases of transmission. BPSK, 64-QAM, and other modulation constellations are used to modulate and map a wide-band binary data stream to a symbol stream. Inverse multiplexing is used to de-multiplex these symbols into several parallel streams. The constellation may differ from stream to stream, so the bit rate of some streams may be higher than that of others. An inverse FFT is performed on each set of symbols, resulting in a collection of complex time-domain samples that are quadrature-mixed to passband in the typical manner [3].
2 Related Work OFDM was initially developed in the communication industry as a technique for encoding digital data on multiple carrier frequencies in order to attain the highest level of data transfer dependability. From previous research work, one observation is common to all papers: as the cyclic prefix value is increased, the inter-carrier interference and inter-symbol interference are reduced. One of the papers also showed that as the cyclic prefix value is increased, the bit error rate is reduced [4]. However, previous work did not discuss the optimum value, because as the percentage of cyclic prefix is increased, part of the transmitted data is lost at the same time; moreover, all of these earlier results were obtained using MATLAB Simulink simulation only. In this work, we not only use simulation but also implement the same system in MATLAB code, and we determine the optimum value that reduces the bit error rate without losing too much data [5]. In this paper, we therefore calculate the optimum value of the cyclic prefix, whereas previous work only increased the value of the cyclic prefix. Increasing the cyclic prefix reduces the ICI, the ISI, and the BER, but data is also lost at the same time, so we discuss here the optimum value of the cyclic prefix [6].
3 Proposed Methodology A. OFDM Transmitter Figure 1 depicts the basic model of the OFDM system. The transmitter modulates the digital data to be transmitted using the BPSK modulation technique, after which the data is converted into many parallel streams. The modulated signals are then given to the IFFT block, which converts the spectral representation of the data into the time domain; this is significantly more computationally efficient and is employed in all practical systems [7]. A cyclic prefix is then added to each symbol: the end of the OFDM symbol is copied into the guard interval that precedes it, so the transmitted block consists of the guard interval followed by the OFDM symbol. Because the guard interval carries a copy of the end of the symbol, when the receiver performs OFDM demodulation it integrates over an integer number of sinusoid cycles for each multipath component. OFDM therefore exhibits exceptional resilience in multipath scenarios. The cyclic prefix keeps the subcarriers orthogonal, and it allows the receiver to capture more multipath energy. The signals are then converted back to serial form and passed to the transmitter front end, and the digital data is sent over the channel [8]. B. AWGN Channel The AWGN channel model is commonly utilized in OFDM research. In this model, the noise amplitude distribution is Gaussian, and the white noise, which has a constant spectral density, is simply added linearly to the signal. The model does not consider fading, frequency selectivity, interference, and so on [9]. Even though it is therefore an idealization of most terrestrial links, it provides a controlled mathematical model for investigating the fundamental behavior of the system elements in the absence of the impairments above, which is the reason the AWGN channel is used here rather than the Rayleigh channel.
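The study itself uses MATLAB/Simulink; purely to make the transmitter steps concrete, a minimal standalone Python/NumPy sketch of BPSK mapping, IFFT, and cyclic-prefix insertion is shown below. The FFT length and CP fraction are illustrative values, not the exact Simulink settings.

```python
import numpy as np

def ofdm_tx(bits, n_fft=256, cp_fraction=0.1):
    """BPSK-map a bit block, take the IFFT, and prepend a cyclic prefix."""
    assert len(bits) == n_fft
    symbols = 1.0 - 2.0 * bits               # BPSK mapping: 0 -> +1, 1 -> -1
    time_signal = np.fft.ifft(symbols)        # spectrum -> time domain
    cp_len = int(cp_fraction * n_fft)
    cyclic_prefix = time_signal[-cp_len:]      # copy of the symbol tail
    return np.concatenate([cyclic_prefix, time_signal]), cp_len

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 256)
tx_signal, cp_len = ofdm_tx(tx_bits)
print(tx_signal.shape, cp_len)                 # (281,) 25 for a 10% cyclic prefix
```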
Fig. 1 Basic block diagram of OFDM
C. OFDM Receiver The serial data received from the AWGN channel is first converted into parallel form, i.e., into the number of subcarriers, so that the cyclic prefix can be removed from each of them [10]. Cyclic Prefix Removal Block: the cyclic prefix, which was added to eliminate inter-symbol interference (ISI), is removed first to retrieve the original data [11]. Fast Fourier Transform (FFT): the FFT block on the receiver side reverses the operation performed by the IFFT block on the transmitter side; the FFT of each parallel stream is computed individually before the parallel-to-serial converter. For our results, we use FFT lengths of 256 and 1024 [12]. All the subcarriers are then merged, i.e., converted from parallel to serial form, to retrieve the data stream. Demodulation: the serial data is then demodulated by the corresponding modulation technique to get the original data back [13].
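The receiver side can be sketched in the same spirit (cyclic-prefix removal, FFT, BPSK demapping, and a BER check over an AWGN channel). This is an illustrative sketch, not the Simulink model; the noise level and parameters are assumed values.

```python
import numpy as np

def ofdm_rx(rx_signal, cp_len, n_fft=256):
    """Strip the cyclic prefix, return to the frequency domain, demap BPSK."""
    useful = rx_signal[cp_len:cp_len + n_fft]    # discard the guard interval
    symbols = np.fft.fft(useful)
    return (symbols.real < 0).astype(int)        # BPSK decision on the real part

# End-to-end check over an AWGN channel with one OFDM symbol.
rng = np.random.default_rng(1)
n_fft, cp_len = 256, 25
bits = rng.integers(0, 2, n_fft)
tx = np.fft.ifft(1.0 - 2.0 * bits)               # BPSK mapping + IFFT (transmitter)
tx = np.concatenate([tx[-cp_len:], tx])          # add the cyclic prefix
noise = 0.02 * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
rx_bits = ofdm_rx(tx + noise, cp_len)
print("BER:", np.mean(rx_bits != bits))
```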
4 Simulation Flowchart Figure 2 shows the flowchart of the MATLAB code. The main points of interest in the flowchart are the cyclic prefix adder and the cyclic prefix remover: the cyclic prefix adder is used on the transmitter side, while the cyclic prefix remover is used on the receiver side. After completing all the steps, we obtain the bit error rate versus signal-to-noise ratio graph for different values of the cyclic prefix.
5 Results and Conclusion A. Simulation models for the different modulation techniques are given below.
5.1 BPSK (Binary Phase Shift Keying) See Fig. 3.
5.2 QPSK (Quadrature Phase Shift Keying) See Fig. 4.
Fig. 2 Simulation flowchart
Fig. 3 Simulation of BPSK model
Fig. 4 Simulation of QPSK model
5.3 16-QAM (16-Quadrature Amplitude Modulation) See Fig. 5.
5.4 32-QAM (32-Quadrature Amplitude Modulation) See Fig. 6. Using a Bernoulli distribution, the Bernoulli binary generator block produces random binary numbers (in the above simulation only eight binary bits are generated). This block is used to mimic digital communication systems and to produce random data bits so that performance metrics like the bit error rate can be acquired. Zero is produced by the
Fig. 5 Simulation of 16-QAM model
Fig. 6 Simulation of 32-QAM model
Bernoulli distribution with parameter p with probability p, and one is produced with probability 1 − p. The mean value of this Bernoulli distribution is 1 − p and its variance is p(1 − p). Any real value in [0, 1] can be used as the probability-of-zero parameter, which determines p. The output signal can be a column or row vector, a two-dimensional matrix, or a scalar. The samples-per-frame parameter determines the number of rows in the output signal, which corresponds to the number of samples in a frame. The number of elements in the probability-of-zero parameter determines how many columns there are in the output signal, which is equal to the number of channels. AWGN Channel (or) Rayleigh Channel: this is the path by which the data is transmitted. The presence of noise in this medium has an impact on the signal and
produces distortion of the data content. Additive white Gaussian noise (AWGN) is a fundamental noise model used in information theory to simulate the effect of many random processes seen in nature. Rayleigh fading is a statistical model that explains how the propagation environment affects a radio signal, such as the one used by wireless devices. The underlying premise of Rayleigh fading models is that the strength of a signal traveling through such a transmission medium (also referred to as a communication channel) will randomly change, or fade, in accordance with a Rayleigh distribution, which is the radial component of the sum of two uncorrelated Gaussian random variables. Under the QAM, 16-QAM, and 64-QAM modulation schemes, the AWGN channel has the best performance of all channels because it gives the lowest bit error rate (BER); the noise contribution to the BER in this channel is much lower than in fading channels. Rayleigh fading has the worst performance of all channels, since its BER is heavily degraded by noise under the QAM, 16-QAM, and 64-QAM modulation schemes. The evaluation of various cyclic prefix lengths allowed us to establish the optimum value, which lowered the bit error rate while not causing excessive data loss. Figures 7, 8, 9, and 10 depict the BER for various cyclic prefix values and various modulation techniques; for clarity, only two modulation techniques are shown, i.e., BPSK and 64-QAM, with FFT lengths of 256 and 1024. From Figs. 7, 8, 9, and 10 we can see that as the cyclic prefix value increases, the bit error rate decreases. However, as the cyclic prefix increases, transmitted data is also lost, so we choose a smaller, optimum value that still reduces the bit error rate without losing too much data; a value of roughly 10% is suitable, because that amount of data loss is small compared with a 40–50% cyclic prefix (CP). The maximum bit error rates obtained for different FFT lengths and cyclic prefix fractions are summarized below.

FFT length | CP = 0.1 | CP = 0.4 | CP = 0.7
16         | 0.2730   | 0.2657   | 0.2565
64         | 0.2726   | 0.2627   | 0.2556
256        | 0.2713   | 0.2653   | 0.2592
1024       | 0.2747   | 0.2634   | 0.2557
Fig. 7 Different cyclic prefix values for BPSK modulation of 256 FFT length
Fig. 8 Different cyclic prefix values for BPSK modulation of 1024 FFT length
Fig. 9 Different cyclic prefix values for 64-QAM modulation of 256 FFT length
Fig. 10 Different cyclic prefix values for 64-QAM modulation of 1024 FFT length
References
1. Lim C, Chang Y, Cho J, Joo P, Lee H (2005) Novel OFDM transmission scheme to overcome caused by multipath delay longer than cyclic prefix. In: 2005 IEEE 61st vehicular technology conference, vol 3. IEEE, pp 1763–1767
2. Subotic V, Primak S (2007) BER analysis of equalized OFDM systems in Nakagami, m < 1 fading. Wireless Pers Commun 40(3):281–290
3. Chang Y-P, Lemmens P, Tu P-M, Huang C-C, Chen P-Y (2011) Cyclic prefix optimization for OFDM transmission over fading propagation with bit-rate and BER constraints. In: 2011 Second international conference on innovations in bio-inspired computing and applications. IEEE, pp 29–32
4. Mišković B, Lutovac MD (2012) Influence of guard interval duration to interchannel interference in DVB-T2 signal. In: 2012 Mediterranean conference on embedded computing (MECO). IEEE, pp 220–223
5. Lorca J (2015) Cyclic prefix overhead reduction for low-latency wireless communications in OFDM. In: 2015 IEEE 81st vehicular technology conference (VTC Spring). IEEE, pp 1–5
6. Waichal G, Khedkar A (2015) Performance analysis of FFT based OFDM system and DWT based OFDM system to reduce inter-carrier interference. In: 2015 international conference on computing communication control and automation. IEEE, pp 338–342
7. Jadav NK (2018) A survey on OFDM interference challenge to improve its BER. In: 2018 second international conference on electronics, communication and aerospace technology (ICECA). IEEE, pp 1052–1058
8. Gowda NM, Sabharwal A (2018) CPLink: interference-free reuse of cyclic-prefix intervals in OFDM-based networks. IEEE Trans Wireless Commun 18(1):665–679
9. Farzamnia A, Hlaing NW, Mariappan M, Haldar MK (2018) BER comparison of OFDM with M-QAM modulation scheme of AWGN and Rayleigh fading channels. In: 2018 9th IEEE control and system graduate research colloquium (ICSGRC). IEEE, pp 54–58
10. Athaudage CRN, Dhammika A, Jayalath S (2004) Delay-spread estimation using cyclic-prefix in wireless OFDM systems. IEE Proceedings-Communications 151(6):559–566
11. Bandele JO (2019) A Matlab/Simulink design of an orthogonal frequency division multiplexing system model. International Journal of Engineering Inventions 8(4)
12. Alkamil ADE, Hassan OTA, Hassan AHM, Abdalla WFM (2020) Performance evaluations study of OFDM under AWGN and Rayleigh channels. In: 2020 international conference on computer, control, electrical, and electronics engineering (ICCCEEE). IEEE, pp 1–6
13. Mohseni S (2013) Study the carrier frequency offset (cfo) for wireless OFDM
Optimum Sizing of Solar/Wind/Battery Storage in Hybrid Energy System Using Improved Particle Swarm Optimization and Firefly Algorithm Gauri M. Karve, Mangesh S. Thakare, and Geetanjali A. Vaidya
Abstract The integration of hybrid energy systems (HES) with solar photovoltaic (PV)—wind turbine (WT)—battery energy storage (BES) is increasing rapidly to enhance the performance of microgrid or power systems along with mitigating local energy crisis and environmental pollution concerns. In this work, the performance of HES with PV-WT-BES is improved by minimizing the total annual cost of these components using improved particle swarm and firefly optimization algorithms. These two algorithms are compared and analyzed for three system configurations as PV-BES; WT-BES and PV-WT-BES to determine the optimum capacity sizing of PV, WT, and BES to fulfill the load of a particular place. The study also highlighted the impact of the state of charge (SOC) of BES on the optimum sizing of system components and its overall cost. The effect of SOC variation is analyzed for two battery chemistries as lithium-ion and lead acid. Keywords Optimum sizing · Battery storage · Improved particle swarm optimization · Firefly algorithm
1 Introduction Hybrid energy systems (HES) with PV and WT have proven to be a boon for addressing the crises caused by the depletion of fossil fuels and by power shortages over the past few decades. But these PV and WT energy sources have the limitations of uncertain generated power output and high installation cost; hence, battery energy storage (BES) should be integrated with them. This addition of BES can play a multi-functional role in the electrical power system, such as reducing operating costs or capital expenditures
G. M. Karve (B) · M. S. Thakare · G. A. Vaidya Electrical Engineering Department, PVG's COET & GKPIM, Pune, India e-mail: [email protected]
M. S. Thakare e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_11
when used as a generator in the utility sector, facilitating the integration of RES into the electric power system, load leveling, peak shaving, stabilizing voltage and frequency, and maintaining an uninterrupted power supply [1]. Despite these advantages, the market maturity of BES is slow due to the high cost incurred in the expensive cell material and the lack of a corresponding legal framework [2]. Though the combination of solar-wind-BES is beneficial for mitigating environmental pollution concerns and local energy crises, this hybrid renewable energy system (HRES) faces challenges regarding its optimum operation, optimum sizing, system security, system reliability, and cost of the system [3]. After the literature review, it is observed that researchers have presented many aspects for better performance of HRES, either by optimizing system cost with total annual or project cost, or by taking into account loss of power supply probability (LPSP), or by minimizing the cost of energy (COE), or by applying various optimization techniques implemented on real-time case studies. The literature on optimum sizing of HRES components can be categorized as minimizing COE only [3–5] or LPSP only [6, 7]. Some of the literature shows improvement in system performance by combining both COE and LPSP [8–12], and some research articles proposed HRES optimization studies using techniques like PSO [4, 8–10, 13–15], FA [11, 12, 16], GA [6, 14], or hybrid optimization techniques like PSO-GSA [10], PSO-GWO [8], SA-PSO [13], etc. The case study of a university campus on a Mediterranean island, considering economic metrics such as net present cost (NPC) and levelized COE of a PV/WT/BES hybrid system, is proposed in [3]. The analysis of the PV/WT/diesel/BES hybrid system for minimum COE for homes in Morocco, Spain, and Algeria is carried out using PSO [4]. In [5], an HRES is designed for the remote island of Jiuduansha, Shanghai, by considering the effects of saturation of RES on various parameters like system reliability, NPC, BES size, and the repayment period. In [6], a Nigerian case study on the optimum sizing of a hybrid system consisting of PV/WT and a storage system is presented to fulfill the load demand based on the LPSP using an enhanced genetic algorithm. In [7], a comprehensive review of the optimum sizing of HRES in Oman is described using various optimization methods. In [8], the enhanced whale optimization algorithm (EWOA) is applied for the optimization and operation of an HRES for electrifying a rural city in Algeria. It also compared EWOA with various optimization algorithms like PSO, gray wolf optimizer (GWO), and modified GWO to resolve the COE problem considering LPSP. In [9], optimal planning of a microgrid consisting of solar/wind/bio-generator/BES is presented with a real-time case study of Bihar, India. It analyzed the trade-off between COE and LPSP using hybrid PSO-GWO. In [10, 11], novel optimization techniques are employed for the optimal sizing of hybrid microgrids in Egypt and Manipur, India, respectively. In [12], power reliability with COE and load dissatisfaction criteria for PV/WT systems is proposed using FA. Reference [13] proposed a hybrid optimization method combining SA and PSO (SA-PSO) for determining the optimal size of a microgrid located in Egypt. The iterative complexity for optimum sizing of HRES which arises in GA is minimized by PSO [14]. The optimum sizing of HRES is discussed by comparing the fuel cell with the battery by applying harmony search, tabu search, SA, and PSO [15].
The paper concluded that the PV/WT/BES combination is economically better than the fuel cell system and that PSO performed better than the other
algorithms. Reference [16] analyzed the optimal design of HRES using FA to achieve profitable operation of RES to supply the load. The objective of this paper is to analyze the performance of the PV-WT-BES system by minimizing the total annual cost (TAC) using IPSO and FA. The objective also includes the analysis of the impact of variation in SOC on BES sizing with two BES chemistries, lithium-ion batteries and lead acid batteries. The rest of the paper is arranged as follows: Sect. 2 describes the system components under consideration mathematically, along with the objective function and constraints. Section 3 discusses the size optimization algorithms. The discussion of results is given in Sect. 4, and Sect. 5 concludes the paper with key contributions.
2 System Configuration, Mathematical Modeling of Components of Hybrid Renewable Energy System, Objective Function and Constraints The HRES under consideration along with the specifications is given in Fig. 1. The HRES comprises PV, WT, inverters/converters, BES, and the load of a particular location at Rafsanjan, Iran [15].
2.1 Mathematical Modeling of Components of Hybrid Energy System [15, 17] The mathematical modeling of HRES components like PV, WT, and BES is adopted from [15, 17].
Fig. 1 Hybrid energy system under consideration
Fig. 2 Average daily solar irradiance for a specific location (time of day in hours vs. solar irradiance in W/m²)
2.1.1 Solar PV System
Figure 2 shows the average daily solar irradiance of Rafsanjan, Iran [15]. The output power of each solar PV panel is derived as per Eq. (1), and the total PV power output can be found by using Eq. (2).

$$p_{pv}(t) = \begin{cases} P_{rs}\,\frac{r^{2}}{R_{srs}\cdot R_{cr}} & 0 \le r \le R_{cr} \\ P_{rs}\,\frac{r}{R_{srs}} & R_{cr} \le r \le R_{srs} \\ P_{rs} & R_{srs} \le r \end{cases} \qquad (1)$$

$$P_{pv}(t) = N_{pv} \times p_{pv}(t) \qquad (2)$$

where P_rs is the rated PV power in watts, r is the solar radiation in W/m², R_cr is a certain (threshold) solar radiation in W/m², N_pv is the number of PV panels, P_pv(t) is the power output of all PV panels, p_pv(t) is the power output of a single PV panel, and R_srs is the solar radiation under standard environment conditions, 1000 W/m².
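A direct implementation of the piecewise PV model in Eqs. (1)–(2) can be sketched as follows; the rated power and radiation thresholds are placeholder values, not the parameters of the case study.

```python
def pv_panel_power(r, p_rs=250.0, r_cr=150.0, r_srs=1000.0):
    """Piecewise PV output of one panel (Eq. 1): r is irradiance in W/m^2."""
    if r <= r_cr:
        return p_rs * r**2 / (r_srs * r_cr)
    if r <= r_srs:
        return p_rs * r / r_srs
    return p_rs                                   # saturated at rated power

def pv_total_power(r, n_pv):
    """Total PV output for n_pv identical panels (Eq. 2)."""
    return n_pv * pv_panel_power(r)

print(pv_total_power(600.0, n_pv=12))   # e.g. 12 panels at 600 W/m^2 -> 1800 W
```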
2.1.2 Wind System
The average daily wind velocity of Rafsanjan, Iran [15] is depicted in Fig. 3. The output power of each WT is derived as given in Eq. (3) and the total power output of WT can be found by using Eq. (4).
Fig. 3 Average daily wind velocity for a specific location (time of day in hours vs. wind speed in m/s)
$$P_w(t) = \begin{cases} P_{wn} & V_{wr} \le V_w(t) \le V_{co} \\ P_{wn}\,\frac{V_w(t)-V_{ci}}{V_{wr}-V_{ci}} & V_{ci} \le V_w(t) \le V_{wr} \\ 0 & V_w(t) \le V_{ci}\ \text{or}\ V_w(t) \ge V_{co} \end{cases} \qquad (3)$$

$$P_{wt}(t) = N_{wt} \times p_{wt}(t) \qquad (4)$$

where p_wt(t) is the power generated by each wind turbine in watts at time t, P_wt(t) is the total power output (watt) from all WT at time t, N_wt is the number of WT, V_w is the velocity of wind in m/s, P_wn is the nominal output power (watt) supplied by the installed WT, and V_ci, V_co, V_wr are the cut-in, cut-out, and nominal (rated) wind velocity, respectively, in m/s.
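The wind-turbine model of Eqs. (3)–(4) can be sketched in the same way; the cut-in, rated, and cut-out speeds below are illustrative assumptions.

```python
def wt_power(v, p_wn=1000.0, v_ci=2.5, v_wr=11.0, v_co=25.0):
    """Piecewise wind-turbine output of one unit (Eq. 3): v is wind speed in m/s."""
    if v <= v_ci or v >= v_co:
        return 0.0                                 # below cut-in or above cut-out
    if v < v_wr:
        return p_wn * (v - v_ci) / (v_wr - v_ci)   # linear ramp region
    return p_wn                                    # rated region

def wt_total_power(v, n_wt):
    """Total wind output for n_wt identical turbines (Eq. 4)."""
    return n_wt * wt_power(v)

print(wt_total_power(6.0, n_wt=7))   # 7 turbines at 6 m/s
```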
2.1.3 Battery Energy Storage (BES)
Fig. 4 Battery power during charging-discharging [17] (time of day in hours vs. battery power in watts)
Depending upon the charge available in the battery (SOC), the BES can either fulfill the load demand (discharge) or store the excess power when the power generated by the RES (PV/WT) is greater than the load (charge). The energy of the BES during charging and discharging can be derived from Eqs. (5) and (6), respectively. These equations are adopted from [15, 17] (Fig. 4).
Fig. 5 Average daily load curve for a specific location [15, 17] (time of day in hours vs. load in watts)
A. Charging Mode (E_Batt = SOC)

$$E_{batt}(t) = E_{batt}(t-1)\times(1-\sigma) + \left[E_{pv}(t) + E_{wt}(t) - \frac{E_{load}(t)}{\eta_{inv}}\right]\times\eta_{batt} \qquad (5)$$

B. Discharging Mode

$$E_{batt}(t) = E_{batt}(t-1)\times(1-\sigma) - \left[\frac{E_{load}(t)}{\eta_{inv}} - \big(E_{pv}(t) + E_{wt}(t)\big)\right]\times\eta_{batt} \qquad (6)$$

where E_pv(t) is the energy generated by solar PV (kWh), E_batt(t) is the energy stored in the battery (kWh), E_load(t) is the energy required by the load in kWh, σ is the self-discharge rate of the BES, η_inv is the efficiency of the inverter in percentage, and η_batt is the efficiency of the BES.
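A minimal sketch of the charging/discharging update above (Eqs. (5)–(6) as reconstructed) is given below; the efficiency and self-discharge values are illustrative, not the study's parameters.

```python
def battery_energy_step(e_prev, e_pv, e_wt, e_load,
                        sigma=0.0002, eta_inv=0.95, eta_batt=0.90):
    """One time-step battery energy update following Eqs. (5) and (6).

    All energies are in kWh; sigma is the per-step self-discharge rate and
    eta_inv / eta_batt are the inverter and battery efficiencies.
    """
    retained = e_prev * (1.0 - sigma)               # self-discharge term
    renewable = e_pv + e_wt
    if renewable >= e_load / eta_inv:               # charging mode, Eq. (5)
        return retained + (renewable - e_load / eta_inv) * eta_batt
    # discharging mode, Eq. (6): the deficit is drawn from the battery
    return retained - (e_load / eta_inv - renewable) * eta_batt

soc_kwh = 5.0
for pv, wt, load in [(1.2, 0.8, 1.5), (0.0, 0.5, 2.0), (2.5, 1.0, 1.2)]:
    soc_kwh = battery_energy_step(soc_kwh, pv, wt, load)
    print(round(soc_kwh, 3))
```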
2.1.4 Load
The real-time load data of a particular area (Rafsanjan, Iran) is obtained by averaging one year of data into a single day of 24 h [15, 17] (Fig. 5).
2.2 Objective Function and Constraints
This work aims to find the optimum capacity size of the hybrid energy system (HES) components, which is carried out by minimizing the TAC of these components. The TAC is the sum of the annual capital cost (C_Cpt) and maintenance cost (C_Mtn) of every component of the HES, i.e., solar PV, wind turbine, and BES. The cost incurred at the time of project installation is the capital cost, and the cost incurred
during the working of the project is the maintenance cost. The minimum TAC is attained by reducing C_Cpt and C_Mtn of the solar PV, wind turbine, and BES annually. Equations (7)–(9) depict these costs, along with the power-balance constraint ΔP = (P_gen − P_dem) ≥ 0.

$$\text{TAC} = (C_{Cpt} + C_{Mtn})\ \text{of PV, WT and battery} \qquad (7)$$

$$\text{Capital cost of HRES} = C_{Cpt,pv} + C_{Cpt,wt} + C_{Cpt,batt} = (N_{pv}\times C_{pv}) + (N_{wt}\times C_{wt}) + (N_{batt}\times C_{batt}) \qquad (8)$$

$$C_{Mtn} = C_{Mtn,pv} + C_{Mtn,wt} + C_{Mtn,batt} \qquad (9)$$

where C_Cpt is the capital cost of solar PV (C_Cpt,pv), wind turbine (C_Cpt,wt), and battery (C_Cpt,batt), and C_Mtn is the maintenance cost of solar PV (C_Mtn,pv), wind turbine (C_Mtn,wt), and battery (C_Mtn,batt).
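A small sketch of the TAC objective of Eqs. (7)–(9) follows; the per-unit cost figures are purely illustrative and assume maintenance scales with the number of units, which is one simple reading of Eq. (9), not the paper's cost data.

```python
def total_annual_cost(n_pv, n_wt, n_batt, unit_capital, unit_maintenance):
    """Total annual cost (Eq. 7): capital cost (Eq. 8) plus maintenance cost (Eq. 9)."""
    capital = (n_pv * unit_capital["pv"]
               + n_wt * unit_capital["wt"]
               + n_batt * unit_capital["batt"])
    maintenance = (n_pv * unit_maintenance["pv"]
                   + n_wt * unit_maintenance["wt"]
                   + n_batt * unit_maintenance["batt"])
    return capital + maintenance

# Illustrative annualized unit costs in $/unit/year (hypothetical values).
cap = {"pv": 50.0, "wt": 250.0, "batt": 80.0}
mnt = {"pv": 5.0, "wt": 20.0, "batt": 8.0}
print(total_annual_cost(n_pv=12, n_wt=7, n_batt=8,
                        unit_capital=cap, unit_maintenance=mnt))
```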
3 Size Optimization Algorithms Optimization is the process of finding the best solution, making something as feasible and efficient as possible by minimizing or maximizing an objective with respect to the problem variables. In this study, an attempt is made to determine the optimum capacity size of PV/wind/BES to fulfill the load demand of a particular area by minimizing the system TAC using two optimization techniques, IPSO and FA, for three system configurations.
3.1 Improved Particle Swarm Optimization (IPSO) [15, 17, 18]
PSO and its variants were introduced by Kennedy and Eberhart in 1995. It is a heuristic algorithm for determining an optimized solution to a problem. In the IPSO algorithm, every viable solution to the optimization problem is represented by a 'particle', which is characterized by a position and a velocity vector. The mathematical modeling of this optimization method is given by Eqs. (10) and (11), which are referred from [15, 17, 18].

$$V_{i+1} = \omega\,V_i + C_1 r_1 (P_{best} - X_i) + C_2 r_2 (G_{best} - X_i) \qquad (10)$$

$$X_{i+1} = V_{i+1} + X_i \qquad (11)$$
where V is the velocity of the particle, X is the position of the particle, C1 and C2 are acceleration constants, r1 and r2 are randomly generated numbers between 0 and 1, and P_best and G_best are the local and global best positions of the particles.
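A minimal sketch of one IPSO update per Eqs. (10)–(11) is shown below; the inertia weight and acceleration constants are typical textbook values, not the settings used in the study.

```python
import numpy as np

def ipso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=np.random):
    """One IPSO update of particle velocities (Eq. 10) and positions (Eq. 11)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_next = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_next = x + v_next
    return x_next, v_next

# Five particles searching a 3-dimensional sizing space (N_pv, N_wt, N_batt).
rng = np.random.default_rng(0)
x = rng.uniform(0, 20, (5, 3))
v = np.zeros((5, 3))
p_best = x.copy()
g_best = x[0]
x, v = ipso_step(x, v, p_best, g_best, rng=rng)
print(x.shape)
```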
3.2 Firefly Algorithm (FA) [11, 12, 16]
The firefly algorithm (FA) was introduced by Xin-She Yang and is inspired by the flashing behavior of fireflies. The idealized rules followed in this algorithm are given below. 1. All fireflies have the same sex, so any firefly can be attracted to any other firefly. 2. The brightness of fireflies decides the attractiveness; therefore, the less luminant firefly is attracted by the more luminant one. The attraction depends on the intensity of brightness, and both are inversely related to the distance. If there is no firefly brighter than a given firefly, it moves arbitrarily. 3. The brightness of a firefly is determined by the objective function. The mathematical modeling of this optimization method is based on the above three rules and is given by Eqs. (12)–(16). The attractiveness (β) of a firefly is a function of the distance 'r'; hence, the relation between the attractiveness and the distance of two fireflies is given as

$$\beta = \beta_0\, e^{-\gamma r^{2}} \qquad (12)$$

The distance of separation between firefly 'i' and firefly 'j', whose positions are 'X_i' and 'X_j', can be expressed by Eq. (13).

$$r_{ij} = \sqrt{(X_i - X_j)^{2} + (Y_i - Y_j)^{2}} \qquad (13)$$
The movement of a firefly 'i' toward a brighter firefly 'j' is determined as

$$X_{i+1} = X_i + \beta\,(X_j - X_i) + \alpha\,(\text{rand} - 0.5) \qquad (14)$$

$$X_{i+1} = X_i + \beta_0\, e^{-\gamma r_{ij}^{2}}\,(X_j - X_i) + \alpha\,(\text{rand} - 0.5) \qquad (15)$$
where X_i is the present location of firefly 'i'; X_j is the brighter location of another firefly 'j', which is at a distance 'r' from firefly 'i'; β_0 is the attractiveness at the source or reference point; β is the attractiveness of firefly 'i' at its present location; γ is the coefficient of light absorption, which ranges from 0.1 to 10; α is the randomization variable; rand is a randomly generated number between 0 and 1; and X_{i+1} is the updated location (brighter position) of the firefly.
The terms involved in Eq. (15) are as follows: the first term (X_i) is the present location of firefly 'i'; the second term (β_0 · e^(−γ r²)(X_j − X_i)) signifies the firefly's attraction toward the brighter firefly; and the last term (α(rand − 0.5)) provides the random movement; X_{i+1} is the updated location (brighter position) of firefly 'i'. For most cases, β_0 = 1. Practically, the light absorption coefficient (γ) governs the variation of the attractiveness (β), and its value indicates the speed of FA convergence. For the brightest firefly (X_b) there is no brighter firefly to move toward, so X_j − X_i = 0, the second term vanishes, and the update reduces to

$$X_b = X_b + \alpha\,[\text{rand} - 0.5] \qquad (16)$$
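A compact sketch of the firefly movement rule of Eqs. (12)–(16) follows; β_0, γ, and α are typical values, not those tuned in the study.

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=np.random):
    """Move firefly i toward a brighter firefly j (Eq. 15).

    If no brighter firefly exists (x_j == x_i), the attraction term vanishes
    and only the random walk of Eq. (16) remains.
    """
    r2 = np.sum((x_i - x_j) ** 2)                   # squared distance r_ij^2
    beta = beta0 * np.exp(-gamma * r2)              # attractiveness, Eq. (12)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

rng = np.random.default_rng(1)
dim = 3
x_i, x_j = rng.uniform(0, 20, dim), rng.uniform(0, 20, dim)
print(firefly_move(x_i, x_j, rng=rng))
print(firefly_move(x_j, x_j, rng=rng))              # brightest firefly: random walk only
```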
3.3 Procedure for Implementing IPSO [18] and FA
1. Read the input parameters for solar PV, i.e., solar irradiance, panel efficiency, and power rating of a single PV panel, for the 1st and 3rd system configurations.
2. Read the input parameters for the wind system, i.e., wind speed (cut-in, cut-out, nominal) and power rating of a single WT, for the 2nd system configuration.
3. For all three system configurations, read the load demand data over 24 h [15].
4. Compute the average annual load demand (kWh) over 24 h [15, 17].
5. Set the number of PV panels (for system configurations 1 and 3) or wind systems (for system configurations 2 and 3) to one. Calculate the power generated by PV using Eqs. (1) and (2).
6. Find the differential power as ΔP = (P_gen − P_dem).
7. If the differential power (ΔP) < zero, then follow step 8; otherwise, follow step 9.
8. Increase the number of PV panels by one and repeat the process from step 3. Obtain the optimum quantity of PV panels (N_PV). Estimate the total PV power.
9. Calculate ΔP over a period of 24 h by using N_PV.
10. Find the ΔP curve over the period and convert it into the energy curve (ΔW).
11. From the energy curve, ΔW = ∫ΔP dt = ∫(P_gen − P_dem) dt, estimate the battery capacity. Considering the SOC, the self-discharge rate of the battery, and the inverter losses, compute the optimum capacity size of the battery.
12. Find the optimum quantity of batteries (N_Batt) by taking into account the 1.35 kWh capacity of a single BES and the SOC for lithium-ion and lead acid batteries.
13. Repeat the above steps for the wind system, calculating the power generated by the wind turbines (WT) using Eqs. (3) and (4). Find the optimum number of wind turbines (N_WT) and batteries (N_Batt) for the 2nd system configuration.
14. Repeat the above steps to find the optimum N_PV, N_WT, and N_Batt for the 3rd system configuration.
By following steps 1 to 14, IPSO and FA are implemented and coded in MATLAB 2020 for the objective function of minimum TAC for the three system configurations (a simplified sketch of the sizing loop is given below). The results obtained are organized in Tables 2, 3 and 4 for all system components for both optimization methods.
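The study's implementation is in MATLAB; as a compact illustration of steps 5–12 only, the Python sketch below grows the PV count until daily generation covers the demand and then sizes the battery from the deepest energy deficit. The hourly profiles, the simplified linear PV curve, and the efficiency value are assumptions; only the 1.35 kWh unit capacity follows the text.

```python
import numpy as np

def size_pv_and_battery(pv_per_panel_w, load_w, batt_unit_wh=1350.0, eta_inv=0.95):
    """Simplified steps 5-12: pv_per_panel_w and load_w are 24-hour profiles in
    watts (one sample per hour).  N_pv grows until daily generation covers daily
    demand; the battery is sized from the deepest cumulative energy deficit."""
    n_pv = 1
    while np.sum(n_pv * pv_per_panel_w) < np.sum(load_w / eta_inv):
        n_pv += 1                                          # step 8: add one more panel
    delta_p = n_pv * pv_per_panel_w - load_w / eta_inv      # step 9: hourly surplus/deficit (W)
    delta_w = np.cumsum(delta_p)                            # steps 10-11: energy curve (Wh)
    required_wh = max(0.0, -delta_w.min())                  # deepest deficit covered by storage
    n_batt = int(np.ceil(required_wh / batt_unit_wh))       # step 12: 1.35 kWh units
    return n_pv, n_batt

# Toy 24-hour profiles: a midday PV bump per panel against an evening-peaking load.
hours = np.arange(24)
pv_per_panel = 250.0 * np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)  # simplified, not Eq. (1)
load = 800.0 + 400.0 * ((hours >= 18) & (hours <= 22))
print(size_pv_and_battery(pv_per_panel, load))
```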
3.4 Comparison of Optimization Methods IPSO and FA Though IPSO and FA both are nature inspired and swarm intelligence-based algorithms, they both have some significant differences. FA can exhibit better characteristics than IPSO due to its nonlinear and dynamic nature, which are briefly summarized in Table 1.
4 Results and Discussions
Tables 2 and 3 give the optimum number of PV panels, WT, and batteries required to fulfill the load demand for the three system configurations, PV-BES, WT-BES, and PV-WT-BES, for minimum TAC using the two optimization algorithms, IPSO and FA. Table 2 considers the lithium-ion battery and compares the results obtained with the results given in [15]. Table 3 considers the optimum system components with the lead acid battery for the same system configurations using IPSO and FA. Table 4 gives the impact of a change in % SOC of both battery chemistries (lithium-ion and lead acid) on their optimum number and also on the overall annual cost of the system to fulfill the same load demand. The results are shown only for the third system configuration, PV-WT-BES, with both optimization algorithms, IPSO and FA. From Table 2, it is seen that the results obtained by implementing IPSO and FA reasonably match the results in [15]. It is observed that the time taken by the FA method to reach the optimum solution is considerably less than that of the IPSO method. In this particular case (Rafsanjan, Iran), it is also seen that out of the three system configurations, the WT-BES configuration gives the optimum solution. But this may not be a generalized statement, as the availability of solar irradiance and wind velocity at that specific location must be checked prior to the installation of the HRES.
Table 1 Comparison of IPSO [15, 18] and FA

Type of the algorithm. IPSO: based on swarm intelligence, like fish schooling. FA: based on swarm intelligence, like fireflies.
Linearity of system. IPSO: a 'linear system' due to the presence of linear terms in the position (X_i) and velocity (V_{i+1}) equations. FA: a 'nonlinear system' due to the presence of the nonlinear attraction term β_0 exp(−γ r_ij²).
Modeling of optimization algorithm. IPSO: V_{i+1} = ω·V_i + C_1·r_1(P_best − X_i) + C_2·r_2(G_best − X_i); X_{i+1} = V_{i+1} + X_i. FA: X_{i+1} = X_i + β(X_j − X_i) + α(rand − 0.5) and X_{i+1} = X_i + β_0·e^(−γ r_ij²)(X_j − X_i) + α(rand − 0.5).
Ability of multi-swarming. IPSO: does not have this ability due to its linearity; hence it cannot find multiple optimum solutions at the same time. FA: has this ability due to its nonlinearity; hence it can find multiple optimum solutions at the same time efficiently.
Drawbacks associated with velocity. IPSO: as it uses the velocity equation, it has disadvantages related to velocity initialization and instability for particles with high velocities. FA: does not have a velocity equation, so it does not have these disadvantages.
Scaling control. IPSO: has no scaling control, hence less flexibility. FA: has scaling control (via γ), which makes FA more flexible.
Convergence rate. IPSO: slower when compared with FA. FA: faster when compared with IPSO.
Advantages. IPSO: simple method, reasonably accurate. FA: easy to implement, accurate.
Disadvantages. IPSO: slow convergence compared with FA; the method may get stuck in local minima. FA: the method does not guarantee finding the global optimal solution.
The mismatch in results regarding the number of system components may be due to an inexact mapping of the wind velocity and solar irradiance data compared with [15], while the mismatch in the system's overall cost compared with [15] is due to changes in the cost of PV, WT, and BES. Table 3 gives the results obtained for lead acid batteries by implementing IPSO and FA. It is observed that the time taken by the FA method to reach the optimum solution is considerably less than that of the IPSO method. In this case (Rafsanjan, Iran), it is also seen that out of the three system configurations, the WT-BES configuration gives the optimum solution. But this may not be a generalized statement, as the availability of solar irradiance and wind velocity at that specific location must be checked prior to the installation of the HRES. After comparing Tables 2 and 3, it is seen that the cost of the system is higher with lead acid batteries than with lithium-ion batteries. Again, this may not be a general statement, as the cost of batteries changes with advances in research on the material or chemistry of the battery. So, this work
Table 2 Optimum N_PV, N_WT, and N_Batt (lithium-ion) by IPSO [17] and FA methods

Sr. no. 1: PV + BES + load
  No. of components: Paper [15]: N_PV = 57, N_Batt = 79 | IPSO: N_PV = 61, N_Batt = 80 | FA: N_PV = 61, N_Batt = 80
  TAC of the system ($): Paper [15]: 5957.47 | IPSO/FA: 6237.42
  Time elapsed (s): Paper [15]: – | IPSO: 15.12 | FA: 4.21
Sr. no. 2: WT + BES + load
  No. of components: Paper [15]: N_WT = 10, N_Batt = 7 | IPSO: N_WT = 8, N_Batt = 7 | FA: N_WT = 8, N_Batt = 7
  TAC of the system ($): Paper [15]: 4554.98 | IPSO/FA: 2421.89
  Time elapsed (s): Paper [15]: – | IPSO: 16.12 | FA: 5.13
Sr. no. 3: PV + WT + BES + load
  No. of components: Paper [15]: N_PV = 10, N_WT = 8, N_Batt = 8 | IPSO: N_PV = 12, N_WT = 7, N_Batt = 8 | FA: N_PV = 12, N_WT = 7, N_Batt = 8
  TAC of the system ($): Paper [15]: 4623.15 | IPSO/FA: 5852.53
  Time elapsed (s): Paper [15]: – | IPSO: 22.92 | FA: 8.95
Table 3 Optimum number of PV, WT, and batteries (lead acid) by IPSO and FA methods

Sr. no. 1: PV + BES + load
  No. of components: IPSO: N_PV = 61, N_Batt = 110 | FA: N_PV = 61, N_Batt = 110
  TAC of the system ($): 8742.54
  Time elapsed (s): IPSO: 15.12 | FA: 4.21
Sr. no. 2: WT + BES + load
  No. of components: IPSO: N_WT = 8, N_Batt = 12 | FA: N_WT = 8, N_Batt = 12
  TAC of the system ($): 5341.76
  Time elapsed (s): IPSO: 16.12 | FA: 5.13
Sr. no. 3: PV + WT + BES (1.35 kWh) + load
  No. of components: IPSO: N_PV = 12, N_WT = 7, N_Batt = 14 | FA: N_PV = 12, N_WT = 7, N_Batt = 14
  TAC of the system ($): 7856.28
  Time elapsed (s): IPSO: 22.92 | FA: 8.95
has given a choice to the user to select any particular type of battery chemistry (either lithium-ion or lead acid) as per the requirements or feasibility or economy. Table 4 presents the effect of % SOC of both battery chemistries (lithium-ion and lead acid) on their optimum number required for PV-WT-BES system configuration with both optimization algorithms as IPSO and FA.
Table 4 Results of the IPSO and FA methods for the number of lithium-ion (LI) and lead acid (LA) batteries for the 3rd system configuration (PV-WT-BES)

Sr. no. | SOC of BES, Wh (%) | No. of PV panels (N_PV) | No. of WT (N_WT) | No. of LI BES (N_Batt) | No. of LA BES (N_Batt) | Convergence time for IPSO (s) | Convergence time for FA (s)
1 | 1350 (100%) | 12 | 7 | 8 | 15 | 22.9 | 8.95
2 | 1080 (80%) | 13 | 7 | 11 | 20 | 20.42 | 7.61
3 | 810 (60%) | 13 | 11 | 42 | 80 | 14.61 | 8.32
4 | 675 (50%) | 16 | 13 | 60 | 98 | 15.32 | 15.32
5 | 270 (20%) | 19 | 16 | 75 | 120 | 17.96 | 6.24
The result Tables 2, 3 and 4 show odd numbers of PV panels, wind turbines, and batteries, but in practice even numbers may be preferred after checking the technical and economic feasibility. From Table 4, it is seen that if the SOC of both battery chemistries (lithium-ion and lead acid) is reduced from 100% (1350 Wh) to 20% (270 Wh), then the number of batteries, PV panels, and WT required to fulfill the same load demand increases for the same system configuration. This observation is valid for both optimization algorithms, IPSO and FA, but FA converges faster than IPSO. Though Table 4 covers the ideal operating % SOC range of both batteries from 20% (270 Wh) to 100% (1350 Wh), practically the % SOC range of the lead acid battery is from 50% (675 Wh) to 100% (1350 Wh) and that of the lithium-ion battery is from 20% (270 Wh) to 100% (1350 Wh). This is shown in Fig. 6a and b, respectively. Figure 6 shows the variation in % SOC of the lead acid and lithium-ion batteries over the 24 h of a day. It indicates that the range of % SOC for the lithium-ion battery is between 20 and 100%, and for the lead acid battery it is between 50 and
Fig. 6 a % SOC for lead acid battery (50–100%). b % SOC for lithium-ion battery (20–100%)
Fig. 7 a Convergence for IPSO. b Convergence for FA
100%, which is nothing but one cycle of charging-discharging of that particular battery. These ranges of % SOC are very important, as they decide the required number of batteries to fulfill the load demand. From the results given in Table 4, it is realized that fewer lithium-ion batteries are needed than lead acid batteries to fulfill the same load demand, and hence this will affect the overall system cost considering the cost of the BES chemistry. The convergence curves for the two optimization methods, IPSO and FA, are shown in Fig. 7a, b, respectively, for a single run of the program. From these figures, it is observed that the convergence of FA toward the optimum solution is faster than that of the IPSO algorithm.
5 Conclusions
The paper discussed the optimum sizing of PV, WT, and BES by minimizing the TAC of the hybrid energy system using IPSO and FA. After comparing the results of IPSO and FA with the results of the referred paper, it is seen that both methods are reasonably accurate, but FA converges faster than IPSO. For the particular case of Rafsanjan, Iran, it is also seen that out of the three system configurations (PV-BES, WT-BES, PV-WT-BES), the WT-BES configuration gives the optimum numbers of system components with the minimum system cost. But this may not be a generalized comment, as the availability of solar irradiance and wind velocity at that specific location must be checked prior to the installation of the HRES. The impact of SOC variations of the battery on the optimum sizing of the system components is also analyzed with these two methods. It is observed that as the % SOC of the battery decreases, the optimum number of batteries increases, which will increase the overall system cost. This effect is analyzed for two battery chemistries, lithium-ion and lead acid. The results show that a greater number of lead acid batteries are required than lithium-ion batteries to fulfill the same load demand.
References
1. Integrating high levels of renewables into microgrid: opportunities, challenges, strategies. A GTM research white paper, sponsored by ABB, Feb 2016
2. Bruch M, Müller M (2013) Calculation of the cost-effectiveness of a PV battery system. In: Proceedings of the 8th international renewable energy storage conference (IRES 2013), Berlin, Germany, 18–20 Nov 2013
3. Sajed Sadati SM, Jahani E, Taylan O, Baker DK (2018) Sizing of PV-wind-battery hybrid system for a Mediterranean Island community based on estimated and measured meteorological data. J Sol Energy Eng 140:011006-1–011006-12
4. El Boujdaini L, Mezrhab A, Moussaoui MA, Jurado F, Vera D (2022) Sizing of a stand-alone PV–wind–battery–diesel hybrid energy system and optimal combination using a particle swarm optimization algorithm. Electr Eng, Springer, pp 1–21
5. Ma T, Javed MS (2019) Integrated sizing of hybrid PV-wind-battery system for remote island considering the saturation of each renewable energy resource. Energy Convers Manage 182:178–190
6. Traoré A, Elgothamy H, Zohdy MA (2018) Optimal sizing of solar/wind hybrid off-grid microgrids using an enhanced genetic algorithm. JPEE 64–77
7. Al Busaidi AS, Kazem HA, Al-Badi AH, Khan MF (2015) A review of optimum sizing of hybrid PV–wind renewable energy systems in Oman. Renew Sustain Energy Rev 53:185–193
8. Yahiaoui A, Tlemçani A (2022) Enhanced whale optimization algorithm for sizing of hybrid wind/photovoltaic/diesel with battery storage in Algeria desert. Wind Eng 46(3):844–865
9. Suman GK, Guerrero JM, Roy OP (2021) Optimisation of solar/wind/bio-generator/diesel/battery based microgrids for rural areas: a PSO-GWO approach. Sustain Cities Soc 67:102723
10. Diab AAZ, Sultan HM, Mohamed IS, Kuznetsov ON, Do TD (2019) Application of different optimization algorithms for optimal sizing of PV/wind/diesel/battery storage stand-alone hybrid microgrid. IEEE Access 7:119223–119245
11. Sanajaoba S (2019) Optimal sizing of off-grid hybrid energy system based on minimum cost of energy and reliability criteria using firefly algorithm. Sol Energy 188:655–666
12. Kaabeche A, Diaf S, Ibtiouen R (2017) Firefly-inspired algorithm for optimal sizing of renewable hybrid system considering reliability criteria. Sol Energy 155:727–738
13. Hafez AA, Abdelaziz AY, Hendy MA, Ali AFM (2021) Optimal sizing of off-line microgrid via hybrid multi-objective simulated annealing particle swarm optimizer. Comput Electr Eng 94:107294
14. Sasidhar K, Kumar BJ (2015) Optimal sizing of PV-wind hybrid energy system using genetic algorithm (GA) and particle swarm optimization (PSO). IJSETR 4(2):8396–8410
15. Maleki A, Askarzadeh A (2014) Comparative study of artificial intelligence techniques for sizing of a hydrogen-based stand-alone photovoltaic/wind hybrid system. Int J Hydrogen Energy 39:9973–9984
16. Nazarian P, Hadidian-Moghaddam MJ (2015) Optimal sizing of a stand-alone hybrid power system using firefly algorithm. International Journal of Industrial Electronics and Electrical Engineering 3(4). ISSN: 2347-6982
17. Karve GM, Kurundkar KM, Vaidya GA (2019) Implementation of analytical method and improved particle swarm optimization method for optimal sizing battery energy storage hybrid system of a standalone PV/wind. In: 2019 IEEE 5th international conference for convergence in technology (I2CT), 29–31 March 2019, pp 1–6
18. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks, pp 1942–1948
Fuzzy Based MPPT Control of Multiport Boost Converter for Solar Based Electric Vehicle Vishnukant Gore and Prabhakar Holambe
Abstract Power converters play an important role in electric vehicles, from charging the battery pack to driving the motor. To make this electrifying journey sustainable, renewable energy will help to increase the range of EVs. In this work, a multiport boost converter is used for solar PV, energy storage, and a DC load. Since solar power is not constant throughout the day, a power converter is required for power management between the solar PV and the energy storage in order to give a continuous supply to the DC load. A fuzzy-based MPPT controller is used; together with time-sharing control of the proposed converter, it gives a constant supply to the load. The multiport converter operates in different modes, such as multiple input and multiple output. A BMS algorithm with mode selection logic is included to prevent overcharging and deep discharge of the energy storage. This prolongs battery life and prevents battery decay. Keywords Multiport boost converter · Fuzzy · Solar PV · Energy storage system · DC load · Power management
1 Introduction The rapid increase in the number of conventional automobiles causes an increase in transportation-related greenhouse gas emissions. Fossil fuels must be replaced with alternative fuels, and electric cars are essential to reach this objective. The present barrier to vehicle electrification is the price-to-range imbalance and the short lifecycle of battery packs. To increase the range of an EV, we can either add another source of energy or increase the battery pack capacity. Adding another source of power increases the range of the EV and also protects battery life. Solar PV, supercapacitors, and fuel cells can be used as secondary power sources in EVs. For this purpose, power
V. Gore (B) · P. Holambe Electrical Engineering Department, College of Engineering Pune, Pune, India e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_12
electronics plays an important role in managing the power between the two energy sources [1, 2]. Single-inductor-based topologies have been proposed in [3–6]; these single-inductor-based multiple-port DC–DC converters are mainly of the two step-up and step-down types. By reducing the number of conducting devices in each stage, the size of the converter is reduced and the converter becomes more effective. Single-inductor-based multiport topologies, as well as cascade and parallel multiport converters, use multiple power sources to supply a wide range of applications. They have minimum losses and simple control when compared with multiple separate converters [5, 7–9]. The purpose of this research is to provide a novel battery management system and an MPPT controller based on fuzzy logic for an isolated PV system. A new method is used to develop the energy management system with fuzzy-based MPPT control; under different irradiance levels, the MPPT always tracks the maximum power point [10, 11]. In this work, a multiple input/output DC–DC converter is designed for power management between the solar PV and the energy storage system. The control technique used is a time-sharing closed-loop control, which maintains the power flow between the solar PV, the hybrid energy storage, and the load. The maximum power point tracking algorithm is used to extract maximum power from the solar PV. The power converter manages the power flow between the solar PV and the energy storage system to give a continuous supply to the load. It also monitors the power generation of the solar PV and the charge-discharge of the energy storage system accordingly. The proposed system is simulated in the MATLAB Simulink environment.
2 Proposed System Figure 1 displays the block diagram of the proposed work, in which two different sources give a constant supply to the load. Solar PV together with energy storage feeds the DC load via a multiport boost converter. The main source for the stand-alone DC load is solar PV, with batteries operating as energy storage. The control block implements the MPPT algorithm to track the maximum power point of the solar PV panel and the voltage controller to keep the output voltage constant. The power modulation block is used to generate the required duty signals of the three switches. The generated PWM signals go to the gate driver circuit, which provides isolation between the two voltage levels and gives the required gate pulses to turn the power converter switches on and off. The power management between solar, battery, and load is done by using a state-flow-based modified time-sharing control scheme. Solar power depends on external parameters, so for an uninterrupted power supply to the load, energy storage is required. The multiport non-isolated boost converter is used for power management between the solar PV, the ESS, and the DC load [12, 13].
Fig. 1 Block diagram of the proposed system
3 Solar PV 3.1 Solar PV Array A solar photovoltaic system uses solar energy to generate the required amount of power. To track the maximum power, a power electronics interface is needed. Photovoltaic cells with MPPT algorithms are employed to continuously capture the maximum solar energy. At a specific temperature and level of irradiation, the solar PV module's output is determined by the PV voltage and the current drawn by the load. By arranging the solar arrays in series and parallel combinations, we can design solar panels of the required rating. From Fig. 2 we can determine the power-to-voltage relation. For hardware purposes, a solar simulator can emulate the rated solar panels.
Fig. 2 Characteristics of solar PV
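The P–V characteristic of Fig. 2 can be reproduced with a simplified, ideal single-diode PV model. The following Python sketch is illustrative only; the parameter values are assumptions and not taken from the paper. It sweeps the panel voltage and locates the maximum power point numerically.

import numpy as np

# Simplified ideal single-diode model: I = Iph - I0*(exp(V/(n*Ns*Vt)) - 1)
Iph, I0 = 5.0, 1e-7              # photocurrent (A) and diode saturation current (A), assumed
n, Ns, Vt = 1.3, 36, 0.02585     # ideality factor, series cells, thermal voltage (V), assumed

V = np.linspace(0.0, 22.0, 2000)                                  # panel voltage sweep (V)
I = np.clip(Iph - I0 * (np.exp(V / (n * Ns * Vt)) - 1.0), 0.0, None)
P = V * I                                                         # power-voltage curve as in Fig. 2

k = int(np.argmax(P))
print(f"MPP at approximately V = {V[k]:.2f} V, P = {P[k]:.1f} W")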
3.2 Fuzzy-Based MPPT Control Maximum power point tracking algorithms help to track the MPP of a solar system. Several methods exist, such as modified perturb and observe (P&O) and incremental conductance (I&C). The fuzzy logic-based method is simple and efficient. Multirule-based resolution and multivariable handling of both linear and nonlinear parameter fluctuations are two characteristics of fuzzy logic control; additionally, it can operate with imprecise inputs. Fuzzification, rule base, inference engine, and defuzzification are the four parts of a fuzzy logic system. Figure 3 shows the fuzzy-based MPPT algorithm [14].
Fig. 3 Fuzzy-based MPPT algorithm
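A minimal, self-contained Python sketch of the four stages listed above (fuzzification, rule base, inference, defuzzification) is shown below. It is only an illustration of the idea, not the authors' controller: the membership functions, rule table, output step sizes, and sign convention are assumptions, and the inputs are the usual fuzzy-MPPT choices E = dP/dV and its change dE.

def tri(x, a, b, c):
    # Triangular membership function with vertices a <= b <= c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_mppt_step(E, dE):
    # 1) Fuzzification: membership degrees for Negative / Zero / Positive inputs.
    sets = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    mu_E = {k: tri(E, *v) for k, v in sets.items()}
    mu_dE = {k: tri(dE, *v) for k, v in sets.items()}
    # 2) Rule base (illustrative): a duty-cycle-change label for each input pair.
    out = {"NB": -0.02, "ZE": 0.0, "PB": 0.02}   # output singletons (per-unit duty step)
    rules = {("N", "N"): "NB", ("N", "Z"): "NB", ("N", "P"): "ZE",
             ("Z", "N"): "NB", ("Z", "Z"): "ZE", ("Z", "P"): "PB",
             ("P", "N"): "ZE", ("P", "Z"): "PB", ("P", "P"): "PB"}
    # 3) Inference: fire each rule with the min of its two membership degrees.
    # 4) Defuzzification: weighted average of the output singletons.
    num = den = 0.0
    for (e, de), label in rules.items():
        w = min(mu_E[e], mu_dE[de])
        num += w * out[label]
        den += w
    return num / den if den else 0.0

# Example call: with E = dP/dV = 0.8 and dE = 0.1 the controller returns a small
# positive duty-cycle step (the direction to apply depends on the converter).
print(fuzzy_mppt_step(E=0.8, dE=0.1))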
4 Non-isolated Multiport Boost Converter The proposed converter shown in Fig. 4 is a non-isolated multiport boost converter. It consists of three diodes and three semiconductor switches. Switch S1 is used as the boost switch, S2 for charging the battery, and S3 for discharging the battery. Diode D1, placed before the inductor, prevents reverse power flow into the solar PV; D2 blocks reverse flow from the load and D3 protects the battery. The capacitor Ci reduces the input voltage ripple from the solar PV, and Co smooths the output voltage at the load side. The proposed topology operates in three modes: (1) double input mode, in which the solar PV is insufficient to meet the load demand, so both solar and battery supply the load; (2) single input mode, in which only the solar PV or only the battery supplies the load; and (3) double output mode (DO), in which the solar power exceeds the load demand and the converter delivers power to both the load and the battery [15].
4.1 Double Output Mode When the solar power exceeds the load requirement, the solar PV panel serves as the primary source of supply for both the energy storage and the DC load. Figure 5 shows the multiport converter in double output mode operation. When the solar power increases or the load decreases, the system enters DO mode. In this mode, the charging switch S2 is on and switch S3 remains off. Through the charging and discharging of the inductor, the proposed converter operates in three stages. Switch S1 is turned on during the initial stage, and the solar PV charges the inductor L; power flows through this switch via D1, which conducts, while D2 and D3 block the current. Here ip and Vp represent the primary source current and voltage, and iL and Vo represent the inductor current and the output load voltage, respectively. This operation is performed by the appropriate switching of the devices.
Fig. 4 Non-isolated multiport boost converter
Fig. 5 Multiport converter in double mode operation
Due to the single inductor-based topology, fewer devices conduct in each stage and the control becomes simpler; hence, the conduction loss is reduced. In a conventional boost converter the output voltage is determined from the volt-second balance of the inductor. Similarly, in this mode the output voltage can be obtained from the volt-second balance as

Vo = (VPV − Vbat DS2) / DD1
In this mode, the solar irradiance increases and the solar power becomes greater than the load demand. The converter then operates in double output mode, with the solar PV acting as the primary source for both the load and the battery. Only two switches operate in this mode, while the third switch remains off. In the first stage the primary source supplies the inductor and the inductor charges fully. In the next stage, the inductor supplies the load. Then, by turning on the charging switch, the inductor supplies power to the battery. Figure 6 shows the different stages of the converter in this mode.
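As a quick, hedged illustration of the volt-second balance method referred to above (applied to a conventional single-input boost converter, not to the multiport topology itself), equating the inductor volt-seconds over one switching period gives Vin·D = (Vo − Vin)·(1 − D), i.e., Vo = Vin/(1 − D). The short sketch below evaluates this with assumed values.

# Volt-second balance of a conventional boost converter (illustrative values only).
V_in = 24.0   # assumed input voltage from the PV (V)
D = 0.5       # assumed duty cycle of the boost switch

# Charging volt-seconds (V_in * D) equal discharging volt-seconds ((V_o - V_in) * (1 - D)).
V_o = V_in / (1.0 - D)
print(f"Boost output voltage for D = {D}: {V_o:.1f} V")   # 48.0 V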
5 Proposed Controller The MPPT algorithm is set to extract the maximum power from the primary source, i.e., the solar PV. Solar power is not constant throughout the day; by adjusting the impedance seen by the panel, MPPT helps operate the solar PV close to the maximum power point under varying conditions such as solar irradiance and temperature. Here the simple I&C (incremental conductance) method is used. Mode selection is important in this control strategy. For the mode selection shown in Fig. 7, the PV power and the load power are taken as inputs, and Msel is the mode selection signal.
Fig. 6 Three stages of converter in double mode
Fig. 7 Mode selection logic
The state flow control logic selects the appropriate mode according to the excess or deficit in solar power. In the state chart, conditions are attached to the transitions to select the mode: when there is excess solar PV power, Msel is set to 0, which selects the DOBM, and when there is a deficit in solar power, Msel is set to 1, which selects the DIBM. Using state flow makes the control simple, and the transitions can be observed live. A battery charge control is added to manage the charging and discharging of the battery in the different modes. Cycle life is the main concern with lithium-ion batteries; by limiting over-charge and over-discharge, the battery life can be protected. Here the battery is allowed to charge only below 80% SOC and to discharge only above 20% SOC.
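The state-flow behaviour described above can be summarised in a few lines of Python. The sketch below is illustrative only, not the Simulink Stateflow chart itself; the thresholds and the mode encoding simply follow this section (Msel = 0 for DOBM, Msel = 1 for DIBM, charging allowed below 80% SOC and discharging above 20% SOC).

def select_mode(p_pv, p_load, soc):
    """Return (Msel, action) following the modified time-sharing logic described above."""
    if p_pv > p_load and soc < 80.0:
        return 0, "DOBM: solar supplies the load and charges the battery"
    if p_pv < p_load and soc > 20.0:
        return 1, "DIBM: solar and battery both supply the load"
    return None, "single input mode: only one source supplies the load"

# Example transitions as the irradiance changes (made-up power values in watts):
print(select_mode(p_pv=70.0, p_load=40.0, soc=55.0))   # excess solar  -> DOBM (Msel = 0)
print(select_mode(p_pv=40.0, p_load=70.0, soc=55.0))   # deficit solar -> DIBM (Msel = 1)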
6 Results In this case, the solar irradiance rises from 400 to 800 W/m2 and back, and the change in irradiance raises the solar power from about 40 to 70 W. When the solar power exceeds the load power, the system enters DO mode and the mode selection block's output is 1. When the solar power is not sufficient to meet the load requirement, both sources supply the load. For system power management, the MPPT duty signal and the voltage control signal are fed as inputs to the controller. The battery holds
Fig. 8 Simulation results power management between different modes
the remaining energy. Through the charging and discharging of the inductor, the solar energy in this case supplies power to both the load and the battery. The mode changes, but the output voltage remains constant (Fig. 8).
7 Conclusion The modified fuzzy-based MPPT with time-sharing control is used for power flow control between the solar PV and the battery. In this work, a fuzzy-based algorithm is used for MPPT, which is more efficient than the conventional approach. The proposed converter provides a constant supply to the load through operation with multiple inputs and outputs. The mode selection logic increases system effectiveness and slows down battery degradation. Simulations under different conditions with the modified control method show improved results.
References 1. Oosthuizen C, Van Wyk B, Hamam Y, Desai D, Alayli Y, Lot R (2019) Solar electric vehicle energy optimization for the Sasol Solar challenge 2018. IEEE Access 7:175143–175158. https://doi.org/10.1109/ACCESS.2019.2957056 2. Elshafei M, Al-Qutub A, Saif AA (2016) Solar car optimization for the world solar challenge. In: 2016 13th international multi-conference on systems, signals & devices (SSD), pp 751–756. https://doi.org/10.1109/SSD.2016.7473675 3. Jiang W, Fahimi B (2011) Multiport power electronic interface—concept, modeling, and design. IEEE Trans Power Electron 26(7):1890–1900 4. Ki W-H, Ma D (2001) Single-inductor multiple-output switching converters, vol. 1, pp 226–231. https://doi.org/10.1109/PESC.2001.954024
5. Bandyopadhyay S, Chandrakasan AP (2012) Platform architecture for solar, thermal, and vibration energy combining with mppt and single inductor. IEEE J Solid-State Circuits 47(9):2199–2215 6. Holambe P, Dambhare S (2021) Sensorless robust controller for buck converter using modified fast terminal sliding surface. IEEE Kansas Power and Energy Conference (KPEC) 2021:1–6. https://doi.org/10.1109/KPEC51835.2021.9446228 7. Shao H, Li X, Tsui CY, Ki WH (2014) A novel single-inductor dualinput dual-output DC–DC converter with PWMcontrol for solar energy harvesting system. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 22(8):1693–1704 8. Babaei E, Abbasi O (2016) Structure for multi-input multi-output DC–DC boost converter. IET Power Electronics 9(1):9–19 9. Khan SA, Islam MR, Guo Y, Zhu J (2019) A new isolated multi-port converter with multidirectional power flow capabilities for smart electric vehicle charging stations. IEEE Transactions on Applied Superconductivity 29(2):1–4, Art no. 0602504. https://doi.org/10.1109/ TASC.2019.2895526 10. Holambe PR, Talange DB, Bhole VB (2015) Motorless solar tracking system. International Conference on Energy Systems and Applications 2015:358–363. https://doi.org/10.1109/ ICESA.2015.7503371 11. Srivastava A, Nagvanshi A, Chandra A, Singh A, Roy AK (2021) Grid integrated solar PV system with comparison between fuzzy logic controlled MPPT and P&O MPPT. In: 2021 IEEE 2nd international conference on electrical power and energy systems (ICEPES), pp 1–6, https:// doi.org/10.1109/ICEPES52894.2021.9699492 12. Huang MH, Chen KH (2009) Single-inductor multi-output (simo) DC–DC converters with high light-load efficiency and minimized cross-regulation for portable devices. IEEE J Solid-State Circuits 44(4):1099–1111 13. Khaligh A (2007) Stability criteria for the energy storage bi-directional DC/DC converter in the Toyota Hybrid System II. IEEE Vehicle Power and Propulsion Conference 2007:348–352. https://doi.org/10.1109/VPPC.2007.4544149 14. Haji D, Genc N (2018) Fuzzy and P&O based MPPT controllers under different conditions. In: 2018 7th international conference on renewable energy research and applications (ICRERA), pp 649–655. https://doi.org/10.1109/ICRERA.2018.8566943 15. Nguyen BLH, Cha H, Vu T, Nguyen T-T (2021) Integrated multiport bidirectional DC–DC converter for HEV/FCV applications. IECON 2021—47th annual conference of the IEEE industrial electronics society, pp 1–6. https://doi.org/10.1109/IECON48115.2021.9589188 16. Schuss C, Fabritius T, Eichberger B, Rahkonen T (2020) Impacts on the output power of photovoltaics on top of electric and hybrid electric vehicles. IEEE Trans Instrum Meas 69(5):2449–2458. https://doi.org/10.1109/TIM.2019.2962850 17. Jacobs IS, Bean CP (1963) Fine particles, thin films and exchange anisotropy. In: Rado GT, Suhl H (eds) Magnetism, vol III. Academic, New York, pp 271–350 18. Anand I, Senthilkumar S, Biswas D, Kaliamoorthy M (2018) Dynamic power management system employing a single-stage power converter for standalone solar PV applications. IEEE Trans Power Electron 33(12):10352–10362. https://doi.org/10.1109/TPEL.2018.2804658 19. Khaligh A, Cao J, Lee Y-J (2009) A multiple-input DC–DC converter topology. IEEE Trans Power Electron 24(3):862–868
Image Classification Model Based on Machine Learning Using GAN and CNN Algorithm Ch. Bhavya Sri, Sudeshna Sani , K. Naga Bavana, and Syed. Hasma
Abstract With the growth of machine learning, several long-standing problems, including image identification, image detection, and picture categorization, remain active areas of research. Image recognition has always been one of the most fundamental, traditional, and important research subjects in machine learning, and image recognition software is spreading through society at a rapid rate. The protection of personal information, for instance when using mobile phones, depends on picture recognition. For picture recognition, we used the GAN algorithm and the CNN algorithm. Machine learning-based image preprocessing technology is used to categorize, segment, and recognize images. Nevertheless, because of the intricacy of video images and the varied nature of objects across application settings, accurate categorization is both vital and difficult. Image recognition technologies will be very useful for future generations. Keywords Eigen faces · Support vector machine · Correlation matrix · Face detection · Face recognition · CNN · GAN
1 Introduction With the advent of technology, real-time facial gesture detection is becoming increasingly important in the field of human–computer interaction. While some face detection-based approaches [1, 2] use sensor-based detection to identify facial gestures, vision-based approaches only require one or more cameras to capture images or videos from which face movements are recognized. Numerous vision-based static approaches for recognizing postures or specific poses, as well as dynamic methods for recognizing a series of postures and facial gestures, have been proposed. Machine learning is a significant and challenging subject for image processing [3, 4], particularly in the field of large-scale image processing where Ch. Bhavya Sri · S. Sani (B) · K. Naga Bavana · Syed. Hasma Koneru Lakshmaiah Education Foundation, Vijayawada, Andhra Pradesh, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_13
machine learning approaches can be used to analyze complex data [5]. Machine learning techniques can be created from complex data like disease identification by plant leaves image processing [6]. To enable the fair application of image recognition in many domains and industries, the primary features of an image are split [7]. Machine learning-based image processing techniques have been extensively employed in picture classification, segmentation, and recognition [8]. A technique called biometrics is used to measure and examine a person’s physical and behavioral traits. An intriguing innovation in machine learning recently is a method called generative adversarial networks (GANs). GANs, or generative models, create new data instances that resemble your training data. For example, GANs have the ability to create images that resemble photographs of faces with human traits even when the corresponding faces don’t actually belong to any living thing. The primary input for a GAN algorithm is random noise. The generator then transforms this noise into a useful output. By introducing noise and sampling from various points across the target distribution, we may make the GAN provide a broad range of data. CNN is a well-liked and efficient pattern detection and image processing approach. It has many benefits, such as adaptability, a simple structure, and reduced training requirements. Spatial correlations found in the input data are used by CNN. Each concurrent layer of the neural network is coupled to certain input neurons. The area is referred to as the “local receptive field”. The focal point of the local receptive field is a hidden neuron. CNNs, often referred to as convolutional neural networks, are a class of artificial neural networks used in deep learning and are frequently used for object and image recognition and categorization [9]. As a result, deep learning employs a CNN to recognize objects in a picture.
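To make the generator/discriminator idea above concrete, the following PyTorch sketch shows a deliberately minimal GAN: G maps random noise to a flattened image and D outputs the probability that its input is real, and both are trained with the binary cross-entropy loss. It is illustrative only; the layer sizes, optimiser settings, and image dimensions are assumptions and not the network used in this paper.

import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3        # assumed noise and image sizes

G = nn.Sequential(                            # generator: noise -> flattened image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())

D = nn.Sequential(                            # discriminator: image -> probability of "real"
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # Update D: real images are labelled 1, generated images are labelled 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Update G: try to make D label freshly generated images as real.
    loss_g = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()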
1.1 Face Detection One of the best image analysis tools that have recently achieved prominence in our surveillance and security-related applications is face recognition [10]. It requires verifying someone’s identity by looking at their face. Based on the subject’s face features, such as their eyes and nose, it captures, assesses, and contrasts patterns. System access is granted, and authentication is put into place. It uses a human face’s biometric patterns as part of its biometric identification mechanism.
1.2 Developing Face Recognition Software Depending on the system’s settings, a face recognition system may collect an incoming image from a camera device in 2D or 3D. Then, using the faces stored in a database, it confirms the crucial details of the incoming picture signal in a real-time image or video frame. According to [11], data derived from this kind of
efficient real-time data is safer than data derived from a static image. Face recognition is a commonly used method that uses biometrics to map facial attributes from our database. Face verification is a method for comparing two faces to find the correct person.
2 Literature Review Zhu et al. in [11] used data complexity for the generation of contextual synthetic data. In this work, a length-of-class-boundary technique is used to calculate the data complexity (DC): the length of the class boundary decides the complexity of the data, and the dimensionality may also influence classifier accuracy. Synthetic datasets are useful for analyzing an algorithm in a controlled scenario. Several studies have defined geometrical descriptors that characterize datasets, and these descriptors were found useful for understanding classifier performance. This approach is also useful for assessing the performance of an algorithm under different degrees of class imbalance. They arrived at the expression for data complexity calculation given in Eq. (1), taking as an example the generation of minimum spanning trees with different data complexities:

p = b × (n − 1)    (1)

where b ∈ [0, 1] is the length of the class boundary (the desired complexity), n is the number of instances, and p is the number of edges connecting instances of different classes.
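As a worked example of Eq. (1), using made-up numbers rather than values from [11]: with n = 101 instances and a desired complexity of b = 0.2, the minimum spanning tree has n − 1 = 100 edges, of which p = 0.2 × 100 = 20 edges connect instances of different classes.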
Yuan in [12] applied visual attention-based networks for image synthesis. The main objective is to convert magnetic resonance (MR) images to computed tomography (CT) images using fully convolutional networks, so as to reduce the side effects of CT-scan radiation for the patient. An MR input picture is first divided into overlapping patches, and the generator is then used to predict the corresponding CT patch for each of them. The first method proposed in that paper is a supervised GAN, which consists of a generator for predicting the CT and a discriminator for separating the genuine CT from the generated CT. GANs usually comprise two networks, a generator (G) and a discriminator (D), trained by minimizing the binary cross entropy (BCE) between D's decisions and the correct label (real or synthetic). The generators are FCNs that produce images, while the discriminators are CNNs that estimate the likelihood that an input image comes from real data. The second approach presented is an auto-context model (ACM) for refinement.
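For reference, training D with BCE against labels 1 (real) and 0 (synthetic) while training G to fool D corresponds to the standard two-player GAN objective (this is the generic formulation, not a detail taken from [12]):

min_G max_D  E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 − D(G(z))) ]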
Wang et al. in [13, 14] used synthetic data for image segmentation. Synthetic data is vital because it can be generated to meet specific requirements or conditions that are not available in existing (real) data. The technique used for synthetic data generation is the GAN, an unsupervised machine learning approach. Generative adversarial networks consist of two models that automatically discover and learn the patterns in the input data: the generator and discriminator models run in competition with each other to generate new records, examine records, and classify the variances within a dataset. The self-attention generative adversarial network (SAGAN), which provides attention-driven, long-range dependency modeling for image tasks, was employed by Zhang et al. [15]. GANs with traditional convolutional architectures generate high-resolution detail only from spatially local points in low-resolution feature maps, whereas SAGAN can use cues from all feature locations to produce details, and its discriminator can check that fine details in distant parts of the image are consistent with one another. Recent studies have also revealed that the performance of GANs is affected by the conditioning of the generator; applying spectral normalization to the GAN generator was found to improve the training dynamics. The proposed SAGAN [16] performs strongly: on the challenging ImageNet dataset, it raised the Inception score from 36.8 to 52.52 and reduced the Fréchet Inception distance from 27.62 to 18.65. Visualizations show that the generator attends to neighborhoods that correspond to object shapes rather than only to its immediate surroundings.
3 Experimental Investigation We discovered the most effective technique for producing the necessary high-quality photographs after having a literature review. The optimal method for producing as many images as needed, according to our research, is GANs, or generative adversarial networks [17]. To create new, artificial instances of data that can be mistaken for genuine data, algorithmic structures known as GANs use two neural networks in competition with one another. There are often two networks in GANs. Generators G and discriminators D, which can distinguish between genuine and synthetic images, are both trained simultaneously. Whereas generators G are FCNs that create images, discriminators D are CNNs that estimate the likelihood that an input image is taken from a real image. D was trained to distinguish between actual and artificial data, whereas G was trained to create realistic visuals that will deceive D [18]. We need to identify the criteria used to obtain the necessary quality photographs from our experimental research. With the celebrity dataset, we initially trained the model. It includes pictures of famous people. We can produce the photos as needed.
3.1 Description of Modules The following modules are used in the suggested system: 1. Extracting embeddings from the photos. 2. Training the GAN algorithm. 3. Identifying faces in still photos and video frames. Extracting embeddings from the photos. In this module, "embeddings" is the term used for the feature vectors. To find faces in a picture, a deep learning face detector based on Caffe is utilized [19]. To extract the embeddings from the photographs in the dataset, this module uses a Torch-based embedder, and the retrieved embeddings are then stored in encoded form in a pickle file [20]; a sketch of this step is given below. Training the GAN algorithm. The generative model, which defines how the data is produced through a probabilistic model, is learned by training the model in an adversarial environment against a discriminator network [21] (Fig. 1). Identifying faces in still photos and video frames. We follow a similar procedure to extract embeddings from a set of real-time input images. A trained support vector machine model is used to detect, compare, and recognize faces, and the accuracy of the detected faces is evaluated with the triplet loss function by comparing the embedding vectors (Fig. 2).
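The sketch below follows the description above (Caffe-based face detector, Torch-based embedder, pickle storage) but is only illustrative: the model file names, input sizes, and confidence threshold are assumptions, not values taken from the paper.

import cv2
import pickle
import numpy as np

# Assumed model files for the detector and the embedder.
detector = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd.caffemodel")
embedder = cv2.dnn.readNetFromTorch("openface_nn4.small2.v1.t7")

def face_embedding(image):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300),
                                 (104.0, 177.0, 123.0))
    detector.setInput(blob)
    det = detector.forward()
    i = int(np.argmax(det[0, 0, :, 2]))          # strongest detection
    if det[0, 0, i, 2] < 0.5:                    # assumed confidence threshold
        return None
    box = (det[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
    face = image[box[1]:box[3], box[0]:box[2]]
    face_blob = cv2.dnn.blobFromImage(face, 1.0 / 255, (96, 96), (0, 0, 0), swapRB=True)
    embedder.setInput(face_blob)
    return embedder.forward().flatten()          # 128-d feature vector ("embedding")

# Example of storing the extracted embeddings in a pickle file:
# pickle.dump({"names": names, "embeddings": vectors}, open("embeddings.pickle", "wb"))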
Fig. 1 Flowchart for proposed system
Fig. 2 Analyzing and detecting images
Fig. 3 Sample dataset
3.2 Response and Conversation The dataset of photos of celebrities for this article is taken from Kaggle. The sample photos that are included in the collection are described in Fig. 3. The number of photos provided in the dataset affects how well this training model performs.
4 Result Analysis The main criterion used to evaluate the effectiveness of any face identification algorithm is the accuracy of the match obtained. The accuracy is calculated from the algorithm's ability to recognize the face input, and the percentage of the match is displayed; it is essential for the algorithm to report the closest percentage of matches.
Table 1 Summary of test

Classification   Noiseless test (%)   Noisy test (%)   Recognition time (ms)
h1               90.31                85.80            10
h2               96.85                85.65            13
h3               93.04                75.65            11
BP               90.13                80.69            11
GAN              94.67                92.54            56
With 50 real-time images included in the dataset, the recommended GAN-based facial detection system achieves a match accuracy of roughly 90%. The accuracy can be improved further by expanding the number of photographs in the collection. Vectors, light intensity, pixel quality, and other elements are the most crucial factors to consider while analyzing facial features (Table 1). Of the three hypotheses chosen, h1, which has the best performance on the training set, has the worst performance on the test set. The outcomes demonstrate that the overfitting problem of a neural network cannot be fully resolved by the evolutionary algorithm, even though the overall classification accuracy is exceptionally high. Character offset, distortion, skew, and blur problems in the acquired image cause the text features to have a spatial distribution, and it is challenging to categorize noise interference appropriately using a few basic criteria. The recognition performance is better when the experiment's fitness is Fitness(h2) = 0.9065, as demonstrated by the experiment's overall correct rate for the comprehensive classification. The observed recognition rates lead us to conclude that neural networks trained with genetic algorithms achieve higher recognition rates than neural networks trained with conventional techniques (Fig. 4).
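As an illustration of how a "percentage of match" can be reported from two face embedding vectors (this is a generic similarity measure, not necessarily the exact metric used in this paper), one can map the cosine similarity of the embeddings to a 0–100% score:

import numpy as np

def match_percentage(emb_a, emb_b):
    # Cosine similarity of the two embeddings, rescaled from [-1, 1] to [0, 100] %.
    cos = np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return 100.0 * (cos + 1.0) / 2.0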
5 Data Visualization Techniques Pie Chart and Bar Chart. Pie charts are one of the most well-known and often-used ways of data visualization and are utilized in a wide range of applications. Here the pie chart shows how well-known each celebrity is because of their movies. A pie chart is also easy to understand and apply when studying the case study; it can be an effective tool for communicating even with non-expert audiences because it graphically represents data as portions of a larger whole, making it possible for the audience to easily understand information or compare data for analysis. A bar chart's bar lengths show how each group compares in value, although labeling and clarity can become a problem when too many categories are present. This bar graph displays the celebrity's level of online popularity (Fig. 5). Correlation Matrix. A correlation matrix is a table that exhibits the correlation coefficients among several variables. Each cell's color represents the degree to which and
Fig. 4 Validation curve for noiseless test in GAN
Fig. 5 Chart representation
whether two variables are related to one another. Correlation matrices can be used to summarize huge datasets and find patterns; in business, a correlation matrix may be used to investigate the connections between different product-related data items, such as the launch date. Here the matrix displays the facial expressions of famous people, and the celebrity's name is also shown (Fig. 6). Scatter Plot Matrix. The dataset's scatter plot matrix displays all pairwise scatter plots between the variables as a matrix: for k variables or columns (x1, x2, …, xk), each pair is plotted along with its names. Many relationships
Fig. 6 Correlation matrix for list_bbox_celeba.csv
between variables can thus be examined in one chart. Scatterplot matrices let us assess whether there is a linear link between several variables, which is especially useful for identifying particular variables that might correlate with genomic or proteomic data. To display bivariate correlations between different combinations of variables, the scatter plots are arranged in a grid (or matrix); numerous associations can be investigated in a single chart because each scatter plot in the matrix shows the link between one pair of variables. As a result, the scatter plot matrix has k rows and k columns, one for each of the k variables in the dataset, and each cell is a scatter plot with variable xi on the horizontal axis and variable xj on the vertical axis. This scatter matrix also shows the celebrities' facial emotions together with their names (Fig. 7).
Fig. 7 Scatter plot and density plot
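The visualizations discussed in this section can be reproduced with a few lines of pandas, matplotlib, and seaborn. The sketch below is illustrative only: the file name follows Fig. 6 (list_bbox_celeba.csv), but the "celebrity" column used for the bar and pie charts is a hypothetical label column, not necessarily present in that file.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("list_bbox_celeba.csv")            # dataset file shown in Fig. 6

# Bar and pie charts of an assumed categorical column (e.g. celebrity name counts).
counts = df["celebrity"].value_counts().head(10)    # "celebrity" is a hypothetical column
counts.plot(kind="bar", title="Most frequent celebrities"); plt.show()
counts.plot(kind="pie", autopct="%1.1f%%"); plt.show()

# Correlation matrix of the numeric columns, drawn as a colored heatmap (cf. Fig. 6).
numeric = df.select_dtypes("number")
sns.heatmap(numeric.corr(), annot=True, cmap="coolwarm"); plt.show()

# Scatter-plot matrix of all pairwise numeric relationships (cf. Fig. 7).
sns.pairplot(numeric); plt.show()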
6 Conclusion Real-world ML models require a lot of data to be trained, so growing the dataset demands substantial resources, including time, money, and labor. Motivated by the research papers cited, we ultimately opted for the GAN model to generate synthetic data for the low-intensity images, and we were successful: using the GAN model, we are able to produce the required photos. In recent years, celebrity identification research has made extensive use of machine learning, a crucial artificial intelligence technique; due to its intelligence, generality, and high detection efficiency, it has progressively come to the forefront of image identification research. This experiment required a large amount of target data, but obtaining huge amounts of useful data is quite challenging in target recognition, and this is the main issue limiting the use of deep learning in picture recognition. In order to apply deep learning properly on the basis of the original database, it is crucial to develop a more effective method to carry
out this kind of data expansion. Although data is all around us, tagged data is uncommon. As in other fields, collecting data for picture recognition is relatively simple, but tagging it manually requires a lot of time and effort.
References 1. Sukmandhani AA, Sutedja I (2019) Face recognition method for online exams. In: International conference on information management and technology (ICIMTech), Jakarta/Bali, Indonesia, pp 175–179 2. Venkateswar Lal GR, Nitta AP (2019) Ensemble of texture and shape descriptors using support vector machine classification for face recognition. Ambient Intell Humaniz Comput 3. Fayyoumi A, Zarrad A (2014) Novel solution based on face recognition to address identity theft and cheating in online examination systems. Adv Internet Things 4(3):5–12 4. Bah SM, Ming F (2020) An improved face recognition algorithm and its application in attendance management system. Array 5 5. Kranthikiran B, Pulicherla P (2020) Face detection and recognition for use in campus surveillance. Int J Innovative Technol Exploring Eng 9(3) 6. Mitra D, Gupta S (2022) Plant disease identification and its solution using machine learning. In: 2022 3rd international conference on intelligent engineering and management (ICIEM), London, United Kingdom, pp 152–157. https://doi.org/10.1109/ICIEM54221.2022.9853136 7. Kamencay P, Benco M, Mizdos T, Radil R (2017) A new method for face recognition using convolutional neural network. Digital Image Process Comput Graphics 15(4):663–672 8. Traoré YN, Saad S, Sayed B, Ardigo JD, de Faria Quinan PM (2017) Ensuring online exam integrity through continuous biometric authentication 9. Sani S, Bera A, Mitra D, Das KM (2022) COVID-19 detection using chest X-Ray images based on deep learning. Int J Softw Sci Comput Intell (IJSSCI) 14(1):1–12. https://doi.org/10.4018/ IJSSCI.312556 10. Traoré I, Awad A, Woungang I (eds) (2017) Information security practices. Springer, Cham 11. Zhu C, Zheng Y, Luu K, Savvides M (2017) CMS-RCNN: contextual multi-scale regionbased CNN for unconstrained face detection. In: Bhanu B, Kumar A (eds) Deep learning for biometrics. Advances in computer vision and pattern recognition. Springer, Cham 12. Yuan Z (2020) Face detection and recognition based on visual attention mechanism guidance model in unrestricted posture. In: Scientific programming towards a smart world 13. Wang B, Chen LL (2019) Novel image segmentation method based on PCNN. Optik 187:193– 197 14. Wang K, Zhang D, Li Y et al (2017) Cost-effective active learning for deep image classification. IEEE Trans Circ Syst Video Technol (99):1–1 15. Zhang H et al (2019) Self-attention generative adversarial networks. International conference on machine learning. PMLR 16. Merrigan A, Smeaton AF (2021) Using a GAN to generate adversarial examples to facial image recognition. ArXiv. https://doi.org/10.2352/EI.2022.34.4.MWSF-210 17. Cheng F, Hong Z, Fan W et al (2018) Image recognition technology based on deep learning. Wireless Pers Commun C:1–17 18. Lin BS, Liu CF, Cheng CJ et al (2018) Development of novel hearing aids by using image recognition technology. IEEE J Biomed Health Inf 99:1-1 19. Zhang XB, Ge XG, Jin Y et al (2017) Application of image recognition technology in census of national traditional Chinese medicine resources. Zhongguo Zhong yao za zhi = Zhongguo zhongyao zazhi = China J Chinese Materia Medica 42(22):4266
20. Sun D, Gao A, Liu M et al (2015) Study of real-time detection of bedload transport rate using image recognition technology. J Hydroelectric Eng 34(9):85–91 21. Aggarwal A, Mittal M, Battineni G (2021) Generative adversarial network: An overview of theory and applications. Int J Inf Manag. 100004. https://doi.org/10.1016/j.jjimei.2020.100004
Role of Natural Language Processing for Text Mining of Education Policy in Rajasthan Pooja Jain and Shobha Lal
Abstract Knowledge of education policy will bring an array of new growth, but it requires an improved type of human–machine communication in which the machine provides thoughtful and interactive intelligence. Natural language processing (NLP), a part of artificial intelligence (AI), is the ability of a computer program to comprehend spoken and written human language (https://www.linguamatics.com/what-text-mining-text-analyticsand-natural-language-processing; Zhang and Segall in IJITDM 7(4):683–720, (2008)) [1, 2]. For mining, one should also have insight into the intended purpose of the policy (Bhardwaj in Int J Eng Res Technol (IJERT) 1(3), 2012; Maes in Commun ACM 7:30–40, 1994) [3, 4]. NLP provides a quick way of extracting information about education policy. This paper focuses on applying NLP commands after collecting data through unstructured interviews about attitudes towards NLP and then through a website questionnaire form used to collect satisfaction results. Coding is carried out in Python with NLP to obtain the required data. The analysis of feedback from colleges in Jaipur, Rajasthan, reveals satisfaction with the use of NLP commands, so it is observed that NLP offers a convenient way of mining. The goal of this text mining is to show the importance of NLP in bringing data into an integrated form. Lastly, the execution phase describes the process of obtaining the knowledge needed to extract policy information satisfactorily. Keywords NLP · Education policy · Unstructured data mining · Web text mining · Execution
P. Jain (B) · S. Lal Jayoti Vidyapeeth Women’s University, Jaipur, Rajasthan, India e-mail: [email protected] S. Lal e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 A. Mishra et al. (eds.), Advances in IoT and Security with Computational Intelligence, Lecture Notes in Networks and Systems 756, https://doi.org/10.1007/978-981-99-5088-1_14
1 Introduction Text mining is one of the AI techniques. It enlists NLP and converts unstructured text into data analysis format. Data on the web is mainly in unstructured format [5, 6]. Unstructured data is inputted into models to get predictions. NLP is a sub-part of data science that consists of processes for intelligently processing, interpreting, and getting knowledge from text data. NLP and its components can be used to organize large amounts of data, perform various automated tasks, and solve a variety of problems. Important tasks of NLP are text classification, text matching, and coreference resolution. Text mining is a technique for reviewing the records of a large group to find knowledge from the data. It is broadly useful for getting knowledge [7–10]. This uncovers documentation of large amounts with interrelationships. To process the text, text mining can be used with NLP. Text mining produces structured data that can be incorporated into databases [11–15].
1.1 Interpretation with ML for NLP Python is a highly regarded and machine-friendly programming language in the artificial intelligence world. It supports a variety of data science topics such as ML, NLP, and more, and it offers tools for every stage of the data science process [16]. A query in Python extracts the data for cleaning and sorting. NLP aims to make devices intelligent enough to search the web: it allows machines to read text and reply accordingly, and it encompasses both natural language understanding and natural language generation [17]. A search engine like Google can provide every type of required data thanks to NLP. Understanding the meaning of text can be accomplished by using machine learning for NLP, which turns unstructured text into usable data. Machine learning can be categorized as supervised machine learning (SML) and unsupervised machine learning (USML). When a model trained on labeled data is applied to new text, this is SML, whereas a set of algorithms that extract meaningful structure from unlabeled data is USML. Classification and regression are the two categories of SML. Classification is used for fraud detection, image classification, customer retention, diagnostics, and similar tasks, whereas regression handles sales forecasting, weather forecasting, business projections, life-expectancy estimation, population-growth projections, and the like. Thus classification problems train a model to predict qualitative targets, while regression predicts a number and uncovers the relationship between dependent and independent variables. When unlabeled datasets are loaded into the model for analysis without predefined categories, USML is described through clustering, association, and dimensionality reduction (generalization). Customer segmentation, targeted marketing, recommender systems, and similar applications belong to clustering; market basket analysis, customer clustering in retail, price bundling, assortment decisions, cross-selling, and others belong to association; whereas meaningful compression, structure discovery, feature elicitation, and
big data visualization are handled using dimensionality reduction. This is an unsupervised technique in which unlabeled groups of similar entities are processed; problems such as image compression, fake news detection, spam filtering, advertising mechanisms, systematizing web marketing, detecting fraudulent or delinquent activity, and survey analysis are solved with it [18].
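As a small, hedged illustration of the supervised (SML) text-classification task mentioned above, the scikit-learn sketch below builds TF-IDF features and fits a linear classifier. The example documents and the "school"/"higher" labels are made up for illustration and are not taken from this study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["vocational education from grade 6", "medical education with ayurveda",
        "foundational literacy in early grades", "higher education curriculum flexibility"]
labels = ["school", "higher", "school", "higher"]          # hypothetical categories

# TF-IDF turns unstructured text into numeric features; the classifier learns the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["flexibility in undergraduate curriculum"]))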
1.2 About Education Policy 2020 The National Education Policy 2020 includes nearly 2 lakh suggestions from 2.5 lakh gram panchayats, 6600 blocks, 6000 urban local bodies, and 676 districts. By 2030, the new policy aims to universalize education from pre-school to the secondary level, with a strong emphasis on foundational literacy. Vocational education with internships will begin in Grade 6, and teaching up to Grade 5 will be in the mother tongue. NEP 2020 replaces the 10 + 2 structure with the 5 + 3 + 3 + 4 format, and flexibility will be added to the higher education curriculum [19–21]. According to the policy, medical education will be integrated with Ayurveda, Naturopathy, Unani, Homoeopathy, and Siddha, and vice versa, at the undergraduate level [22].
2 Methodology In the execution, it is initialized as “Education Policy Websites.” For applying web text mining, NLP commands are used. If we get a satisfactory result from policy extraction using NLP, we will stop the execution. Otherwise, it will be continued with the same technique for a worthy result (Fig. 1).
2.1 Code and Executed Screenshots of Python for NLP NLP is applied for cleaning and summarizing text, tokenizing sentences and words, getting the frequency of words, etc. There are some steps in text mining for deriving meaningful information when manipulating NLP with Python code [23] (Figs. 2, 3, 4, 5, 6, 7, 8, 9). #Installing NLTK (Natural Language Toolkit) C:\Users\HP\AppData\Local\Programs\Python\Python39>python >>> import nltk >>> nltk.download() Showing info https://raw.githubusercontent.com/nltk/nltk_data/ gh-pages/index.xml #Working with tokenization in NLP
Fig. 1 Execution flow of text mining (Start → Education Policy Websites → Web Text Mining: information retrieval, extraction, and mining → NLP with Python → stop execution when a purposeful result is obtained, otherwise repeat)
Fig. 2 Stemming for text mining
>>> Education_Policy="According to NEP 2020, it has been dividing the 10+2 system into the 5+3+3+4 format. Flexibility in a higher education curriculum will be added." >>> token=word_tokenize(Education_Policy) >>> token [’According’, ’to’, ’NEP’, ’2020’, ’,’, ’it’, ’has’, ’been’, ’dividing’, ’the’, ’10+2’, ’system’, ’into’, ’the’, ’5+3+3+4’, ’format’, ’.’, ’Flexibility’, ’in’, ’a’, ’higher’, ’education’, ’curriculum’, ’will’, ’be’, ’added’, ’.’] # Locating the frequency distinct in the tokens >>> from nltk.probability import FreqDist >>> fdist=FreqDist(token) >>> fdist FreqDist({’the’: 2, ’.’: 2, ’According’: 1, ’to’: 1, ’NEP’: 1, ’2020’: 1, ’,’: 1, ’it’: 1, ’has’: 1, ’been’: 1, ...})
Fig. 3 Stemming and lemmatization for text mining
Fig. 4 Removing stop words for text summarization
>>> fdist1=fdist.most_common(9) >>> fdist1 [(’the’, 2), (’.’, 2), (’According’, 1), (’to’, 1), (’NEP’, 1), (’2020’, 1), (’,’, 1), (’it’, 1), (’has’, 1)] # Opening a jupyter notebook
Fig. 6 Importing re (regular expression) module for finding
P. Jain and S. Lal
Fig. 7 Finding, searching, splitting, replacing patterns
Fig. 8 Importing beautiful soup and nltk.tokenize
Fig. 9 Importing sent_tokenize() and word_tokenize() from nltk.tokenize package using Beautiful Soup
C:\Users\HP\AppData\Local\Programs\Python\Python39>jupyter notebook [W 14:47:40.293 NotebookApp] Terminals not available (error was No module named ’winpty.cywinpty’) [I 14:47:40.543 NotebookApp] Serving notebooks from local directory: C:\Users\HP\AppData\Local\Programs\Python\Python39 [I 14:47:40.543 NotebookApp] Jupyter Notebook 6.2.0 is running at: [I 14:47:40.543 NotebookApp] http://localhost:8888/ ?token=85319cedbe702cff61e821a7e71b767c23e5c6db032d48ef [I 14:47:40.559 NotebookApp] or http://127.0.0.1:8888/ ?token=85319cedbe702cff61e821a7e71b767c23e5c6db032d48ef [I 14:47:40.559 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). [C 14:47:40.637 NotebookApp] To access the notebook, open this file in a browser: file:// /C:/Users/HP/AppData/Roaming/jupyter/runtime/nbserver-1700open.html Or copy and paste one of these URLs: http://localhost:8888/?token=85319cedbe702cff61e821a7e71b767c 23e5c6db032d48ef or http://127.0.0.1:8888/?token=85319cedbe702cff61e821a7e71b767c 23e5c6db032d48ef [W 14:49:57.733 NotebookApp] 404 GET /undefined/undefined (::1) 22.060000ms referer=None [I 14:53:45.992 NotebookApp] Creating new file in [I 14:53:46.054 NotebookApp] Creating new notebook in [I 14:53:46.443 NotebookApp] Creating new notebook in [I 14:53:46.683 NotebookApp] Creating new notebook in [W 14:53:46.939 NotebookApp] 404 GET /undefined/undefined (::1) 29.570000ms referer=None
[I 14:53:46.943 NotebookApp] Creating new notebook in [I 14:53:51.419 NotebookApp] Kernel started: d18dbe85-4850-45b2a71f-534acdb74e99, name: python3
#Using urllib.request for fetching URLs
>>> import urllib.request
>>> response = urllib.request.urlopen('https://www.rajasthanshiksha.com/')
>>> html = response.read()
>>> print(html)
b'