English · 735 pages · 2021
Lecture Notes in Electrical Engineering 708
Akash Kumar Bhoi · Pradeep Kumar Mallick · Valentina Emilia Balas · Bhabani Shankar Prasad Mishra, Editors
Advances in Systems, Control and Automations Select Proceedings of ETAEERE 2020
Lecture Notes in Electrical Engineering Volume 708
Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology, Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or suggestions, please contact [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:

China: Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia: Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected])

** This series is indexed by EI Compendex and Scopus databases. **
More information about this series at http://www.springer.com/series/7818
Akash Kumar Bhoi · Pradeep Kumar Mallick · Valentina Emilia Balas · Bhabani Shankar Prasad Mishra
Editors

Advances in Systems, Control and Automations: Select Proceedings of ETAEERE 2020
Editors

Akash Kumar Bhoi
Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Rangpo, Sikkim, India

Pradeep Kumar Mallick
School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT Deemed to be University), Bhubaneswar, Odisha, India

Valentina Emilia Balas
Department of Automation and Applied Informatics, “Aurel Vlaicu” University of Arad, Arad, Romania

Bhabani Shankar Prasad Mishra
School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT Deemed to be University), Bhubaneswar, Odisha, India
ISSN 1876-1100 / ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-15-8684-2 / ISBN 978-981-15-8685-9 (eBook)
https://doi.org/10.1007/978-981-15-8685-9

© Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Committees
Chief Patron
Prof. (Dr.) Achyuta Samanta, Honorable Founder, KIIT and KISS, Bhubaneswar
Patrons
Prof. (Dr.) Hrushikesha Mohanty, Vice-Chancellor, KIIT Deemed to be University
Prof. (Dr.) Sasmita Samanta, Pro Vice-Chancellor, KIIT Deemed to be University
Prof. (Dr.) Jyana Ranjan Mohanthy, Registrar, KIIT Deemed to be University
General Chairs
Dr. Samaresh Mishra, Director, School of Computer Engineering, KIIT Deemed to be University
Dr. Chinmay Kumar Panigrahi, Director, School of Electrical Engineering, KIIT Deemed to be University
Dr. Valentina Emilia Balas, Professor, Aurel Vlaicu University of Arad, Romania
Dr. Goo Soo Chae, Professor, Baekseok University, South Korea
Invited Speakers
Mr. Debajyoti Dhar, Group Director, SIPG, ISRO, Ahmedabad
Dr. Amitanshu Patnaik, DTRL, DRDO, Delhi
Dr. Saroj Kumar Meher, SSIU, ISI, Bangalore
Dr. Anup Kumar Panda, NIT Rourkela
Dr. Satchidananda Dehuri, ICT, F.M. University, Balasore
Dr. Swagatam Das, ECSU, ISI, Kolkata
Program Chairs
Dr. Pradeep Kumar Mallick, Associate Professor, School of Computer Engineering, KIIT DU, Odisha
Dr. Manoj Kumar Mishra, School of Computer Engineering, KIIT DU, Odisha
Dr. Ranjeeta Patel, School of Electrical Engineering, KIIT DU, Odisha
Program Co-chairs
Dr. Amiya Ranjan Panda, School of Computer Engineering, KIIT DU, Odisha
Prof. Suresh Chandra Moharana, School of Computer Engineering, KIIT DU, Odisha
Publication Chair
Dr. Akash Kumar Bhoi, SMIT, SMU, Sikkim
Internal Advisory Committee
Dr. Bhabani Sankar Prasad Mishra, Associate Dean, School of Computer Engineering, KIIT DU, Odisha
Dr. Madhabananda Das, School of Computer Engineering, KIIT DU, Odisha
Dr. G. B. Mund, School of Computer Engineering, KIIT DU, Odisha
Dr. Biswajit Sahoo, School of Computer Engineering, KIIT DU, Odisha
Dr. Suresh Chandra Satapathy, School of Computer Engineering, KIIT DU, Odisha
Dr. Prasant Kumar Patnaik, School of Computer Engineering, KIIT DU, Odisha
Dr. Alok Kumar Jagadev, School of Computer Engineering, KIIT DU, Odisha
Dr. Santosh Kumar Swain, School of Computer Engineering, KIIT DU, Odisha
Dr. Hrudaya Kumar Tripathy, School of Computer Engineering, KIIT DU, Odisha
Dr. Amulya Ratna Swain, School of Computer Engineering, KIIT DU, Odisha
Dr. Arup Avinna Acharya, School of Computer Engineering, KIIT DU, Odisha
International Advisory Committee
Dr. Atilla ELÇİ, Aksaray University, Turkey
Dr. Bhabani Sankar Swain, KU, South Korea
Dr. Benson Edwin Raj, Fujairah Women’s College, Fujairah, UAE
Dr. Mohd. Hussain, Islamic University, Madinah, Saudi Arabia
Prof. Frede Blaabjerg, Aalborg University, Denmark
Dr. Yu-Min Wang, National Chi Nan University, Taiwan
Prof. Gabriele Grandi, University of Bologna, Italy
Dr. Steve S. H. Ling, University of Technology Sydney, Australia
Dr. Hak-Keung Lam, King’s College London, UK
Dr. Frank H. F. Leung, Hong Kong Polytechnic University, Hong Kong
Dr. Yiguang Liu, Sichuan University, China
Dr. Abdul Rahaman, Debre Tabor University, Ethiopia
Prof. Sanjeevikumar Padmanaban, Aalborg University
Dr. Bandla Srinivasa Rao, Debre Tabor University, Ethiopia
Prof. Pierluigi Siano, University of Salerno, Italy
Dr. Hussain Shareef, UKM, Malaysia
Dr. Anil Kavala, UKM, Malaysia
Dr. Hussain Shareef, Sr. Engineer, Samsung Electronics, Seoul, South Korea
Prof. Michael Pecht, University of Maryland, USA
Prof. Josep M. Guerrero, Aalborg University, Denmark
Dr. Akshay Kumar Rathore, Concordia University, Montreal, Canada
National Advisory Committee
Dr. Rajib Misra, IIT Patna, India
Dr. Anup Kumar Panda, NIT Rourkela
Dr. Kishore Sarawadekar, IIT BHU, Varanasi, India
Dr. Sachidananda Dehury, F.M. University, Odisha
Dr. Inderpreet Kaur, Chandigarh University
Dr. Debashish Jena, NITK, India
Dr. N. P. Padhy, IIT Roorkee
Dr. Sashikala Mishra, IIIT Pune
Dr. Subhendu Pani, OEC, Odisha
Dr. Mihir Narayan Mohanty, ITER, SOA University
Dr. Sabyasachi Sen Gupta, IIT Kharagpur
Dr. P. Sateesh Kumar, IIT Roorkee
Dr. Swagatam Das, ISI Kolkata
Dr. Sourav Mallick, NIT Sikkim
Dr. Ashok Kumar Pradhan, IIT Kharagpur
Dr. Bhim Singh, IIT Delhi
Dr. C. Sashidhar, CE, JNTUA
Dr. M. Sashi, NIT Warangal
Prof. Rajshree Srivastava, DIT University, Dehradun, India
Dr. Kolla Bhanu Prakash, K L University, AP
Dr. Vikas Dubey, Bhilai Institute of Technology, Raipur (C.G.)
Dr. Vinod Kumar Singh, S R Group of Institutions, Jhansi (U.P.)
Dr. Ratnesh Tiwari, Bhilai Institute of Technology, Raipur (C.G.)
Prof. J. Naren, K L University, Vijayawada, AP, India
Dr. G. Vithya, K L University, Vijayawada, AP, India
Reviewer Board
Dr. G. B. Mund, School of Computer Engineering, KIIT DU, Odisha
Dr. Prasant Kumar Patnaik, School of Computer Engineering, KIIT DU, Odisha
Dr. Amulya Ratna Swain, School of Computer Engineering, KIIT DU, Odisha
Dr. Preetisudha Meher, NIT Arunachal Pradesh
Dr. Avinash Konkani, Associate Human Factors Professional (AHFP), US
Dr. Shruti Mishra, Department of CSE, VIT Amaravati, AP
Dr. Sachidananda Dehury, F.M. University, Odisha
Dr. Sandeep Kumar Satapathy, K L University, AP
Dr. Brojo Kishore Mishra, GIET University, Odisha
Dr. Karma Sonam Sherpa, SMU, India
Dr. Sashikala Mishra, IIIT Pune
Dr. Akhtar Kalam, VU, Australia
Dr. Richard Blanchard, Renewable Energy, LBU, UK
Dr. Anjan Kumar Ray, NIT Sikkim
Dr. Babita Panda, School of Electrical Engineering, KIIT DU, Odisha
Dr. Sriparna Roy Ghatak, School of Electrical Engineering, KIIT DU, Odisha
Dr. Minakhi Rout, School of Computer Engineering, KIIT DU, Odisha
Dr. Mohit Ranjan Panda, School of Computer Engineering, KIIT DU, Odisha
Conference Management Chairs
Dr. Manas Ranjan Lenka, School of Computer Engineering, KIIT DU, Odisha
Dr. Siddharth Swarup Rautaray, School of Computer Engineering, KIIT DU, Odisha
Dr. Minakhi Rout, School of Computer Engineering, KIIT DU, Odisha
Dr. Subhra Devdas, School of Electrical Engineering, KIIT DU, Odisha
Dr. Tanmoy Roy Choudhury, School of Electrical Engineering, KIIT DU, Odisha
Prof. Subhendu Bikash Santra, School of Electrical Engineering, KIIT DU, Odisha
Registration Chairs
Dr. Satarupa Mohanty, School of Computer Engineering, KIIT DU, Odisha
Prof. Roshni Pradhan, School of Computer Engineering, KIIT DU, Odisha
Prof. Swagat Das, School of Electrical Engineering, KIIT DU, Odisha
Prof. Subodh Mohanty, School of Electrical Engineering, KIIT DU, Odisha
Finance Chairs
Dr. Bhabani Sankar Prasad Mishra, School of Computer Engineering, KIIT DU, Odisha
Dr. Amiya Ranjan Panda, School of Computer Engineering, KIIT DU, Odisha
Publicity Chairs
Dr. Jagannatha Singh, School of Computer Engineering, KIIT DU, Odisha
Prof. Abhay Kumar Sahoo, School of Computer Engineering, KIIT DU, Odisha
Prof. Anil Kumar Swain, School of Computer Engineering, KIIT DU, Odisha
Dr. Kundan Kumar, School of Electrical Engineering, KIIT DU, Odisha
Dr. Pampa Sinha, School of Electrical Engineering, KIIT DU, Odisha
Dr. Deepak Gupta, School of Electrical Engineering, KIIT DU, Odisha
Dr. Chita Ranjan Pradhan, School of Computer Engineering, KIIT DU, Odisha
Dr. Manjusha Pandey, School of Computer Engineering, KIIT DU, Odisha
Dr. Yashwant Singh Patel, IIT Patna
Dr. Rudra Narayan Das, School of Electrical Engineering, KIIT DU, Odisha
Yashwant Singh, IIT Patna, India
Preface
The 2nd International Conference on Emerging Trends and Advances in Electrical Engineering and Renewable Energy (ETAEERE 2020), held at Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, Odisha, from March 5 to 6, 2020, brought together the latest research in smart grid, renewable energy and management, electronics, communication, computing, systems, control and automations. The aim of the conference was to provide a platform for researchers, engineers, academicians and industry professionals to present their recent research work and to explore future trends in various areas of engineering and management. The conference also brought together both novice and experienced scientists and developers to explore newer scopes, collect new ideas, establish new cooperation between research groups and exchange ideas, information, techniques and applications in the fields of electrical engineering, renewable energy, electronics and computing. Advances in Systems, Control and Automations, the outcome of ETAEERE 2020, provides a profound understanding of different systems and machines along with their complex operation, behavior and linear–nonlinear relationships in different environments; the background needed to solve problems of multivariable control systems and to perform research in the field of control and automation; and a focus on the classical and modern design of different intelligent automated systems. Our sincere thanks to the School of Computer Engineering and the School of Electrical Engineering of KIIT Deemed to be University for their combined effort in making ETAEERE 2020 a successful event, and we would like to record our appreciation of the whole committee of ETAEERE 2020. We are also thankful to all the participants and our keynote speakers, who presented scientific knowledge and foresight for the different tracks.
We received more than 450 research articles, and we thank our peer-reviewing team for selecting quality papers for each volume. The participants presented their work in four main tracks: systems, control and automations; smart grid, renewable energy and management; electronics, communication and computing; and advanced computing.
We would also like to acknowledge our technical partners, Sikkim Manipal Institute of Technology, India, and Baekseok University, South Korea, for their continuous technical support throughout this journey. Sikkim Manipal Institute of Technology (SMIT) deserves a special mention for holding the 1st edition, ETAEERE 2016, and for providing the collaborative opportunity to host ETAEERE 2020 at KIIT University.

Bhubaneswar, India
Dr. Pradeep Kumar Mallick

Rangpo, India
Dr. Akash Kumar Bhoi
Contents
Comparative Analysis of Multi-stage DC–DC Boost Converters . . . 1
Prajna Dash, Snehalika, and Babita Panda

A Review on Finding Optimum Paths with Genetic and Annealing Algorithms . . . 15
M. Achyuth Ram, Kapula Kalyani, and Durgesh Nandan

Mathematical Evaluation of Solar PV Array with T-C-T Topology Under Different Shading Patterns . . . 23
V. BalaRaju and Ch. Chengaiah

Discrete Wavelet Transform for CNN-BiLSTM-Based Violence Detection . . . 41
Rajdeep Chatterjee and Rohit Halder

Agrochain: Ascending Blockchain Technology Towards Smart Agriculture . . . 53
Pratyusa Mukherjee, Rabindra Kumar Barik, and Chittaranjan Pradhan

Random Subspace-Based Hybridized SVM Classification for High-Dimensional Data . . . 61
Sarita Tripathy and Prasant Kumar Pattnaik

MLAI: An Integrated Automated Software Platform to Solve Machine Learning Problems . . . 69
Sayantan Ghosh, Sourav Karmakar, Shubham Gantayat, Sagnik Chakraborty, Dipyaman Saha, and Himansu Das

Frequency Regulation of a Multi-area Renewable Power System Incorporating with Energy Storage Technologies . . . 83
Subhranshu Sekhar Pati, Prajnadipta Sahoo, Santi Behera, Ajit Kumar Barisal, and Dillip Kumar Mishra
A Short Survey on Real-Time Object Detection and Its Challenges . . . 93
Naba Krushna Sabat, Umesh Chandra Pati, and Santos Kumar Das
Small Object Detection From Video and Classification Using Deep Learning . . . 101
R. Arthi, Jai Ahuja, Sachin Kumar, Pushpendra Thakur, and Tanay Sharma

An Efficient IoT Technology Cloud-Based Pollution Monitoring System . . . 109
Harshit Srivastava, Kailash Bansal, Santos Kumar Das, and Santanu Sarkar

A Semi-automated Smart Image Processing Technique for Rice Grain Quality Analysis . . . 121
Jay Prakash Singh and Chittaranjan Pradhan

Analysis and Prediction of Cardiovascular Disease Using Machine Learning Techniques . . . 133
Belete Kassaye Mengiste, Hrudaya Kumar Tripathy, and Jitendra Kumar Rout

Fundus Image-Based Macular Edema Detection Using Convolutional Neural Network . . . 143
C. Aravindan, Vedang Sharma, A. Thaarik Ahamed, Mudit Yadav, and Sharath Chandran

Solar Tracker With Dust Removal System: A Review . . . 155
Mukul Kumar, Reena Sharma, Mohit Kushwaha, Atul Kumar Yadav, Md Tausif Ahmad, and A. Ambikapathy

Driver Behavior Profiling Using Machine Learning . . . 165
Soumajit Mullick and Pabitra Mohan Khilar

Comparative Analysis of Different Image Classifiers in Machine Learning . . . 175
Ritik Pratap Singh, Saloni Singh, Ragini Nandan Shakya, and Shahid Eqbal

A Survey on Autism Spectrum Disorder in Biomedical Domain . . . 185
Shreyashi Das and Adyasha Dash

Performance of Photovoltaics in Ground Mount-Floating-Submerged Installation Methods . . . 199
Nallapaneni Manoj Kumar, A. Ajitha, Aneesh A. Chand, and Sonali Goel

IoT-Based System for Residential Peak Load Management and Monitoring of Connected Load . . . 209
A. Ajitha and Sudha Radhika

A New AC Model for Transmission Line Outage Identification . . . 223
Mehebub Alam, Shubhrajyoti Kundu, Siddhartha Sankar Thakur, Anil Kumar, and Sumit Banerjee
Perspective Analysis of Anti-aging Products Using Voting-Based Ensemble Technique . . . 237
Subarna Mondal, Hrudaya Kumar Tripathy, Sushruta Mishra, and Pradeep Kumar Mallick

Analysis of a Career Prediction Framework Using Decision Tree . . . 247
Ankit Kumar, Rittika Baksi, Sushruta Mishra, Sourav Mishra, and Sagnik Rudra

Machine Learning Approach in Crime Records Evaluation . . . 255
Sushruta Mishra, Soumya Sahoo, Piyush Ranjan, and Amiya Ranjan Panda

German News Article Classification: A Multichannel CNN Approach . . . 263
Shantipriya Parida, Petr Motlicek, and Satya Ranjan Dash

MMAS Algorithm and Nash Equilibrium to Solve Multi-round Procurement Problem . . . 273
Dac-Nhuong Le, Gia Nhu Nguyen, Trinh Ngoc Bao, Nguyen Ngoc Tuan, Huynh Quyet Thang, and Suresh Chandra Satapathy

Technology and Body Art: An Appraisal of Tattoo Renaissance Across Cultures . . . 285
Swati Samantaray and Amlan Mohanty

Protection of End User’s Data in Cloud Environment . . . 297
K. Chandrasekaran, B. Kalyan Kumar, R. M. Gomathi, T. Anandhi, E. Brumancia, and K. Indira

Privacy Preserving and Loss Data Retrieval in Cloud Computing Using Bucket Algorithm . . . 307
T. Durga Nagarjuna, T. Anil Kumar, and A. C. Santha Sheela

Question Paper Generator and Result Analyzer . . . 315
R. Sathya Bama Krishna, Talupula Rahila, and Thummala Jahnavi

Online Crime Reporting System—A Model . . . 325
Mriganka Debnath, Suvam Chakraborty, and R. Sathya Bama Krishna

An Efficient Predictive Framework System for Health Monitoring . . . 339
K. Praveen, K. V. Rama Reddy, S. Jancy, and Viji Amutha Mary

Identification of Diabetic Retinopathy and Myopia Using Local Binary Pattern with Machine Learning . . . 347
Kranthi, Sai Kiran, A. Sivasangari, P. Ajitha, T. Anandhi, and K. Indira

Social Network Mental Disorders Detection Using Machine Learning . . . 359
Yarrapureddy Harshavardhan Reddy, Yeruva Nithin, and V. Maria Anu
Multi-layer Security in Cloud Storage Using Cryptography . . . 373
Mukesh Kalyan Varma, Monesh Venkul Vommi, and Ramya G. Franklin

Web-Based Automatic Irrigation System . . . 381
Bajjurla Uma Satya Yuvan, J. A. BalaVineesh Reddy Pentareddy, and S. Prince Mary

Student Location Tracking Inside College Infrastructure . . . 391
K. Yedukondalu, K. Chaitanya Nag, and S. Jancy

Finding Smelly or Non-smelly Using Injected and Revision Method . . . 399
B. Suresh and A. C. Santha Sheela

Smart Bus Management and Tracking System . . . 407
M. Hari Narasimhan, A. L. Reinhard Kenson, and S. Vigneshwari

Telemetry-Based Autonomous Drone Surveillance System . . . 419
R. Sriram, A. Vamsi, and S. Vigneshwari

A Perceptive Fake User Detection and Visualization on Social Network . . . 429
Sai Arjun, K. V. Jai Surya, and S. Jancy

Route Search on Road Networks Using CRS . . . 435
K. Nitish, K. Phani Harsha, and S. Jancy

Smart Fish Farming . . . 445
S. Guruprasad, R. Jawahar, and S. Princemary

Perceptual Image Hashing Using Surf for Tampered Image Detection . . . 453
Chavva Sri Lakshmi Rama Swetha, Chakravaram Divya Sri, and B. Bharathi

Diabetic Retinopathy Detection . . . 463
Athota Manoj Kumar, Atchukola Sai Gopi Kanna, and Ramya G. Franklin

Traffic Status Update System With Trust Level Management Using Blockchain . . . 471
Bhanu Prakash Yagitala and S. Prince Mary

Unique and Dynamic Approach to Predict Schizophrenia Disease Using Machine Learning . . . 479
Nelisetti Ashok, Tatikonda Venkata Sai Manoj, and D. Usha Nandini

Live Bus Tracking System . . . 493
Akash Singh, Shivam Choudhary, and A. Mary Posonia

A System for Informed Prediction of Health . . . 505
Rakshith Guptha Thodupunoori, Praneeth Sai Ummadisetty, and Duraisamy Usha Nandini
A Safety Stick for Elders . . . 513
Korrapati Bhuvana, Bodavula Krishna Bhargavi, and S. Vigneshwari

Intelligent Analysis for Wind Speed Forecasting Using Neural Networks . . . 521
Bethi Gangadhar Reddy, Bhuma Dinesh Kumar, and S. Vigneshwari

A Proficient Model for Action Detection Using Deep Belief Networks . . . 531
K. S. S. N. Krishna and A. Jesudoss

Semantic-Based Duplicate Web Page Detection . . . 541
A. C. Santha Sheela and C. Jayakumar

An Integrated and Dynamic Commuter Flow Forecasting System for Railways . . . 549
Y. Bevish Jinila, D. Goutham Reddy, and G. Yaswanth Reddy

Attribute-Based Data Management in Crypt Cloud . . . 557
Eswar Sai Yarlagadda and N. Srinivasan

Analysis on Sales Using Customer Relationship Management . . . 565
Silla Vrushadesh, S. N. Ravi Chandra, and R. Yogitha

A Unified and End-to-End Methodology for Predicting IP Address for Cloud and Edge Computing . . . 573
Vutukuri Manasa, A. C. Charishma Jayasree, and M. Selvi

Predicting the Farmland for Agriculture from the Soil Features Using Data Mining . . . 581
Kada Harshath, Kamalnath Mareedu, K. Gopinath, and R. Sathya Bama Krishna

e-Commerce Site Pricing and Review Analysis . . . 595
Sourav Nandi and A. Mary Posonia

Heart Disease Prediction Using Machine Learning . . . 603
M. Sai Shekhar, Y. Mani Chand, and L. Mary Gladence

Secured Image Retrieval from Cloud Repository Using Image Encryption Scheme . . . 611
Mercy Paul Selvan, Viji Amutha Mary, Putta Abhiram, and Reddem Srinivasa Reddy

Hybrid Edge-Based Gaussian Mixture Model for Foreground Detection in Video Sequences . . . 619
Subhaluxmi Sahoo, Sunita Samant, and Sony Snigdha Sahoo
Design of EEG Based Classification of Brain States Using STFT by Deep Neural Network . . . 627
Rahul Agrawal and Preeti Bajaj

Swarm Intelligence-Based Feature Selection and ANFIS Model Parameter Optimization for ASCV Risk Prediction and Classification . . . 639
Paulin Paul, B. Priestly Shan, and O. Jeba Shiney

Discrimination of Hemorrhage in Fundus Images Using Shape and Texture-based Descriptors . . . 651
Jeba Derwin, Tamil Selvi, Priestly Shan, Jeba Singh, and S. N. Kumar

Modeling Approach for Different Solar PV System: A Review . . . 661
Akhil Nigam and Kamal Kant Sharma

An Empirical Study on Gender–Age Influence on Direct-To-Consumer Promotion of Pharmaceutical Products . . . 677
Jaya Rani, Saibal K. Saha, Vivek Pandey, and Ajeya Jha

An Investigation of Inclusion of Marginalized People in Skill Development Mission, Sikkim . . . 685
Anita Gupta, Neeta Dhusia, and Ajeya Jha

A Fuzzy Logic Approach for Improved Simulation and Control Washing Machine System Variables . . . 699
Tejas V. Bhatt, Akash Kumar Bhoi, Gonçalo Marques, and Ranjit Panigrahi
About the Editors
Dr. Akash Kumar Bhoi completed his B.Tech. (Biomedical Engineering) at Trident Academy of Technology, BPUT, Odisha, and his M.Tech. (Biomedical Instrumentation) at Karunya University, Coimbatore, in 2009 and 2011, respectively. In 2019 he was awarded a Ph.D. by Sikkim Manipal University, India. He has been working as an Assistant Professor in the Department of Electrical and Electronics Engineering at Sikkim Manipal Institute of Technology (SMIT), India, since 2012. He is a member of ISEIS and IAENG, an associate member of IEI and UACEE, and an editorial board member and reviewer of Indian and international journals. His areas of research are biomedical signal processing, medical image processing, sensors and transducers, and medical instrumentation. He has published several papers in national and international journals and conferences and has served on numerous organizing panels for international conferences and workshops.

Dr. Pradeep Kumar Mallick is currently working as an Associate Professor in the School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Odisha, India, and was a Post-Doctoral Fellow (PDF) at Kongju National University, South Korea. He received his Ph.D. from Siksha O Anusandhan University, his M.Tech. (CSE) from Biju Patnaik University of Technology (BPUT), and his MCA from Fakir Mohan University, Balasore, India. Besides academics, he is also involved in various administrative activities as a member of the Board of Studies, the Doctoral Research Evaluation Committee, the Admission Committee, etc. His areas of research include algorithm design and analysis, data mining, image processing, soft computing, and machine learning. He has published 5 books and more than 55 research papers in national and international journals and conference proceedings.

Dr. Valentina Emilia Balas is currently a Full Professor in the Department of Automatics and Applied Software at the Faculty of Engineering, “Aurel Vlaicu” University of Arad, Romania.
She holds a Ph.D. in Applied Electronics and Telecommunications from the Polytechnic University of Timisoara. Dr. Balas is author
of more than 280 research papers in refereed journals and international conferences. Her research interests are intelligent systems, fuzzy control, soft computing, smart sensors, information fusion, and modeling and simulation. She is the Editor-in-Chief of the International Journal of Advanced Intelligence Paradigms (IJAIP) and of the International Journal of Computational Systems Engineering (IJCSysE), a member of the editorial boards of several national and international journals, and the director of the Intelligent Systems Research Centre at Aurel Vlaicu University of Arad. She is a member of EUSFLAT and SIAM, a Senior Member of IEEE, and a member of the TC on Fuzzy Systems (IEEE CIS), the TC on Emergent Technologies (IEEE CIS), and the TC on Soft Computing (IEEE SMCS).

Dr. Bhabani Shankar Prasad Mishra has been working as an Associate Professor and Associate Dean in the School of Computer Engineering at KIIT University, Bhubaneswar, Odisha, since 2006. He received his Ph.D. in Computer Science from F.M. University, Balasore, Odisha, in 2011. He completed his post-doctoral research at the Soft Computing Laboratory, Yonsei University, Seoul, South Korea, under the Technology Research Program for Brain Science through the National Research Foundation, Ministry of Education, Science & Technology, South Korea. His research interests include evolutionary computation, neural networks, pattern recognition, data warehousing and mining, and big data. He has published about 61 research papers in refereed journals and conferences, authored one book, and edited four books. He also serves as an editorial member of various journals.
Comparative Analysis of Multi-stage DC–DC Boost Converters Prajna Dash, Snehalika, and Babita Panda
Abstract Over the last few decades, DC–DC converters have undergone major developments and are used in many applications. This paper presents a review of multi-stage boost converters. The study focuses mainly on improved versions of the simple boost converter, formed by adding stages to it, known as positive output cascaded boost converters. These multi-stage boost converters enhance their voltage using the super-lift technique. In this paper, the main and additional series configurations of these positive output cascaded converters are reviewed. A two-stage circuit is formed by the simple addition of an inductor, a diode and a capacitor, and a DEC circuit, when added, can enhance the voltage transfer gain effectively. Luo converters use voltage lift techniques to obtain a higher output voltage at the same load, and a comparative analysis is provided for different duty cycles. The simulations are performed using MATLAB Simulink. Keywords Multi-stage boost · Main series · Additional series · DEC · Simulation · Comparative analysis
1 Introduction The DC to DC converter is widely used in switch-mode (ON/OFF) power supplies and in DC motor drives. It has applications in both isolated and non-isolated converters. Different converter topologies are boost, buck, buck-boost, flyback, interleaved, forward, Cuk and push-pull; each converter operates with its own advantages and disadvantages. The converter in which the voltage at the output P. Dash (B) · Snehalika · B. Panda School of Electrical Engineering, KIIT University, Bhubaneswar, India e-mail: [email protected] Snehalika e-mail: [email protected] B. Panda e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_1
is larger than the voltage at the input is called a boost, or step-up, converter. The converter in which the output voltage is less than the input voltage is called a buck, or step-down, converter. Constructionally, the buck-boost is the combined form of step-up and step-down, where the output voltage can be greater or less than the input voltage; it is used in battery-powered systems and adaptive control applications. The Cuk converter is similar to the buck-boost type. The conversion of AC to DC and DC to DC is achieved by the flyback converter; constructionally, it resembles a buck-boost converter in which the inductor L is split to form a transformer, and it is used in cell phone chargers and the CRTs of TVs and monitors. Interleaved boost converters are formed by a parallel arrangement of two single-phase boost converters, which reduces the current ripple by 45% and the voltage ripple by 30%. There are certain disadvantages, like slow transient response (in boost and buck), EMI issues, poor efficiency at high gain (in buck-boost), filter size issues (in Cuk), more current ripple and more losses (in flyback), and cost and complexity issues (in interleaved boost). Due to these shortcomings, Luo converters and positive output cascaded boost converters, which use improved techniques to increase the output voltage, are preferred. In this paper, we discuss mainly positive output cascaded boost converters, and a comparative analysis is provided based on different duty cycles. In Sect. 2, an overview of DC–DC converters is given, and in Sect. 3, the positive output cascaded converter and its divisions and sub-divisions are discussed. In Sect. 4, the analysis of the simulation results of each converter is provided along with its parameter values. The comparison based on different duty cycles is presented in Sect. 5.
2 DC–DC Converter This converter is regarded as an electronic or electromechanical device that converts a DC source from one voltage level to another. It is used in medium/high power applications, including PV power systems, offshore wind turbines, electric vehicles, HVDC and telecommunication systems. It is also used in switch-mode power supplies that convert one DC voltage level to another, in which input energy is stored temporarily and then released to the output. Switching-mode DC–DC converters are more efficient than linear electronic converters, as the heat sink is reduced and the durability of the battery is increased. The basic block diagram is shown in Fig. 1. In this paper, we discuss the addition of stages to a simple boost converter without any isolation. In the next section, we discuss multi-stage boost converters and their corresponding equations and circuit diagrams.
Fig. 1 Block diagram of DC–DC converter
3 Multi-stage Boost Converter Two techniques are presented in this paper to boost the output voltage, compared to a simple boost converter, by adding stages: the voltage lift technique and the super-lift technique. The voltage lift technique increases the output voltage step by step in arithmetic progression and is used in Luo converters, whereas the super-lift technique increases the output voltage step by step in geometric progression and is used in positive output cascaded boost converters. Positive output cascaded converters are classified into five groups: main series, additional series, double series, triple series and multiple series. Here d is the duty cycle, f is the switching frequency, and T = 1/f is the switching period. In this paper, only the main and additional series are discussed [1].
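The geometric progression of the super-lift technique can be seen directly from the main-series gains derived below (Eqs. (3) and (11) are the one- and two-stage cases): an n-stage main-series converter has gain 1/(1 − d)^n. The short sketch below is an illustrative example, not taken from the paper:

```python
# Super-lift gain growth for the main series: an n-stage converter has
# transfer gain G_n = 1/(1 - d)**n; Eqs. (3) and (11) are n = 1 and n = 2.

def main_series_gain(d: float, stages: int) -> float:
    """Voltage transfer gain of an n-stage main-series converter."""
    return 1.0 / (1.0 - d) ** stages

# At d = 0.5 the gain doubles with every added stage (geometric progression):
for n in (1, 2, 3):
    print(n, main_series_gain(0.5, n))  # 2.0, 4.0, 8.0
```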
3.1 Main Series It is the first stage of a positive output cascaded boost converter. It is divided into three types, i.e., elementary, two-stage and three-stage. In this paper, only elementary and two-stage are discussed.
3.1.1 Elementary Boost
It is also called a step-up converter and is used in battery-powered devices, e.g., notebooks, mobile phones and camera flashes. The circuit diagram of the elementary boost is shown in Fig. 2(i), and its corresponding ON and OFF states are shown in Fig. 2(ii), (iii). The capacitor C1 is charged to the output voltage Vo. The inductor current increases under voltage Vin during the turn-ON period dT and then decreases under −(Vo − Vin) during the turn-OFF period (1 − d)T.

The output voltage:

Vo = Vin/(1 − d)    (1)
Fig. 2 (i) Circuit diagram of elementary boost converter at normal condition. (ii) Turn-ON condition. (iii) Turn-OFF condition
The ripple in the inductor current iL1:

ΔiL1 = (Vin/L1) dT = ((Vo − Vin)/L1)(1 − d)T    (2)
The voltage transfer gain:

G = Vo/Vin = 1/(1 − d)    (3)

The average inductor current:

IL1 = Io/(1 − d)    (4)
The deviation ratio of the current in inductor L1:

ζ1 = (ΔiL1/2)/IL1 = d(1 − d)²R/(2 f L1)    (5)
Since the value of ζ1 is less than one, the converter works in CCM. The crest (peak-to-peak ripple) of the output voltage Vo:

Δvo = ΔQ/C1 = Io(1 − d)T/C1 = (1 − d)Vo/(f C1 R)    (6)
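As a numeric sanity check of the elementary boost relations, Eqs. (1)–(6), the short script below evaluates them for one operating point. Vin and f match Sect. 4, while L1, C1 and the load R are illustrative assumptions, not values from the paper:

```python
# Numeric check of the elementary boost relations, Eqs. (1)-(6).
Vin, d, f = 170.0, 0.5, 60e3        # input voltage, duty cycle, switching frequency
L1, C1, R = 1e-3, 60e-6, 200.0      # assumed inductance, capacitance, load

Vo = Vin / (1 - d)                  # Eq. (1): 340 V
G = Vo / Vin                        # Eq. (3): gain 1/(1 - d) = 2
Io = Vo / R
IL1 = Io / (1 - d)                  # Eq. (4): average inductor current
dIL1 = Vin * d / (f * L1)           # Eq. (2): inductor current ripple over dT
dVo = (1 - d) * Vo / (f * C1 * R)   # Eq. (6): output voltage ripple
print(Vo, G, IL1, dIL1, dVo)
```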
The deviation ratio of the output voltage Vo:

ε = (Δvo/2)/Vo = (1 − d)/(2R f C1)    (7)

3.1.2 Two-Stage Boost
It is made by adding components to the elementary boost circuit: the components (L2–D2–D3–C2) are added to the simple boost [1, 2]. The circuit comprises a single switch, two inductors, two capacitors and three diodes. The circuit diagram of the two-stage boost and its corresponding ON and OFF conditions are shown in Fig. 3(i)–(iii). The voltage across capacitor C1 is V1, and C2 is charged to the output voltage Vo. The current in inductor L2 increases under V1 during the turn-ON period dT and decreases under −(Vo − V1) during the turn-OFF period (1 − d)T.

The voltage across capacitor C1:

V1 = Vin/(1 − d)    (8)
The ripple in inductor current iL2:

ΔiL2 = (V1/L2) dT = ((Vo − V1)/L2)(1 − d)T    (9)
The output voltage:

Vo = Vin/(1 − d)²    (10)
Fig. 3 (i) Circuit diagram of two-stage boost converter at normal condition. (ii) Turn-ON condition. (iii) Turn-OFF condition
The voltage transfer gain:

G = Vo/Vin = 1/(1 − d)²    (11)
The ripple in iL1:

ΔiL1 = (Vin/L1) dT    (12)

The average inductor currents:

IL1 = Io/(1 − d)²    (13)

IL2 = Io/(1 − d)    (14)
Fig. 4 DEC circuit
The deviation ratios of both inductor currents:

ζ1 = (ΔiL1/2)/IL1 = d(1 − d)²T Vin/(2L1 Io) = d(1 − d)⁴R/(2 f L1)    (15)

ζ2 = (ΔiL2/2)/IL2 = d(1 − d)T V1/(2L2 Io) = d(1 − d)²R/(2 f L2)    (16)
Since the current variation ratios are less than one, the converter works in continuous conduction mode. The output voltage variation ratio of Vo is:

ε = (Δvo/2)/Vo = (1 − d)/(2R f C2)    (17)
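The CCM condition stated above can be checked numerically. The sketch below evaluates Eqs. (15)–(17) for one operating point; the inductance, capacitance and load values are illustrative assumptions, not the paper's:

```python
# Illustrative CCM check for the two-stage boost, Eqs. (15)-(17).
Vin, d, f, R = 170.0, 0.5, 60e3, 200.0   # assumed operating point and load
L1 = L2 = 1e-3                           # assumed inductances
C2 = 60e-6                               # assumed output capacitance

zeta1 = d * (1 - d) ** 4 * R / (2 * f * L1)   # Eq. (15)
zeta2 = d * (1 - d) ** 2 * R / (2 * f * L2)   # Eq. (16)
eps = (1 - d) / (2 * R * f * C2)              # Eq. (17)

# Both ratios below one -> continuous conduction mode, as the text states.
assert zeta1 < 1 and zeta2 < 1
print(zeta1, zeta2, eps)
```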
3.2 Additional Series The additional series is formed by adding the components (D11–D12–C11–C12) to the output of the boost converter. DEC stands for Double/Enhanced Circuit [1], shown in Fig. 4. Here, the output voltage becomes two times the input voltage, so the DEC can increase the voltage gain of the boost converter. In this section, only the elementary and two-stage additional boost circuits are discussed.
3.2.1 Elementary Additional Boost

It is formed by adding the DEC circuit to the elementary boost converter. The circuit diagram of the elementary additional boost and its turn-ON and turn-OFF conditions are shown in Fig. 5(i)–(iii). The capacitors C1 and C11 are charged to V1, and the capacitor C12 is charged to Vo. The inductor current in L1 increases under voltage Vin during the switch-ON condition dT and decreases under −(V1 − Vin) during the switch-OFF condition (1 − d)T.
Fig. 5 (i) Circuit diagram of elementary additional boost converter at normal condition. (ii) Turn-ON condition. (iii) Turn-OFF condition
The voltage across C1 and C11:

V1 = Vin/(1 − d)    (18)
The output voltage:

Vo = 2V1 = 2Vin/(1 − d)    (19)
The deviation in the inductor current iL1:

ΔiL1 = (Vin/L1) dT = ((V1 − Vin)/L1)(1 − d)T    (20)

The voltage transfer gain:
G = Vo/Vin = 2/(1 − d)    (21)

The average inductor current:

IL1 = 2Io/(1 − d)    (22)
The variation ratio of the current through inductor L1:

ζ1 = (ΔiL1/2)/IL1 = d(1 − d)T Vin/(4L1 Io) = d(1 − d)²R/(8 f L1)    (23)
Since the value of ζ1 is less than one, this converter works in CCM. The crest of the output voltage:

Δvo = ΔQ/C12 = Io(1 − d)T/C12 = (1 − d)Vo/(f C12 R)    (24)
The deviation ratio of Vo:

ε = (Δvo/2)/Vo = (1 − d)/(2R f C12)    (25)

3.2.2 Two-Stage Additional
It is formed by the addition of a DEC circuit to the two-stage boost. The circuit diagram and its corresponding ON and OFF conditions are shown in Fig. 6(i)–(iii).

The voltage across C1:

V1 = Vin/(1 − d)    (26)
The voltage across C2 and C11:

V2 = V1/(1 − d) = Vin/(1 − d)²    (27)
The capacitor C12 is charged to the output voltage Vo. The current in the inductor L2 increases under V1 during the turn-ON period dT and decreases under −(V2 − V1) during the turn-OFF period (1 − d)T. The crest in the inductor current of L2:

ΔiL2 = (V1/L2) dT    (28)
Fig. 6 (i) Circuit diagram of two-stage additional boost converter at normal condition. (ii) Turn-ON condition. (iii) Turn-OFF condition
The output voltage:

Vo = 2V2 = 2V1/(1 − d) = 2Vin/(1 − d)²    (29)
The voltage transfer gain:

G = Vo/Vin = 2/(1 − d)²    (30)

ΔiL1 = (Vin/L1) dT    (31)

IL1 = 2Io/(1 − d)²    (32)
IL2 = 2Io/(1 − d)    (33)
The deviation ratios of the currents in L1 and L2:

ζ1 = (ΔiL1/2)/IL1 = d(1 − d)²T Vin/(4L1 Io) = d(1 − d)⁴R/(8 f L1)    (34)

ζ2 = (ΔiL2/2)/IL2 = d(1 − d)²T V1/(4L2 Io) = d(1 − d)²R/(8 f L2)    (35)
Since the values of ζ1 and ζ2 are less than one, the converter works in CCM. The output voltage ripple:

Δvo = ΔQ/C12 = Io(1 − d)T/C12 = (1 − d)Vo/(f C12 R)    (36)
The deviation ratio of the output voltage:

ε = (Δvo/2)/Vo = (1 − d)/(2R f C12)    (37)
4 Analysis of Simulation Results The simulation results for the four converters discussed above are presented in this paper, with Vin = 170 V, switching frequency f = 60 kHz and a common duty cycle of 50% for all four converters. The output voltage waveform of the simple boost is shown in Fig. 7, the two-stage boost converter in Fig. 8, the elementary additional boost converter in Fig. 9 and the two-stage additional boost in Fig. 10.
5 Comparative Analysis The comparison of these converters is evaluated under different duty cycles based on theoretical and measured values (Tables 1, 2, 3 and 4).
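The theoretical columns of the tables follow directly from the transfer gains of Eqs. (3), (11), (21) and (30). The script below recomputes them; Vin = 170 V matches Sect. 4, while the 200 Ω load is inferred from the tabulated Io = Vo/R values and is an assumption:

```python
# Theoretical output voltage and current for the four converters of Sect. 3,
# using G = 1/(1-d), 1/(1-d)^2, 2/(1-d) and 2/(1-d)^2 respectively.
Vin, R = 170.0, 200.0   # R is inferred from the tables, not stated in the paper

gains = {
    "boost":                 lambda d: 1 / (1 - d),
    "two-stage":             lambda d: 1 / (1 - d) ** 2,
    "elementary additional": lambda d: 2 / (1 - d),
    "two-stage additional":  lambda d: 2 / (1 - d) ** 2,
}

for name, g in gains.items():
    for d in (0.25, 0.5, 0.75, 0.9):
        Vo = g(d) * Vin
        print(f"{name:22s} d={d:.2f}  Vo={Vo:10.1f} V  Io={Vo / R:7.2f} A")
```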
Fig. 7 Boost output voltage
Fig. 8 Two-stage boost output voltage
Fig. 9 Elementary additional boost output voltage
Fig. 10 Two-stage additional boost output voltage

Table 1 Values of boost converter at different duty cycles

| Duty cycle | Vo (Measured) | Io (Measured) | Vo (Theoretical) | Io (Theoretical) |
|------------|---------------|---------------|------------------|------------------|
| 0.25       | 229           | 1.14          | 226.6            | 1.13             |
| 0.5        | 347           | 1.73          | 340              | 1.7              |
| 0.75       | 702.3         | 3.51          | 680              | 3.4              |
| 0.9        | 1764          | 8.81          | 1700             | 8.5              |

Table 2 Values of two-stage boost converter at different duty cycles

| Duty cycle | Vo (Measured) | Io (Measured) | Vo (Theoretical) | Io (Theoretical) |
|------------|---------------|---------------|------------------|------------------|
| 0.25       | 344.5         | 1.72          | 302.2            | 1.51             |
| 0.5        | 675.7         | 3.3           | 680              | 3.4              |
| 0.75       | 4198          | 20.99         | 2720             | 13.6             |
| 0.9        | 83.09         | 76.11         | 17,000           | 85               |

Table 3 Values of elementary additional boost converter at different duty cycles

| Duty cycle | Vo (Measured) | Io (Measured) | Vo (Theoretical) | Io (Theoretical) |
|------------|---------------|---------------|------------------|------------------|
| 0.25       | 430.6         | 2.15          | 453.3            | 2.26             |
| 0.5        | 645.6         | 3.22          | 680              | 3.4              |
| 0.75       | 1256          | 6.28          | 1360             | 6.8              |
| 0.9        | 2721          | 13.6          | 3400             | 17               |

Table 4 Values of two-stage additional boost converter at different duty cycles

| Duty cycle | Vo (Measured) | Io (Measured) | Vo (Theoretical) | Io (Theoretical) |
|------------|---------------|---------------|------------------|------------------|
| 0.25       | 615.6         | 3.07          | 604.44           | 3.02             |
| 0.5        | 1353          | 6.76          | 1360             | 6.8              |
| 0.75       | 5810          | 29.05         | 5440             | 27.2             |
| 0.9        | 71,796.5      | 66.7          | 34,000           | 170              |
6 Summary An advanced type of DC–DC converter, the positive output cascaded boost converter, has been studied; it significantly increases the output voltage in geometric progression. The high output voltage is obtained with the help of the super-lift technique used in positive output cascaded converters. These converters are applied in industries requiring a very high output voltage. Simulation results are verified and presented, and a comparison over different duty cycles is also provided.
References 1. F.L. Luo, Positive output Luo converters: voltage lift technique. IEE Proc. Electr. Power Appl. 146(4), 415–432 (1999) 2. F.L. Luo, Negative output Luo converters: voltage lift technique. IEE Proc. Electr. Power Appl. 146(2), 208–224 (1999)
A Review on Finding Optimum Paths with Genetic and Annealing Algorithms M. Achyuth Ram, Kapula Kalyani, and Durgesh Nandan
Abstract This paper explains how route planning with the optimum path can be done with the help of virtual instrumentation algorithms. Here, we use both the genetic and the annealing algorithms to find the optimum results. The genetic algorithm is used because it offers flexibility: one need not know how to solve the problem, but only has to evaluate the quality of candidate solutions. The GA is a skeleton framework in optimization, and it does not give the best solution everywhere; in some cases, we need to take up other approaches such as heuristics. The genetic algorithm has a disadvantage at local minima, and it does not adapt well to complexity. We therefore consider the other algorithm, called the annealing algorithm, which can produce globally optimum results. Thus, the genetic algorithm can be enhanced by combining it with the annealing algorithm. The parameters are explained, and the results are tabulated below. Keywords Mutation · Crossover · Optimization · Fitness function · Chromosome · Efficiency
1 Introduction In the modern era, optimum results are required for route planning of robots and, similarly, for the traveling salesman; multicast broadcasting requires optimum route planning to minimize cost and increase efficiency. Many virtual techniques like the genetic algorithm, annealing algorithm, heuristic algorithms, and many other algorithms might M. Achyuth Ram (B) · K. Kalyani Department of ECE, Aditya College of Engineering and Technology, Surampalem, Andhra Pradesh, India e-mail: [email protected] K. Kalyani e-mail: [email protected] D. Nandan (B) Accendere Knowledge Management Services Pvt. Ltd., CL Educate Ltd., New Delhi, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_2
help to get optimum results of route planning. The heuristic algorithm is a guided search; it is an approach that might or might not always find the optimum solution in a short time, and it increases efficiency by sacrificing completeness, so it can be used in problems which take a long time [1–5]. Here, in the context of the traveling salesman, he has to start from one city, cover a certain number of cities, and reach the destination without revisiting cities that have already been crossed, at the least path cost. The genetic algorithm tries to emulate the genetic evolution process to find optimum results. Its main advantage is that one need not know the way to solve the problem; we just evaluate the quality of the generated results. But its disadvantage is that it can get stuck at local minima; to avoid this, we introduce a fast deterministic method-based solution (like Kruskal's minimal spanning tree) as the initial result. The other way to get the optimum result is by combining genetic and annealing algorithms. The annealing algorithm has good global optimization capabilities; it works on the principle of thermodynamics. So, by combining genetic and annealing algorithms, we can get both locally and globally optimum solutions. The genetic algorithm uses chromosomes, crossover, mutation and the survival of the fittest [6–10].
2 Literature Review The evolutionary planner/navigator can obtain near optimality of path planning as well as high planning efficiency, and it does not need total information about the environment to plan the path of the robot [1]. In 1995, Charles C. Palmer et al. explained a new way to find the solution for optimum communication spanning tree problems using genetic algorithms. In 1994, Jing Xiao worked on evolutionary computational concepts, where the most important element is the algorithm. In the end, a comparison of the genetic algorithm's solutions with those of a good heuristic is given, which demonstrates the capability of a genetic algorithm to reach good and superior solutions. They proposed that reparation may cause a loss of genetic information and lead to a decrease in planning efficiency [2]. In 1996, Stephen Deering, Dino Farinacci, Van Jacobson and Ching-Gung Liu developed a multicast routing architecture that efficiently builds distribution trees across wide-area Internet networks, where a large number of groups are present. This approach (a) uses receiver-initiated, constrained membership advertisement for sparsely distributed multicast groups, (b) holds both shared and shortest-path tree models in a single protocol, (c) never depends on unicast protocols and (d) adopts soft-state mechanisms [3]. In 1996, Hitoshi Kanoh et al. proposed a new method to increase the searching capacity of genetic algorithms through viral infection in place of the general mutation process. Incomplete results of a CSP are taken as viruses, and a population of viruses is established as well as a population of candidate solutions. They performed experiments on randomly generated CSPs and found that the mean time required is less than that of the plain genetic algorithm [4]. In 1997,
Erol Gelenbe, Anoop Ghanwani and Vijay Srinivasan proposed that the random neural network can be utilized to drastically improve the quality of Steiner trees delivered by heuristics such as the average distance heuristic and the minimum spanning tree heuristic. They provide a factual comparison and verify that the heuristics rearranged using the neural network yield better trees [5]. In 1999, Zhang Xiang et al. explained a new genetic algorithm for multicast routing to meet the needs of broadcasting from one point to multiple receiver points with a minimal spanning tree. Its main idea is to introduce an orthogonal scheme into the crossover operation, so it can find the solution in a statistically sound manner, and it supports both parallel implementation and execution [6]. In 2001, Xu Jing-Wen et al. considered the minimum spanning tree problem, an NP-hard problem that does not admit a polynomial-time algorithm. They proposed a quick optimization technique for the MST problem, the Gradient Gene Algorithm; compared to other algorithms for the MST problem, it is very efficient, simple and reliable [7]. In 2002, Chun-Wei Tsai established a new genetic-algorithm-based method for building the minimal-cost tree under time-delay constraints, a novel multiple-searching genetic algorithm for multimedia multicast routing. As per the results, the computation cost of the proposed method is lower when compared to others [8]. In 2004, Mehdi Karabi, Mahmood Fathy and Mehdi Dehghan proposed a heuristic genetic algorithm which can resolve the problem of bandwidth-delay-constrained least-cost QoS multicast routing, considering QoS metrics like the end-to-end delay and the minimum spanning tree cost. The outcomes show that the algorithm gives a lower average tree cost than the existing algorithms within a reasonably good time [9]. In 2015, Muhao Chen et al.
proposed a solution for the traveling salesman problem by combining both genetic and annealing algorithms, so the GA gains the advantages of both local and global optimization; the path cost is minimized, and efficiency and accuracy are increased [10]. In 2004, Tatarakis and V. Zeimpekis explained how to minimize vehicle routing problems with recent developments in virtual reality, and further proposed systematic distribution methods [11]. In 1996, Jing Xiao et al. introduced a new method based on evolutionary concepts; the main importance of the EP/N is its capability for self-tuning to adapt to a wide range of task environments [12]. In 2007, Hitoshi Kanoh et al. proposed that dynamic route planning for cars can be done with virtual techniques like the virus genetic algorithm, where a small region of an arterial road is considered as the virus [13]. In 2002, Hitoshi Kanoh addressed dynamic route planning with the help of the genetic algorithm: if traffic congestion changes during driving, an alternative route is selected [14]. In 2003, A. T. Haghighat et al. recommended a QoS-based evolutionary algorithm and concluded that this algorithm outperforms the GA [15].
3 Methodology 3.1 Genetic Algorithm Design Description of the Environment In this traveling salesman problem, the cities to be covered are fixed before the examination. An n × n matrix d(i, j) holds the distance between each pair of cities, computed from their coordinates as

d(i, j) = sqrt((Xi − Xj)² + (Yi − Yj)²)
Method of Code Design In the coding, the model is designed by generating random permutations to encode the complete set of cities. Fitness Evaluations Here, d(i, j) is the distance between two cities; summing these distances along a tour gives the total distance, and the fitness is its inverse so that shorter tours score higher:

distance = Σ d(i, j) over consecutive cities of the tour
Fitness = 100/distance
General Procedure Firstly, we use the genetic algorithm to get globally optimized results, and then we use the annealing algorithm for local optimization. Let us understand how the genetic algorithm works: start with k randomly generated chromosomes, also known as a population, where each chromosome represents a separate solution; from the k chromosomes, create k offspring through mutation and crossover. Initialization of Population The initialization of the population designs five dissimilar routes/paths joining four dissimilar cities, using a function that creates a one-dimensional random vector as an individual of the genetic group; this is repeated for n iterations. Finding Fitness of Individual The fitness function is used to find the cost of every individual path; the shorter the path, the higher the fitness. Crossover Crossover is the process in which some portions of two chromosomes are interchanged.
Mutation Mutation is the process of inverting some part of a single chromosome; unlike crossover, it involves only one chromosome, not two. Selection of Fittest Individual Selection picks the best and shortest path by removing the unsuitable paths, which leads to the fittest and optimum path.
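The GA steps described above (fitness evaluation, order-preserving crossover, inversion mutation and survival of the fittest) can be sketched for a small TSP instance. The city coordinates, population size and rates below are illustrative assumptions, not the paper's parameters:

```python
# Minimal GA sketch for a small traveling salesman problem.
import math
import random

random.seed(1)
cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 0), (5, 5)]  # assumed coordinates

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def fitness(tour):
    return 100.0 / tour_length(tour)   # shorter path -> higher fitness

def crossover(a, b):
    """Order crossover: copy a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [c for c in b if c not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(tour, rate=0.2):
    """Inversion mutation: reverse a random segment of one chromosome."""
    if random.random() < rate:
        i, j = sorted(random.sample(range(len(tour)), 2))
        tour[i:j] = reversed(tour[i:j])
    return tour

pop = [random.sample(range(len(cities)), len(cities)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)          # survival of the fittest
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
best = max(pop, key=fitness)
print(round(tour_length(best), 2))
```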
3.2 Design of Annealing Algorithm Initial Condition The genetic algorithm's result is taken as the starting point for the annealing. Assuming an initial temperature of 500 °C, the cooling recurrence is

Tk = A · Tk−1, k = 1, 2, …

where A always lies between 0.88 and 0.9. Rules of Metropolis The Metropolis rule decides whether to take the new path or not, based on the change in energy Δf between two points. If Δf is less than 0, the new path is accepted. If the acceptance probability exp(−Δf/T) at temperature T is higher than a random number, the path is also accepted. Finally, the result obtained is the best optimum result.
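A minimal sketch of the annealing stage, with geometric cooling Tk = A · Tk−1 and the Metropolis acceptance rule. In the combined method the starting tour would come from the GA; here it is random so the sketch is self-contained, and the city coordinates are illustrative assumptions:

```python
# Simulated annealing sketch for a small TSP.
import math
import random

random.seed(2)
cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 0), (5, 5)]  # assumed coordinates

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = random.sample(range(len(cities)), len(cities))  # GA result would go here
T, A = 500.0, 0.9                       # initial temperature, cooling factor
while T > 1e-3:
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]   # neighbour move
    delta = tour_length(candidate) - tour_length(tour)
    # Metropolis rule: always accept improvements; accept worse moves with
    # probability exp(-delta / T) so the search can escape local minima.
    if delta <= 0 or math.exp(-delta / T) > random.random():
        tour = candidate
    T *= A                              # geometric cooling: T_k = A * T_{k-1}
print(round(tour_length(tour), 2))
```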
4 Simulations and Solutions Figure 1 shows the main functional block diagram of the genetic algorithm (Table 1).
5 Conclusion The table above shows that optimized results can be obtained by combining the genetic and annealing algorithms. The individual outcomes of the genetic and the annealing algorithms are not as good as the outcome of the combined genetic–annealing algorithm. This paper uses the advantages of both the genetic algorithm and the annealing algorithm; their combination gains the benefits of both local and global optimization of the path cost. This leads to increased accuracy and increased efficiency of the solution. The above results show the practicality of the increase in accuracy.
Fig. 1 Flowchart of genetic algorithm
Table 1 Following are the results we observe

| References | Algorithm           | Fitness 1 | Fitness 2 | Average |
|------------|---------------------|-----------|-----------|---------|
| [2]        | Genetic algorithm   | 31.25     | 32.163    | 31.07   |
| [15]       | Annealing algorithm | 33.18     | 34.52     | 33.85   |
| [15]       | Genetic–annealing   | 35.16     | 36.67     | 35.92   |
References 1. J. Xiao, The evolutionary planner/navigator in a mobile robot environment. Handb. Evol. Comput. 1–11 (2004). https://doi.org/10.1887/0750308958/b386c103 2. C.C. Palmer, A. Kershenbaum, An approach to a problem in network design using genetic algorithms. Networks 26, 151–163 (1995). https://doi.org/10.1002/net.3230260305 3. S. Deering, D.L. Estrin, D. Farinacci, V. Jacobson, C.G. Liu, L. Wei, The PIM architecture for wide area multicast routing. IEEE/ACM Trans. Netw. 4, 153–162 (1996). https://doi.org/10. 1109/90.490743 4. H. Kanoh, M. Matsumoto, K. Hasegawa, N. Kato, S. Nishihara, Solving constraint-satisfaction problems by a genetic algorithm adopting viral infection. Eng. Appl. Artif. Intell. 10, 531–537 (1997). https://doi.org/10.1016/s0952-1976(97)00035-3 5. E. Gelenbe, A. Ghanwani, V. Srinivasan, Improved neural heuristics for multicast routing. IEEE J. Sel. Areas Commun. 15, 147–155 (1997). https://doi.org/10.1109/49.552065
Mathematical Evaluation of Solar PV Array with T-C-T Topology Under Different Shading Patterns V. BalaRaju and Ch. Chengaiah
Abstract Solar photovoltaic (SPV) array topologies are formed by the electrical interconnections between the modules of an SPV array, which consists of PV modules connected in series and parallel. The main conventional SPV array topologies are the series, parallel, Total-Cross-Tied (T-C-T), series-parallel, honeycomb, and bridge-linked types. The Total-Cross-Tied topology performs better than the other connection schemes. This paper presents the mathematical evaluation of a 6 × 6 conventional T-C-T SPV array topology under different shading patterns, namely short shading, half-array shading, and long shading. The electrical equivalent circuit of the SPV array TCT topology is analyzed by applying Kirchhoff's laws at the nodes and loops of the topology. The performance of the TCT array under each shading pattern is investigated, and the locations of the global maximum power point (GMPP) on the output power-voltage characteristics are determined theoretically. Keywords PV array · Topology · Shaded patterns · Array power
1 Introduction The increasing worldwide demand for electrical energy, together with the environmental problems and global warming caused by fossil fuels, has driven the growing adoption of renewable energy for power generation. Renewable sources offer an alternative means of meeting this demand. Among all renewable energy sources, the photovoltaic (PV) system has advantages over the others owing to recent developments in PV technology, the falling price of PV modules, and its rugged and V. BalaRaju (B) · Ch. Chengaiah Department of EEE, SV University College of Engineering, Tirupati 517502, India e-mail: [email protected] Ch. Chengaiah e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_3
simple design requiring very little maintenance, government subsidies, absence of pollution, and so on [1]. Solar power is obtained by the direct conversion of sunlight into electricity. The performance of an SPV system depends on solar irradiance, shading, aging, temperature, and degradation effects; the most influential factors are temperature and solar irradiance. A solar PV system has a unique maximum power point (MPP) under uniform irradiance, whereas multiple peaks (local and global) occur under non-uniform irradiance; the global peak is the one of interest on the output P–V characteristics [2]. Solar PV cells convert solar power directly into electrical power, and they can be interconnected in different ways to obtain more power. A solar panel or module is formed by connecting PV cells in series, and a PV array is formed by the series and parallel connection of PV panels. Changing the interconnections between modules in an array yields the different SPV array connection topologies, such as the bridge-linked, parallel, Total-Cross-Tied, simple-series, series–parallel, and honeycomb types [3]. Among all topologies, TCT has the minimum mismatch (shading) power losses and the highest generated output power. Many researchers have reviewed solar PV array configurations under partial shading scenarios [4–6]. This paper presents the mathematical evaluation of a 6 × 6 SPV array with the TCT topology under three shading patterns, namely short shading (S-S), half-array shading (H-S), and long shading (L-S), as well as one un-shaded case (U). The mathematical analysis of the 6 × 6 conventional TCT configuration is derived from Kirchhoff's laws: Kirchhoff's current law (KCL) applied at the nodes and Kirchhoff's voltage law (KVL) applied around the closed loops. This paper first discusses the modeling of a single-diode SPV cell, module, and array in Sect. 2.
Different conventional topologies are presented in Sect. 3. Sections 4 and 5 present the mathematical evaluation of the SPV array with the TCT topology under the un-shaded case and the three shading patterns, together with simulation results for the 6 × 6 TCT array. Section 6 concludes the paper.
2 Modeling of Solar Photovoltaic System 2.1 Modeling of Photovoltaic Cell, Module, and Array Solar photovoltaic cells directly convert photon energy from solar irradiance into DC electricity through the photovoltaic effect. Each cell generates a small current; cells are connected in series to form a single PV panel or module, which produces a higher output. The combination of series- and parallel-connected SPV panels forms the PV array. The formation of an SPV array from cells and modules is shown in Fig. 1. The simplified model of a single-diode solar PV cell and of the array are shown in Figs. 2 and 3, respectively. The solar PV array is made of N_S series-connected and N_P parallel-connected PV panels.
Mathematical Evaluation of Solar PV Array with T-C-T Topology …
Fig. 1 Formation of solar PV cell to array
Fig. 2 Simplified model of a single diode solar cell
Fig. 3 Circuit of SPV Array with series-parallel combination of modules
The mathematical representation of a one-diode PV cell is given in Eq. (1) [7]:

I_cell = I_Lcell − I_0 [exp(q(V_cell + I_cell·R_S) / (a·k·T_c)) − 1] − (V_cell + I_cell·R_S) / R_SH   (1)
The mathematical representation of a PV module with n_s series cells is given in Eq. (2):

I_m = I_Ph − I_0 [exp(q(V_m + I_m·R_S) / (n_s·a·k·T_c)) − 1] − (V_m + I_m·R_S) / R_P   (2)
where I_Ph is the module light-generated current, represented as

I_Ph = (G / G_0) [I_LSTC + K_isc (T_c − T_STC)]   (3)
where K_isc is the module short-circuit coefficient, I_LSTC is the module light-generated current at STC, G is the incident irradiance, and G_0 is the standard irradiance of 1000 W/m². The simplified mathematical equation of the PV array [8] is given by

I_a = N_P·I_Ph − N_P·I_0 [exp(q(V_a + R_S·(N_S/N_P)·I_a) / (N_S·a·k·T_c)) − 1] − (V_a + R_S·(N_S/N_P)·I_a) / ((N_S/N_P)·R_P)   (4)

where N_P and N_S are the numbers of parallel- and series-connected panels in the SPV array, R_P and R_S are the shunt and series resistances of a panel, and V_a and I_a are the voltage and current of the SPV array. I_Lcell denotes the photocurrent, I_0 the reverse saturation current, a the ideality factor, q the elementary charge, T_c the cell temperature at STC, and k the Boltzmann constant.
3 Solar PV Array Topologies 3.1 Conventional Solar PV Array Topologies Topologies are formed according to the type of connections between the modules in an SPV array. The main conventional topologies (configurations) are classified as:

1. Simple series (S) topology
2. Series–parallel (SP) topology
3. Bridge-linked (BL) topology
4. Total-Cross-Tied (TCT) topology
Mathematical Evaluation of Solar PV Array with T-C-T Topology …
27
Fig. 4 6 × 6 solar PV array conventional topologies
5. Simple parallel (P) topology
6. Honeycomb (HC) topology.

Figure 4 displays the traditional 6 × 6 solar PV array topologies. There are 36 modules in the 6 × 6 array; in the series and parallel topologies, all 36 modules are connected in series and in parallel, respectively, as shown in Fig. 4a, b. In the S-P type, six PV modules are connected in series to form a string, and six such strings are connected in parallel, as shown in Fig. 4c. The BL and HC topologies reduce the number of electrical connections among the modules of the SPV array, as shown in Fig. 4e, f. Total-Cross-Tied (TCT): The TCT topology is formed by adding electrical connections (ties) between the rows and columns of the S-P topology, as shown in Fig. 4d. In the TCT type, the SPV modules are connected in matrix form. For example, in the 6 × 6 SPV array the 1st row consists of the modules labeled 11 to 16 and the 1st column consists of the modules 11 to 61, as shown in Fig. 4.
4 Mathematical Evaluation of T-C-T Topology 4.1 Power Developed Across the Solar PV Array with TCT Topology In this paper, the mathematical evaluation is performed for the 6 × 6 SPV array with the Total-Cross-Tied (TCT) topology shown in Fig. 6. The traditional TCT topology is evaluated by means of Kirchhoff's laws: Kirchhoff's current law applied at the nodes and Kirchhoff's voltage law applied around the closed loops [6]. Assume that under standard test conditions (STC) the maximum current generated by a PV module is I_m. A TCT topology of size m × n (rows × columns) is shown in Fig. 5. The current generated by one PV module at any irradiance G is given as,
Fig. 5 m × n SPV array with TCT topology
I_module = (G / G_0) · I_m   (5)
where G is the irradiance under the shading condition and G_0 is the standard irradiance of 1000 W/m². If a solar module receives full irradiance, its output current is high, and vice versa. The PV array voltage V_PV is given by the sum of the individual module voltages along the rows of the array, i.e.,

V_PV = Σ_{p=1}^{n} V_pm   (6)
where V_pm is the voltage of a module in row p. By applying Kirchhoff's current law, the current balance at each node is given by

Σ_{q=1}^{n} (I_pq − I_(p+1)q) = 0,  for rows p = 1, 2, …, m − 1   (7)
where V_m and I_m are the maximum voltage and maximum current generated by a PV module, respectively. The 6 × 6 solar PV array with the TCT topology is shown in Fig. 6. The voltage of the six parallel PV modules in the mth row is V_m, where
Fig. 6 6 × 6 SPV array with Total-Cross-Tied topology
m = n        for 1 ≤ n ≤ 6
m = n − 6    for 7 ≤ n ≤ 12
m = n − 12   for 13 ≤ n ≤ 18
m = n − 18   for 19 ≤ n ≤ 24
m = n − 24   for 25 ≤ n ≤ 30
m = n − 30   for 31 ≤ n ≤ 36
The total SPV array voltage V_PV is the sum of the individual module voltages of the six rows:

V_PV = Σ_{m=1}^{6} V_m   (8)

Neglecting the voltage drops across the diodes, the total voltage of the 6 × 6 PV array is given as

V_PV = 6·V_m   (9)
Applying KCL at nodes 1–5, as shown in Fig. 6:

Σ_{m=0}^{5} (I_(6m+q) − I_(6m+q+1)) = 0,  q = 1, 2, 3, 4, 5   (10)
The TCT topology has 36 module currents and six row voltages, as shown in Fig. 6. The currents in the 1st to 6th columns of the TCT configuration are I_1 to I_6, I_7 to I_12, I_13 to I_18, I_19 to I_24, I_25 to I_30, and I_31 to I_36, and the voltages of the 1st to 6th rows are V_1 to V_6. Applying KCL to Fig. 6, the row currents of the 6 × 6 SPV array topology are given by

I_Rp = Σ_{n=1}^{6} S_pn·I_m = S_p1·I_m + S_p2·I_m + S_p3·I_m + S_p4·I_m + S_p5·I_m + S_p6·I_m   (11)

where

S_p1 = G_p1/G_0, S_p2 = G_p2/G_0, S_p3 = G_p3/G_0, S_p4 = G_p4/G_0, S_p5 = G_p5/G_0, S_p6 = G_p6/G_0   (12)

where p is the row number, n is the module number within the row, G_0 is the standard irradiance of 1000 W/m², G_p1 is the irradiance (in W/m²) on the 1st-column module of the pth row of the 6 × 6 SPV array, and I_m is the maximum current of each module.
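Eqs. (11)-(12) reduce to a row-wise sum over scaled irradiances. As a sketch (the STC module current `Im` is an assumed value), the row currents can be computed directly from the irradiance matrix:

```python
import numpy as np

# Sketch of Eqs. (11)-(12): row currents of an m x n TCT array from the
# irradiance matrix G (W/m^2).  Im is an assumed module current at STC.
G0 = 1000.0                      # standard irradiance (W/m^2)
Im = 7.8                         # module current at STC (A), assumed

def row_currents(G, Im=Im, G0=G0):
    """I_Rp = sum_n (G_pn / G0) * Im for each row p."""
    S = np.asarray(G, dtype=float) / G0     # scaling factors S_pn
    return S.sum(axis=1) * Im

G = np.full((6, 6), 1000.0)      # un-shaded 6 x 6 array
print(row_currents(G) / Im)      # -> [6. 6. 6. 6. 6. 6.]
```

Replacing rows of `G` with shaded irradiance values reproduces the row currents derived for the shading patterns in Sect. 4.3.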
4.2 Row Currents and Output Power Evaluation for SPV Array with TCT Topology Under Un-Shaded Conditions

Under un-shaded conditions, the full solar irradiance of 1000 W/m² is uniformly distributed over the whole SPV array topology. The row currents, array voltage, and array power of the 6 × 6 SPV TCT topology are calculated as follows [8]. From Eq. (11), the 1st-row current is given as

I_R1 = Σ_{n=1}^{6} S_1n·I_m = S_11·I_m + S_12·I_m + S_13·I_m + S_14·I_m + S_15·I_m + S_16·I_m   (13)

where S_11 = G_11/G_0, S_12 = G_12/G_0, S_13 = G_13/G_0, S_14 = G_14/G_0, S_15 = G_15/G_0, and S_16 = G_16/G_0
where G_11 is the irradiance on the 1st-row, 1st-column module and, similarly, G_16 is the irradiance on the 1st-row, 6th-column module of the 6 × 6 SPV array. The currents of the 2nd to 6th rows are given as

I_R2 = Σ_{n=1}^{6} S_2n·I_m = S_21·I_m + S_22·I_m + S_23·I_m + S_24·I_m + S_25·I_m + S_26·I_m   (14)

I_R3 = Σ_{n=1}^{6} S_3n·I_m = S_31·I_m + S_32·I_m + S_33·I_m + S_34·I_m + S_35·I_m + S_36·I_m   (15)

I_R4 = Σ_{n=1}^{6} S_4n·I_m = S_41·I_m + S_42·I_m + S_43·I_m + S_44·I_m + S_45·I_m + S_46·I_m   (16)

I_R5 = Σ_{n=1}^{6} S_5n·I_m = S_51·I_m + S_52·I_m + S_53·I_m + S_54·I_m + S_55·I_m + S_56·I_m   (17)

I_R6 = Σ_{n=1}^{6} S_6n·I_m = S_61·I_m + S_62·I_m + S_63·I_m + S_64·I_m + S_65·I_m + S_66·I_m   (18)
In the un-shaded case, G_0 and G_p1 to G_p6 are all 1000 W/m². The total array current is therefore

I_PV = 6·I_m   (19)

In the TCT SPV array topology, the global MPP is the product of the array voltage and current. The array current depends on the irradiance, while the array voltage is the same for all rows when the voltage drops across the diodes are neglected. Applying KVL to Fig. 6, the solar PV array output voltage is

V_PV = 6·V_m   (20)

Finally, the GMPP of the TCT topology can be written as

P_GMPP = P_PV = V_PV·I_PV = 36·V_m·I_m   (21)
The theoretical calculations of SPV array TCT topology are tabulated in Table 1.
Table 1 Parameters of the 6 × 6 solar PV array with TCT topology (un-shaded case)

| Parameter | Value |
|---|---|
| Array voltage V_PV (V) | V_PV = Σ_{n=1}^{6} V_n = 6·V_m |
| Array current I_PV (A) | I_PV = Σ_{n=1}^{6} I_n = 6·I_m |
| Array power P_PV (W) | P_PV = V_PV·I_PV = 36·V_m·I_m |
4.3 Row Currents Evaluation for SPV Array with TCT Topology Under Various Shading Patterns

Figure 7 shows the different shading patterns considered for the 6 × 6 SPV array with the TCT topology; the corresponding solar irradiance values are given in Fig. 8.

1. Short Shading Pattern (S-S)
Fig. 7 Shading patterns: (i) short shading (ii) half array shading (iii) long shading
Fig. 8 Solar irradiance values for different shading patterns
The short shading pattern is shown in Fig. 7(i). In this case, the PV modules in the 1st to 4th rows receive a uniform irradiance of 1000 W/m², while the modules in the 5th and 6th rows are shaded with solar insolations of 600 and 800 W/m², respectively. Applying KCL to Fig. 6 with shading pattern I, the currents of the 1st to 4th rows are

I_R1 = I_R2 = I_R3 = I_R4 = 6 × (1000/1000)·I_m = 6·I_m

The modules in the 5th and 6th rows are shaded, so the corresponding row currents are

I_R5 = 6 × (600/1000)·I_m = 3.6·I_m
I_R6 = 6 × (800/1000)·I_m = 4.8·I_m
2. Half-Array Shading Pattern (H-S)

The half-array shading pattern is shown in Fig. 7(ii). In this case, the PV modules in the 1st to 3rd rows receive a uniform insolation of 1000 W/m², while the modules in the 4th, 5th, and 6th rows are shaded with irradiances of 500, 600, and 800 W/m², respectively. Applying KCL to Fig. 6 with shading pattern II, the row currents are

I_R1 = I_R2 = I_R3 = 6 × (1000/1000)·I_m = 6·I_m
I_R4 = 6 × (500/1000)·I_m = 3·I_m
I_R5 = 6 × (600/1000)·I_m = 3.6·I_m
I_R6 = 6 × (800/1000)·I_m = 4.8·I_m
3. Long Shading Pattern (L-S)

The long shading pattern is shown in Fig. 7(iii). In this type of shading, the modules in the 3rd, 4th, 5th, and 6th rows receive solar insolations of 300, 500, 600, and 800 W/m², respectively. From Eq. (11), the currents of the un-shaded rows are

I_R1 = I_R2 = 6 × (1000/1000)·I_m = 6·I_m

The modules in rows 3 to 6 are shaded with different insolations, and their currents are

I_R3 = 6 × (300/1000)·I_m = 1.8·I_m
I_R4 = 6 × (500/1000)·I_m = 3·I_m
I_R5 = 6 × (600/1000)·I_m = 3.6·I_m
I_R6 = 6 × (800/1000)·I_m = 4.8·I_m

The six row currents of the SPV TCT topology differ because of the non-uniform irradiance falling on the modules of the array, and as a result multiple peaks occur on the output P–V characteristics. The locations of the global MPP on the output P–V characteristics of the conventional TCT topology under the different shadings are presented in Table 2; the row currents are listed in the order in which the PV modules are bypassed.
Table 2 Theoretical calculations of array power and GMPP in TCT topology

| Shading pattern | Row currents (I_R)^a | Array voltage (V_R) | Array power (P_R = V_R·I_R) |
|---|---|---|---|
| Un-shaded case | I_R6 = 6·I_m | 6·V_m | 36·V_m·I_m (GMPP) |
| | I_R5 = 6·I_m | 5·V_m | 30·V_m·I_m |
| | I_R4 = 6·I_m | 4·V_m | 24·V_m·I_m |
| | I_R3 = 6·I_m | 3·V_m | 18·V_m·I_m |
| | I_R2 = 6·I_m | 2·V_m | 12·V_m·I_m |
| | I_R1 = 6·I_m | 1·V_m | 6·V_m·I_m |
| Short shading pattern (S-S) | I_R5 = 3.6·I_m | 6·V_m | 21.6·V_m·I_m |
| | I_R6 = 4.8·I_m | 5·V_m | 24·V_m·I_m (GMPP) |
| | I_R4 = 6·I_m | 4·V_m | 24·V_m·I_m (GMPP) |
| | I_R3 = 6·I_m | 3·V_m | 18·V_m·I_m |
| | I_R2 = 6·I_m | 2·V_m | 12·V_m·I_m |
| | I_R1 = 6·I_m | 1·V_m | 6·V_m·I_m |
| Half-array shading pattern (H-S) | I_R4 = 3·I_m | 6·V_m | 18·V_m·I_m |
| | I_R5 = 3.6·I_m | 5·V_m | 18·V_m·I_m |
| | I_R6 = 4.8·I_m | 4·V_m | 19.2·V_m·I_m (GMPP) |
| | I_R3 = 6·I_m | 3·V_m | 18·V_m·I_m |
| | I_R2 = 6·I_m | 2·V_m | 12·V_m·I_m |
| | I_R1 = 6·I_m | 1·V_m | 6·V_m·I_m |
| Long shading pattern (L-S) | I_R3 = 1.8·I_m | 6·V_m | 10.8·V_m·I_m |
| | I_R4 = 3·I_m | 5·V_m | 15·V_m·I_m (GMPP) |
| | I_R5 = 3.6·I_m | 4·V_m | 14.4·V_m·I_m |
| | I_R6 = 4.8·I_m | 3·V_m | 14.4·V_m·I_m |
| | I_R2 = 6·I_m | 2·V_m | 12·V_m·I_m |
| | I_R1 = 6·I_m | 1·V_m | 6·V_m·I_m |

^a Row currents are listed in the order in which the modules are bypassed; "(GMPP)" marks the global maximum power under each shading pattern.
Neglecting the voltage drops across the diodes and the voltage variations across the individual rows, the array output voltage is given as

V_PV = 6·V_m   (22)

and the total array power is given as

P_PV = V_PV·I_PV   (23)
5 Results and Discussions 5.1 Theoretical Calculations and Location of GMPP In this paper, the mathematical evaluation of the 6 × 6 solar PV array with the TCT topology is performed under the three shading cases of Fig. 7(i)–(iii): short shading, half-array shading, and long shading. Table 2 shows the theoretical calculation used to determine the global maximum power of the Total-Cross-Tied solar PV array topology. From the mathematical analysis of the 6 × 6 SPV array topology, it can be concluded that:

• In the TCT topology under uniform (full) irradiance, the array current, voltage, and power are 6·I_m, 6·V_m, and 36·V_m·I_m, respectively.
• Under uniform irradiance, i.e., the non-shaded case, the TCT topology delivers its maximum array power of 36·V_m·I_m; under shading, the row currents differ because of the changed irradiance falling on the PV modules, and the corresponding power changes as well.
• In the short shading case (S-S), the 5th- and 6th-row currents are 3.6·I_m and 4.8·I_m, respectively, while the 1st to 4th rows carry 6·I_m.
• In the half-array shading case (H-S), the 1st to 3rd rows carry 6·I_m, and the 4th, 5th, and 6th rows carry 3·I_m, 3.6·I_m, and 4.8·I_m, respectively.
• In the long shading case (L-S), the 1st and 2nd rows carry 6·I_m, and the 3rd to 6th rows carry 1.8·I_m, 3·I_m, 3.6·I_m, and 4.8·I_m, respectively.

The theoretical calculations of the array power in the TCT topology are given in Table 2, and the simulated GMPP locations for the TCT topology are presented in Table 4.
5.2 Simulation Results

For the modeling and simulation of the 6 × 6 SPV array TCT topology, a 270 W Vikram solar PV module model is used in the MATLAB/Simulink environment. Table 3 shows the specifications of the Vikram solar PV module.

Table 3 Datasheet of Vikram solar module

| Electrical parameter | Value |
|---|---|
| Maximum power, P_max | 270 W |
| Cells per module, N_cell | 72 |
| Open-circuit voltage, V_OC | 44 V |
| Short-circuit current, I_SC | 8.1 A |
| Voltage at MPP, V_MP | 34.7 V |
| Current at MPP, I_MP | 7.8 A |
Figure 9a, b show the output P–V and I–V curves, respectively, obtained from the simulation of the TCT SPV array topology under the un-shaded case and the various shading patterns (Table 4).

Fig. 9 Simulation results of TCT array topology under shading patterns. a P–V curves (power in W versus voltage in V) b I–V curves (current in A versus voltage in V)
Table 4 Location of GMPP of TCT topology under shading patterns

| Shading pattern | Theoretical GMPP, P_GMPP | Simulated GMPP (W) |
|---|---|---|
| Pattern-1 (S-S) | 24·V_m·I_m | 7053 |
| Pattern-2 (H-S) | 19.2·V_m·I_m | 5613 |
| Pattern-3 (L-S) | 15·V_m·I_m | 4605 |
| Un-shaded | 36·V_m·I_m | 9620 |
6 Conclusions This paper presented the mathematical evaluation of a conventional 6 × 6 solar PV array with the TCT topology under the uniform-irradiance (non-shaded) case and under the proposed partial shading cases, namely the short, half-array, and long shading patterns. For each shading pattern, the row currents were calculated in the order in which the modules are bypassed, in order to identify the position of the global peak power on the output characteristics of the SPV array with the TCT topology. This mathematical analysis is based on the KVL and KCL equations of the module connections in the 6 × 6 SPV array. The theoretical calculations show that the output power of the TCT topology under the short shading case is higher than under the other two shading patterns: the smaller the area covered by the shading pattern in the SPV TCT topology, the higher the total array power, and vice versa. From this analysis, it can be concluded that the output power of the array depends on the shading pattern applied to the SPV array topology.
References 1. A.L. Fahrenbruch, R.H. Bube, Fundamentals of Solar Cells: Photovoltaic Solar Energy Conversion (Academic Press, Cambridge, 1983) 2. D. Assmann, U. Laumanns, U. Dieter, Renewable Energy: A Global Review of Technologies, Policies and Markets (Earthscan Publications Ltd., London, 2016) 3. B. Okan, O. Burçin, Analysis and comparison of different PV array configurations under partial shading conditions. Sol. Energy 160, 336–343 (2018) 4. S.P. Koray, PV array reconfiguration method under partial shading conditions. Int. J. Electr. Power Energy Syst. 63, 713–721 (2014) 5. Y.-J. Wang, P.-C. Hsu, An investigation on partial shading of PV modules with different connection configurations of PV cells. Energy 36, 3069–3078 (2011) 6. V. Bala Raju, Ch. Chengaiah, Power enhancement of solar PV arrays under partial shading conditions with reconfiguration methods, in 2019 Innovations in Power and Advanced Computing Technologies (i-PACT) (2019) 7. G. Sai Krishna, T. Moger, Enhancement of maximum power output through reconfiguration techniques under non-uniform irradiance conditions. Energy 187, 11591 (2019) 8. V. Bala Raju, Ch. Chengaiah, Performance analysis of conventional, hybrid and optimal PV array configurations of partially shaded modules. Int. J. Eng. Adv. Technol. (IJEAT) 9(1) (2019). ISSN: 2249-8958
Mr. V. BalaRaju received B.Tech. degree in 2010 from SV University, Tirupati and M.Tech. degree in 2013 from JNTU Anantapur. Currently pursuing Ph.D. in Department of EEE, SV University, Tirupati. His research interests include Reconfigurations of Solar PV arrays, Power quality and grid integration of renewable energy systems. Dr. Ch. Chengaiah is a Professor of the Department of EEE in SV University college of Engineering, Tirupati. He completed his Ph.D. in power system operation and control from SV University, Tirupati in 2013. He obtained his ME in power systems from NIT Trichy in 2000 and B.Tech. in EEE from SV University in 1999. His research interests include Renewable energy systems, power system operation and control.
Discrete Wavelet Transform for CNN-BiLSTM-Based Violence Detection Rajdeep Chatterjee and Rohit Halder
Abstract In this paper, our approach aims to enhance the classification of violent and non-violent activities in public areas. Violent activities lead to loss of life and destruction of public property, and these anti-social activities have been increasing at an alarming rate over the past years. Our approach, when merged with a camera surveillance system, can enable real-time automated detection of criminal activities. A DWT-based convolutional bidirectional LSTM is used to detect violent actions, and the results are compared with other existing approaches. The proposed scheme gives 94.06% classification accuracy on the widely used standard Hockey dataset. Keywords Discrete wavelet transform · Deep learning · Long short-term memory · Smart society
1 Introduction Violence and vandalism remain an alarming social issue in public places, with political, social, and economic insecurities at the root of these acts of violence. Despite being one of the most significant social issues, little work has been done to automate the detection of such acts [1]. Complete prevention of acts of violence, terrorism, vandalism, and criminal activity would only be possible if the brain signals, and the patterns that trigger anti-social thoughts, of individuals could be analyzed in real time [2, 3]. However, that would require the installation of advanced hardware to record the brain signals, which is not technically feasible. Computer vision and video analytics, on the other hand, can be a major guide as far as the study of human action is considered. Surveillance cameras installed in public places R. Chatterjee (B) · R. Halder School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha 751024, India e-mail: [email protected] R. Halder e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_4
and residential societies can be used to monitor human actions. Artificial intelligence-based models, when integrated with these pre-installed surveillance cameras, can help automate the process and alert security personnel in real time. Smart surveillance can help the government or society take immediate action to minimize the damage caused to life and property by such anti-social activities. To be part of a healthy society, we all need safer streets, parks, workplaces, and other public areas. Roadmap: Our paper is organized as follows. Section 2 describes related approaches. Section 3 gives the conceptual idea behind our proposed approach, and Sect. 4 discusses the model architecture. Section 5 explains the experimental setup of our study, and the detailed results are discussed in Sect. 6. Finally, we conclude in Sect. 7.
2 Related Works Research on human action detection started in the twentieth century, using the presence of blood, characteristic sounds, or even the angle of rotation as features of violent activity [4]. In today's context, however, surveillance cameras are inefficient at capturing the audio signals [5] related to such activities. Acts of violence come in many forms, including one-to-one violence [6], crowd violence [7, 8], and violence with weapons [9]. Violence by a crowd or mob has been identified using latent Dirichlet allocation (LDA) [10] and support vector machines (SVMs) [11]. Different approaches for detecting acts of violence have been proposed over time, including motion scale-invariant feature transform (MoSIFT) [histogram of gradients (HoG) + histogram of optical flow (HoF)] [12], Harris corner detection [13], long short-term memory (LSTM) cells, weakly supervised semantic segmentation, convolutional neural networks (CNN) [14], and ConvLSTM networks (CNN + LSTM) [15, 16]. The field of natural language processing then introduced an algorithm demonstrating the use of bidirectional long short-term memory to mimic the human brain's ability to remember patterns in a sequence of words. In our approach, we first extract the diagonal spatial features using a one-level DWT, which are then passed through a CNN to further extract essential, high-quality features from each frame. The temporal characteristics are then used to predict any act of violence. A detailed comparison of the efficiency of our approach with the previous methods is shown in the later sections.
3 Proposed Approach In this paper, we use a convolutional neural network on top of a one-level discrete wavelet transform to extract the spatial features, followed by a bidirectional long short-term memory (DWT-CNN-BiLSTM) architecture to predict an act of violence in a sequence of frames. A discrete wavelet transform is applied to each frame of the clip to extract encoded information. Sequences are built from these transformed frames and passed through a convolutional neural network to extract the spatial features. An LSTM architecture then compares the information in the current frame both with the previous frames and with the upcoming frames to identify any sequential flow of events (bidirectional LSTM). Finally, a binary classifier classifies the action based on the spatio-temporal features [17]. Hence, the architecture used in this approach is trained on spatial features and on temporal features in both directions. The model architecture is discussed further in the later sections.
4 Model Architecture 4.1 Discrete Wavelet Transform The wavelet transform decomposes an image or a signal using a small set of functions called wavelets, which are obtained through dilations and shifts of a single prototype called the mother wavelet. The discrete wavelet transform [18–20] was introduced as a fast, highly efficient, and flexible way to compute the wavelet transform of a signal and to decompose it into its sub-bands. In the discrete wavelet transform [21], the energy of a signal concentrates in specific wavelet coefficients. The 2D-DWT [22] is now a standard operation for decomposing an image into an approximation and details, representing digital signals in terms of digital filtering techniques; it is also an effective way to reduce the computational time overhead [23]. In our approach, we apply this technique in the spatial domain to extract different types of information from an image using Haar wavelets. DWT is, in fact, one of the most popular techniques for image compression. Here, we use only a one-level DWT on each image to obtain the sub-images corresponding to the approximation and to the vertical, horizontal, and diagonal features, as shown in Fig. 1. Our model is trained on the diagonal features.
Fig. 1 Discrete wavelet transform
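A minimal hand-rolled sketch of the one-level 2-D Haar decomposition described above (the sub-band naming is an assumption; in practice a library such as PyWavelets would normally be used):

```python
import numpy as np

# Minimal one-level 2-D Haar DWT sketch (unnormalised averaging and
# differencing), producing the approximation (LL) and the detail sub-bands
# described in the text.  Illustration only, not the paper's pipeline.
def haar_dwt2(img):
    img = np.asarray(img, dtype=float)
    a = (img[::2, :] + img[1::2, :]) / 2.0    # low-pass along rows
    d = (img[::2, :] - img[1::2, :]) / 2.0    # high-pass along rows
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0       # approximation
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0       # detail sub-band (assumed naming)
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0       # detail sub-band (assumed naming)
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0       # diagonal details
    return LL, LH, HL, HH

frame = np.arange(16.0).reshape(4, 4)         # stand-in for a video frame
LL, LH, HL, HH = haar_dwt2(frame)
```

The `HH` (diagonal) sub-band is the one the proposed approach feeds to the CNN; note that for this smooth ramp input its coefficients vanish, which is exactly the behaviour expected of a detail band.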
4.2 Convolutional Neural Network As we know, in the case of a multi-layer perceptron, the output is obtained by multiplying real-valued weights (w_i) with the input vector (x_i), adding a bias b, and then passing the result through a nonlinear activation function [24]:

y = σ( ∑_{i=1}^{n} w_i ∗ x_i + b )  (1)
The convolution operation for f and g is expressed as:

( f ∗ g)(t) = ∫_{−∞}^{∞} f(τ) g(t − τ) dτ  (2)
However, in a CNN, the weights form a small matrix (often 3 × 3) which, when multiplied with the pixels under it, produces a new pixel, thus acting as an image filter. Since these randomized weights can also be negative, to limit the feature values to be positive we pass the output of a CNN layer through a rectified linear unit (ReLU) activation function. A simple CNN architecture is shown in Fig. 2.
Discrete Wavelet Transform for CNN-BiLSTM-Based Violence …
Fig. 2 A simple CNN architecture
P′1 = (P1 ∗ F1) + (P2 ∗ F2) + (P3 ∗ F3) + (P5 ∗ F5) + (P6 ∗ F6) + (P7 ∗ F7) + (P9 ∗ F9) + (P10 ∗ F10) + (P11 ∗ F11)  (3)

Here, P_i represents the pixel intensity values, F_i represents the values of each cell of the filter, and P′_i represents the modified pixel values after passing the image through a convolutional layer. A max-pooling sub-layer is added over the convolved image. In a max-pooling sub-layer, a window of x × y pixels is replaced with the maximum pixel intensity value from within the window, as shown in Fig. 2. It scales down the number of input parameters while conserving the essential information [25].
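The filtering and pooling operations described above can be sketched in NumPy (an illustrative toy version of one convolutional layer with ReLU and one max-pooling sub-layer; real CNN layers are provided by deep learning frameworks):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small filter over the image (valid padding, stride 1),
    as in Eq. (3): each output pixel is the weighted sum of the
    window under the filter, followed by ReLU to keep values positive."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU activation

def max_pool(feature_map, x=2, y=2):
    """Replace each non-overlapping x-by-y window with its maximum
    pixel intensity value."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % x, :w - w % y]
    return trimmed.reshape(h // x, x, w // y, y).max(axis=(1, 3))
```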
4.3 Bidirectional LSTM Cells The basic long short-term memory cells are generally used to mimic the activity of the human brain [26]. The first layer in a basic LSTM cell is often known as the forget gate layer, denoted by f_t. The hidden state vector h_t (also known as the output vector of the LSTM unit) of the previous LSTM cell (h_{t−1}), concatenated with the current input x_t, is passed through a “sigmoid” function. The output value of the sigmoid function lies between 0 and 1: a value close to 0 indicates a forget state, and a value close to 1 denotes a remember state. The mathematical representation of the forget gate layer is

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)  (4)
The next layer is called the input gate layer, i_t:

i_t = σ(W_i · [h_{t−1}, x_t] + b_i)  (5)
The output of the forget gate layer is multiplied with the cell state vector of the previous LSTM cell (C_{t−1}). To this, the output of the input gate layer, multiplied with the candidate cell state C̃_t obtained by passing [h_{t−1}, x_t] through a “tanh” function, is added, forming the cell state vector C_t for the next LSTM cell. This cell state, upon passing through a “tanh” function, is multiplied with the output gate activation o_t, obtained by passing [h_{t−1}, x_t] through a “sigmoid” function, to form the hidden state vector for the next LSTM cell. Hence, in the final cell state C_t, a part of the features from the previous state and the newly computed features of the current cell are added up and passed on to the next state.

C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)  (6)

C_t = f_t ∗ C_{t−1} + i_t ∗ C̃_t  (7)

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)  (8)

h_t = o_t ∗ tanh(C_t)  (9)
where x_t is the input vector to the LSTM unit and b_f, b_i, b_o are the bias vectors for the forget gate layer, input gate layer, and output gate layer, respectively, as shown in Fig. 3. In the LSTM, the features are remembered and passed from state 1 to state 2, and so on up to state n. In a bidirectional LSTM cell [27], there are two sets of LSTM cells working in parallel in opposite directions. The bidirectional LSTM cells add robustness and increase the efficiency of the model.
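Equations (4)–(9) can be written out as a single time step (a toy NumPy sketch with illustrative gate weights W and biases b; the actual experiments use a deep learning framework rather than this hand-rolled cell):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step following Eqs. (4)-(9).  Each W[k] maps
    the concatenated [h_{t-1}, x_t] to a gate; b[k] is that gate's
    bias.  Keys 'f', 'i', 'c', 'o' are the forget gate, input gate,
    candidate state and output gate."""
    z = np.concatenate([h_prev, x_t])       # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])      # forget gate, Eq. (4)
    i_t = sigmoid(W['i'] @ z + b['i'])      # input gate, Eq. (5)
    c_bar = np.tanh(W['c'] @ z + b['c'])    # candidate state, Eq. (6)
    c_t = f_t * c_prev + i_t * c_bar        # new cell state, Eq. (7)
    o_t = sigmoid(W['o'] @ z + b['o'])      # output gate, Eq. (8)
    h_t = o_t * np.tanh(c_t)                # new hidden state, Eq. (9)
    return h_t, c_t
```

A bidirectional layer simply runs two such cells over the sequence, one forward and one on the reversed sequence, and concatenates their hidden states.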
Fig. 3 A simple LSTM cell
Fig. 4 Model architecture
4.4 Dense Layer The fully connected dense layers apply randomly initialized weights W_i to the features X_i. The dense layers work similarly to the multi-layer perceptron described in Sect. 4.2. The entire model architecture is shown in Fig. 4.
5 Experimental Setup 5.1 Dataset The efficiency of the DWT-CNN-BiLSTM model architecture was validated using one of the standard datasets in the paradigm of violent and non-violent action detection, namely the Hockey Fights dataset [28]. Hockey Fights dataset: The Hockey Fights dataset comprises clips from ice-hockey matches. The original video-based dataset has 500 clips depicting acts of violence and another 500 clips depicting acts of non-violence. The average duration of the 1000 clips is around 1 s. The clips have similar backgrounds and subjects.
5.2 Data Preprocessing A model cannot be trained directly on videos. Hence, we extracted frames from these clips of average duration 1 s. The average number of frames extracted per clip was around 41. The frames were reshaped to 100 × 100 pixels (denoted as x × y). The frames were then subjected to a first-level discrete wavelet transform (DWT) using the Haar wavelets. Each resultant frame was sub-divided into four sub-images, as discussed in the previous sections. The sub-image containing only the diagonal features has been used to build the sequences. The training data is an array (a NumPy1 array, in Python2) with a sequence represented in each of its rows. We have used sequences of 12 consecutive frames, with overlapping frames, to build the sequences. A sequence of frames might include the identical pattern (like gait features) in which an individual fights, or the degree of movement of fists, or the degree of rotation of an arm, whether a movement of the arm is a punch or a handshake, etc. The transitions in the pixel intensity values between two or more frames depict time-related features in a sequence. The total number of samples or sequences present in the dataset is denoted by N (N = (total number of frames)/(number of frames in a sequence)). For ease of implementation, NumPy allows the value −1 to be passed for an unknown dimension when reshaping. Hence, an array containing sequences of 12 consecutive frames, each of 100 × 100 pixels, for each color channel (RGB), with their respective class labels, was prepared. The shape of the training data is (N, n, l, b, c).3 Here, the number of channels in each frame is represented by c.
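The sequence construction described above might look as follows (illustrative only; the exact overlap stride used in the paper is not stated, so the `step` parameter below is an assumption):

```python
import numpy as np

def build_sequences(frames, seq_len=12, step=6):
    """Group consecutive DWT-processed frames into overlapping
    sequences.  `frames` is an array of shape
    (num_frames, height, width, channels); a step smaller than
    seq_len makes consecutive sequences share frames, enlarging
    the effective dataset."""
    seqs = [frames[i:i + seq_len]
            for i in range(0, len(frames) - seq_len + 1, step)]
    # Resulting shape: (N, seq_len, height, width, channels)
    return np.stack(seqs)
```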
5.3 System Configuration The paper is implemented using Python 3.6 on an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz with 8 GB RAM, running the 64-bit Windows 10 Home operating system.
5.4 Training Methodology Sequences of 12 consecutive frames, each frame of 100 × 100 dimensions, with a shape of (N, 15, 100, 100, 3), were used to train the model from scratch. The customized “adam” optimizer was used with a learning rate of 0.0005 and decay = 1e−6, with a “sigmoid” activation function at the output layer, and a batch size of 9 samples. We have used “0 or 1” as class labels instead of one-hot encoding; hence, the loss function used in our approach was “sparse categorical crossentropy.” The

1 https://www.numpy.org/.
2 https://www.python.org/.
3 N = total number of sequences, n = 15, l = 100, b = 100, c = 3.
datasets were divided into a 9:1 ratio for training and validation purposes. The entire model has been built and trained from scratch for 10 epochs only.
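The 9:1 split can be sketched as follows (the random seed and shuffling scheme are assumptions, since the paper does not state how the split was randomized):

```python
import numpy as np

def train_val_split(X, y, val_ratio=0.1, seed=42):
    """Shuffle the sequence dataset and split it 9:1 into
    training and validation parts."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_ratio)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], y[train_idx], X[val_idx], y[val_idx]
```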
6 Results and Analysis Though the volume of the dataset was small, the DWT-CNN-BiLSTM model worked quite well, overcoming the low accuracies of many of the approaches taken previously by scholars from all over the world. To overcome the low volume of the dataset, we have used overlapping frames to build 15-frame sequences. The dataset was divided into a 9:1 ratio for training and validation purposes. The following Sects. 6.1, 6.2 and 6.3 give a detailed analysis of our approach.
6.1 Accuracy Evaluation The accuracy has been evaluated by taking the mean over a radius of 1 epoch around the epoch of maximum accuracy. The best result was obtained at epoch 6, so the mean of the accuracies from epoch 5 to epoch 7 was calculated, giving a resultant accuracy of 94.06%. The validation and training accuracy graphs are shown in Figs. 5 and 6, respectively.
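The reported figure is thus a windowed mean around the best epoch; a small sketch of that evaluation rule:

```python
import numpy as np

def smoothed_best_accuracy(acc, radius=1):
    """Mean of the accuracies within `radius` epochs of the best
    epoch, the rule used above to report a single accuracy number
    from a short training run."""
    best = int(np.argmax(acc))
    lo = max(0, best - radius)
    hi = min(len(acc), best + radius + 1)
    return float(np.mean(acc[lo:hi]))
```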
Fig. 5 Validation accuracy
Fig. 6 Training accuracy
Table 1 Results obtained from different detection models

Method                        Hockey fights (%)
MoSIFT + HIK [28]             90.9
ViF [7]                       82.9 ± 0.14
Deniz et al. [29]             90.1 ± 0
Gracia et al. [30]            82.4 ± 0.4
Bilinski and Bremond [31]     93.4
ViF + OViF [8]                87.5 ± 1.7
DWT-CNN-BiLSTM (our model)    94.06
6.2 Accuracy Comparison Our proposed approach has so far obtained good results, with around 94.06% accuracy for the Hockey Fights dataset. A comparative analysis is given in Table 1.
6.3 Efficiency of DWT-CNN-BiLSTM The significant positive aspect was the use of a sequence learner model. Moreover, this approach is computationally lighter because, instead of passing a sequence of complete frames to the neural network, we have used only a part of the structure,
which is only the diagonal features. This makes our approach time efficient, while serving the prime problem with a satisfactory amount of accuracy.
7 Conclusion Our proposed DWT-CNN-BiLSTM variant provides quite good classification accuracy on the Hockey Fights dataset used. The temporal information (relevant to both the past trajectory and the probable future trajectory) in a sequence of frames helps in better classification of a violent event from a non-violent event. Despite the satisfactory performance of our proposed approach, the model architecture might need further tuning for other datasets. Validating our model on a variety of real-life and other standard datasets, where the subjects plunge into crowd violence with the use of various types of weapons, is challenging. These challenges in detecting violent and non-violent activities will be addressed in our future work.
References 1. J. Yuan, Z. Liu, Y. Wu, Discriminative subvolume search for efficient action detection, in 2009 IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 2442–2449 2. R. Chatterjee, T. Bandyopadhyay, EEG based motor imagery classification using SVM and MLP, in 2016 2nd International Conference on Computational Intelligence and Networks (CINE) (IEEE, 2016), pp. 84–89 3. R. Chatterjee, T. Maitra, S.K.H. Islam, M.M. Hassan, A. Alamri, G. Fortino, A novel machine learning based feature selection for motor imagery eeg signal classification in internet of medical things environment. Future Gener. Comput. Syst. 98, 419–434 (2019) 4. D. Nagin, R.E. Tremblay, Trajectories of boys’ physical aggression, opposition, and hyperactivity on the path to physically violent and nonviolent Juvenile delinquency. Child Dev. 70(5), 1181–1196 (1999) 5. J. Nam, M. Alghoniemy, A.H. Tewfik, Audio-visual content-based violent scene characterization, in Proceedings 1998 International Conference on Image Processing, ICIP98 (Cat. No. 98CB36269), vol. 1 (IEEE, 1998), pp. 353–357 6. A. Datta, M. Shah, N. Da Vitoria Lobo, Person-on-person violence detection in video data, in Object Recognition Supported by User Interaction for Service Robots, vol. 1 (IEEE, 2002), pp. 433–438 7. T. Hassner, Y. Itcher, O. Kliper-Gross, Violent flows: real-time detection of violent crowd behavior, in 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2012), pp. 1–6 8. Y. Gao, H. Liu, X. Sun, C. Wang, Y. Liu, Violence detection using oriented violent flows. Image Vis. Comput. 48, 37–41 (2016) 9. B. Martin, S. Wright, Countershock: mobilizing resistance to electroshock weapons. Med. Confl. Survival 19(3), 205–222 (2003) 10. M. Hoffman, F.R. Bach, D.M. Blei, Online learning for latent Dirichlet allocation. Adv. Neural Inf. Process. Syst. 856–864 (2010) 11. H. Mousavi, S. Mohammadi, A. Perina, R. Chellali, V. 
Murino, Analyzing tracklets for the detection of abnormal crowd behavior, in 2015 IEEE Winter Conference on Applications of Computer Vision (IEEE, 2015), pp. 148–155
12. L. Xu, C. Gong, J. Yang, Q. Wu, L. Yao, Violent video detection based on MoSIFT feature and sparse coding, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 3538–3542 13. D. Chen, H. Wactlar, M. Chen, C. Gao, A. Bharucha, A. Hauptmann, Recognition of aggressive human behavior using binary local motion descriptors, in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE, 2008), pp. 5238–5241 14. A. Krizhevsky, I. Sutskever, G.E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems (2012), pp. 1097–1105 15. X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, W. Woo, Convolutional LSTM network: a machine learning approach for precipitation nowcasting, in Advances in Neural Information Processing Systems (2015), pp. 802–810 16. J.R. Medel, A. Savakis, Anomaly detection in video using predictive convolutional long short-term memory networks (2016). arXiv preprint arXiv:1612.00390 17. C. Vinette, F. Gosselin, P.G. Schyns, Spatio-temporal dynamics of face recognition in a flash: it’s in the eyes. Cogn. Sci. 28(2), 289–301 (2004) 18. M. Lang, H. Guo, J.E. Odegard, C.S. Burrus, R.O. Wells, Noise reduction using an undecimated discrete wavelet transform. IEEE Sign. Process. Lett. 3(1), 10–12 (1996) 19. M.J. Shensa et al., The discrete wavelet transform: wedding the à trous and Mallat algorithms. IEEE Trans. Sign. Process. 40(10), 2464–2482 (1992) 20. H. Demirel, C. Ozcinar, G. Anbarjafari, Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition. IEEE Geosci. Remote Sens. Lett. 7(2), 333–337 (2009) 21. H. Demirel, G. Anbarjafari, Image resolution enhancement by using discrete and stationary wavelet decomposition. IEEE Trans. Image Process. 20(5), 1458–1460 (2010) 22. A.A. Abdelwahab, L.A.
Hassaan, A discrete wavelet transform based technique for image data hiding, in 2008 National Radio Science Conference (IEEE, 2008), pp. 1–9 23. D. Gupta, S. Choubey, Discrete wavelet transform for image processing. Int. J. Emerg. Technol. Adv. Eng. 4(3), 598–602 (2015) 24. S.K. Pal, S. Mitra, Multilayer perceptron, fuzzy sets, classification (1992) 25. M. Rastegari, V. Ordonez, J. Redmon, A. Farhadi, XNOR-Net: ImageNet classification using binary convolutional neural networks, in European Conference on Computer Vision (Springer, Berlin, 2016), pp. 525–542 26. M. Sundermeyer, R. Schluter, H. Ney, LSTM neural networks for language modeling, in Thirteenth Annual Conference of the International Speech Communication Association (2012) 27. R. Zhao, R. Yan, J. Wang, K. Mao, Learning to monitor machine health with convolutional bi-directional LSTM networks. Sensors 17(2), 273 (2017) 28. E.B. Nievas, O.D. Suarez, G.B. García, R. Sukthankar, Violence detection in video using computer vision techniques, in International Conference on Computer Analysis of Images and Patterns (Springer, Berlin, 2011), pp. 332–339 29. O. Deniz, I. Serrano, G. Bueno, T.-K. Kim, Fast violence detection in video, in 2014 International Conference on Computer Vision Theory and Applications (VISAPP), vol. 2 (IEEE, 2014), pp. 478–485 30. I.S. Gracia, O.D. Suarez, G.B. Garcia, T.-K. Kim, Fast fight detection. PloS One 10(4), e0120448 (2015) 31. P. Bilinski, F. Bremond, Human violence recognition and detection in surveillance videos, in 2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (IEEE, 2016), pp. 30–36
Agrochain: Ascending Blockchain Technology Towards Smart Agriculture Pratyusa Mukherjee, Rabindra Kumar Barik, and Chittaranjan Pradhan
Abstract The agriculture sector plays a pivotal role in the Indian economy, contributing about 16% of the total GDP. It not only proffers food and raw material but also employment opportunities to a gigantic proportion of the population. But issues concerning crop production, harvest, damage and destruction have been hindering its development. Smart agriculture revolutionizes the crop sector by reorienting the system to ensure food security, agricultural production and income using diverse technologies such as IoT, Bigdata, ML and AI. Each of these constantly monitors a myriad of crop production and storage factors, analyzes them and helps in better decision making. The security and privacy of this information, while it is attained, analyzed and stored, are of major concern. The fundamental characteristics of a blockchain make it the most lucrative platform to store valuable data essential for the smooth functioning of smart agriculture while safeguarding it. This paper first studies the nitty-gritty of smart agriculture and then establishes how blockchain contributes toward its effective implementation. Keywords Smart agriculture · Blockchain · Distributed ledger · IoT-based farming · Security and privacy
P. Mukherjee (B) · C. Pradhan School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India e-mail: [email protected] C. Pradhan e-mail: [email protected] R. K. Barik School of Computer Application, KIIT Deemed to be University, Bhubaneshwar, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_5
1 Introduction Agriculture [1] is the backbone of the Indian economy, proffering not only food security and raw materials but also ample employment opportunities to an enormous fraction of the population. Thus, prosperity in the agriculture domain is quite crucial for the expansion of the economic condition of the country. Adoption of conventional methods of farming, diverse climatic conditions, thefts during harvest, attacks of insects and pests, and storage hazards lamentably result in a beggared yield and crop destruction. So, in order to mitigate all such hindrances, it is imperative to establish an integrated system which will cater to all aspects affecting productivity in each stage, such as cultivating, harvesting and storing. Smart agriculture [2] is an approach that uses modernized technology and machinery to enhance the quantity and quality of agricultural products without incurring additional costs. It also modifies and reorients agricultural practices to ensure effective development and also guarantees adequate food security despite the ever-challenging climatic changes. Smart agriculture thus has abundant potential to deliver a more qualitative, quantitative and sustainable agricultural production, based on explicit approaches. It also improves the quality of life for farm workers by diminishing their heavy labor and tedious activities. Figure 1 represents some of the components of smart agriculture. Blockchain facilitates a broad spectrum of applications such as transaction of cryptocurrency, catering to financial services, designing and constructing smart cities, securing the Internet of Things and Bigdata, and so on. It has astounding benefits including anonymity, accountability, consistency, decentralization and persistency. It first came into the limelight with the inception of Bitcoin by Nakamoto [3]. Blockchain [4] is actually a public distributed database holding the encrypted ledger. It primarily differs
Fig. 1 Components of smart agriculture
Fig. 2 Block diagram of a blockchain
from a database due to its decentralization. All records in a database are centralized, whereas in a blockchain, each participant has a copy of all the records. A block is a set of the most recent records not previously included. The first, antecedent block of any blockchain is called the genesis block. Each of the successive blocks contains the hash of its preceding block. A hash is non-invertible, i.e., it is one-way: from a particular input its hash can be calculated, but the reverse is not possible. Also, a hash is collision resistant, which means it is difficult to find two different inputs with the same hash value. This feature of blockchain makes it secure, as even a minute change in a block alters its hash, which is reflected in each of the successive blocks, and thus any kind of tampering is noticed. Although each of the blocks or ledgers in a blockchain is visible to every peer in the blockchain network, they cannot be replaced, modified or newly added unless verified and validated by each and every peer. This is termed Proof of Consensus [5]. Proof of Work [6] states that for a new node to become part of the network, or for any existing node to add or modify a block, apart from Proof of Consensus being fulfilled, they also need to solve certain difficult mathematical equations or puzzles to prove their eligibility. Proof of Stake [7] states that for nodes to participate in a blockchain, they have to put something at stake; for example, each has to prove their identity and validate themselves. Since an intruder will never successfully pass the Proofs of Consensus, Work and Stake, he cannot tamper with a blockchain, thus making it secure. Figure 2 represents a block diagram of a blockchain. Blockchain is widely used to enhance security in several components of smart agriculture. This paper provides a detailed literature survey of existing technologies enabling smart agriculture.
The survey is followed by analysis of these technologies to understand the need to incorporate blockchain. Finally, it gives the novel proposed architecture.
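The hash-linking and tamper detection described in the introduction can be illustrated with Python's hashlib (a toy ledger only; it omits consensus, networking and mining, and the record format here is purely illustrative):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's data together with the previous block's hash,
    so any tampering propagates to every successor block."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a chain where each block stores the hash of its
    predecessor; the genesis block has no real predecessor."""
    chain, prev = [], '0' * 64
    for r in records:
        blk = {'data': r, 'prev_hash': prev}
        prev = block_hash(blk)
        chain.append({**blk, 'hash': prev})
    return chain

def is_valid(chain):
    """Recompute every hash and check the links; any modified
    record breaks validation from that block onward."""
    prev = '0' * 64
    for blk in chain:
        expected = block_hash({'data': blk['data'], 'prev_hash': prev})
        if blk['prev_hash'] != prev or blk['hash'] != expected:
            return False
        prev = blk['hash']
    return True
```

Changing any stored record changes that block's hash, so validation fails for it and for every successor, which is exactly the tamper-evidence property the architecture relies on.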
2 Related Work Information and Communication Technology (ICT) is being well exploited these days to enable smart farming. The capability to predict the output of the production allows prior planning for better product distribution. This also reduces the overall costs incurred, enhances profits reaped and also reduces crop wastage. Thus, smart
Fig. 3 Steps in smart agriculture
agriculture is extensively using IoT-based technologies and artificial intelligence to apply ICT in the sector of crop production. Precision farming [8] is an important application of IoT in agriculture that makes cultivation more controlled and accurate. It enhances precision by realizing smart activities like crop monitoring, vehicle tracking, field observation and inventory surveillance, in order to analyze the data collected by sensors and make intelligent decisions. The diagrammatic representation of the steps involved in precision farming is depicted in Fig. 3. Lee et al. [9] designed an IoT-based agricultural production system that predicts the production amount by gathering environmental information using sensors. This scheme also helps to improve harvest statistics by enabling efficient decision making. Channe et al. [10] utilized IoT to study the details of soil properties for gauging fertility and fertilizer requirements for cultivation. Because of inaccurate weather predictions and improper irrigation methods, farmers suffer huge financial losses. Satyanarayana and Mazaruddin [11] designed a scheme that studies the water distribution in the field, irrigation levels and rainfall predictions using GPS and ZigBee. IoT-based devices can also be trained to monitor crop growth and detect any anomalies, to effectively avert any diseases or infestations that can harm the yield. Agricultural drones, popularly called Unmanned Aerial Vehicles (UAVs) [12–14], are another example of the implementation of IoT-based technologies in agriculture. Two types of drones are mostly used, ground-based and aerial-based, for yield healthiness estimation, irrigation, seeding and sowing, soil fertility, weed identification, mid-season field analysis, etc. Several such functionalities have been identified by Veroustraete [15]. Drone use is thus time saving and results in higher yield. Puri et al.
[16] investigated the utility of drones in agriculture and highlighted the several types of drones convenient for divergent agricultural applications, along with their technical blueprints. Climate-Smart Agriculture (CSA) is a method for remodeling agricultural systems to ensure food availability in spite of harsh climatic conditions. Lipper et al. [2] and Scherr et al. [17] identified the essential elements of the CSA approach, as well as the need for it to be implemented, in their work.
Thus, the advantages of smart agriculture using IoT and AI can be leveraged for data collection and storage, to be analyzed later for better decision making and agility, higher yield, improved quality, reduced risks and wastage and excellent efficiency.
3 Analysis of Related Work Smart agriculture using IoT, Bigdata and AI technologies has the potential to be the savior of the entire industry, but it has its own set of difficulties and barriers, which are analyzed in this section of the paper. Table 1 showcases the same.

Table 1 Analysis of smart agriculture using IoT, Bigdata and AI

Parameter              Analysis
Connectivity           Uninterrupted connection capable of withstanding severe weather and open space conditions, connecting every sensor, field, barn and storehouse, is challenging
Durability             Drones and portable sensors need to be robust enough to function even in adverse climatic conditions
Choice of sensors      Compromising on the quality and selection of the sensors affects the accuracy of the collected data and its reliability
Effective analysis     Efficient analysis of the collected data is crucial to obtain actionable insights
Expenses               Maintenance of the hardware and sensors is challenging as they are typically used in the field, can be easily damaged and need to be replaced often, which in turn incurs huge expenses
Technical awareness    Lack of required skills and knowledge among the farmers regarding IoT devices and technologies may hinder them from reaping its benefits
Integration            Improper integration of the various IoT products and platforms leads to abnormalities in the functioning of these technologies, rendering them inefficient at delivering services to the farmers and consumers
Security and privacy   Confidentiality, integrity and privacy of the data collected are important for proper analysis
Data storage           A voluminous amount of data is to be dealt with. Storage and analysis of this data are important. To get the best of the analysis and service, this data has to be absolutely confidential and untampered
Malware infiltrations  Malware infiltrations and phishing attacks are a major concern as every nitty-gritty of a farm is available online
4 Proposed Smart Agrochain Architecture On the basis of its fundamental features, incorporation of blockchain along with the existing technologies will prove to be more advantageous and beneficial. Blockchain in agriculture makes the process of growing, cultivating and supplying food simpler. • All devices in the IoT setup are connected, identified and authenticated through centralized cloud servers. These cloud servers are vulnerable to security breaches, and their failure can affect the entire IoT system. The decentralized nature of blockchain thus makes it more alluring. It ensures that computation and storage of data are spread across several devices and not on one centralized server. As a result, the situation where a server failure leads to breakdown of the entire network will no longer persist. • Blockchains significantly minimize the installation and maintenance cost of servers and make scalability and connectivity easy. • The security and confidentiality provided by blockchain ensure that data is safe and untampered. Each successive block of the blockchain stores the hash of the previous block. Any change in a block alters its hash; hence, intrusion can be easily detected. Also, by adding Proof of Consensus, no modification can be made to the stored data unless all nodes agree. Unauthorized persons are not allowed to view the blockchain without providing substantial Proof of Work. • The agriculture supply chain provides all participants involved in producing and consuming a single source of information. Blockchain tracks abundant commodities and production, thus reducing illegal harvesting and shipping frauds. • Essential parameters for cultivation like soil, weather, irrigation and crop can be constantly monitored using a network of sensors, and the collected data is fed into a blockchain. This ensures the decentralized and secure availability of data, which is then analyzed for better decision making.
It helps to enhance crop quality, crop yield prediction and irrigation management. • Blockchain makes the food supply chain more transparent and trustworthy. Farm origination details, transportation details, warehouse details, expiration dates and storage temperatures are all digitally linked to the food items within the blockchain. Consumers and stakeholders can explore everything to be assured of the food quality. • Blockchain gives a clear understanding of the price differences in the food distribution market to ensure the highest traffic. • The information regarding billing and taxation is made unambiguous and unequivocal. • Blockchain also assists in stock management and ordering refills in retail shops. Figure 4 gives a block diagram of the proposed Smart Agrochain using blockchain.
Fig. 4 Proposed smart agrochain architecture using blockchain
5 Conclusion Blockchain is thus a collaborative ecosystem that establishes trust between all the parties involved. It is a technology that offers a decentralized and distributed database along with cryptographic security to share information in a safe manner. It can be concluded that blockchain addresses the challenges of urbanization by assuring better implementation of the smart agriculture system. By linking together multiple technologies like IoT and data analytics with blockchain, smart agriculture could begin to provide better agility, higher yield, improved quality, reduced risks and wastage and excellent efficiency. A detailed literature review of all the existing smart agriculture techniques has been provided in this paper. The existing technologies have been vigilantly analyzed, and the advantage of blockchain over them has been studied. The proposed scheme for the novel Smart Agrochain system incorporating blockchain has been provided. Although in its infancy, blockchain, if carefully implemented, will give enormous benefits in establishing safe, secure and intrusion-free smart agriculture in practice.
References 1. T.W. Schultz, Transforming traditional agriculture, in Transforming Traditional Agriculture (1964) 2. L. Lipper, P. Thornton, B.M. Campbell, T. Baedeker, A. Braimoh, M. Bwalya, R. Hottle, Climate-smart agriculture for food security. Nat. Clim. Change 4(12), 1068–1072 (2014) 3. S. Nakamoto, A Peer-to-Peer Electronic Cash System. Bitcoin (2008). URL: https://bitcoin.org/bitcoin.pdf 4. L. Carlozo, What is blockchain? J. Account. 224(1), 29 (2017) 5. A. Baliga, Understanding blockchain consensus models. Persistent 2017(4), 1–14 (2017) 6. I.C. Lin, T.C. Liao, A survey of blockchain security issues and challenges. IJ Netw. Secur. 19(5), 653–659 (2017) 7. A. Kiayias, A. Russell, B. David, R. Oliynykov, Ouroboros: a provably secure proof-of-stake blockchain protocol, in Annual International Cryptology Conference (Springer, Cham, 2017, August), pp. 357–388 8. H. Auernhammer, Precision farming—the environmental challenge. Comput. Electron. Agric. 30(1–3), 31–43 (2001) 9. M. Lee, J. Hwang, H. Yoe, Agricultural production system based on IoT, in 2013 IEEE 16th International Conference on Computational Science and Engineering (IEEE, 2013, December), pp. 833–837 10. H. Channe, S. Kothari, D. Kadam, Multidisciplinary model for smart agriculture using internet-of-things (IoT), sensors, cloud-computing, mobile-computing & big-data analysis. Int. J. Comput. Technol. Appl. 6(3), 374–382 (2015) 11. G.V. Satyanarayana, S.D. Mazaruddin, Wireless sensor based remote monitoring system for agriculture using ZigBee and GPS, in Proceedings of the Conference on Advances in Communication and Control Systems-2013 (Atlantis Press, 2013, April) 12. G.J. Grenzdörffer, A. Engel, B. Teichert, The photogrammetric potential of low-cost UAVs in forestry and agriculture. Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci. 31(B3), 1207–1214 (2008) 13. H. Eisenbeiss, A mini unmanned aerial vehicle (UAV): system overview and image acquisition. Int. Arch. Photogramm. Remote Sens.
Spatial Inf. Sci. 36(5/W1), 1–7 (2004) 14. P. Tripicchio, M. Satler, G. Dabisias, E. Ruffaldi, C.A. Avizzano, Towards smart farming and sustainable agriculture with drones, in 2015 International Conference on Intelligent Environments (IEEE, 2015, July), pp. 140–143 15. F. Veroustraete, The rise of the drones in agriculture. EC Agric. 2(2), 325–327 (2015) 16. V. Puri, A. Nayyar, L. Raja, Agriculture drones: a modern breakthrough in precision agriculture. J. Stat. Manage. Syst. 20(4), 507–518 (2017) 17. S.J. Scherr, S. Shames, R. Friedman, From climate-smart agriculture to climate-smart landscapes. Agric. Food Secur. 1(1), 12 (2012)
Random Subspace-Based Hybridized SVM Classification for High-Dimensional Data

Sarita Tripathy and Prasant Kumar Pattnaik
Abstract In the modern world of significant technological advances, together with the need to handle massive amounts of data in computational biology, microarray gene expression analysis has posed significant challenges to state-of-the-art data mining techniques. Such data generally involve a very large number of dimensions and are known as high-dimensional data. Classification is one of the most widely used data mining techniques, applied in a diverse range of areas such as credit card fraud detection, recognition of cancerous cells, and the retail sector. High dimensionality has rendered many existing classification techniques impractical. Extensive studies have found that the largest number of misclassified objects lie near the hyperplane by which the classes are separated. We propose an efficient SVM classifier that hybridizes an outlier detection method with SVM and uses a random forest classifier as an auxiliary classifier. This approach significantly improves the accuracy of the existing SVM classifier.

Keywords High dimension · SVM algorithm · k-nearest neighbors algorithm · Random forest algorithm · Support vector · SVM linear · Angle-based outlier detection · Hybridization
1 Introduction

Classification methods are being extensively studied in various areas. Traditional methods, comprising the most popular techniques such as artificial neural networks, decision trees, kNN (the k-nearest neighbor algorithm), and the Parzen window, have been widely used to solve different types of classification problems. Applied to low-dimensional data, these methods resulted in high

S. Tripathy (B) · P. K. Pattnaik
KIIT University, Bhubaneswar, India
e-mail: [email protected]
P. K. Pattnaik
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_6
accuracy, but in today's world of big data, where the dimensionality of data is very high, these methods are no longer efficient. With the tremendous advancement in technology, several novel techniques are being designed for classification problems involving high-dimensional datasets. Vapnik [1] proposed a very efficient classification method known as the support vector machine (SVM) algorithm [2], which is now widely used. Several properties make SVM highly suitable for problems involving high-dimensional data: (1) the ability to efficiently handle large input spaces; (2) robustness to noise; and (3) sparse solution production (i.e., the model can be defined as a function of a subset of the training samples). In recent times, hybridized data mining approaches have gained much popularity. For instance, in the hybridization approach suggested in [3], kNN is used in combination with SVM so that the most crucial information is extracted and utilized, with kNN specifying the classification results for the data points near the hyperplane; however, experimental analyses show that this has not led to much improvement for high-dimensional datasets. In this work, we suggest hybridizing SVM with the feature bagging-based outlier detection (FBOD) technique and using the median MAD method to select relevant features from high-dimensional datasets, since adopting SVM [4] without feature selection would be very complex and time-consuming: the input space would be very large and prone to outliers, which would ultimately degrade performance. Hence, a feature selection method that is robust and maintains the discriminating power of the data is an important requirement.
An auxiliary classifier [5–7], the random forest classifier, is used for the data points that are support vectors lying near the hyperplane, where most of the misclassification occurs. In Sect. 2, SVM classification is explained briefly. Median MAD normalization is explained in Sect. 3. The FBOD method is briefly presented in Sect. 4. Section 5 describes the proposed (hybridized) SVM. The experimental analysis is presented in Sect. 6, and finally, Sect. 7 concludes the paper.
2 Support Vector Machine

In this algorithm, a separating hyperplane [8] is determined by solving the saddle-point search problem for the Lagrange function, from which the following quadratic programming problem can be derived [1, 2]:

$$-L(\lambda) = -\sum_{i=1}^{S} \lambda_i + \sum_{i=1}^{S} \sum_{n=1}^{S} \lambda_i \lambda_n Y_i Y_n K(Z_i, Z_n) \to \min_{\lambda} \quad (1)$$
Here λ_i are the dual variables; Z_i is a data point of the training set; Y_i takes the value +1 or −1 and determines the class of the object Z_i in the experimental dataset. The kernel function is K(Z_i, Z_n); the regularization parameter
is C; S represents the number of objects in the dataset. The accuracy of this classifier can be further enhanced by locating the regions and data points that have a very high chance of being misclassified. In general, misclassified objects are located near the hyperplane, the boundary separating the classes. For these regions, a different technique can be used to avoid this type of misclassification.
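The support vectors can be inspected directly with an off-the-shelf SVM implementation. The sketch below is illustrative only: it assumes scikit-learn and substitutes synthetic data for the paper's datasets; `SVC.support_` exposes the indices of the training points lying closest to the hyperplane.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional dataset (not the UCI data)
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

clf = SVC(kernel="rbf", C=1.0)  # C is the regularization parameter from the text
clf.fit(X, y)

# Indices of the support vectors: the training points nearest the
# separating hyperplane, where most misclassifications tend to occur
sv_idx = clf.support_
print(len(sv_idx), "support vectors out of", len(X))
```

These indexed points are exactly the region the hybrid approach hands off to an auxiliary classifier.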
3 Median MAD Normalization

The median absolute deviation (MAD) is a measure of statistical dispersion. It is a robust statistic, more resilient to outliers in a dataset than the standard deviation. In the standard deviation, distances from the mean are squared, so large deviations are weighted more heavily and outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant. The MAD is a robust measure [9] of the variability of a univariate sample of quantitative data; it can also refer to the population parameter estimated by the MAD calculated from a sample. For a univariate dataset X_1, X_2, …, X_n, the MAD is the median of the absolute values of the residuals (deviations) from the data's median.
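The median/MAD scaling described above can be sketched in a few lines of plain Python (the data values are hypothetical):

```python
from statistics import median

def mad_normalize(values):
    """Scale values by the median and MAD instead of the mean and standard
    deviation, so a few extreme outliers barely affect the scaling."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [(v - med) / mad for v in values]

data = [10, 12, 11, 13, 12, 500]  # one extreme outlier
print(mad_normalize(data))
```

Note that the single outlier (500) has no influence on the median (12) or the MAD (1), so the remaining values stay on a sensible scale.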
3.1 Feature Bagging-Based Outlier Detection Method

The feature bagging (FB) detection method trains multiple models on different feature subsets sampled from a given high-dimensional space and then combines the model results into an overall decision. A typical example of this technique is the work of Lazarevic and Kumar [10], in which feature subsets are randomly selected from the original feature space. On each feature subset, the score of each object is estimated with an anomaly detection algorithm, and the scores for the same object are then integrated into a final score. Nguyen et al. used different detection techniques, rather than the same one, to estimate anomaly scores for each object on random subspaces. In this process, outliers are accurately determined from the outlier scores obtained by the individual outlier detection algorithms, which are then combined in order to find better-quality outliers. The computation time decreases significantly with the application of the feature bagging-based method.
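A minimal plain-Python sketch of the feature bagging idea, using the distance to the k-th nearest neighbour as a stand-in anomaly score on each random subspace (the concrete score and the toy data are illustrative assumptions, not the exact method of [10]):

```python
import math
import random

def feature_bagging_scores(data, n_subsets=10, subset_size=2, k=2, seed=0):
    """Feature bagging: score every point on several random feature
    subsets (here, distance to its k-th nearest neighbour in the
    subspace), then average the per-subset scores into a final score."""
    rng = random.Random(seed)
    dims = len(data[0])
    totals = [0.0] * len(data)
    for _ in range(n_subsets):
        feats = rng.sample(range(dims), subset_size)       # random subspace
        proj = [tuple(p[f] for f in feats) for p in data]  # project the data
        for i, x in enumerate(proj):
            dists = sorted(math.dist(q, x) for j, q in enumerate(proj) if j != i)
            totals[i] += dists[k - 1]
    return [t / n_subsets for t in totals]

# Tight cluster plus one far-away point: the outlier gets the top score
data = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 0), (9, 9, 9)]
scores = feature_bagging_scores(data)
print(scores.index(max(scores)))  # 4 — the outlying point
```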
4 RF Classification

The RF algorithm [11, 12] uses an ensemble of decision trees. This method achieves high-quality classification by combining a number of simple
classifiers. The responses from many trees are aggregated to obtain the final classification result. The quality of classification improves because this method overcomes the problem of overfitting. The technique combines the ideas of bagging (bootstrap aggregating) and the random subspace method.
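A minimal random forest example, assuming scikit-learn and synthetic data (not the paper's datasets):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Each tree is trained on a bootstrap sample with random feature subsets;
# the forest aggregates the trees' votes into the final prediction
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)
print(rf.score(X, y))
```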
5 Proposed SVM Classification

This research proposes an enhanced classification method for high-dimensional datasets using a hybridized approach. The proposed approach uses median MAD normalization, which is robust to outliers. The feature bagging-based outlier factor is determined for each data point. SVM classification is applied to the resulting filtered dataset, and the support vectors are extracted. The next step is to apply the auxiliary random forest classifier to the data points that are support vectors and relabel them. This approach results in a significant increase in accuracy.
6 Proposed Algorithm

Hybridized SVM Algorithm
Input: D(n, a) dataset // n = number of instances, a = number of attributes of the dataset
Output: Outlier set O(D)
1. O(D) = Ø, N(D) = Ø // O symbolizes the outlier set, N the normal set, Ø the empty set
2. Pre-process the dataset by applying MAD normalization
3. Apply PCA on dataset D to get dataset P(n, a) with relevant features
4. For every point Di in D, where i = 1 to n, calculate FBOF
5. T = average of all FBOF scores in D
6. For every point in D: if FBOF < T then N(D) = N(D) ∪ Di, else O(D) = O(D) ∪ Di
7. Apply SVM on N(D)
8. Set Nsv(D) = set of support vectors
9. Set Nc(D) = set of points that are not support vectors
10. Apply random forest on the set Nsv(D)
11. Update the labels of the data points in Nsv(D)
12. Print the result
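The steps of the algorithm can be sketched end to end. This is an illustrative approximation assuming scikit-learn: the Local Outlier Factor stands in for the FBOF score, and the data are synthetic rather than the paper's datasets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, random_state=1)

# Step 2: median/MAD normalization (robust alternative to z-scoring)
med = np.median(X, axis=0)
mad = np.median(np.abs(X - med), axis=0) + 1e-12
Xp = (X - med) / mad

# Step 3: PCA to keep the most informative components
Xp = PCA(n_components=10, random_state=1).fit_transform(Xp)

# Steps 4-6: outlier scores (LOF stands in for FBOF here); points
# scoring below the average score form the "normal" set N(D)
scores = -LocalOutlierFactor(n_neighbors=20).fit(Xp).negative_outlier_factor_
normal = scores < scores.mean()
Xn_set, yn_set = Xp[normal], y[normal]

# Steps 7-9: SVM on the normal set; collect its support vectors
svm = SVC(kernel="rbf").fit(Xn_set, yn_set)
sv = svm.support_

# Steps 10-11: the auxiliary random forest relabels the support-vector
# points, where most SVM misclassifications occur
rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(Xn_set, yn_set)
relabels = rf.predict(Xn_set[sv])
print("support vectors relabelled:", len(sv))
```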
7 Experimental Analysis and Results

The R programming language was used to implement the algorithms. The effectiveness of the proposed model is examined through computational experiments. The efficiency of our proposed approach is tested against some traditional classification methods on the four datasets given in Table 1, which were taken from the UCI repository. The accuracy and execution time were measured for each dataset and compared with the SVM algorithm; it is observed that the hybrid SVM performs better than SVM in terms of accuracy (Table 2). In Table 3, the accuracy of the proposed approach (SVM-Hybrid III) is compared with the existing SVM and SVM-KNN; this is also represented graphically in Fig. 1. In the next experiments, we investigated the running time of the proposed algorithm, which was observed to be slightly higher than that of the existing algorithms.
Table 1 Datasets used in experiment analysis

| Dataset | Dimension | Number of instances | Attribute characteristics |
|---|---|---|---|
| WDBC | 32 | 569 | Real |
| Lung cancer | 56 | 32 | Integer |
| MUSK | 168 | 6598 | Integer |
| ISOLET | 617 | 7797 | Integer |
Table 2 Error rate of hybrid approach for different datasets

| Dataset | No. of instances | No. of support vectors | Error | Error rate |
|---|---|---|---|---|
| WDBC | 569 | 456 | 33 (out of 113) | 0.2920 |
| Lung cancer | 32 | 25 | 2 (out of 6) | 0.3333 |
| MUSK | 6598 | 5278 | 461 (out of 1319) | 0.3495 |
| ISOLET | 7797 | 6237 | 577 (out of 1559) | 0.3701 |
Table 3 Accuracy comparison of the hybrid SVM with SVM, SVM-KNN and SVM-RF (accuracy in %)

| Dataset | No. of instances | SVM | SVM-KNN | SVM-RF | SVM-Hybrid III |
|---|---|---|---|---|---|
| WDBC | 569 | 92.00 | 97.76 | 98.12 | 99.93 |
| Lung cancer | 32 | 94.00 | 97.86 | 98.34 | 99.98 |
| MUSK | 6598 | 90.23 | 95.37 | 97.62 | 98.84 |
| ISOLET | 7797 | 89.56 | 94.45 | 97.63 | 98.33 |
Fig. 1 Accuracy comparison of different algorithms with the proposed approach (accuracy in % on the y-axis for SVM, SVM-KNN, SVM-RF and SVM-Hybrid III; series: WDBC, Lung cancer, MUSK, ISOLET)
Fig. 2 Execution time comparison of proposed approach with SVM, SVM-KNN, SVM-RF

Table 4 Execution time comparison of hybrid SVM with SVM, SVM-KNN and SVM-RF (time in seconds)

| Dataset | SVM | SVM-KNN | SVM-RF | Hybrid-SVM |
|---|---|---|---|---|
| WDBC | 32.40 | 37.66 | 36.45 | 37.33 |
| Lung cancer | 28.33 | 34.54 | 35.23 | 36.14 |
| MUSK | 47.5 | 49.67 | 51.43 | 53.06 |
| ISOLET | 42.3 | 45.06 | 46.51 | 47.65 |
Table 4 presents the comparison of the execution time of the proposed algorithm with the existing SVM and two hybrid approaches; a graphical representation follows in Fig. 2. In the table below, a comparative analysis of the hybrid approach is carried out against the existing SVM, KNN and random forest algorithms. From the analysis, it was found that our approach is better in terms of accuracy than the existing algorithms.

| Datasets | No. of support vectors | Classification type | F-measure (%) | Acc (%) | Se (%) | Sp (%) |
|---|---|---|---|---|---|---|
| WDBC (569 × 30) | 425 | SVM (hybrid) | 99.85 | 99.95 | 99.05 | 99.75 |
| | | SVM | 92.00 | 94.3 | 84.43 | 100 |
| | | KNN | 99.52 | 99.65 | 99.06 | 100 |
| | | RF | 99.76 | 99.84 | 99.98 | 99.72 |
| Lung cancer (32 × 56) | 246 | SVM (hybrid) + ABOD | 99.88 | 97.75 | 97.45 | 98.68 |
| | | SVM | 96.92 | 97.72 | 100 | 96.44 |
| | | KNN | 97.58 | 98.29 | 96.03 | 99.56 |
| | | RF | 99.60 | 99.72 | 100 | 99.56 |
| ISOLET (617 × 7797) | 497 | SVM (hybrid) | 98.87 | 97.04 | 96.34 | 99.64 |
| | | SVM | 97.76 | 96.03 | 98.78 | 98.78 |
| | | KNN | 97.89 | 97.03 | 98.34 | 99.07 |
| | | RF | 98.00 | 97.67 | 98.55 | 99.79 |
| MUSK (168 × 6598) | 66 | SVM (hybrid) | 99.56 | 98.67 | 99.34 | 99.65 |
| | | SVM | 97.65 | 98.45 | 99.45 | 99.71 |
| | | KNN | 98.34 | 99.67 | 98.54 | 99.02 |
| | | RF | 98.56 | 99.57 | 98.73 | 99.06 |
8 Conclusion

The hybridized SVM approach works more efficiently than normal SVM classification and other traditional approaches. The inclusion of feature bagging-based outlier detection ensures efficient removal of outliers from a high-dimensional dataset, and applying the auxiliary random forest classifier to the support vector data points corrects the misclassifications made by SVM classification, which generally occur near the hyperplane. This has been demonstrated by the experimental analysis. In future work, other auxiliary classifiers will be tried to further improve the classification.
References

1. V. Vapnik, Statistical Learning Theory (Wiley, New York, 1998)
2. L. Demidova, I. Klyueva, Y. Sokolova, N. Stepanov, N. Tyart, Intellectual approaches to improvement of the classification decisions quality on the base of the SVM classifier. Procedia Comput. Sci. 222–230 (2017)
3. L. Demidova, S. Yu, A novel SVM-kNN technique for data classification, in 6th Mediterranean Conference on Embedded Computing (MECO'2017) (2017), pp. 459–462
4. R. Li, H.-N. Wang, H. He, Y.-M. Cui, Z.-L. Du, Support vector machine combined with K-nearest neighbors for solar flare forecasting. Chin. J. Astron. Astrophys. 7(3), 441–447 (2007)
5. F. Angiulli, C. Pizzuti, Outlier mining in large high-dimensional data sets. IEEE Trans. Knowl. Data Eng. 17(2), 203–215 (2005)
6. L. Yu, S. Wang, K.K. Lai, L. Zhou, Bio-Inspired Credit Risk Analysis: Computational Intelligence with Support Vector Machines (Springer, Berlin, Heidelberg, 2008)
7. L.I. Kuncheva, J.J. Rodriguez, C.O. Plumpton, D.E.J. Linden, S.J. Johnston, Random subspace ensembles for fMRI classification. IEEE Trans. Med. Imaging 29(2), 531–542 (2010)
8. L. Demidova, I. Klyueva, Improving the classification quality of the SVM classifier for the imbalanced datasets on the base of ideas the SMOTE algorithm, in ITM Web of Conferences (2017)
9. L. Demidova, I. Klyueva, SVM classification: optimization with the SMOTE algorithm for the class imbalance problem, in 6th Mediterranean Conference on Embedded Computing (MECO'2017) (2017), pp. 472–476
10. N. Chawla, A. Lazarevic, L. Hall, K. Bowyer, SMOTEBoost: improving the prediction of minority class in boosting, in Proceedings of the Principles of Knowledge Discovery in Databases (PKDD-2003), Cavtat, Croatia (2003)
11. T. Hastie, R. Tibshirani, J. Friedman, Random forests, in The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Chap. 15 (Springer, Berlin, 2009)
12. M. Pal, Random forest classifier for remote sensing classification. Int. J. Remote Sens. 217–222 (2015)
MLAI: An Integrated Automated Software Platform to Solve Machine Learning Problems

Sayantan Ghosh, Sourav Karmakar, Shubham Gantayat, Sagnik Chakraborty, Dipyaman Saha, and Himansu Das
Abstract Artificial intelligence has greatly enhanced the decision-making capability of many organizations. Analyzing the structured and unstructured data that has grown exponentially in recent years is a really challenging task. To address this issue, we have developed the MLAI software tool, which automates the process of typical data science problems. MLAI combines a number of efficient approaches to enhance the accuracy of several complex predictive analysis problems for both structured and unstructured data. It creates a seamless interface for the user to interpret the data and also provides custom parameters for tuning the hyperparameters to address specific business needs. The objective of this paper is to automate classification problems through a software tool. It can intelligently analyze a whole dataset, determine feature importances, and reduce the dimensions of the dataset. It incorporates advanced visualization techniques using Plotly Express graphs. MLAI presents a development environment that automates various supervised machine learning techniques such as Logistic Regression (LR), K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), Naive Bayes (NB), and Support Vector Machine (SVM) on several datasets. A comparative analysis of the results of the aforesaid algorithms is presented, and it is observed that RF outperforms the rest of the classification algorithms.

Keywords Driverless machine learning · Classification algorithms · Feature extraction · PCA · Natural language processing

S. Ghosh (B) · S. Karmakar · S. Gantayat · S. Chakraborty · D. Saha · H. Das
School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India
e-mail: [email protected]
S. Karmakar e-mail: [email protected]
S. Gantayat e-mail: [email protected]
S. Chakraborty e-mail: [email protected]
D. Saha e-mail: [email protected]
H. Das e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_7
1 Introduction

Machine Learning (ML) [1, 2] is a key research area in the field of scientific and statistical models. It learns and solves real-world problems, improving automatically from experience without being explicitly programmed. The main objective of ML is to enhance machines' logical ability to solve data-intensive problems. It is categorized into three major areas: supervised learning, unsupervised learning and reinforcement learning. Supervised ML comes into play in classification as well as predictive analysis problems. It acts as a pipeline architecture comprising the following steps: feeding of raw data, application of data processing techniques, extraction of meaningful features, and application of ML models to these features. Finally, the generated model is brought into play for future related problems. Supervised learning is further divided into two categories: classification [3–8] and regression. Classification is used to make predictions, especially on categorical data. There are two types of classification models: binary classification, which works on only two categories, and multi-class classification, which is used where more than two categories are present. Modern data science is inconceivable without proper and fast processing of data. As the shortage of data scientists increases exponentially, there is a need to automate the machine learning pipeline so that data can be processed quickly and efficiently. In order to solve real-world (structured and unstructured) data problems, we have developed MLAI, into which all the supervised machine learning algorithms are integrated and which also supports all types of data preprocessing techniques, such as handling categorical features and filling missing values, and creates interactive visualizations for a proper understanding of the data.
Users need not be professionals to use this software, as it performs the entire process in the backend. We have included classification algorithms such as DT [9, 10], KNN [11, 12], NB [13], SVM [14], RF [15, 16], LR [17, 18], and RT [19]. Nowadays, the need for text data processing in the field of natural language processing, such as sentiment analysis and text summarization, is increasing exponentially, so we have integrated NLP [20] into our software to make it full-fledged. MLAI uses a TF-IDF-based extractive text summarizer [21] and also helps to understand the embedded processing of text data, such as tokenization, lemmatization and bag of words. A detailed literature review is described in Sect. 4.
We know that recent research areas are converging on image processing of unstructured data, such as image classification, edge detection in images, and, most importantly, Optical Character Recognition (OCR) [22]. MLAI uses the Google Tesseract API for its OCR implementation, which is more accurate than many other existing OCR models and fast enough in processing. MLAI provides a seamless interface for comparing and implementing model accuracy without writing a single line of code. The end-user needs only to provide a dataset, text data for NLP application purposes, or image data for face detection and OCR recognition. In Sect. 2, the applied data preprocessing techniques are described; in Sect. 3, the classification algorithms are discussed; in Sect. 5, the software architecture is described; and in Sect. 6, the experimental results are shown.
2 Data Preprocessing

The most crucial part of data analytics and big data is proper processing, as a better understanding of the data increases the accuracy of machine learning algorithms in extracting the important information that directly affects the quality of model learning. In order to process complicated datasets, we have applied primary data preprocessing techniques: handling missing values, standardization, handling categorical features, multicollinearity removal, and Principal Component Analysis (PCA) [23].

To handle missing values, we have used the IterativeImputer (mean) from scikit-learn, which is based on the popular R algorithm for imputing missing variables, multivariate imputation by chained equations.

Another indispensable step is standard scaling, which modifies a variable so that its mean becomes 0 and its standard deviation becomes 1. We have used the standard scaler and the min-max scaler as normalization techniques. The Normalizer class from sklearn normalizes samples individually to unit norm; it is a row-based rather than column-based normalization procedure.

Categorical data are very frequent in data science and machine learning problems. As only numerical data may be fed to the classification algorithms, features that are categorical in nature must be transformed into numerical features.

Multicollinearity transpires when there is correlation between the independent variables in the feature dataset, which may cause inconsistency and dependency among independent features. The interpretation of a regression coefficient is that it represents the mean change in the dependent variable assuming the other factors are constant. We have used Pearson's correlation to identify the linear relationship between two data samples.

PCA is one of the best-established algorithms for multivariate analysis. It is a feature extraction technique that converts the input features into corresponding transformed features called principal components.
These transformed features may contain both relevant and irrelevant components. The irrelevant ones are eliminated before processing in the model, which reduces the dimensionality of the feature space and also increases the performance of the model.
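The imputation, scaling and PCA steps described above can be chained as follows. This is a sketch assuming scikit-learn, with synthetic data standing in for a user-supplied dataset; note that `IterativeImputer` still requires scikit-learn's experimental enable import.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[rng.random(X.shape) < 0.1] = np.nan  # inject some missing values

X_imp = IterativeImputer(random_state=0).fit_transform(X)  # multivariate imputation
X_std = StandardScaler().fit_transform(X_imp)              # mean 0, std 1 per column
X_pca = PCA(n_components=3).fit_transform(X_std)           # principal components
print(X_pca.shape)
```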
3 Supervised Machine Learning Algorithms

The proposed MLAI software deals with supervised machine learning algorithms: it searches for the best-fit algorithm for the data provided by the user and automatically deploys data preprocessing techniques. This paper covers the automation of classification techniques, generates advanced comparative visualizations of supervised learning classifiers, and helps determine the best features to consider. The following classification algorithms were considered: DT, SVM, KNN and NB. We have developed an automated pipeline integrated with supervised classifiers. These classification algorithms are explained in detail as follows.
3.1 Decision Tree

The decision tree follows a top-down approach and consists of recursive splits. It consists of nodes, branches and leaves. Generally, the input data are represented as attribute-value pairs. Each node in the tree represents an attribute or feature of the instance to be classified, each branch represents a specific rule or decision, and each leaf represents the outcome of that rule or decision. The process starts from the root node of the tree and moves down according to the decisions made at the branches to the next node; it continues until a leaf is encountered and the corresponding outcome is given. As an example, such a tree might predict the fitness of a person according to his or her age and daily practices.

Pruning: Pruning is a method used to deal with the overfitting dilemma by decreasing the size of the decision tree, eliminating those segments of the tree that provide little predictive or classification power. The purpose of the process is to lessen complexity and achieve more efficiency. There are two types of pruning: pre-pruning and post-pruning.

CART Algorithm: For classification, CART uses a metric called Gini impurity to choose the decision points. Gini impurity indicates how good a split is by analysing the mixture of classes in the two groups formed by the split. If all observations in a group belong to the same label, the classification is perfect and the Gini impurity value is 0, while if the observations are uniformly divided among the different labels, it is the worst split and the Gini impurity approaches its maximum (0.5 for a 50/50 binary split, and 1 − 1/k for k classes in general).

Pros and Cons of Decision Tree
• Pros
– It is a self-explanatory model that schematically represents the problem.
– It generates a rule set that can easily be interpreted by readers.
– It is a non-parametric model that does not require any functional specification.
• Cons
– The model is computationally expensive.
– It can easily overfit the data.
– It is generally used for classification problems rather than regression problems.
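Gini impurity as described can be computed directly; a plain-Python sketch (the labels are hypothetical):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity: 0 for a pure group, maximal for an even mixture."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini_impurity(["fit"] * 10))          # 0.0 — perfectly pure group
print(gini_impurity(["fit", "unfit"] * 5))  # 0.5 — worst 50/50 binary split
```

CART evaluates candidate splits by the weighted Gini impurity of the two groups they produce and picks the split that lowers it the most.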
3.2 Support Vector Machine

The Support Vector Machine (SVM) is possibly one of the most powerful and versatile machine learning models used today. SVMs were developed during the 1990s and remain a go-to approach for high performance with only slight tuning. They are applied to classification, regression and feature selection. In classification, the support vector machine uses the idea of a margin and defines an optimal separating hyperplane. The margin is the distance between the hyperplane and the points nearest to it on each side. The points nearest to the hyperplane are known as the support vector points. The more distant the support vector points are from the hyperplane, the higher the probability of accurately classifying the points into their particular classes or regions. The hyperplane is complicated to discover because if the position of the support vector points changes, the position of the hyperplane is also altered. The Radial Basis Function (RBF) kernel is one of the most popular kernels used in the SVM algorithm; it is a function whose value depends only on the distance from the origin or from some point.

Pros and Cons of Support Vector Machine
• Pros
– It is really effective when the data is of high dimension.
– If the number of training examples is greater than the number of features, the SVM is very effective.
– It is considered the best algorithm when the classes are separable and is highly suited for extreme cases of binary classification.
– The hyperplane is affected only by the support vector points, so outliers have no significant impact on it.
• Cons
– If the dataset is very large, it is very time-consuming, as processing the data takes a lot of time.
– If the classes of the dataset overlap, the SVM may fail to perform well.
– Selecting the appropriate hyperparameters of the SVM that allow sufficient generalization performance, as well as selecting the appropriate kernel, can be tricky.
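The RBF kernel can be written out explicitly; a plain-Python sketch (the `gamma` value is an arbitrary illustration):

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel: depends only on the squared distance between x and z,
    decaying from 1 (identical points) toward 0 (distant points)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0 — identical points
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]))  # near 0 — distant points
```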
3.3 K-Nearest Neighbours (KNN)

K-Nearest Neighbours is a lucid, easy-to-implement, instance-based learning algorithm used for classification tasks. The training data are represented in d dimensions, where d is the number of features. A new data point is labelled on the basis of its relationship to the rest of the data points under some similarity metric (based on distance, proximity or closeness). The distance is most commonly Euclidean, but other distances can be used. KNN decides the label of the new point from its K closest points, taking the most common label among them by majority vote.

Choosing the Optimal Value of K: Practically, there is no universally optimal value of K; it all depends on the data given. If K is too large, the algorithm may misclassify the point because it over-smooths, and the introduction of outliers further degrades performance. If K is too small, local estimates tend to be poor because of sparsity, noise or mislabeled points. The optimal value of K is the one that reduces the classification error and increases the accuracy. Figure 1 shows the resulting K-versus-error-rate distribution. The distance metric used must minimize the distance between two similarly classified instances. For discrete variables, as in text classification, a metric called the overlap metric, or Hamming distance, can be used. For gene expression data, the Pearson
Fig. 1 K value versus error rate distribution in titanic dataset
correlation can be used. The classification accuracy of KNN can be improved if the distance metric is learned with special algorithms such as Large Margin Nearest Neighbor or Neighbourhood Components Analysis.

Pros and Cons of KNN
• Pros
– KNN is one of the simplest machine learning algorithms, and it is easy to implement.
– In KNN there is no need to tune several hyperparameters or make additional assumptions.
– The algorithm is versatile, i.e., it can be used for regression and classification.
• Cons
– KNN is computationally very expensive.
– It is sensitive to outliers.
– KNN becomes significantly slower as the number of independent variables increases.
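The effect of K can be illustrated with a tiny leave-one-out experiment in plain Python (the data are a hypothetical toy example with two well-separated clusters):

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def loo_error(train, labels, k):
    """Leave-one-out error count for a given k."""
    return sum(
        knn_predict(train[:i] + train[i + 1:], labels[:i] + labels[i + 1:],
                    train[i], k) != labels[i]
        for i in range(len(train)))

train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]

# Small k respects the local structure; an overly large k over-smooths
# (with k=5, each held-out point is outvoted by the opposite cluster)
for k in (1, 3, 5):
    print(k, loo_error(train, labels, k))
```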
4 Natural Language Processing Natural Language Processing, commonly known as NLP, empowers a machine to interact with humans like a real human being in order to achieve some desired results. It has radiated its relevance into many fields such as Sentiment Analysis, Machine Translation, Speech Recognition, Chatbots, Text Summarization, Spam Detection, Information Extraction, and many more.
4.1 Text Summarization Text summarization relates to the process of shrinking lengthy portions of text while preserving the essential information and overall sense. The motive is to build an understandable, uniform, and concise summary containing only the main points mentioned in the document, which helps save a lot of time in return. • TF-IDF Summarizer TF-IDF consists of two terms, Term Frequency and Inverse Document Frequency: Term Frequency gives the frequency of a word in a particular document, whereas Inverse Document Frequency weighs a word by how rarely it appears across the entire corpus.

S_{a,b} = tf_{a,b} × log(N_d / df_a)   (1)
76
S. Ghosh et al.
Fig. 2 TF-IDF summarizer workflow
tf_{a,b}  frequency of word a in document b
df_a  number of documents containing word a
N_d  total number of documents in the corpus
S_{a,b}  tf-idf score of word a in document b
The above formula penalises words occurring in most documents and gives more weight to uncommon words. After determining the tf-idf score of each word, a tf-idf matrix is generated, from which a score for each sentence is calculated; finally, the summary is obtained by taking the sentences with the top scores. The detailed workflow of the summarizer is shown in Fig. 2.
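The scoring of Eq. (1) and the sentence-ranking step can be sketched as below. Here each sentence is treated as a "document" for the idf statistics, and the stop-word removal and lemmatization steps of the full pipeline are omitted for brevity; the sample text is purely illustrative.

```python
import math
import re
from collections import Counter

def tfidf_summarize(sentences, top_n=2):
    """Score each sentence by the summed tf-idf of its words (Eq. 1, with each
    sentence treated as a 'document'); return the top_n sentences in original order."""
    docs = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n_docs = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequency of each word
    scores = []
    for d in docs:
        tf = Counter(d)                             # term frequency within the sentence
        scores.append(sum(c * math.log(n_docs / df[w]) for w, c in tf.items()))
    top = sorted(sorted(range(n_docs), key=lambda i: scores[i], reverse=True)[:top_n])
    return [sentences[i] for i in top]

text = [
    "The pipeline automates preprocessing and model selection.",
    "It is fast.",
    "Summaries keep the highest scoring sentences of the document.",
]
summary = tfidf_summarize(text, top_n=2)
```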
5 Software Architecture and Pipeline Automation The software takes the raw data from the user and acts as per the user's requirements (i.e., predictive analysis based on supervised machine learning algorithms, or text summarization on a TF-IDF basis in the field of NLP; users can also provide
Fig. 3 Workflow diagram of the software
image data for OCR (Optical Character Recognition), face detection, or Canny edge detection algorithms). The embedded automation pipeline has the capability to process a high amount of data and also provides advanced visualizations based on the dataset. As shown in Fig. 3, when a user's requirement is predictive analysis or a supervised classification problem based on a structured dataset, the automated pipeline comes into play as soon as the user provides the dataset. In the preprocessing stage, the dataset
is analyzed in the backend with the techniques discussed in the data preprocessing section, so that the dataset dimension is reduced and concentrated into the most relevant features. Then, K-fold cross-validation with 10 splits is used to determine the best possible accuracy for each classification algorithm. If the user supplies text data as input, the software presents sentiment analysis and text summarization as the key services, and the user can pick either one. If the user selects sentiment analysis, then the TextBlob library in Python runs the sentiment analysis algorithm in the background, and the result is reported according to the polarity score and the subjectivity score of the text. If text summarization is selected, then the TF-IDF summarizer algorithm runs in the background and the summary of the text is yielded as a result. First, all the stop-words are removed from the text, then the text is split into tokens and lemmas, and according to the TF-IDF scores the summary of the text is generated.
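The 10-split cross-validation step of the pipeline can be sketched as follows. The nearest-centroid model here is only a toy stand-in for the six classifiers the platform actually compares, and the synthetic data and fold seed are illustrative assumptions.

```python
import numpy as np

def kfold_accuracy(fit_predict, X, y, k=10, seed=0):
    """Plain k-fold cross-validation: mean accuracy over k held-out folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for i in range(k):
        val = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(np.mean(fit_predict(X[tr], y[tr], X[val]) == y[val]))
    return float(np.mean(accs))

def nearest_centroid(X_tr, y_tr, X_val):
    # Toy classifier: assign each point to the class with the closest mean
    labels = np.unique(y_tr)
    cents = np.stack([X_tr[y_tr == c].mean(axis=0) for c in labels])
    d = np.linalg.norm(X_val[:, None, :] - cents[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 3)), rng.normal(4, 1, (60, 3))])
y = np.array([0] * 60 + [1] * 60)
acc = kfold_accuracy(nearest_centroid, X, y, k=10)
```

Running the same loop over each candidate model gives the per-algorithm accuracies that the platform reports.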
6 Result Analysis and Discussion This section shows the overall performance of six classification algorithms with respect to the crucial parameters (accuracy, error rate, specificity, sensitivity, false positive rate, precision, and the time needed to perform the task), applied to six different datasets gathered from Kaggle and the UCI machine learning repository [24]. These datasets are preprocessed in the preprocessing stage (cleaning the data, reducing the dimension). MLAI automatically detects missing values in attributes, if present, and fills each missing field by replacing it with the mean of the corresponding attribute. MLAI applies the standard scaling normalization procedure to normalize the data and convert it into a specific range (0.0–1.0). The dataset instances are split into two subsets with a split ratio of 0.3. The backend architecture of the software is developed using Python 3.6.5. The details of the experimental results are shown in Table 1.
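A minimal sketch of the preprocessing chain described above (mean imputation of missing values, standard scaling, and a 0.3 test split). The function name and the sample matrix are illustrative, not the actual MLAI internals; note that standard (z-score) scaling yields zero mean and unit variance per column.

```python
import numpy as np

def preprocess(X, test_frac=0.3, seed=0):
    """Mean-impute missing values, standard-scale each column, then split 70/30."""
    X = X.astype(float).copy()
    col_mean = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]              # replace missing with the column mean
    X = (X - X.mean(axis=0)) / X.std(axis=0)    # standard (z-score) scaling
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(round(len(X) * (1 - test_frac)))
    return X[idx[:cut]], X[idx[cut:]]

raw = np.array([[1.0, 2.0], [np.nan, 4.0], [3.0, np.nan], [5.0, 8.0],
                [2.0, 3.0], [4.0, 5.0], [6.0, 7.0], [0.0, 1.0],
                [2.5, 3.5], [3.5, 4.5]])
train, test = preprocess(raw)
```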
7 Conclusion and Future Works This paper demonstrates a software platform named MLAI which can enhance the performance of many real-world data-intensive tasks based on supervised learning and natural language processing in a very short amount of time. It scales down human labour to a certain level and presents advanced graph visualisations for better interpretation of the results. It is currently under development for handling huge amounts of data based on time-series analysis and for improving the processing speed of computer vision applications.
Table 1 Experimental results on the 5 datasets

| Dataset (Kaggle) | Model | Accuracy | Error | Time (s) | Sensitivity | Specificity | False positive rate | Precision |
| Breast tumor classification | LR | 0.973671 | 0.026329 | 0.025928 | 0.957447 | 0.970149 | 0.029851 | 0.957447 |
| | KNN | 0.964879 | 0.035121 | 0.027922 | 0.893617 | 1.000000 | 0.000000 | 1.000000 |
| | SVM | 0.973671 | 0.026329 | 0.042947 | 0.957447 | 0.985075 | 0.014925 | 0.978261 |
| | NB | 0.947343 | 0.052657 | 0.009440 | 0.914894 | 0.910448 | 0.089552 | 0.877551 |
| | DT | 0.927343 | 0.072657 | 0.009973 | 0.936170 | 0.880597 | 0.119403 | 0.846154 |
| | RF | 0.953865 | 0.046135 | 0.024936 | 0.978723 | 0.955224 | 0.044776 | 0.938776 |
| Credit card classification | LR | 0.999500 | 0.000500 | 0.999161 | 0.666667 | 0.999499 | 0.000501 | 0.666667 |
| | KNN | 0.999500 | 0.000500 | 6.076071 | 0.666667 | 0.998998 | 0.001002 | 0.500000 |
| | SVM | 0.998001 | 0.001999 | 2.377783 | 0.000000 | 0.999499 | 0.000501 | 0.000000 |
| | NB | 0.982872 | 0.017128 | 0.372003 | 0.666667 | 0.984477 | 0.015523 | 0.060606 |
| | DT | 0.999125 | 0.000875 | 0.604126 | 0.666667 | 0.999499 | 0.000501 | 0.666667 |
| | RF | 0.999375 | 0.000625 | 0.269860 | 0.666667 | 0.999499 | 0.000501 | 0.666667 |
| Vehicle classification | LR | 0.783786 | 0.216214 | 0.056911 | 0.911765 | 1.000000 | 0.000000 | 1.000000 |
| | KNN | 0.738258 | 0.261742 | 0.029918 | 0.937500 | 1.000000 | 0.000000 | 1.000000 |
| | SVM | 0.769478 | 0.230522 | 0.119678 | 0.911765 | 1.000000 | 0.000000 | 1.000000 |
| | NB | 1.000000 | 0.541563 | 0.100878 | 0.515152 | 0.969697 | 0.030303 | 0.944444 |
| | DT | 0.703933 | 0.296067 | 0.015621 | 0.705882 | 0.937500 | 0.062500 | 0.923077 |
| | RF | 0.751754 | 0.248246 | 0.015625 | 0.825000 | 0.970588 | 0.029412 | 0.970588 |
| Ionosphere classification | LR | 0.893003 | 0.106997 | 0.013904 | 0.681818 | 0.979592 | 0.020408 | 0.937500 |
| | KNN | 0.836061 | 0.163939 | 0.016766 | 0.681818 | 0.979592 | 0.020408 | 0.937500 |
| | SVM | 0.946497 | 0.053503 | 0.022152 | 0.772727 | 0.979592 | 0.020408 | 0.944444 |
| | NB | 0.881769 | 0.118231 | 0.001777 | 0.727273 | 1.000000 | 0.000000 | 1.000000 |
| | DT | 0.896442 | 0.103558 | 0.008998 | 0.772727 | 0.918367 | 0.081633 | 0.809524 |
| | RF | 0.913948 | 0.086052 | 0.022297 | 0.772727 | 0.979592 | 0.020408 | 0.944444 |
| Sonar classification | LR | 0.796324 | 0.203676 | 0.010406 | 0.720000 | 0.705882 | 0.294118 | 0.782609 |
| | KNN | 0.802202 | 0.197794 | 0.011898 | 0.880000 | 0.647059 | 0.352941 | 0.785714 |
| | SVM | 0.820956 | 0.179044 | 0.031429 | 0.840000 | 0.764706 | 0.235294 | 0.840000 |
| | NB | 0.675368 | 0.324632 | 0.010077 | 0.600000 | 0.764706 | 0.235294 | 0.789474 |
| | DT | 0.709926 | 0.290074 | 0.011863 | 0.560000 | 0.823529 | 0.176471 | 0.823529 |
| | RF | 0.837868 | 0.162132 | 0.039412 | 0.720000 | 0.823529 | 0.176471 | 0.857143 |
In our day-to-day life, Optical Character Recognition is a crucial application to extract valuable information from an image; it would be very helpful in extracting text from old books and research journals. We plan to integrate Google's Tesseract OCR engine, which will be very efficient in terms of time complexity and accuracy. Also, we are working on improving our text summarizer using an abstractive approach, which will be more accurate than the existing extractive summarizer.
References
1. H. Das, B. Naik, H.S. Behera, An experimental analysis of machine learning classification algorithms on biomedical data, in Proceedings of the 2nd International Conference on Communication, Devices and Computing (Springer, Singapore, 2020), pp. 525–539
2. A.K. Sahoo, C. Pradhan, H. Das, Performance evaluation of different machine learning methods and deep-learning based convolutional neural network for health decision making, in Nature Inspired Computing for Data Science (Springer, Cham, 2020), pp. 201–212
3. A.K. Tanwani, J. Afridi, M.Z. Shafiq, M. Farooq, Guidelines to select machine learning scheme for classification of biomedical datasets, in European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics (Springer, Berlin, Heidelberg, 2009), pp. 128–139
4. C. Pradhan, H. Das, B. Naik, N. Dey, Handbook of Research on Information Security in Biomedical Signal Processing (IGI Global, Hershey, PA, 2018), pp. 1–414. https://doi.org/10.4018/978-1-5225-5152-2
5. Y. Freund, R. Schapire, N. Abe, A short introduction to boosting. J. Jap. Soc. Artif. Intell. 14(771–780), 1612 (1999)
6. H. Das, B. Naik, H.S. Behera, Classification of diabetes mellitus disease (DMD): a data mining (DM) approach, in Progress in Computing, Analytics and Networking (Springer, Singapore, 2018), pp. 539–549
7. R. Sahani, C. Rout, J.C. Badajena, A.K. Jena, H. Das, Classification of intrusion detection using data mining techniques, in Progress in Computing, Analytics and Networking (Springer, Singapore, 2018), pp. 753–764
8. H. Das, A.K. Jena, J. Nayak, B. Naik, H.S. Behera, A novel PSO based back propagation learning-MLP (PSO-BP-MLP) for classification, in Computational Intelligence in Data Mining, vol. 2 (Springer, New Delhi, 2015), pp. 461–471
9. M.N. Murty, V.S. Devi, Pattern Recognition: An Algorithmic Approach (Springer Science & Business Media, 2011)
10. J.R. Quinlan, Induction of decision trees. Mach. Learn. 1(1), 81–106 (1986)
11. T. Cover, P. Hart, Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 13(1), 21–27 (1967)
12. P. Hall, B.U. Park, R.J. Samworth, Choice of neighbor order in nearest-neighbor classification. Ann. Stat. 36(5), 2135–2152 (2008)
13. I. Rish, An empirical study of the naive Bayes classifier, in IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, vol. 3, no. 22 (IBM, New York, 2001), pp. 41–46
14. C. Cortes, V. Vapnik, Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
15. T.K. Ho, Random decision forests, in Proceedings of the Third International Conference on Document Analysis and Recognition, vol. 1 (IEEE, 1995), pp. 278–282
16. I. Barandiaran, The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 20(8) (1998)
17. D.G. Kleinbaum, K. Dietz, M. Gail, M. Klein, M. Klein, Logistic Regression (Springer-Verlag, New York, 2002)
18. S. Menard, Applied Logistic Regression Analysis, vol. 106 (Sage, 2002)
19. L. Breiman, Classification and Regression Trees (Routledge, 2017)
20. C.D. Manning, H. Schütze, Foundations of Statistical Natural Language Processing (MIT Press, 1999)
21. M. Maybury, Advances in Automatic Text Summarization (MIT Press, 1999)
22. R. Smith, An overview of the Tesseract OCR engine, in Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), vol. 2 (IEEE, 2007), pp. 629–633
23. S. Wold, K. Esbensen, P. Geladi, Principal component analysis. Chemometr. Intell. Lab. Syst. 2(1–3), 37–52 (1987)
24. C. Blake, UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html (1998)
Frequency Regulation of a Multi-area Renewable Power System Incorporating with Energy Storage Technologies Subhranshu Sekhar Pati, Prajnadipta Sahoo, Santi Behera, Ajit Kumar Barisal, and Dillip Kumar Mishra
Abstract In a power system, the excursions of the frequency level must be regulated automatically to provide reliable and safe operation. Moreover, the incorporation of renewable sources into the conventional generation leads to an increase in the system complexity and can further hinder stable operation. However, energy storage devices can play a significant role in terms of instant power injection after the loss of generation. In this paper, different energy storage technologies such as battery storage, supercapacitor, and superconducting magnetic energy storage are tested with three different types of controllers in a three-area power system. Further, the Jaya optimization method is applied to tune the control parameters, and with the optimal values of the controller gains, the three-area power system gives better dynamic frequency regulation characteristics. Finally, the tilt integral derivative (TID) controller with three different energy storage devices is incorporated and tested, where the battery storage technology provides better dynamic characteristics as compared to the other two storage devices. Keywords Load frequency control · Renewable source · Energy storage technology · Tilt integral derivative controller · Jaya algorithm S. S. Pati (B) Department of Electrical Engineering, IIIT Bhubaneswar, Bhubaneswar 751003, India e-mail: [email protected] P. Sahoo · S. Behera Department of Electrical Engineering, VSSUT, Burla 768108, India e-mail: [email protected] S. Behera e-mail: [email protected] A. K. Barisal Department of Electrical Engineering, CET, Bhubaneswar, Bhubaneswar 751003, India e-mail: [email protected] D. K. Mishra School of Electrical and Data Engineering, UTS, Sydney 2007, Australia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_8
83
84
S. S. Pati et al.
1 Introduction The power system is generally highly complex and interconnected, thus forming several control areas for the smooth operation of the power system and the continuous supply of reliable power to electrical customers. The resultant interconnected network holds diverse power sources that are operated in a synchronized fashion to maintain the generation–demand balance of electrical power. In the case of power deficits in one or multiple control areas, the surplus power of other control networks can compensate for the deficits [1]. Due to generation–demand imbalance, fluctuations occur in the frequency and in the incremental power exchange between control areas, called tie-line power, both of which should be maintained as per standard. Moreover, a control area consists of diverse power sources like thermal, hydro, and gas plants, and if the frequency deviates abnormally, the whole power structure goes out of synchronism and a cascaded blackout may occur. Hence, in these cases, the use of load frequency control (LFC) becomes vital. The primary objective of LFC is to maintain system frequency and tie-line power at a constant level [2]. With the integration of renewable power sources into the traditional power structure, the monitoring and regulation of frequency become more dynamic and crucial. With the exhaustion of fossil fuels and increased global warming, renewable-based power sources are gaining popularity in recent days [3]. Also, research is carried out to utilize a number of renewable plants as distributed generation. To improve the thermal efficiency of the overall system, the reheat system is included in the solar thermal and hydel power plants. The reheat engines are described by Sahu et al. in the field of frequency regulation [4]. The authors in [4] also discussed the combination of thermal-hydro-wind sources for a multi-area system.
Generally, thermal-based power sources are used as base load power plants, whereas gas power plants act as peak load plants due to their quick delivery of power. Saikia et al. used a gas power plant in an LFC study as a back-up source that provides power during peak demand hours [3]. Sharma et al. recently used solar thermal plants in combination with other power sources in the LFC mechanism [5]. The power derived from renewable sources is unpredictable, and the quality of power may be distorted. To avoid such a scenario, various energy storage technologies (EST) may be incorporated into the system [6]. Recently, researchers have been focusing on the application of EST devices such as battery storage, flywheel storage, ultracapacitor, super-magnetic energy storage, compressed energy storage, and redox flow battery. The integration of such a storage system not only enhances the quality of power but also minimizes the cost of generation [7]. When excess power is available from the power sources, EST stores it within a short period and releases it to the grid when the demand is more than the generation, thereby lessening the frequency fluctuation and increasing the quality of power. Various researchers have used EST in diverse research areas to analyze the system efficiency, but few papers analyze the integration of different ESTs in an LFC study. To regulate the system performance more effectively, various secondary controllers are used in the test system. Some of the commonly used controllers are the proportional (P), integral (I), and derivative (D) gain-based controller, PID gain with
Frequency Regulation of a Multi-area Renewable Power System …
85
filter (PID-F) controller, fuzzy rule-based controller, and fractional-order controller [8–10]. However, this paper utilizes a new-order controller named the TID controller for monitoring and regulation of frequency [11]. The projected controller supplants the proportional part with a tilted component involving the transfer function s^(1/n). The derived transfer function of the proposed TID feedback controller gives an optimally enhanced result. Although the PID controller is simpler to apply and is optimized rapidly by the application of an optimization technique, it produces an irrationally sized control input to the system. The optimal parameters of the TID controller are derived by the minimization of performance indices and a cost function. Usually, in an optimization problem, the requisite problem variables are tuned using various optimization algorithms. Various algorithms, such as the cuckoo search algorithm, whale algorithm, ant bee colony technique, and grey wolf algorithm, have been used in past works of literature [7–12]. In this context, the Jaya algorithm provides an optimum solution within minimum iterations owing to not having problem-centric parameters [13, 14]. So the authors used the Jaya technique for tuning the controller parameters.
2 System Description To realize a realistic interrelated power system, three interconnected unequal-area test systems are taken under consideration, shown in Fig. 1. Thermal and hydro power sources are considered in all three areas. Different nonlinearities involved in actual running plants, like BD (boiler dynamics) and GRC (generation rate constraint), are also included in the system. BD helps the thermal plant to produce high-pressure steam and also controls the flow of steam through the opening and closing of the required number of valves. Similarly, GRC limits how quickly the thermal as well as the hydel power plant can raise or lower generation as the situation prevails. Moreover, renewable-based sources like solar thermal and wind power systems are also considered in areas 1 and 2, respectively, as the demand for clean harvested power is increasing in an incremental manner. The generalized structure of the interrelated multi-area system is displayed in Fig. 1. The parameters of such plants are reported in the literature [7–12]. However, with the inclusion of renewable plants, the system response may vary according to the input constraints. Further, for evaluating the dynamic performance of the test system with other power sources, a gas-based power plant is incorporated in area 3. The block diagram representation of the transfer functions of the power plants is presented in Fig. 1. Battery energy storage is an effective element that can easily store energy in its constituent battery storage unit. It can provide power for a short duration and a high amount of energy for a longer duration due to its high energy density and rapid access time. As it stores energy in DC form, a converter is required for the conversion of DC into alternating form. The first-order transfer function of battery storage is presented in Eq. 1 [15].

TF_battery = K_battery / (1 + s·T_battery)   (1)
[Fig. 1 (block diagram; only its labels are recoverable): area 1 combines solar thermal + thermal + hydro, area 2 thermal + hydro + wind, and area 3 thermal + hydro + gas. Each area includes governor, turbine, and reheat transfer functions, GRC + BD nonlinearities, droop (1/R) and frequency-bias (B) feedback, controller inputs Ui driven by the ACE, step EST loads, and tie-line links 2πT12/s, 2πT23/s, and 2πT13/s between the areas.]
Fig. 1 Generalized power structure of test model considering transfer function
Similarly, ultracapacitors, otherwise termed supercapacitors (SC), also deliver rapid power to the utility grid when sudden disturbances persist in the system. Additionally, the combination of SC and other storage elements yields a high power rating. The transfer function of SC is depicted in Eq. 2 [16].

TF_SC = K_SC / (1 + s·T_SC)   (2)

One of the recently developed energy storage systems is superconducting magnetic energy storage (SMES). It stores energy in the electromagnetic field and is made up of superconducting wire [17]. As the wire is kept at cryogenic temperature, the loss involved in this type of storage is comparatively low. The first-order transfer function of SMES is given by Eq. 3.

TF_SMES = K_SMES / (1 + s·T_SMES)   (3)
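Equations (1)–(3) are all first-order lags of the form K/(1 + sT). Their step response can be checked with a simple forward-Euler simulation, as sketched below; the gain and time-constant values are hypothetical, not taken from the paper.

```python
import numpy as np

def first_order_response(K, T, u, dt):
    """Simulate y_dot = (K*u - y) / T, i.e. the transfer function K / (1 + s*T),
    with forward-Euler integration over the input samples u."""
    y = np.zeros(len(u))
    for i in range(1, len(u)):
        y[i] = y[i - 1] + dt * (K * u[i - 1] - y[i - 1]) / T
    return y

dt = 0.01
t = np.arange(0.0, 5.0, dt)
u = np.ones_like(t)                                   # unit step in power command
y = first_order_response(K=1.8, T=0.3, u=u, dt=dt)    # hypothetical storage values
```

The output rises exponentially toward the steady-state value K, with T setting how fast the storage device delivers its power.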
3 Controller Structure The projected controller is a tunable controller, and the controlling variables are K_T, K_I, and K_D, with a gain of n. The TID controller is similar to the proportional–integral–derivative controller, but the proportional gain is succeeded by a tilted proportional gain term whose transfer function is s^(−1/n). This transfer function is termed the 'tilt' structure, and the overall compensator is known as a tilt integral–derivative controller. The complete arrangement of the TID controller is shown in Fig. 2 [11]. The tilted component gives an extra tuning knob, offering better gain shaping than an ordinary proportional term. It also provides a feedback path, and the tuning knob is a part of the frequency regulation. Hence, it can offer better stability under the variation of parameters and disturbances. The mathematical transfer function of the projected controller is TF_TID(s, α), in which s is the complex frequency variable and α belongs to R^4, presented in Eq. 4.

TF_TID(s, α) = K_T / s^(1/n) + K_I / s + K_D·s   (4)
α^T = [K_T  K_I  K_D]   (5)
α is a vector of the TID controller parameters, and the value of n is assumed in the range from 1 to 10. The TID controller possesses a high amount of flexibility and has the predominant properties of rapid tuning, a higher disturbance rejection ratio, and a smaller effect of plant-parameter variation under feedback control. In an optimization problem, the controller parameters are optimized through an algorithm. Hence, a fitness function is required for a successful tuning process. For the present configuration, the fitness function involves time-domain parameters like peak overshoot or undershoot, settling time, and steady-state error. In this study, the integral of time-multiplied absolute error (ITAE) is considered, as presented in Eq. 6.
Fig. 2 Block diagram of TID controller (the ACE signal feeds three parallel paths, tilt K_T·s^(−1/n), integral K_I/s, and derivative K_D·s, whose outputs are summed to form the control input Ui)
ITAE = ∫₀^{t_sim} (|ΔF₁| + |ΔF₂| + |ΔF₃| + |ΔP_tie12| + |ΔP_tie23| + |ΔP_tie31|) · t dt   (6)
where ΔF_i is the frequency deviation of area i, ΔP_tie,ij is the incremental change in the tie-line power between areas i and j, t is the running time, and t_sim is the time period of the simulation.
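The ITAE cost of Eq. (6) can be approximated numerically from sampled responses, as sketched below; the deviation signals used here are hypothetical damped oscillations standing in for the simulated area responses, and uniform sampling is assumed.

```python
import numpy as np

def itae(t, deviations):
    """ITAE of Eq. (6): integrate the time-weighted sum of absolute deviations.
    `deviations` is a list of sampled signals (dF1, dF2, dF3, dPtie12, ...)."""
    total = np.sum(np.abs(np.asarray(deviations)), axis=0)
    dt = t[1] - t[0]                       # uniform sampling assumed
    return float(np.sum(total * t) * dt)   # rectangle-rule approximation

t = np.linspace(0.0, 20.0, 2001)
dF1 = 0.010 * np.exp(-0.5 * t) * np.sin(2.0 * t)   # hypothetical area-1 response
dF2 = 0.005 * np.exp(-0.5 * t) * np.sin(2.0 * t)   # hypothetical area-2 response
cost = itae(t, [dF1, dF2])
```

This scalar is what the optimizer minimizes when tuning the controller gains.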
4 Optimization Technique In an optimization-based power system problem, tuning the controller variables through a proper technique is a tedious task. Moreover, identifying the requisite algorithm for a complex problem, so as to locate the optimal parameters, is again a difficult but important duty from the optimization point of view. In this regard, the Jaya optimization algorithm proposed by Rao and Saroj [13] is better than other swarm-based optimization algorithms: it is simpler to implement and, most prominently, takes less time to reach the global solution, so the time complexity of the algorithm is low while retaining accuracy, which is an important aspect when formulating any optimization algorithm [14]. The Jaya algorithm has been tested on several test functions, known as benchmark functions, and found superior to other traditional algorithms. Furthermore, the procedure of the Jaya algorithm is so simple and effective that the updated solution after each iteration always moves toward the best solution and away from the worst solution. Let 'P' be the population size of candidate solutions (i = 1, 2, 3, …, P), normally the same as the number of controller variables to be optimized, and 'Q' the number of decision states (j = 1, 2, 3, …, Q) for every element of the candidate solution. With the completion of each iteration, termed the Kth cycle, the best and worst solutions are designated as X_{j,best}^K and X_{j,worst}^K, respectively. During the Kth cycle, if X_{ij}^K is the value of the jth decision state of the ith population member, then X_{ij}^K is revised by adopting the following equation [14]:

X_{ij}^K = X_{ij}^K + μ_{1,j}^K (X_{j,best}^K − |X_{ij}^K|) − μ_{2,j}^K (X_{j,worst}^K − |X_{ij}^K|)   (7)
in which X_{j,best}^K and X_{j,worst}^K are the values of the global best and the worst solutions achieved for the jth decision state, respectively. Two arbitrary random numbers, μ₁ and μ₂, are used in the update equation; their values lie in the range from 0 to 1. The revised value of X_{ij}^K takes part in the succeeding cycle, and the procedure continues until the optimum global solution is reached or the termination criteria are satisfied.
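The update rule of Eq. (7) with greedy acceptance can be sketched as below. The sphere function stands in for the ITAE cost of the full simulation, and the population size, iteration count, and gain bounds are illustrative assumptions (the bounds mimic a 0-to-3 controller-gain range).

```python
import numpy as np

def jaya(fitness, bounds, pop=20, iters=200, seed=0):
    """Minimal Jaya optimizer: every candidate moves toward the best solution
    and away from the worst one (Eq. 7); no algorithm-specific parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    f = np.array([fitness(x) for x in X])
    for _ in range(iters):
        best, worst = X[f.argmin()], X[f.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)   # mu1, mu2 in [0, 1)
        Xn = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
        Xn = np.clip(Xn, lo, hi)                            # respect the gain limits
        fn = np.array([fitness(x) for x in Xn])
        better = fn < f                                     # greedy acceptance
        X[better], f[better] = Xn[better], fn[better]
    return X[f.argmin()], float(f.min())

# Sphere function as a stand-in for the ITAE cost of the real simulation
gains, cost = jaya(lambda x: float(np.sum(x ** 2)), [(0.0, 3.0)] * 3)
```

In the actual study, `fitness` would run the three-area simulation with the candidate gains and return the ITAE of Eq. (6).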
5 Results and Discussions The interrelated test system used in the present study has conventional power sources like thermal and hydel plants along with renewable-based power plants. Several electrical, chemical, and mechanical energy storage elements have been considered in each area. To monitor and regulate such a complex system, a distinct tilt integral derivative controller is used. The controller gains are bounded as 0 < K_T, K_I, K_D < 3, and the value of n is taken in the range [1–10]. The results drawn from the proposed model are presented in Fig. 3 for a 1% step load perturbation in area 1. From the graphical outcome, it can be concluded that the projected controller offers superior results compared with the other classical controllers. A number of energy storage technologies are then tested on the model, and the derived dynamic responses are shown in Fig. 4; the comparison of all the storage devices among themselves is highlighted minutely in the same figure.
Fig. 3 Frequency (a–c) and tie-line power (d–f) responses derived from the system for the PID, PIDF, and TID controllers
Fig. 4 Frequency (a, b) and tie-line power (c, d) responses with diverse storage technologies (TID-BES, TID-SC, TID-SMES) applied to the system
Using the Jaya algorithm, the optimum controller values for each storage technology are calculated and applied efficaciously in the test system. The outcomes for battery-based storage technology prove to be better than those of the other storage elements in terms of the degree of oscillation, settling time, and peak overshoot/undershoot.
6 Conclusion The study highlighted the performance of a three-area power system considering renewables and storage devices. Conventional generation plants such as hydro, thermal, and gas, with the incorporation of renewables like solar and wind sources, are considered. Further, secondary controllers such as PID, PIDF, and TID are implemented, and the distinguishing characteristics of the proposed three-area power system are presented. The Jaya optimization method is used here, and with the optimal values of the controller gains, the proposed power system with the TID controller gives a better transient response than the others. Finally, the storage devices are tested in the proposed system, where BES provides superior dynamic characteristics in terms of frequency deviation.
References
1. L.L. Grigsby, Power System Stability and Control (CRC Press, Boca Raton, 2016)
2. M. Raju, L.C. Saikia, N. Sinha, Maiden application of two degree of freedom cascade controller for multi-area automatic generation control. Int. Trans. Electr. Energy Syst. 28(9), e2586 (2018)
3. R. Rajbongshi, L.C. Saikia, Performance of coordinated interline power flow controller and power system stabilizer in combined multiarea restructured ALFC and AVR system. Int. Trans. Electr. Energy Syst. 29(5), e2822 (2019)
4. A. Barisal, Comparative performance analysis of teaching learning based optimization for automatic load frequency control of multi-source power systems. Int. J. Electr. Power Energy Syst. 66, 67–77 (2015)
5. Y. Sharma, L.C. Saikia, Automatic generation control of a multi-area ST–thermal power system using grey wolf optimizer algorithm based classical controllers. Int. J. Electr. Power Energy Syst. 73, 853–862 (2015)
6. L.-R. Chang-Chien, C.-C. Sun, Y.-J. Yeh, Modeling of wind farm participation in AGC. IEEE Trans. Power Syst. 29(3), 1204–1211 (2013)
7. D.K. Mishra, T.K. Panigrahi, A. Mohanty, P.K. Ray, A.K. Sahoo, Robustness and stability analysis of renewable energy based two area automatic generation control. Int. J. Renew. Energy Res. (IJRER) 8(4), 1951–1961 (2018)
8. S.S. Pati, S.K. Mishra, A PSO based modified multistage controller for automatic generation control with integrating renewable sources and FACT device. Int. J. Renew. Energy Res. (IJRER) 9(2), 673–683 (2019)
9. D.K. Mishra, T.K. Panigrahi, P.K. Ray, A. Mohanty, Performance enhancement of AGC under open market scenario using TDOFPID and IPFC controller. J. Intell. Fuzzy Syst. 35(5), 4933–4943 (2018)
10. P.C. Sahu, R.C. Prusty, S. Panda, A gray wolf optimized FPD plus (1 + PI) multistage controller for AGC of multisource non-linear power system. World J. Eng. (2019)
11. R.K. Sahu, S. Panda, A. Biswal, G.C. Sekhar, Design and analysis of tilt integral derivative controller with filter for load frequency control of multi-area interconnected power systems. ISA Trans. 61, 251–264 (2016)
12. D.K. Mishra, S.S. Pati, T.K. Panigrahi, A. Mohanty, P.K. Ray, Enhancement of dynamic performance of automatic generation control of a deregulated hybrid power system. Int. J. Pure Appl. Math. (JPAM) 118(5), 304–319 (2018)
13. R.V. Rao, A. Saroj, A self-adaptive multi-population based Jaya algorithm for engineering optimization. Swarm Evol. Comput. 37, 1–26 (2017)
14. R. Rao, Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 7(1), 19–34 (2016)
15. D.H. Tungadio, Y. Sun, Load frequency controllers considering renewable energy integration in power system. Energy Rep. 5, 436–453 (2019)
16. M. ud din Mufti, S.A. Lone, S.J. Iqbal, M. Ahmad, M. Ismail, Super-capacitor based energy storage system for improved load frequency control. Electr. Power Syst. Res. 79(1), 226–233 (2009)
17. G. Magdy, G. Shabib, A.A. Elbaset, Y. Mitani, Optimized coordinated control of LFC and SMES to enhance frequency stability of a real multi-source power system considering high renewable energy penetration. Prot. Control Modern Power Syst. 3(1), 39 (2018)
A Short Survey on Real-Time Object Detection and Its Challenges Naba Krushna Sabat, Umesh Chandra Pati, and Santos Kumar Das
Abstract Object detection has attracted significant research interest in recent years. It plays a significant role in understanding images for video analysis. Object detection techniques in computer vision improved considerably when deep learning methods came into the picture. Over the last two decades, several algorithms have been developed and improved to detect objects under different conditions, and still a number of challenges remain. The objective of this paper is to provide a short survey on object detection and its challenges, not only in computer vision but also in wireless sensor networks. Keywords Deep learning · Object detection · Convolutional neural network · Sensor · IoT
1 Introduction The terms detection and tracking are quite different from each other. Detection describes the identification of the object type (say human, chair, bottle, toll gate [1, 2], etc.) as well as the exact location of the object [3]. Tracking, on the other hand, means following an object to gather information about it. It consists of different sub-tasks; for example, in human tracking, the sub-tasks include pedestrian detection [4], skeleton detection [5], face detection [6], etc. Hence, object tracking is a key component in today's scenario, especially in security and surveillance systems. For example, suppose an unauthorized person has entered a restricted area; it is then required to detect and track that person (the intruder). The direction of movement, N. K. Sabat (B) · U. C. Pati · S. K. Das Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Rourkela, India e-mail: [email protected] U. C. Pati e-mail: [email protected] S. K. Das e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_9
position and behaviour information of the intruder can be collected from the deployed sensors. Although several techniques have evolved for object detection and tracking, different challenges are still present, such as low-intensity light, occluded areas, crowded areas, long-distance tracking, the field of view (FoV) of sensors [7–9], etc. Different sensors like passive infrared (PIR) and light detection and ranging (LiDAR) sensors can also be deployed for detection and tracking of objects. Presently, deep learning is the most popular technique for detection and tracking in computer vision. It has been found that advanced deep learning algorithms give higher accuracy and greater performance in terms of efficiency and execution time compared to previous algorithms, with a few constraints. In the deep learning method, the complete human detection model is segregated into different stages: informative region selection, feature extraction and classification [10]. Once the model classifies the type of object, tracking of the object becomes possible. For informative region selection, the image is scanned by the neural network, which further extracts the features and classifies the objects. Detecting an object requires extracting visual features. Feature extraction techniques such as the scale-invariant feature transform (SIFT), local binary patterns (LBP), Haar-like features and histograms of oriented gradients (HOG) are used to extract features from an image or video. The organization of this paper is as follows: Sect. 2 interprets the basics of deep learning using neural networks and their architecture. An overview of the different types of models used for detection and tracking of objects is given in Sect. 3. The application and use of sensors for object detection are discussed in Sect. 4. The research challenges are explained in Sect. 5, and Sect. 6 concludes the review.
2 Deep Learning and Its Architecture Deep learning (DL) is a subfield of machine learning (ML), which in turn is a part of artificial intelligence (AI). It is similar to machine learning, but the main difference between ML and DL lies in the learning process: DL has greater learning capability than ML. The basic building block of a DL architecture is the neural network (NN), in which more than one layer is present. Each layer has a number of neurons, and every neuron of one layer is connected to each neuron of the next layer. Neurons apply an activation function such as the sigmoid or the rectified linear unit (ReLU) to produce the desired result [11]. The weights and bias values of the neurons are updated continuously using feed-forward and backward propagation with a learning rate termed 'α' (alpha), a small value between 0 and 1 that ensures the weights are updated smoothly and slowly. A network with more than two layers is treated as a deep neural network (DNN), and a network with more than ten layers is called a very deep neural network. A fully connected (FC) DNN architecture is depicted in Fig. 1.
Fig. 1 Fully connected DNN architecture
A DNN gives better performance when a convolutional neural network (CNN) is used. In a CNN, the input frame is processed by a number of cascaded convolution and pooling layers. Features are extracted in each layer, and the last one or two layers are fully connected, where the output is taken. Figure 2 shows the LeNet architecture, which has two convolution layers for object detection.
Fig. 2 LeNet architecture
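The feed-forward and backward-propagation update described above, with its small learning rate α, can be sketched for a single fully connected sigmoid layer. The code below is an illustrative toy, not from the paper; the dimensions, initial weights and α value are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(W, b, x, y, alpha=0.1):
    """One feed-forward / backward-propagation step for a single
    fully connected sigmoid layer (illustrative sketch)."""
    # Feed-forward: weighted sum plus bias, then activation
    z = W @ x + b
    a = sigmoid(z)
    # Backward propagation of squared error: dL/dz = (a - y) * sigma'(z)
    delta = (a - y) * a * (1.0 - a)
    # Gradient-descent update, scaled by the small learning rate alpha
    W -= alpha * np.outer(delta, x)
    b -= alpha * delta
    return W, b, float(np.mean((a - y) ** 2))

rng = np.random.default_rng(0)
W, b = rng.normal(size=(1, 2)), np.zeros(1)
x, y = np.array([0.5, -0.2]), np.array([1.0])
losses = [train_step(W, b, x, y)[2] for _ in range(50)]
assert losses[-1] < losses[0]  # loss shrinks as the weights are updated
```

Repeating such steps over many layers, rather than one, is what distinguishes the deep networks discussed in this section.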
3 Types of Models Used for Object Detection and Their Comparison The first object detection model was designed by Viola and Jones, but it is no longer treated as a generic object detection model; rather, it is used as the basic model for face detection [12]. The first deep learning object detection model, the OverFeat network [13], uses a CNN and a sliding-window approach to detect objects in the image. In the present scenario, generic object detection models are classified as region-proposal based or regression/classification based. The classification of object detection models is illustrated in Fig. 3. In 2014, Girshick et al. [7] introduced the region-based convolutional neural network (RCNN), in which about 2000 regions are selected from the complete input image using a selective search algorithm. These regions are warped and fed to a CNN to extract features, which are then applied to a support vector machine (SVM) that classifies the object. This model has some issues: training takes a long time, and because of the 2000 selected regions, feature extraction is also slow. Hence, it is not preferable for real-time applications. In 2015, the same author improved the network, calling it fast RCNN [14]. Here, the input image is given directly to the convolution layer instead of region proposals. The image is processed to generate a feature map; then, using a region of interest (RoI) pooling layer, fixed-length feature vectors are extracted from each region and fed to the FC layer, where a classifier at the output identifies the object with a bounding box. Fast RCNN thus overcomes the drawbacks of RCNN, improving training speed and accuracy. However, both RCNN and fast RCNN use the selective search algorithm to find region proposals, which is a sluggish process and affects network performance. Ren et al.
proposed a model called faster RCNN, which works similarly to fast RCNN, but instead of selective search, it uses a region proposal network (RPN), which predicts regions using the concept of anchors [9]. Later, the region-based fully convolutional network (RFCN) was introduced in 2016, and in 2017, the feature pyramid network (FPN) and mask RCNN were developed. These are all extensions of faster RCNN with higher accuracy and network performance.
Fig. 3 Classification of generic object detection model based on region proposal and classification methods
You Only Look Once (YOLO) and the single shot multi-box detector (SSD) are the two most popular regression/classification-based object detection models; hence, only these two models are overviewed here. Redmon et al. introduced the YOLO model in 2016. Here, the input frame is divided into an S × S grid, and each grid cell is responsible for predicting bounding boxes and a confidence score for the objects whose centres fall inside it. The confidence reflects whether an object is present and how accurate the predicted box is [8]. YOLO is designed with 24 convolution layers and 2 fully connected layers. It gives a fast response: in real-time processing it runs at 45 frames per second (FPS), whereas fast YOLO processes 155 FPS, responding better than other models [10]. YOLO has the advantage of detection speed, but it faces problems detecting very small objects (Fig. 4).
Fig. 4 Basic YOLO concept [8]
In 2016, another model was proposed by Liu et al., called the single shot multi-box detector (SSD) [15]. It uses anchors in the frame to define a number of regions; each anchor predicts bounding-box coordinates and class scores. The VGG16 convolutional network is the backbone of the SSD model used to detect objects. SSD uses multiple feature layers in the network, which helps to detect an exact bounding box for the object irrespective of its aspect ratio [10]. This model gives better accuracy than YOLO. The YOLOv2 [8] and YOLOv3 [16] models were designed by changing some parameters of the basic YOLO (YOLOv1) model, and it has been observed that both detection accuracy and speed increase with the new versions. Four popular datasets are used for object detection: MS COCO, ImageNet, PASCAL VOC and Open Images.
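The S × S grid assignment used by YOLO — the cell containing an object's centre is the one responsible for predicting it — can be sketched in a few lines. This is a toy illustration; the image size and object centre below are invented, and S = 7 is the value used in the original YOLO paper.

```python
def responsible_cell(cx, cy, img_w, img_h, S=7):
    """Return the (row, col) of the S x S grid cell that contains the
    object's centre (cx, cy) and is therefore responsible for it."""
    col = min(int(cx * S / img_w), S - 1)
    row = min(int(cy * S / img_h), S - 1)
    return row, col

# An object centred at (320, 240) in a 448 x 448 input image
row, col = responsible_cell(320, 240, 448, 448, S=7)
assert (row, col) == (3, 5)  # 240*7/448 = 3.75 -> row 3; 320*7/448 = 5.0 -> col 5
```

Each responsible cell then predicts a fixed number of boxes with confidence scores, which is why two object centres falling in the same cell (e.g. very small, tightly packed objects) are hard for YOLO, as noted above.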
The performance of each model can be judged on a dataset by calculating its mean average precision (mAP). Table 1 compares the performance of different models on the COCO dataset.
Table 1 Performance comparison of different object detector models

Object detector type       | Backbone                 | AP   | AP50 | AP75
Faster RCNN+++ [7]         | ResNet-101-C4            | 34.9 | 55.7 | 37.4
Faster RCNN w FPN [7]      | ResNet-101-FPN           | 36.2 | 59.1 | 39.0
Faster RCNN by G-RMI [7]   | Inception ResNet-v2 [13] | 34.7 | 55.5 | 36.7
Faster RCNN w TDM [7]      | Inception ResNet-v2-TDM  | 36.8 | 57.7 | 39.2
YOLOv2 [8]                 | DarkNet-19 [8]           | 21.6 | 44.0 | 19.2
SSD513 [15, 17]            | ResNet-101-SSD           | 31.2 | 50.4 | 33.3
DSSD513 [15]               | ResNet-101-DSSD          | 33.2 | 53.3 | 35.2
RetinaNet [18]             | ResNet-101-FPN           | 39.1 | 59.1 | 42.3
RetinaNet [18]             | ResNeXt-101-FPN          | 40.8 | 61.1 | 44.1
YOLOv3 608 × 608           | DarkNet53                | 33.0 | 57.9 | 34.4
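The AP50 and AP75 columns in Table 1 are average precision computed at intersection-over-union (IoU) thresholds of 0.5 and 0.75: a predicted box only counts as a correct detection if its IoU with a ground-truth box meets the threshold. A minimal IoU sketch follows; the boxes are hypothetical and given in (x1, y1, x2, y2) form.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
v = iou(pred, truth)        # intersection 5x10 = 50, union 150
assert abs(v - 1 / 3) < 1e-9
assert v < 0.5  # this prediction would not count as a match at AP50
```

The stricter AP75 threshold is why the AP75 numbers in the table are uniformly lower than the AP50 numbers for every detector.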
4 Deployment of Physical Sensors for Object Detection Sensors like PIR, LiDAR and RADAR (radio detection and ranging) are used for object detection and tracking. This paper discusses some moving object detection and tracking approaches based on deployed physical sensors. Unauthorized human detection and intimation are done using a PIR sensor and a GSM module in [19], where the IoT concept is used to detect and intimate the presence of an intruder. In [20], Luo et al. proposed a technique for indoor human object localization by mounting PIR sensor nodes on the ceiling of a room. Yun et al. [21] designed and implemented hardware using PIR sensors that detects the direction of movement of a human (considered as the object). The movement data are collected from an array of PIR sensors and applied to a machine learning algorithm; the model gives an accuracy of 89–95%. Here, the major difficulty is arranging the PIR sensor array. Wu et al. designed a model to detect roadside vehicles from LiDAR sensor data: they used a background subtraction (3D-DSF) method to eliminate static objects and identify the lane, then applied a clustering method to identify the object [22]. In [23], Chen et al. used a 3D LiDAR for detecting deer crossing the highway; when a deer or group of deer is detected in a specific area, a warning signal is activated so that the driver is alerted before an accident.
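Yun et al. infer movement direction from a PIR sensor array using machine learning; a much simpler rule-of-thumb version of the same idea — reading the firing order of two side-by-side PIR sensors — can be sketched as below. The sensor names, timestamps and the two-sensor arrangement are invented for illustration, not taken from [21].

```python
def movement_direction(triggers):
    """Infer left/right movement from the firing order of two PIR sensors
    mounted side by side. `triggers` is a list of (timestamp_s, sensor_id)
    events, where sensor 'A' is on the left and 'B' is on the right."""
    first = {}
    for t, sensor in sorted(triggers):
        first.setdefault(sensor, t)   # keep each sensor's earliest firing
    if 'A' not in first or 'B' not in first:
        return "unknown"
    return "left-to-right" if first['A'] < first['B'] else "right-to-left"

events = [(0.40, 'B'), (0.10, 'A'), (0.55, 'A')]
assert movement_direction(events) == "left-to-right"
```

A real deployment uses many sensors and a trained classifier precisely because overlapping fields of view and noisy re-triggers make this naive ordering rule unreliable, which is the array-arrangement difficulty noted above.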
5 Research Challenges Sensors such as LiDAR or PIR can detect an object individually but are not able to recognize the type of object. These sensors fail to detect objects in bad weather conditions (like fog and snow), groups of objects and occluded objects. To avoid such problems, multiple sensors along with cameras can be deployed. A 3D sensor can be deployed to detect an object, from which more depth information can be collected and processed for detection. Reducing the training time of deep learning architectures is also a challenging task because each model takes a long time to train; using a cyclic learning rate and the super-convergence technique, the training time can be reduced. It is difficult to find the object if the image is blurred or the video is defocused. Accuracy is also degraded when the network is not trained end to end; for this problem, LSTMs, optical flow and spatiotemporal tubelets can be used across consecutive frames. Other challenges include multimodal information fusion, network optimization, cascaded networks, unsupervised and weakly supervised learning, etc.
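The cyclic learning rate mentioned above is commonly implemented as Smith's triangular schedule: the rate ramps linearly between a lower and an upper bound over a fixed number of iterations, then back down. The sketch below illustrates that schedule; the bound values and step size are illustrative, not from the paper.

```python
def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=100):
    """Triangular cyclical learning rate: ramps linearly from base_lr up
    to max_lr over `step_size` iterations, then back down, repeating."""
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)   # position in [0, 1]
    return base_lr + (max_lr - base_lr) * (1 - x)

assert triangular_clr(0) == 1e-4                      # start of cycle
assert abs(triangular_clr(100) - 1e-2) < 1e-12        # mid-cycle peak
assert triangular_clr(200) == 1e-4                    # back to base
```

Periodically raising the rate this way lets training escape shallow minima and, combined with very large peak rates (super-convergence), can cut the number of epochs needed, which is exactly the training-time challenge raised above.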
6 Conclusion This paper provides a short survey of the most useful deep learning object detection methods. Every year, network performance improves and the demerits of older networks are overcome, from RCNN to fast RCNN, then faster RCNN, and so on. It has been seen that each model has its own priority: for more accurate detection, the SSD model gives better results, whereas for a speed-oriented approach, YOLO is preferred. The paper also presents some research challenges, which will guide improvements to network models as well as understanding of the object detection landscape. Acknowledgements This research is supported by the Defence Research Development Organization (DRDO), India, Sanction no. ERIP/ER/1506047/M/01/1710.
References 1. N.K. Sabat, U.C. Pati, B.R. Senapati, S.K. Das, An IoT concept for region based human detection using PIR sensors and FRED cloud. In 1st International Conference on Energy, Systems and Information Processing (ICESIP) (IEEE, 2019). Chennai, India, pp. 1–4 2. B.R. Senapati, P.M. Khilar, N.K. Sabat, An automated toll gate system using VANET. In 1st International Conference on Energy, Systems and Information Processing (ICESIP) (IEEE, 2019). Chennai, India, pp. 1–5 3. P.F. Felzenszwalb, R.B. Girshick, D. McAllester, D. Ramanan, Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2009) 4. P. Dollar, C. Wojek, B. Schiele, P. Perona, Pedestrian detection: an evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 34(4), 743–761 (2011) 5. H. Kobatake, Y. Yoshinaga, Detection of spicules on mammogram based on skeleton analysis. IEEE Trans. Med. Imaging. 15(3), 235–245 (1996) 6. K.K. Sung, T. Poggio, Example-based learning for view-based human face detection. IEEE Trans. Pattern Anal. Mach. Intell. 20(1), 39–51 (1998) 7. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation. Proc. IEEE Conf. Comput. Vis. Pattern Recogn. 580–587 (2014)
8. J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time object detection. Proc. IEEE Conf. Comput. Vis. Pattern Recogn. 779–788 (2016) 9. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Sys. 91–99 (2015) 10. Z.Q. Zhao, P. Zheng, S. Xu, X. Wu, Object detection with deep learning: a review. IEEE Trans. Neural Netw. Learn. Sys. 30(11), 3212–3232 (2019) 11. J. Schmidhuber, Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015) 12. P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features. CVPR, 511–518 (2001) 13. P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, Y. LeCun, Overfeat: integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv: 1312.6229 (2013) 14. R. Girshick, Fast R-CNN. Proc. IEEE Conf. Comput. Vis. 1440–1448 (2015) 15. W. Liu et al., SSD: single shot multibox detector. In European Conference on Computer Vision (Springer, Cham, 2016), pp. 21–37 16. J. Redmon, A. Farhadi, Yolov3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018) 17. J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger. Proc. IEEE Conf. Comput. Vis. Pattern Recogn. 7263–7271 (2017) 18. C.Y. Fu et al., DSSD: deconvolutional single shot detector. arXiv preprint arXiv:1701.06659 (2017) 19. K.C. Sahoo, U.C. Pati, IOT based intrusion detection system using PIR sensor. In 2nd International Conference on Recent Trends in Electronics, Information and Communication Technology (RTEICT) (IEEE, 2017), pp. 1641–1645 20. X. Luo et al., Human indoor localization based on ceiling mounted PIR sensor nodes. In 13th Annual Consumer Communications and Networking Conference (CCNC) (IEEE, 2016), pp. 868–874 21. J. Yun, M.H. Song, Detecting direction of movement using pyroelectric infrared sensors. IEEE Sens. J. 14(5), 1482–1489 (2014) 22. J. 
Wu et al., Automatic vehicle classification using roadside LiDAR data. Transp. Res. Rec. 2673(6), 153–164 (2019) 23. J. Chen et al., Deer crossing road detection with roadside lidar sensor. IEEE Acc. 7, 65944– 65954 (2019)
Small Object Detection From Video and Classification Using Deep Learning R. Arthi, Jai Ahuja, Sachin Kumar, Pushpendra Thakur, and Tanay Sharma
Abstract Object detection is a technology associated with computer vision and image processing that deals with detecting instances of semantic objects of a specific category in digital pictures and videos. Applications of object detection lie in various fields of computer vision, including image retrieval. In the existing work, an SVM model has been applied to detect objects in CCTV videos; unlike many other classification or detection problems, there is a strong real-time requirement for detection, so a trade-off between high accuracy and speed is inescapable. The main parameters that influence performance are the length of the feature vector and the algorithm, and SVM is best suited to two-class problems. The proposed system introduces You Only Look Once (YOLO V2), which is highly efficient and quicker than SVM. In the simulated results, the detection threshold can be set at any confidence level, in contrast to SVM. Keywords Object detection · SVM · YOLO V2 · Image processing · Deep learning
R. Arthi (B) · J. Ahuja · S. Kumar · P. Thakur · T. Sharma Department of ECE, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India e-mail: [email protected] J. Ahuja e-mail: [email protected] S. Kumar e-mail: [email protected] P. Thakur e-mail: [email protected] T. Sharma e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_10
1 Introduction Object recognition in real-time analysis is a difficult task. The process of motion detection [1] generally involves environment modelling, motion segmentation and object classification, which obstruct one another during identification. Recent analysis datasets like PASCAL VOC, ImageNet [2] or the Caltech Pedestrian Dataset provide growing numbers of training and testing examples drawn from real-world problems, and the capacity of detectors to analyse big datasets in the least possible time is another important aspect alongside accuracy. A family of techniques that can process a large quantity of training data efficiently, and that is inherently suited to multi-class problems, is based on random forests. Random forests are a group of randomized decision trees useful for regression, classification, or even both at the same time. Despite the amazing modifications available, video is generally processed at 24 fps, which fast R-CNN [3] is not able to match. Region-based designs consist of two separate stages, proposing regions and then processing them, which proves somewhat inefficient. Another type of object detection system depends instead on a unified one-stage approach: these single-shot detectors detect and classify all objects in the image in one pass, which decreases the processing time considerably. YOLO begins by laying a grid over the image and permits every grid cell to detect a fixed number of objects of varying sizes. For every true object present within the image, the grid cell associated with the object's centre is accountable for predicting that object. Another version of this approach, fast YOLO, achieves higher frame-detection rates but loses accuracy and localization at such great speeds. The article is organized as follows: Sect. 2 gives an outline of the related work, while Sect. 3 summarizes the methodology, covering the scope of the project, the objective and the problem statement. Section 4 states the simulated results. The conclusion is presented in Sect. 5.
2 Related Work In [4], the model proposes a video-based identification and locating process that studies the distribution characteristics of traffic states. Repeated traffic incidents often lead to traffic blocking; it is necessary to recognize and locate them beforehand for timely incident disposal and to clear the congestion quickly. Every lane of the monitored section is divided into an array of cells. The traffic parameters are obtained by recognizing and tracking traffic objects, including the rate of flow, average speed and average position occupancy. Based on these parameters, if a cell corresponds to an event point for at least two consecutive periods, an event is detected and its position is calculated from the cell's identity number. Experiments prove the efficiency and practicality of the proposed method.
In [5], a visual tracking algorithm is formulated on the basis of representations from a discriminative CNN. It uses the ground-truth of tracking videos to learn a generic target representation. The network is composed of shared branches as well as domain-specific branches; all domains are trained in the network iteratively to obtain generic target representations. While locating targets, the CNN, together with a binary classification layer, builds a new network by merging the different layers. In [6], a tracking scheme based on a sampling method is proposed to monitor quick motions accurately. To address these situations, the Wang–Landau Monte Carlo (WLMC) sampling method was introduced and integrated into an MCMC-based tracking framework. To maintain accuracy for swift motions, the acceptance ratio is incorporated; further, for performance, the method is extended to higher-dimensional states.
3 Methodology Object detection is breaking into a wide range of industries, with use cases ranging from personal security to productivity in the workplace. The aim of object detection is to detect all objects of a known category, like individuals, cars or faces, in a picture. Applications of object detection lie in various fields of computer vision, image retrieval, automated vehicle systems and machine inspection, as well as security and surveillance. Various difficulties arise in the field of object recognition, and it has endless possibilities for the future. Object detection involves finding instances of objects of a selected category in a picture; object detection systems construct a model for an object category from a set of training examples. The existing model [7] proposes three stages of detection: image detection at stage one; tracking object positions over a period of time at stage two; and, at stage three, feeding the images from the second stage to an SVM classifier for detecting car incidents. The proposed model performs image detection in three stages using YOLO. The process involves obtaining images from video-based detection at stage one. Then, real-time analysis is done by feeding those images to YOLO for object detection. Drawing the final conclusion for real-time applications using the video labeller and ground-truth values is the third stage. There is a vast difference between the outputs of the SVM classifier and YOLO V2. The basic working of SVM involves object recognition using bounding boxes, which fails to detect multiple objects in a single frame due to interference. The biggest drawback of SVM is that it has various parameters to set, and in order to get the desired output, these parameters must be chosen correctly. As said earlier, the algorithm fails in cases of image noise, that is, overlapping of frames. Also, for large datasets, the SVM algorithm is not suitable due to its long training time and slow processing. The reason YOLO V2 is being introduced in this domain is its real-time applicability and better accuracy. It has the ability to detect multiple objects
in a single frame without any trouble, and the time consumed by the trainer is less. YOLO has a rate of 50–55 fps. The time it takes to analyse any image or video for recognition is considerably less than SVM, and its accuracy levels are also better. It uses an S × S grid, which helps to identify every small object in the frame as per the defined datasets. In cases of small object recognition [8], SVM always faces trouble, whereas YOLO V2 is quite efficient at it. Object identification is considered harder than image classification because of various challenges like fast motions, varying scales, limited data and differing priorities. To overcome these problems, three stages of object detection are used by YOLO V2, as shown in Fig. 1. The block diagram of YOLO starts with collecting the training dataset as an input video, which is fed to the ground-truth video labeller. The video labeller passes the video as frames to YOLO for training and testing. The output tracks the video and locates the object.
Fig. 1 Block diagram of YOLO: input video → ground truth video labeller → YOLO training (VGG50 net) → test YOLO → object-tracked video output
4 Simulated Results The simulated results are shown in Table 1, which lists the time taken by the trainer to run the video and obtain frames for object detection; the video is 14 min 56 s long. A few results of the training are included in the table, and the total time elapsed is 935.489605 s. Figures 2 and 3 describe the area covered by grid boxes for bounding boxes and clustering boxes; the graphs compare the area with the aspect ratio of the bounding boxes of the localized object.
Table 1 Simulation results of training

Epoch | Iteration | Time elapsed | Mini-batch RMSE | Mini-batch loss | Base learning rate
1     | 1         | 00:00:08     | 6.45            | 41.5            | 1.0000e-04
2     | 8         | 00:01:07     | 1.29            | 1.7             | 1.0000e-04
3     | 15        | 00:02:06     | 0.84            | 0.7             | 1.0000e-04
4     | 22        | 00:03:06     | 0.78            | 0.6             | 1.0000e-04
5     | 29        | 00:04:04     | 0.60            | 0.4             | 1.0000e-04
6     | 36        | 00:05:04     | 0.58            | 0.3             | 1.0000e-04
7     | 43        | 00:06:03     | 0.41            | 0.2             | 1.0000e-04
8     | 50        | 00:07:02     | 0.51            | 0.3             | 1.0000e-04
9     | 57        | 00:08:02     | 0.59            | 0.3             | 1.0000e-04
10    | 64        | 00:09:03     | 0.42            | 0.2             | 1.0000e-04
11    | 71        | 00:10:04     | 0.41            | 0.2             | 1.0000e-04
12    | 78        | 00:11:03     | 0.37            | 0.1             | 1.0000e-04
13    | 85        | 00:12:03     | 0.38            | 0.1             | 1.0000e-04
14    | 92        | 00:13:04     | 0.49            | 0.2             | 1.0000e-04
15    | 99        | 00:14:05     | 0.40            | 0.2             | 1.0000e-04
Fig. 2 Comparison of bounding box area with respect to aspect ratio
Fig. 3 Comparison of clustering box area with respect to aspect ratio
Once the training is done, the software detects the presence of a helmet, as shown in Fig. 4, highlighted by a blue grid box, while the absence of a helmet is detected as shown in Fig. 5, highlighted by a red grid box. The presence and absence of the object are thus both detected by the proposed method. Fig. 4 Helmet detection box in blue colour
Fig. 5 No helmet detection box in red colour
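The helmet / no-helmet outputs above are scored detections, and setting the threshold to any confidence level, as claimed for the proposed system, amounts to a simple filter over them. A minimal sketch follows; the detection tuples and their scores are invented for illustration.

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections whose confidence meets the chosen threshold.
    Each detection is a (label, confidence, bounding_box) tuple."""
    return [d for d in detections if d[1] >= conf_threshold]

detections = [
    ("helmet", 0.92, (34, 20, 80, 66)),       # drawn as a blue grid box
    ("no-helmet", 0.81, (120, 18, 168, 70)),  # drawn as a red grid box
    ("helmet", 0.35, (200, 25, 240, 60)),     # low confidence, discarded
]
kept = filter_detections(detections, conf_threshold=0.5)
assert [d[0] for d in kept] == ["helmet", "no-helmet"]
```

Raising the threshold trades recall for precision: a higher value keeps only the most certain boxes, which is useful when false alarms are costly.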
5 Conclusion The proposed work detects objects using You Only Look Once (YOLO); the localized objects are classified within the images, combined with a convolutional implementation of the sliding window. Hence, building an object detection algorithm using YOLO V2 is treated as important for applications such as crime investigation. In future, there is scope to explore more from the algorithmic point of view, developing and implementing more ideas, and upgrading the algorithm with real-time learning and object detection.
References 1. W. Hu, T. Tan, L. Wang, S. Maybank, A survey on visual surveillance of object motion and behaviors. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 34(3), 334–352 (2004) 2. A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Sys. 1097–1105 (2012) 3. R. Girshick, Fast R-CNN. Proc. IEEE Int. Conf. Comput. Vis. 1440–1448 (2015) 4. J. Ren, Y. Chen, L. Xin, J. Shi, B. Li, Y. Liu, Detecting and positioning of traffic incidents via video-based analysis of traffic states in a road segment. IET Intell. Trans. Syst. 10(6), 428–437 (2016) 5. H. Nam, B. Han, Learning multi-domain convolutional neural networks for visual tracking. Proc. IEEE Conf. Comput. Vis. Pattern Recogn. 4293–4302 (2016) 6. J. Kwon, K.M. Lee, Wang-Landau Monte Carlo-based tracking methods for abrupt motions. IEEE Trans. Pattern Anal. Mach. Intell. 35(4), 1011–1024 (2012) 7. V.E.M. Arceda, E.L. Riveros, Fast car crash detection in video. In XLIV Latin American Computer Conference (CLEI). (IEEE, 2018), pp. 632–637 8. J. Wang, S. Jiang, W. Song, Y. Yang, A comparative study of small object detection algorithms. In Chinese Control Conference (CCC). (IEEE, 2019), pp. 8507–8512
An Efficient IoT Technology Cloud-Based Pollution Monitoring System Harshit Srivastava, Kailash Bansal, Santos Kumar Das, and Santanu Sarkar
Abstract Air pollution is one of the major concerns in the world; in particular, some toxic gases such as CO2, NH3 and particulate matter may have dire impacts on human health when in excess. Temperature, humidity and wind speed are weather parameters that also have their own effects and influence other gases in the environment. This paper concerns the development of hardware that provides the concentration levels of significant gases, i.e., CO2, NH3, O2 and PM2.5, using MQ-series gas sensors, and the environment parameters, i.e., temperature, humidity, dew point and wind speed, in real time using a Raspberry Pi on an Internet of Things (IoT) platform. The data is stored in the Firebase database for real-time monitoring. The cloud-computing-based monitoring system with inbuilt Wi-Fi connectivity enables periodic analysis of different air pollutants and weather parameters to provide a general air quality index (AQI) on a real-time basis. In the case of undesired conditions, a notification alert message is sent to the user. Keywords Raspberry Pi · Firebase database · IoT · MQ-series gas sensors · AQI · Cloud computing
H. Srivastava (B) · K. Bansal · S. Kumar Das · S. Sarkar NIT Rourkela, Rourkela, Odisha 769008, India e-mail: [email protected] K. Bansal e-mail: [email protected] S. Kumar Das e-mail: [email protected] S. Sarkar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_11
1 Introduction The primary sources of air pollution are plumes and vehicular exhaust, the increasing rate of industrialization, the burning of fossil fuels, and the emission of CFCs from various households (air conditioners and refrigerators). This further degrades the air composition, making it unsuitable for breathing, with detrimental effects on human health. These effects can be both short and long term: short-term effects are temporary, viz. dizziness, headache, irritation, nausea, etc., while long-term effects are due to continuous exposure to polluted air, which can have adverse impacts on the kidneys, the liver, the individual's nervous system, mental and cardiac disorders, respiratory ailments, and so on. IoT is a term, coined by Kevin Ashton, that refers to a system connecting different devices, such as mechanical, digital and computing devices, through the Internet and accessing them over it. The devices then communicate with each other to transfer data without the need for computer-to-human or human-to-human interaction. The existing pollution monitoring systems are bulky and costly and provide the AQI on a general basis using a weighted-average calculation that is common for all individuals. Hence, there is a need for a methodology providing cost-effective, mobile and more individual-centric data analysis and alert notification services; an IoT-based pollution monitoring platform is the solution to the above-stated problem, and these are the key challenges in the design of this personalized pollution monitoring system. In the proposed model, we use a processing unit, i.e., the Raspberry Pi 3B model, an analog-to-digital converter, i.e., the ARPI600 module, MQ-series gas sensors, and a monitor for analysis and visualization of different air pollutants.
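The general AQI mentioned above is normally computed per pollutant by linear interpolation between concentration breakpoints. The sketch below uses the widely published US EPA PM2.5 (24-hour, µg/m³) breakpoints as an assumed example; the paper does not specify which scale its hardware applies, so treat the table as illustrative.

```python
# (C_lo, C_hi, I_lo, I_hi) breakpoints for PM2.5, per the US EPA scale
# (assumed here for illustration)
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 350.4, 301, 400),
    (350.5, 500.4, 401, 500),
]

def pm25_sub_index(c):
    """AQI sub-index for a PM2.5 reading, by linear interpolation
    within the breakpoint band that contains the concentration c."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    return None  # reading is outside the scale

assert pm25_sub_index(10.0) == 42          # falls in the "Good" band
assert 101 <= pm25_sub_index(40.0) <= 150  # "Unhealthy for sensitive groups"
```

The overall AQI is the maximum of the sub-indices over all measured pollutants, which is why the system reads several gases before reporting a single value.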
2 Related Work

Kim et al. [1] proposed an IoT-based pollution monitoring model for monitoring air pollutant levels such as ozone and particulate matter and analyzing them over an LTE network. The observed data is then compared with the data provided by the National Ambient Air Quality Monitoring Information System (NAMIS). Dhingra et al. [2] presented an air quality monitoring system consisting of an Arduino processing unit, a sensor array, a Wi-Fi module, and a server. It gathers air pollutant levels, transfers them to the cloud using Wi-Fi, and sends them to a server, where they are stored in a database; an Android application called IoT-Mobair was developed for real-time monitoring and analysis of the data. Huang et al. [3] proposed an algorithm for detecting the air exchange state in vehicles, sampling air quality parameters and analyzing them on a cloud IoT platform. The real-time analysis is done particularly on fine-grained dust particles for feature extraction and classification. Venkatanarayanan et al. [4] proposed a bicycle-mounted air pollution monitoring system for real-time monitoring, with mapping in an Android application for visualization. The
An Efficient IoT Technology Cloud-Based Pollution …
data is stored on the open IoT platform Thingspeak, and a fingerprint sensor is deployed for alerts in case of theft. Kumar et al. [5] proposed a system that uses a Raspberry Pi interfaced to an Arduino, which in turn interfaces with sensors for measuring SO2 and NO gases along with humidity and temperature. The system uses IBM Cloud Bluemix to upload the data obtained on the Pi to the cloud; it is a low-cost, low-power system for environment monitoring. Firdhous et al. [6] proposed an IoT-based indoor environment monitoring method to monitor the ozone (O3) concentration around a photocopy machine. The system samples data every five minutes, stores it in the cloud, and issues a warning message when the concentration exceeds the threshold limit. Parmar et al. [7] proposed a prototype air quality monitoring system using MQ7 and MQ135 sensors (MQ7 is a CO sensor; MQ135 detects NH3, CO2, etc.). The Nucleo F401RE microcontroller is used as the processing unit along with a Wi-Fi module, interfaced with a webpage built on the MEAN stack for visualization of JSON-formatted data.
3 The Layer Architecture Model

The proposed system design constitutes four layers, i.e., the sensing layer, the network layer, the processing layer, and the application layer, as depicted in Fig. 1. The sensing layer comprises a sensor array interfaced with the processing unit, which senses the air pollutants in real time and collects the data [8]. The network layer is the communication medium between the sensing layer and the processing layer. The processing layer uses a cloud server database to receive, store, and process the data based on big data

Fig. 1 Layer architecture of system model
analysis and is used for forecasting and prediction of the data. The application layer makes use of Android and Web applications to retrieve data from the cloud database, allowing end users to monitor the real-time data.
4 Hardware and Software Unit

4.1 Gas Sensors

MQ-Series Sensors: The MQ-series gas sensors [7], i.e., MQ-137 for NH3 and MQ-135 for CO2, as shown in Figs. 2 and 3, respectively, are used for monitoring air pollutants, and the air quality index is determined from the MQ-135 readings. The operating voltage and current of the sensors are 5 V and 40 mA, respectively.

Oxygen and Dust Sensor: KE-25, the oxygen sensor shown in Fig. 4, is an electrochemical gas sensor used to monitor the oxygen level in the environment. Its characteristics are: no external power required, long life span, no influence from other air pollutants, and no warm-up time required, with a stable output.

Fig. 2 CO2 sensor (MQ-135)
Fig. 3 NH3 sensor (MQ-137)
Fig. 4 O2 sensor (KE-25)
Fig. 5 PM sensor (GP2Y1010AU0F)
Fig. 6 DHT11 sensor
Fig. 7 Wind speed sensor (anemometer)
GP2Y1010AU0F, the dust sensor shown in Fig. 5, is based on optical sensing technology. It can distinguish smoke from dust.

Temperature and Humidity Sensor and Wind Speed Sensor: DHT11, shown in Fig. 6, is a sensor used to monitor the real-time temperature and humidity levels. The dew point is then calculated from these readings using the formula given below in Eq. (3). An anemometer, shown in Fig. 7, is a device to measure the wind speed in m/s. It outputs an analog voltage ranging from 0.4 to 2.0 V, corresponding to wind speeds from 0 to 32 m/s.
4.2 Hardware Unit

Raspberry Pi, ARPI600, and GPRS Module: An ARM-based Raspberry Pi 3 Model B [5], shown in Fig. 8, is used as the processing unit; no external hardware is required for network connectivity. It analyzes the converted digital data and sends it to the cloud database used for monitoring purposes. The ARPI600 module, shown in Fig. 9, is an analog-to-digital converter used for the gas sensors, which provide analog values that must be converted to digital form for processing in the
Fig. 8 Raspberry Pi 3B
Fig. 9 ARPI600 module
Pi. The GPRS module, shown in Fig. 10, provides a communication link between the PC and GSM systems. It requires a SIM card to activate a connection to the module. The alert notification is sent to a mobile phone using the same module.

Fig. 10 GPRS module
5 The Complete System Model—Methodology

The system model [2] comprises a hardware unit, a gateway, a cloud database, and a user interface, as shown in Fig. 11. The hardware unit consists of the MQ-series gas sensors, anemometer, etc., along with the Raspberry Pi as the processing unit, the A/D converter (ARPI600), and a power supply unit. The gateway provides the communication link between the hardware unit and the cloud server. The cloud database stores the data in real time, which is used for analysis and forecasting [9] (Fig. 12).
5.1 Air Quality Index

The AQI is the value defined by the Central Pollution Control Board (CPCB), obtained from the average pollutant concentrations. The higher the AQI value, the greater the associated health hazards. The sub-index (I_i) for a given concentration (C_P) of each pollutant can be represented as [10],
Fig. 11 System model diagram
Fig. 12 Experimental setup
I_i = [(I_HI − I_LO) / (B_HI − B_LO)] × (C_P − B_LO) + I_LO    (1)
where B_HI is the concentration breakpoint ≥ C_P, B_LO is the concentration breakpoint ≤ C_P, and I_HI and I_LO are the AQI values corresponding to B_HI and B_LO, respectively. The overall AQI is calculated as

AQI = Σ_{i=1}^{n} W_i × I_i    (2)
where W_i is the weight of each pollutant such that Σ W_i = 1 and n is the number of air pollutants.
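Equations (1) and (2) translate directly into code. In the sketch below, the breakpoint values in the example call are hypothetical placeholders, not taken from the CPCB tables:

```python
def sub_index(c_p, b_lo, b_hi, i_lo, i_hi):
    """Eq. (1): map a pollutant concentration C_P onto the AQI scale by
    linear interpolation between its breakpoints (B_LO, B_HI)."""
    return (i_hi - i_lo) * (c_p - b_lo) / (b_hi - b_lo) + i_lo

def overall_aqi(sub_indices, weights):
    """Eq. (2): weighted sum of the sub-indices; the weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * i for w, i in zip(weights, sub_indices))

# Hypothetical breakpoint band: concentration 45 in a 31-60 band mapping to AQI 51-100.
i1 = sub_index(45, 31, 60, 51, 100)
```

In practice, each pollutant's breakpoint row would be looked up from the CPCB tables before applying Eq. (1).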
5.2 Dew Point Calculation

The dew point (D.P.) is calculated from the temperature (T) and relative humidity (RH) as per the August–Roche–Magnus approximation,

D.P. = 243.04 × [ln(RH/100) + (17.625 × T)/(243.04 + T)] / [17.625 − ln(RH/100) − (17.625 × T)/(243.04 + T)]    (3)
5.3 Wind Speed Calculation

As discussed above, using a two-point linear formulation, the wind speed (V_w) can be calculated as,

V_w = 20.25 × V − 8.1    (4)
where V is the sensor output voltage.
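Equations (3) and (4) can be sketched as two small conversion functions (temperature in °C, RH in %, anemometer output in volts):

```python
import math

def dew_point(t_c, rh):
    """Eq. (3): August-Roche-Magnus dew point approximation."""
    gamma = math.log(rh / 100.0) + (17.625 * t_c) / (243.04 + t_c)
    return 243.04 * gamma / (17.625 - gamma)

def wind_speed(v_out):
    """Eq. (4): anemometer voltage (0.4-2.0 V) to wind speed in m/s."""
    return 20.25 * v_out - 8.1
```

As a sanity check, at 100% relative humidity the dew point equals the air temperature, and a 0.4 V reading maps to a wind speed of zero.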
6 Proposed Flowchart Model

The proposed model, shown in Fig. 13, depicts the overall working of the system. The hardware setup is done first; then each air pollutant level is obtained and monitored. Next, the AQI is calculated, and if any
Fig. 13 Flowchart diagram
air pollutant level crosses a certain threshold limit, an alert notification message is sent to the user's mobile [4]. Otherwise, the concentration of each pollutant is continuously stored in the Firebase cloud database.
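The flowchart's alert logic can be sketched as an edge-triggered loop: one notification when a level first crosses the threshold and one when it returns to normal. The `alert` and `store` callbacks below are hypothetical stand-ins for the GPRS SMS call and the Firebase write:

```python
NH3_THRESHOLD_PPM = 21  # threshold used in the experiments

def monitor(readings, threshold=NH3_THRESHOLD_PPM, alert=print, store=lambda r: None):
    """Store every reading; alert once on crossing the threshold and
    once on returning to normal (edge-triggered, not repeated)."""
    above = False
    for ppm in readings:
        store(ppm)
        if ppm > threshold and not above:
            above = True
            alert(f"ALERT: NH3 level {ppm} ppm exceeds {threshold} ppm")
        elif ppm <= threshold and above:
            above = False
            alert(f"NH3 level {ppm} ppm back to normal")
```

For example, the reading sequence 18, 20, 23, 25, 19 ppm would trigger exactly two notifications: one exceedance alert at 23 ppm and one back-to-normal message at 19 ppm.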
7 Results and Discussions

As shown in Fig. 12, the complete experimental setup was assembled, based on which the real-time sensor parameters were obtained, as shown in Fig. 14. The values were obtained in our campus environment. Figure 15 shows the data output stored in the Firebase database in packet form. When a pollutant level, here the NH3 level, crosses the threshold limit of 21 ppm, a notification message is sent to the mobile as an alert, and
Fig. 14 Real-time sensor reading output
when it returns to normal, a notification is sent for that as well, as shown in Fig. 16. Thus, the different pollutant parameters are monitored and stored in the database server for further analysis.
8 Conclusion and Future Work

In this paper, we have discussed the development of an air quality monitoring system using a Raspberry Pi on an IoT platform that monitors different air pollutants, providing a low-cost and efficient way of building the whole system. There is
Fig. 15 Firebase data output
Fig. 16 Notification message in mobile
a tradeoff between the cost and accuracy of the sensing system. Different gas parameters (CO2, NH3, O2, and particulate matter PM2.5) and weather parameters (temperature, humidity, dew point, and wind speed) are monitored and stored in the Firebase cloud database. A notification message is sent to the user's mobile when a value exceeds the threshold limit. Furthermore, machine learning and deep learning can be applied to the stored data for prediction and forecasting, so that upcoming environmental conditions can be estimated. A user-specific
Android application is to be developed for monitoring the air quality and providing navigation through Google Maps, using routing algorithms to suggest healthier paths for a specific individual.

Acknowledgements This work is financially supported by IMPRINT (Grant No. 7794/2016), a joint initiative of the Ministry of Human Resource Development and the Ministry of Housing and Urban Affairs, Government of India.
References

1. S.H. Kim, J.M. Jeong, M.T. Hwang, C.S. Kang, Development of an IoT-based atmospheric environment monitoring system, in 2017 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, pp. 861–863 (2017)
2. S. Dhingra, R.B. Madda, A.H. Gandomi, R. Patan, M. Daneshmand, Internet of things mobile-air pollution monitoring system (IoT-Mobair). IEEE Internet Things J. 6(3), 5577–5584 (2019)
3. J. Huang, N. Duan, P. Ji, C. Ma, F. Hu, Y. Ding, Y. Yu, Q. Zhou, W. Sun, A crowdsource-based sensing system for monitoring fine-grained air quality in urban environments. IEEE Internet Things J. 6(2), 3240–3247 (2019)
4. A. Venkatanarayanan, A. Vijayavel, A. Rajagopal, P. Nagaradjane, Design of sensor system for air pollution and human vital monitoring for connected cyclists. IET Commun. 13(19), 3181–3186 (2019)
5. S. Kumar, A. Jasuja, Air quality monitoring system based on IoT using Raspberry Pi, in 2017 International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, pp. 1341–1346 (2017)
6. M.F.M. Firdhous, B.H. Sudantha, P.M. Karunaratne, IoT enabled proactive indoor air quality monitoring system for sustainable health management, in 2017 2nd International Conference on Computing and Communications Technologies (ICCCT), Chennai, pp. 216–221 (2017)
7. G. Parmar, S. Lakhani, M.K. Chattopadhyay, An IoT based low cost air pollution monitoring system, in 2017 International Conference on Recent Innovations in Signal Processing and Embedded Systems (RISE), Bhopal, pp. 524–528 (2017)
8. J. Esquiagola, M. Manini, A. Aikawa, L. Yoshioka, M. Zuffo, Monitoring indoor air quality by using IoT technology, in 2018 IEEE XXV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), Lima, pp. 1–4 (2018)
9. K. Zheng, S. Zhao, Z. Yang, X. Xiong, W. Xiang, Design and implementation of LPWA-based air quality monitoring system. IEEE Access 4, 3238–3245 (2016)
10. https://www.indiaenvironmentportal.org.in/files/file/Air%20Quality%20Index.pdf
A Semi-automated Smart Image Processing Technique for Rice Grain Quality Analysis Jay Prakash Singh and Chittaranjan Pradhan
Abstract India is among the top producers of rice in the world. Rice is the staple food for the people of eastern and southern India. Therefore, measuring the quality of rice has become a necessity, but there are still only a limited number of viable and safe options that can be used for grading rice grains. This paper addresses this quality assessment problem using the reliable method of image processing. The technique allows us to estimate the dimensions of the rice grains and grade them accordingly. Previously, there have been numerous research efforts in rice grain quality assessment taking chalkiness, the opaque white part of the grain, into consideration; it is one of the most important characteristics when analyzing and grading rice grains. This paper calculates the dimensions, classifies the grains into different quality grades, and checks for chalkiness.

Keywords Quality · Grading · Image processing · Dimensions
J. P. Singh (B) · C. Pradhan
KIIT Deemed to be University, Bhubaneswar, India
e-mail: [email protected]
C. Pradhan
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_12

1 Introduction

Agriculture is the prime source of economy for many countries in the world. Rice, a product of the agriculture industry, has various benefits, such as being easily available and suitable for long-term storage. Rice also has numerous by-products that are useful to humans. In India, the major rice-producing state is West Bengal, which produces 146.05 lakh tons of rice, closely followed by Uttar Pradesh with 140.22 lakh tons; other states also produce rice in lakhs of tons. A lot of improvement and advancement has occurred in the agriculture industry with the emergence of automated machinery [1]. Such huge quantities of rice need to be examined and graded to prevent substandard rice from reaching the consumers. The states are also competing among themselves to produce better-quality rice to help their economies. For all these reasons, it is clear that a quick and efficient quality assessment needs to be performed on rice grains [2]; this article is therefore based on the quality assessment of rice grains. Earlier, a lot of physical labor and time was needed for quality testing, but recently, with the onset of automation and efficient technologies, one can reduce the workload and test quality in a much shorter time [3]. Image processing is one such widely used technology. In our experimentation, we first scanned the grains and then applied the image processing technique. Second, we calculated the length, width, and area and then categorized the grains into different grades. Finally, if chalkiness was found in a grain, that particular grain was completely discarded. This method is not only efficient but also time-saving. In this paper, we have performed quality assessment, grading, and chalkiness detection using image processing, taking various attributes of the grain into consideration, such as length, width, area, and chalkiness.
2 Related Work

Many researchers have contributed work toward analyzing rice grain quality. Computer vision techniques have been used by Kaur [4] to classify rice based on size, i.e., full, medium, and half. Neelamegam et al. [5] developed a neural network model for the classification of rice grains. A quality assessment of Oryza sativa rice based on its size is presented in [6, 7], where the authors used machine vision techniques to assess rice quality. Aulakh et al. [8] proposed a method to grade rice grains based on their size. They used a flatbed scanner (FBS) and a high-end camera to capture rice grain images, applied morphological operations after converting the RGB image to a binary image, and extracted features based on connected components. Maheshwari et al. [9] provided a machine vision technique to grade rice grains as Average, Small, and Long. Verma [10] proposed a watershed method to grade rice as Discolored, Chalky, Damaged, and Broken; for classification of the rice grains, the relevant parameters are then forwarded to a neural network as input data. Guang-rong [11] provided an image processing method to recognize the color of rice grains using color models such as the HSI and RGB color models. Hu et al. [12] proposed a neural network approach to distinguish whole rice from broken rice grains based on morphological characteristics. Devi et al. [13] proposed a machine vision technique to analyze rice quality. They used Canny edge detection for edge preservation and, to analyze the quality, calculated the average of the geometric features length, width, and diagonal of individual rice grains.
3 Material and Methodology

The research was carried out in five steps. In the first step, images of the rice grains were taken. Second, these images were preprocessed and smoothened to remove various external factors. Third, we performed edge detection to reduce error. Fourth, data regarding length, width, and area were obtained; to obtain the area of a rice grain, we counted the number of pixels present in the bounded region. In the fifth step, the rice grains were checked for the presence of chalkiness. The process diagram is shown in Fig. 1.
3.1 Image Processing and Analysis

Using a Nikon D3300 camera, we took images of the rice grains against a black background under proper lighting conditions. The obtained images were saved in JPEG format, after which image processing was applied. Figure 2 shows the scanned images.
3.1.1 Pre-processing and Smoothing
The images are now filtered to remove external factors such as noise and dust. For removing noise, a Gaussian filter was used; the rice image after Gaussian filter application is shown in Fig. 3. The mathematical equation of the Gaussian filter is as follows:

G(x) = (1 / (σ√(2π))) × e^(−x² / (2σ²))    (1)
where σ is the standard deviation (the result becomes more blurred as σ is increased) and x is the distance from the center of the distribution, which is centered at x = 0.
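Equation (1) is the one-dimensional Gaussian; the smoothing kernel actually convolved with the image is its discretized two-dimensional form, obtainable as the outer product of the sampled 1-D kernel with itself. A minimal sketch (in practice a library routine such as OpenCV's `GaussianBlur` would be used):

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """Sample Eq. (1) at integer offsets and normalize so the weights sum to 1."""
    vals = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def gaussian_kernel_2d(sigma, radius):
    """Separable 2-D kernel: outer product of the 1-D kernel with itself."""
    k = gaussian_kernel_1d(sigma, radius)
    return [[a * b for b in k] for a in k]

k = gaussian_kernel_2d(sigma=1.0, radius=2)  # 5x5 smoothing kernel
```

Because the 1-D weights sum to 1, the 2-D kernel also sums to 1, so smoothing does not change the overall image brightness.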
3.1.2 Edge Detection
To detect and preserve edges and to minimize localization error, the Canny edge detection technique is used. It is applied on grayscale images, and even weak edges can be detected with it. Edge detection and conservation are very important for obtaining high-accuracy results. Figure 4 shows an image after applying the Canny edge detection technique.
Fig. 1 Process diagram
3.1.3 Morphological Features
From the rice sample images, the following attributes were obtained: Area—the total count of the extracted pixels from the binary image of the rice grain. Length (l)—a rectangular bounding box surrounds the rice grain; the length of this box comprising the grain gives the grain length.
Fig. 2 Original input image
Fig. 3 Image after Gaussian filter
Fig. 4 Image after canny edge detection
Fig. 5 Image after filling
Fig. 6 Area of rice grains present in Sample 1
Width (w)—the width of the aforementioned rectangular box gives the width of the rice grain.
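The three morphological features can be read directly off a binary mask: the area is the foreground pixel count, and the length and width are the sides of the axis-aligned bounding box. A minimal sketch (the mask encoding as nested 0/1 lists is an assumption for illustration, not taken from the paper):

```python
def grain_features(mask):
    """Return (length, width, area) of the single grain in a binary mask.
    area = foreground pixel count; length/width = bounding-box sides."""
    pixels = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    area = len(pixels)
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    length = max(cols) - min(cols) + 1  # horizontal extent of the box
    width = max(rows) - min(rows) + 1   # vertical extent of the box
    return length, width, area

# A 2x4 grain blob inside a 4x6 image.
mask = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0]]
features = grain_features(mask)
```

With connected-component labeling, the same function can be applied per grain to grade a whole sample.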
3.2 Rice Chalkiness

Chalkiness degrades rice quality and is undesirable in rice grains. Along with genetic factors, high temperature is also responsible for chalkiness in rice. In a chalky rice grain, the brightest pixels form the chalky part, while the darker area is the field of endosperm. With the naked eye, it is difficult to accurately identify the chalky part, as the pixels depicting chalkiness are irregularly distributed; this leads different individuals to different conclusions when detecting chalkiness. With the help of image processing, the image's brightness can be quantized into 256 grayscale levels, after which a suitable gray threshold value can be set, from which the chalky area can easily and automatically be filtered out of the rice grain after filling the image, as shown in Fig. 5. The key step is that the images being tested are automatically identified and the chalky part distilled out. Accordingly, the degree of chalkiness and the chalky grain ratio were calculated using the two formulas stated below.

Degree of chalkiness (%) = (total chalky area / total rice area) × 100

Chalky grain ratio (%) = (number of grains having chalkiness / total number of grains) × 100
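The two formulas above reduce to pixel and grain counts once a gray threshold is fixed. In the sketch below, each grain is represented as a list of its grayscale pixel values, and the threshold of 200 is a hypothetical placeholder, not a value from the paper:

```python
CHALKY_THRESHOLD = 200  # hypothetical gray threshold separating chalky pixels

def chalkiness_stats(grains, threshold=CHALKY_THRESHOLD):
    """grains: list of grains, each a list of grayscale pixel values (0-255).
    Returns (degree of chalkiness %, chalky grain ratio %)."""
    chalky_area = sum(1 for g in grains for p in g if p > threshold)
    rice_area = sum(len(g) for g in grains)
    chalky_grains = sum(1 for g in grains if any(p > threshold for p in g))
    degree = 100.0 * chalky_area / rice_area
    ratio = 100.0 * chalky_grains / len(grains)
    return degree, ratio
```

For example, two grains of four pixels each, one containing two bright pixels, give a degree of chalkiness of 25% and a chalky grain ratio of 50%.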
3.3 Rice Detection

To detect rice grains in the sample pictures captured with the Nikon D3300 camera, pixels belonging to the same object are first grouped based on their spatial similarity; the picture is then accurately divided into independent objects. These objects serve as the primary units, and various diagnostic information is then used to further classify the rice grains. To classify the rice grains correctly, it is essential to accurately divide the images into objects of different sizes, which is an important factor for the entire method. For detection, a gray threshold value is considered, whose spectrum and location distribution information characterize the individual pixels, yielding the rice grains to be analyzed. If all the distances between a pixel and the gray center of the current object are less than the given threshold value, the rice grain can be distinguished from the background. Owing to the minute differences in grain shape and size, the threshold values of noise and background can also be used in the algorithm to control the accuracy of the division.
3.4 Chalky Analysis Algorithm

The rice grains were detected and then analyzed for chalkiness. The algorithm to detect chalkiness in a rice grain is given below.
public void Analysis() {
    int Total_Num = Num_Rice = Num_WRice = Num_WPixel = 0;
    AvgL_Rice = AvgW_Rice = AvgL_WRice = AvgW_WRice = 0;
    MaxL_Rice = MaxW_Rice = MaxL_WRice = MaxW_WRice = -1;
    MinL_Rice = MinW_Rice = MinL_WRice = MinW_WRice = -1;
    boolean Is_White = false;
    boolean Is_RiceWhite = false;
    float[] PixelValue = new float[3];
    for each (obj in entities) {
        obj.Num_White = 0;
        if (obj.back == "B_Back") continue;        // skip background objects
        Is_RiceWhite = false;
        Num_Rice++;
        ArrayList pixel = obj.TotalPixels;
        Total_Num += pixel.Count;
        for each (Pt pt in pixel) {
            Is_White = true;
            GetPixelValue(pt.X, pt.Y, ref PixelValue);
            for (int i = 0; i < 3; i++)            // test each channel against the gray threshold
                if (PixelValue[i] < GrayThreshold) Is_White = false;
            if (Is_White) {                         // chalky (bright) pixel found
                obj.Num_White++;
                Num_WPixel++;
                Is_RiceWhite = true;
            }
        }
        if (Is_RiceWhite) Num_WRice++;              // grain contains a chalky region
    }
}

7 | Resting electrocardiographic results (restecg) | 0 = normal; 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV); 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria
8 | Maximum heart rate achieved (thalach) | Number
9 | Exercise-induced angina (exang) | 1 = yes; 0 = no
10 | Oldpeak | ST depression induced by exercise relative to rest
11 | The slope of the peak exercise ST segment (slope) | 1 = upsloping, 2 = flat, 3 = downsloping
12 | Number of major vessels (0–3) colored by fluoroscopy (ca) | Number
13 | Thalassemia (thal) | 3 = normal; 6 = fixed defect; 7 = reversable defect
14 | Diagnosis of heart disease (angiographic disease status, target) | 0 = < 50% diameter narrowing; 1 = > 50% diameter narrowing
4 Data Preprocessing

This dataset is free from any missing values; the only preprocessing technique applied is normalization. Normalization is a technique to rescale attribute values to fit a specific range, here between 0 and 1. Some frequently used normalization techniques are min-max, Z-score, and decimal scaling. A sample of the preprocessed dataset is shown in Table 3.
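Min-max normalization maps each attribute value x to (x − min)/(max − min), which is how columns like "Age" in Table 3 end up in [0, 1]. A minimal sketch applied to one attribute column (the example ages are illustrative, not rows from the dataset):

```python
def min_max_normalize(column):
    """Rescale a list of attribute values to the range [0, 1]."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

normalized_ages = min_max_normalize([29, 52, 77])
```

The minimum of the column maps to 0, the maximum to 1, and every other value to its proportional position in between.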
Table 3 A sample of the preprocessed heart disease dataset

Sl | Age | Sex | cp | trestbps | chol | fbs | restecg | thalach | exang | Oldpeak | Slope | ca | thal | Target
0 | 0.82 | 1 | 0.67 | 0.75 | 0.298 | 0 | 0.5 | 0.861 | 0 | 0.258 | 1 | 0 | 0.67 | 1
1 | 0.48 | 1 | 0.67 | 0.86 | 0.353 | 1 | 0.5 | 0.802 | 0 | 0.081 | 1 | 0 | 1 | 1
2 | 0.53 | 1 | 0.33 | 0.6 | 0.466 | 0 | 0.5 | 0.856 | 0 | 0 | 1 | 0 | 1 | 1
3 | 0.73 | 0 | 0.33 | 0.7 | 0.521 | 0 | 0 | 0.757 | 0 | 0.209 | 0.5 | 0 | 0.67 | 1
4 | 0.74 | 1 | 0 | 0.7 | 0.34 | 0 | 0.5 | 0.733 | 0 | 0.065 | 0.5 | 0 | 0.33 | 1
5 | 0.74 | 0 | 0 | 0.6 | 0.628 | 0 | 0.5 | 0.807 | 1 | 0.097 | 1 | 0 | 0.67 | 1
6 | 0.73 | 1 | 0.33 | 0.6 | 0.418 | 0 | 0.5 | 0.881 | 0 | 0.129 | 1 | 0 | 0.67 | 1
7 | 0.57 | 0 | 0.33 | 0.65 | 0.362 | 0 | 0 | 0.851 | 0 | 0.223 | 1 | 0 | 0.67 | 1
8 | 0.68 | 1 | 0.67 | 0.65 | 0.443 | 0 | 0.5 | 0.926 | 0 | 0.565 | 0 | 0 | 0.67 | 1
9 | 0.74 | 1 | 1 | 0.725 | 0.413 | 1 | 0 | 0.743 | 0 | 0.371 | 0 | 0 | 0.33 | 1
Analysis and Prediction of Cardiovascular …
B. K. Mengiste et al.
Table 4 Comparison of results for different classifiers (first three columns: accuracy, train-test split method; last three: accuracy, cross-validation method)

Classifiers | 90:10 | 80:20 | 70:30 | tenfold | fivefold | fourfold
SVM | 0.77 | 0.80 | 0.82 | 0.79 | 0.81 | 0.80
NB | 0.71 | 0.77 | 0.78 | 0.79 | 0.79 | 0.79
DT | 0.81 | 0.77 | 0.76 | 0.76 | 0.77 | 0.76
k-NN | 0.74 | 0.70 | 0.71 | 0.66 | 0.66 | 0.66
RF | 0.81 | 0.77 | 0.78 | 0.77 | 0.75 | 0.73
LR | 0.77 | 0.82 | 0.82 | 0.82 | 0.81 | 0.81
4.1 Hardware and Software Requirements

The hardware and software used for the experiments are listed below: • Hardware components: RAM 8 GB, hard disk 1 TB, processor speed 2.40 GHz. • Software components: Windows 10 OS, Python version 3.7.1.
5 Results and Discussion

We have used two methods: the train-test split and k-fold cross-validation. In the train-test split method, the dataset is divided in ratios of 90:10, 80:20, and 75:25, respectively. Various ML algorithms, namely k-NN, DT, random forest, SVM, NB, and logistic regression, were used. In the cross-validation method, the values of k used are 10, 5, and 4, respectively. The results of both methods are shown in Table 4. Logistic regression gives the best accuracy of 82% for the 80:20 and 75:25 splits as well as for tenfold cross-validation (as the data is evenly distributed). SVM also gives the best accuracy of 82% for the 75:25 split.
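The two evaluation protocols can be sketched in a few lines. The split and fold logic below is a plain-Python stand-in for the standard library routines (e.g., scikit-learn's `train_test_split` and `cross_val_score`) that would typically be used, with a trivial accuracy function in place of the six classifiers:

```python
import random

def train_test_split(data, test_ratio, seed=0):
    """Shuffle indices reproducibly and cut off the last test_ratio fraction."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(data) * (1 - test_ratio))
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]

def k_fold_indices(n, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for j in range(k):
        test = folds[j]
        train = [i for f in folds[:j] + folds[j + 1:] for i in f]
        yield train, test

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Each sample lands in exactly one test fold, so the k per-fold accuracies are averaged to obtain the cross-validation figures reported in Table 4.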
6 Conclusion

In this paper, we used a heart disease patient dataset and six kinds of supervised learning classification techniques, namely k-NN, DT, SVM, logistic regression, NB, and random forest, and compared the accuracy of each classifier using the train-test split and k-fold cross-validation methods with different ratios and values of k. The dataset contains 303 samples, 13 independent attributes, and one dependent (target) attribute with two classes, 0 and 1, where 0 indicates that a person has heart disease and 1 indicates no heart disease. Under the train-test split method, the logistic regression model gives the best accuracy of 82% for the 80:20 and 75:25 splits. Similarly,
Analysis and Prediction of Cardiovascular …
141
SVM also gives an accuracy of 82% for the 75:25 split. On the other hand, the tenfold cross-validation method using the LR model gives an accuracy of 82%. In general, LR and SVM have better accuracy than the other classifiers. Future work includes validating the approach on real and large heart disease datasets. Heart disease mostly occurs above the age of 45, so real datasets will also be analyzed in terms of age and sex. The techniques can also be extended to semi-supervised learning methods as used in other applications [12].
References

1. A. Methaila, P. Kansal, H. Arya, Early heart disease prediction using data mining techniques. Comput. Sci. Inf. Technol. J. 7, 53–59 (2014)
2. T. Mythili, D. Mukherji, N. Padalia, A. Naidu, A heart disease prediction model using SVM-decision trees-logistic regression (SDL). Int. J. Comput. Appl. 68(16), 11–15 (2013)
3. V. Chaurasia, S. Pal, Data mining approach to detect heart diseases. Int. J. Adv. Comput. Sci. Inf. Technol. (IJACSIT) 2, 56–66 (2014)
4. M. Abdar, S. Kalhori, T. Sutikno, I.M.I. Subroto, G. Arji, Comparing performance of data mining algorithms in prediction heart diseases. Int. J. Electr. Comput. Eng. 6, 1569–1576 (2015)
5. K. Saxena, R. Sharma, Efficient heart disease prediction system using decision tree, in International Conference on Computing, Communication & Automation (IEEE, 2015), pp. 72–77
6. B.D. Kanchan, M.M. Kishor, Study of machine learning algorithms for special disease prediction using principal of component analysis, in International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), pp. 5–10 (2017)
7. N. Priyanka, P. Ravikumar, Usage of data mining techniques in predicting the heart diseases—Naive Bayes and decision tree, in International Conference on Circuit, Power and Computing Technologies (ICCPCT) (IEEE, 2017), pp. 1–7
8. M. Jabbar, S. Samreen, Heart disease prediction system based on hidden naïve Bayes classifier, in International Conference on Circuits, Controls, Communications and Computing (I4C) (IEEE, 2018), pp. 1–5
9. C. Sowmiya, P. Sumitra, Analytical study of heart disease diagnosis using classification techniques, in IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS) (IEEE, 2017), pp. 1–5
10. N.C. Reddy, N.S. Nee, L.Z. Min, Classification and feature selection approaches by machine learning techniques: heart disease prediction. Int. J. Innovative Comput. 9(1), 39–46 (2019)
11. M. Eskandari, Z. Hassani, Intelligent application for heart disease detection using hybrid optimization algorithm. J. Algorithms Comput. 51(1), 15–27 (2019)
12. J.K. Rout, A. Dalmia, K.K.R. Choo, S. Bakshi, S.K. Jena, Revisiting semi-supervised learning for online deceptive review detection. IEEE Access 5, 1319–1327 (2017)
Fundus Image-Based Macular Edema Detection Using Convolutional Neural Network C. Aravindan, Vedang Sharma, A. Thaarik Ahamed, Mudit Yadav, and Sharath Chandran
Abstract Macular disorders are a set of diseases that damage the macula in retina. They give rise to distorted vision and in some cases may even lead to visual impairment. Macular edema (ME) is one of the most crucial types among macular disorders and it is caused by an accumulation of fluid under the macula. Various techniques have been developed till date to detect macular edema; however, the detection of macular edema alone is not enough. It is also essential to give the correct diagnosis and appropriate medical treatments based on the severity of edema. In this paper, an automated system for the detection of macular edema from fundus images has been discussed. The system uses features of convolutional neural networks (CNN), a sophisticated deep learning module for classification of exudates. The information extracted from the following input of a fundus image is distinguished with the training data provided on the CNN module to grade the macular edema. The edema, at an early stage, is categorized as a “mild case of ME,” a slightly advanced stage as a “moderate case of ME” and at a highly advanced stage as a “severe case of ME”; based on these categorizations, appropriate medications and treatments are suggested to the patients. Keywords Convolutional neural network (CNN) · Macular edema · Optimizer · Learning rate · Epoch C. Aravindan (B) · V. Sharma · A. Thaarik Ahamed · M. Yadav · S. Chandran Department of ECE, SRM Institute of Science and Technology, Ramapuram Campus, Chennai, India e-mail: [email protected] V. Sharma e-mail: [email protected] A. Thaarik Ahamed e-mail: [email protected] M. Yadav e-mail: [email protected] S. Chandran e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_14
C. Aravindan et al.
1 Introduction

Macular edema, a disorder caused by the accumulation of fluid underneath the macula—the most sensitive tissue of the retina—causes the macula to swell and thicken, resulting in distorted vision. This is a serious problem, since it hinders day-to-day activities such as driving, reading, and cycling. Hence, it is crucial to detect macular edema at the earliest stage possible, since the damage caused can become irreversible if left untreated; it has been estimated to be one of the leading causes of permanent blindness. To determine the severity level, image processing using a convolutional neural network (CNN) is performed for the detection. A CNN is a deep learning model capable of extracting feature information using a series of convolutional layers. These layers perform convolution operations across the three dimensions (length, breadth, and height) of an image to estimate the feature information. Subsequent layers repeat this operation to progressively improve the accuracy of the estimated feature information until it closely resembles the actual feature information. After extraction, the information is compared with the feature information of the supplied training data to determine the severity level of the edema, and a diagnosis of the diseases that could have caused the formation of the macular edema, along with possible treatments, is suggested for the patient. The diagnosis check has been performed on a dataset of 151 images, and the numbers of mild, moderate, and severe cases have been tabulated. The remainder of this paper discusses the system architecture, the methodology, the simulated results, and an overall summary of macular edema detection by CNN.
2 Related Work

Macular edema has several causes, varying with older age, diabetes [1], genetic disorders and surgery; left untreated, it may lead to severe eye damage and eventually to blindness. Therefore, it is crucial to detect macular edema at an early stage. Various techniques have been developed for the detection of macular edema [2, 3]. However, the methods developed so far are limited in that they cannot efficiently determine the severity level of macular edema, which is vital for recommending treatment to the patient concerned, since different severity levels require different treatment methods. Syed et al. [4] introduced an automated system for the detection of the macula using robust macula localization. They used support vector machines along with a minimum distance classifier for accurate localization of the fovea. The algorithm achieved an average accuracy of 96.1%. Vahadane et al. [5] worked on the detection of diabetic macular edema in optical coherence tomography scans using a patch-based deep learning network; the training data for hard exudates consisted of 3651 positive samples and 12,367 negative samples. They achieved an overall F1-score of 92.8%. Sengar et al. [6] developed an automated system for the detection of diabetic macular edema in retinal images using a region-based method. In this method, the center of the macula is detected independently of the optic disk location; the model was trained using 100 images of the MESSIDOR database and achieved an overall accuracy of 80–90% for different cases.

Fundus Image-Based Macular …
3 System Architecture

The CNN methodology performs two main procedures: the extraction of feature information for comparison [7] and the classification of the fundus image of an eye. A CNN is a deep learning algorithm that is used to categorize the supplied input data into three or more classes. The algorithm is provided with training data consisting of examples of all three categories. The key characteristics are those parameters for which all the examples belonging to a category contain the same quantifiable value; these defining characteristics are used to distinguish between objects. In the case of macular edema, the CNN module [8] has been trained to identify the severity level of the disease. This is achieved by supplying the feature information extracted from the input data to the module; the marker used for identifying a diseased eye is the presence of a circular swollen region in the macula. If the classifier identifies a swollen region, the input image is categorized as "diseased eye," otherwise as "healthy eye." Upon identification of a diseased eye, further classification is performed to determine the severity level of the edema as shown in Fig. 1, which is described under Sect. 4.1 (edema severity level classification).
4 CNN Operation

In this module, a total of ten layers are in use. The first layer, the image input layer, holds the supplied input images and passes the processed data matrix to the second layer, the convolutional layer. This layer has 20 convolutional filters, each with a mask of 5 × 5 in dimension. The convolution mask slides across the entire image so that each pixel is covered, and convolves with each pixel, resulting in the formation of a feature map of the eye image. The feature map information is then sent to the third layer, the rectified linear unit (ReLU) layer. This activation function adds nonlinearity to the system by removing all the negative activations present in the feature map, which also speeds up the computation of the feature map. After the ReLU layer, control passes to the fourth layer, the max pooling layer. Max pooling is performed by applying a max filter to the non-overlapping regions of the input data. This down-samples the input data and helps in using the most important, or highlighted, features of the eye image for further operation. Thus, the macular region, which becomes the highlighted region, is used for further operations. The fifth, sixth and seventh layers consist of an additional convolutional layer, ReLU layer and max pooling layer, respectively. The eighth layer, the fully connected layer, bridges an edge-to-edge connection between all the previous layers of the system. It takes the results obtained by the operation of all the previous layers and flattens them into a single vector of values, each of which is utilized later in the classification operation. The macular region of the image is thus converted into a single vector by the fully connected layer, followed by the ninth layer, the softmax layer. The softmax layer performs a squashing function limiting the output values to the range 0 to 1, which enables the output to be interpreted as a probability. The final comparison operation is performed by the tenth layer, the output classification layer.

Fig. 1 System architecture
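The ReLU and max pooling steps described above can be sketched numerically; the 4 × 4 feature map below is illustrative and not taken from the paper's data.

```python
import numpy as np

# A small illustrative feature map (values are made up for the example).
fmap = np.array([[ 1., -2.,  3.,  0.],
                 [-1.,  5., -4.,  2.],
                 [ 0.,  1.,  2., -3.],
                 [ 4., -1.,  0.,  1.]])

relu = np.maximum(fmap, 0.0)  # ReLU: remove all negative activations

# 2x2 max pooling over non-overlapping regions down-samples the map,
# keeping only the strongest activation in each region.
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[5. 3.]
               #  [4. 2.]]
```

Each 2 × 2 block of the rectified map is replaced by its maximum, halving each spatial dimension, which is the down-sampling effect described in the text.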
4.1 Edema Severity Level Classification

The output classification layer calculates the cross-entropy loss, i.e., it takes the feature map information of the input eye image obtained from the previous layers and
compares it to the feature information map for each case: mild, moderate and severe. A few sample images were provided as test cases for each of the three categories during the training phase of the system, and this training data develops the feature information map for each case. The probability value for each case, namely mild, moderate and severe, is determined by comparing the feature information map of the input image to the feature map of each category. The category for which the probability function has the maximum value is the category to which the given input image belongs. For instance, if the probability values for the mild, moderate and severe cases are 0.35, 0.69 and 0.81, respectively, then the macular edema is categorized as severe, since that class has the highest probability value. A dataset of all the eye images to be checked is created; this repository contains a complete set of eye images of those patients who have been checked for macular edema. The images are then supplied sequentially to the CNN module to determine the severity level of edema in all patients and to generate a medical report for each of them with the appropriate treatment and suggestions.
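The grading decision described above is an argmax over the per-class probability values; the numbers below are the ones quoted in the text (note that a real softmax output would sum to 1, so the quoted values are illustrative).

```python
# Severity grading as an argmax over per-class probability values.
probs = {"mild": 0.35, "moderate": 0.69, "severe": 0.81}
severity = max(probs, key=probs.get)
print(severity)  # severe
```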
4.2 Optimizer

An optimizer is an algorithm that controls the various attributes involved in the learning operation of the neural network, such as the learning rate, the total number of epochs and the number of iterations per second. The optimizer sets appropriate values for all these attributes, and the series of convolutional layers performs repeated convolution operations to arrive at a better estimate at the end of each iteration. These estimates increase in accuracy until the estimated feature information closely resembles the actual feature information of the input image. For macular edema detection, the ADAM optimizer is utilized since it can adapt the learning rate for each parameter individually, choosing a suitable learning rate for each one. This ability gives the ADAM optimizer a significant advantage in achieving high accuracy within a relatively short time. The process of feature information extraction happens through several layers as shown in Table 1.

Table 1 Layers of CNN model and its parameters

Model                     Convolution layer 1    Convolution layer 2    FCL
Two convolution layers    32 × 3 × 20 ReLU       32 × 3 × 20 ReLU       Fully connected layer (FCL)
●                         Batch normalization    Batch normalization    Dropout (0.75)
●                         –                      Dropout (0.5)          Softmax
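Assuming a Keras implementation (the paper does not name a framework), the ten-layer stack of Sect. 4 could be sketched as below. The 20 filters with a 5 × 5 mask, the ADAM optimizer and the three output classes come from the text; the 128 × 128 input size and the padding choice are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of the ten-layer stack: input, conv, ReLU, max pool, conv, ReLU,
# max pool, fully connected, softmax, and the classification output layer
# (the last is realized by the cross-entropy loss attached at compile time).
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # 1: image input layer (assumed size)
    layers.Conv2D(20, (5, 5), padding="same"),  # 2: 20 filters, 5 x 5 mask
    layers.Activation("relu"),                  # 3: ReLU layer
    layers.MaxPooling2D((2, 2)),                # 4: max pooling layer
    layers.Conv2D(20, (5, 5), padding="same"),  # 5: second convolutional layer
    layers.Activation("relu"),                  # 6: second ReLU layer
    layers.MaxPooling2D((2, 2)),                # 7: second max pooling layer
    layers.Flatten(),                           #    (counted as part of the FC layer)
    layers.Dense(3),                            # 8: fully connected layer, 3 classes
    layers.Activation("softmax"),               # 9: softmax layer
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])             # 10: classification (cross-entropy)
```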
5 Methodology

This chapter describes the flow of the system's work, with the fundus image in Fig. 2 supplied as input to the deep learning module. Preprocessing is the initial step toward improving the quality of the image and extracting the essential features needed to detect the presence of macular edema precisely. This stage involves processes such as adaptive thresholding and filtering. The input image first undergoes conversion of the original RGB image into the corresponding red, blue and green color channels (Fig. 3). This is a vital preprocessing step because the channels carry different amounts of information, and the channel with the highest amount of information is utilized. A suitable color channel is selected, and the corresponding channel image is converted into a grayscale image, then darkened and filtered. Figure 4a shows the darkened image, whose improved quality allows the feature information to be extracted easily. Figure 4b shows the filtered image with the background removed; the background is removed to identify the macular region in the eye. Figure 4c displays the vessel image of the eye, from which both the background and the optic disk have been removed. The optic disk is removed so that the vessels can be examined in isolation for any swelling due to fluid accumulation. The segmentation operation [9] is performed on the obtained vessel image to group and identify the region where macular edema is present. In Fig. 5a, the segmented image of the edema region is distinctly visible. Figure 5b shows the final labeled image, which is used to distinctly mark and identify the edema region if present and serves as an important input to the final comparison with the training data. Figure 5c shows the final outlined original image highlighting the diseased area in the macula.

Fig. 2 Original input image
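The channel-selection step can be sketched in numpy (an OpenCV pipeline would use cv2.split and cv2.adaptiveThreshold for the same stages). The variance criterion and the mean threshold below are simplifying assumptions, since the paper does not state how the "highest amount of information" is measured.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an RGB fundus image (random 64 x 64 x 3 array).
rgb = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Split into the three color channels, as in Fig. 3.
channels = {"red": rgb[..., 0], "green": rgb[..., 1], "blue": rgb[..., 2]}

# Pick the channel carrying the most information; per-channel variance is an
# assumed proxy for the paper's unstated selection criterion.
best = max(channels, key=lambda name: channels[name].astype(float).var())
gray = channels[best].astype(float)

# Simple mean threshold as a stand-in for the adaptive thresholding stage.
mask = (gray > gray.mean()).astype(np.uint8)
```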
Fig. 3 Conversion of RGB image into corresponding red, blue and green color channels
a
b
c
Fig. 4 a Darkened and filtered grayscale image. b Background removed eye image. c Vessel image of eye
a
b
c
Fig. 5 a Segmented image used for comparison with training data. b Labeled eye image used for the final classification. c Outlined original image
6 Simulated Results

Using the deep learning algorithm, the images have been analyzed and processed to determine accurate information about the features present in the input. The obtained feature information has then been compared with the training data to determine the severity level of macular edema for each image. Figure 6 shows the graphs depicting the accuracy and loss functions as a function of iteration. With each iteration, the loss decreases, eventually approaching zero in the final training cycle, while the accuracy increases, reaching its maximum value in the final training cycle. This signifies that complete and accurate feature information about the input image has been obtained; this feature information has subsequently been compared with the feature information in the training data to determine the severity level of the edema.
6.1 Simulated System Parameters

The various parameters involved in the deep learning module, and the values obtained during the feature information extraction process, are shown in Table 2.
Fig. 6 Graphs of accuracy and loss functions as a function of time
Table 2 Parameters and their corresponding values

Parameters              Values
Elapsed time            5 s
Total epochs            15
Iterations per second   1
Maximum iterations      15
Learning rate           0.0001
Elapsed time is the total time taken by the deep learning module from the instant the input image is supplied until the severity level of the edema is determined. The mini-batch accuracy parameter reported during training corresponds to the accuracy of the particular mini-batch at the given iteration; it is not a running average over iterations. It starts at some random value in the initial iteration and reaches an accuracy of 100% at the end of the final iteration. The total epochs parameter refers to the total number of epochs for which the program has run to successfully determine the severity level. The iterations per second parameter gives the number of iterations performed per second. The learning rate defines the magnitude by which the estimate of the feature information is changed at the end of each iteration to arrive at a better estimate in subsequent iterations. A small learning rate can result in slow operation of the CNN module, whereas a large learning rate may prevent the module from arriving at a better estimate; hence, an optimum learning rate needs to be maintained. The maximum iterations parameter gives the maximum permissible number of iterations for which the program is allowed to run. If detection does not occur within this number of iterations, it is concluded that the module is not operating to the required specifications, and the other parameter values need to be adjusted to obtain the desired performance from the CNN module.
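One Adam update step, written out to show the per-parameter adaptive step size mentioned in Sect. 4.2; the 0.0001 base learning rate matches Table 2, while the moment coefficients are Adam's usual defaults.

```python
import numpy as np

# One Adam update step (Kingma & Ba): each parameter gets its own effective
# step size, which is why the text says Adam "adapts the learning rate".
def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
grad = np.array([0.1, -2.0, 0.0])
theta, m, v = adam_step(theta, grad, m, v, t=1)
print(theta)  # each parameter moved by roughly lr in the downhill direction
```

Note that the first and second parameters receive nearly the same step magnitude despite gradients differing by a factor of 20; this normalization is the per-parameter adaptation.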
6.2 Medical Report Generation

A patient suffering from mild macular edema is advised in his medical report to get checked for uveitis and glaucoma, since these are the diseases most commonly associated with mild macular edema. The recommended treatments in this case are anti-vascular endothelial growth factor (anti-VEGF) injections, vitrectomy and anti-inflammatory treatments. Similarly, a patient suffering from moderate macular edema is advised to get checked for micropsia and central scotoma, since these are the conditions most commonly associated with moderate macular edema. The recommended medications in this case are pharmacologic vitreolysis agents and carbonic anhydrase inhibitors (CAIs). In the same way, a patient suffering from severe macular edema is advised to get checked for diabetic retinopathy and retinal vein occlusion, since these are the
Fig. 7 Generation of medical report
diseases most commonly associated with severe macular edema. The recommended treatments in this case are diet modification, insulin and steroids. The diagnosis report shown in Fig. 7 is produced as the final output for all three severity levels.
7 Conclusion and Future Scope

After various series of tests of this system's functionality, it can be concluded that the CNN module is much superior to SVM for the detection of macular edema. It also has the added capability to categorize the macular edema as mild, moderate or severe. This approach has yielded excellent results and has been tested on a dataset of 151 images. From the given dataset, 41 cases of mild macular edema, 78 cases of moderate macular edema and 32 cases of severe macular edema have been detected. The ability to categorize the severity of macular edema provides a high amount of flexibility to doctors and facilitates customized treatment for every patient based on the severity level of the disease; thus, each patient can be treated based on his individual needs. With the continuous evolution of technology and the never-ending innovations in the medical field, we foresee a couple of additional enhancements supplementing our proposed system. One is that the system can be linked to the server of the hospital, which will hold a detailed data repository of all the patients who have come to the hospital for an eye checkup. The system will generate a personalized report containing the recommended treatment for each patient and automatically send the generated report to the email id of the patient. Another alteration we foresee is that, with advances in the Internet of things (IoT) [10], it is quite possible that soon the system
will be able to segregate the patients that have a severe case of macular edema and will automatically book an appointment for the surgery of the patient at the earliest date available and also send the notification regarding the surgery appointment to the personal email id of the patient.
References

1. F.K.P. Sutter, M.C. Gillies, H. Helbig, Diabetic macular edema: current treatments, in Medical Retina (Springer, Berlin, Germany, 2007), pp. 131–146
2. C. Agurto, V. Murray, A multiscale optimization approach to detect exudates in the macula. IEEE J. Biomed. Health Inf. 18(4), 1328–1336 (2014)
3. A. Johny, A. Thomas, A novel approach for detection of diabetic macular edema. Proc. Int. Conf. Emerg. Trends Eng. Technol. Sci. (ICETETS), 1–4 (2016)
4. A.M. Syed, M.U. Akram, T. Akram, M. Muzammal, S. Khalid, M.A. Khan, Fundus images-based detection and grading of macular edema using robust macula localization. IEEE Access 6, 58784–58793 (2018)
5. A. Vahadane, A. Joshi, K. Madan, T.R. Dastidar, Detection of diabetic macular edema in optical coherence tomography scans using patch based deep learning, in IEEE 15th International Symposium on Biomedical Imaging (ISBI) (2018), pp. 1427–1430
6. N. Sengar, M.K. Dutta, R. Burget, L. Povoda, Detection of diabetic macular edema in retinal images using a region based method, in IEEE 38th International Conference on Telecommunications and Signal Processing (TSP) (2015), pp. 412–415
7. R.S. Rekhi, A. Issac, M.K. Dutta, C.M. Travieso, Automated classification of exudates from digital fundus images. Proc. Int. Conf. Workshop Bioinspired Intell. (IWOBI), 1–6 (2017)
8. A. Kunwar, S. Magotra, M.P. Sarathi, Detection of high-risk macular edema using texture features and classification using SVM classifier. Proc. Int. Conf. Adv. Comput. Commun. Inform. (ICACCI), 2285–2289 (2015)
9. S.J.J. Kumar, C.G. Ravichandran, Macular edema severity detection in color fundus images based on ELM classifier. Proc. Int. Conf. I-SMAC (IoT Social, Mobile, Anal. Cloud) (I-SMAC), 926–933 (2017)
10. A.H. Sodhro, S. Pirbhulal, A.K. Sangaiah, Convergence of IoT and product lifecycle management in medical health care. Future Gener. Comput. Syst. 86, 380–391 (2018)
Solar Tracker With Dust Removal System: A Review Mukul Kumar, Reena Sharma, Mohit Kushwaha, Atul Kumar Yadav, Md Tausif Ahmad, and A. Ambikapathy
Abstract Generating energy from the depleting fossil fuels is one of the biggest challenges for the next generation. The concept of converting solar energy into electrical energy using solar panels comes out on top when compared with the other sustainable sources of energy. However, the continuous change of the sun's angle relative to the earth decreases the energy delivered by the photovoltaic (PV) panels. Many other factors also affect the productiveness of the PV panels, e.g., clouds, dirt and dust, snow, shadow and bird droppings. The main factor that affects solar panel efficiency is dust, which can account for a loss of about 50%. Solar power is thus not fully utilized, and there must be an appropriate approach to maximize the utilization of solar panels. A solar tracker with a dust removal system is a mechanism that adjusts the solar (PV) panel with respect to the direction of the sun and also cleans the dust particles that collect on the solar panel. The mechanism is designed to trace the maximum concentration of light: where the intensity of light is lower, it automatically changes its direction. Different technologies and methods are used to build such efficient systems. This paper reviews some of the technologies that have been developed worldwide.
M. Kumar (B) · R. Sharma · M. Kushwaha · A. K. Yadav · M. T. Ahmad · A. Ambikapathy Galgotias College of Engineering and Technology, Greater Noida, India e-mail: [email protected] R. Sharma e-mail: [email protected] M. Kushwaha e-mail: [email protected] A. K. Yadav e-mail: [email protected] M. T. Ahmad e-mail: [email protected] A. Ambikapathy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_15
Keywords Solar tracker · Sustainable energy · Microcontroller · Photovoltaic panel
1 Introduction

In this new era, the depletion of natural resources is not a new thing. Fossil fuels are decreasing day by day and are only a short-term option for energy consumption, so sustainable energy is needed to overcome the deficiency caused by fossil fuels. The growing attention to the problems caused by the use of fossil fuels has laid more emphasis on sustainable energy options that are renewable, clean and non-polluting, and solar energy is one of them. As sunlight is abundantly available worldwide, solar energy can be the most efficient and convenient way to overcome this scarcity of energy. However, the conversion efficiency is low because the sun changes its direction throughout the day and across the seasons. Hence, increasing the efficiency of converting solar energy into electrical energy becomes an important issue. PV panels are placed on the top of a building or house to capture sunlight. Traditionally, the PV panels are fixed according to the latitude angle of the specific country; in a few cases, humans manually relocate the solar module toward the direction of the sun based on the upcoming season [1]. To obtain the most effective and maximized output, the PV panels should be perpendicular (at 90°) to the sun or light-emitting source [2]. Since the sun moves throughout the day as well as over the year, there is a demand for a solar panel tracking mechanism that adjusts the solar (PV) panel with the changing direction of the sun. Solar tracking systems, which trace the sun's movement, have increased the efficiency of solar panels by 30–60% compared with fixed systems [3]. Solar panel efficiency is also decreased by various factors like shadowing, dust and dirt, snow, bird droppings, etc.
Dust and dirt are among the biggest concerns, reducing the efficiency of the PV panels by up to about half depending on the place where the solar panels are mounted. Hence, a solar panel (PV) cleaning mechanism is designed to increase the output of the solar panel and convert as much solar energy as possible into electrical energy.
2 Solar Tracker System

Basically, a solar tracker is a system used to align the PV panels to the changing position of the sun. In other words, the system changes its orientation as the direction of the sunlight changes. It may be termed an electromechanical system that captures the light coming from the sun and converts it into electrical energy in a very efficient manner.
Broadly, solar tracker systems are classified into two types:
1. Based on nature of motion: passive solar trackers and active solar trackers.
2. Based on freedom of motion: single-axis trackers and dual-axis trackers.
Passive Tracker System: This kind of system is based on the thermal expansion of elements; a compressed fluid with a low boiling point is driven to one side of the solar panel or the other as the tracking method. Generally, a chlorofluorocarbon (CFC) or a shape-memory alloy is mounted on each side of the PV panel. Whenever the panel is perpendicularly aligned to the sun, equilibrium is established between the two sides. As the sun changes its direction, one side of the panel heats up, causing expansion and contraction between the two sides of the panel, by which the solar panel is rotated. A passive tracker system has the capability to increase productiveness by 23% [4]. Active Tracker System: This kind of system uses electric motors and other electronic technologies to move the tracker as instructed by a microcontroller reacting to the sun's movement. The movement of the sun is analysed throughout the day. At night or in bad weather, the tracking system automatically stops functioning, depending upon the design. This is made possible with the help of light-sensitive sensors, for example, LDRs. Single-axis tracker system: This type of tracking system has only one degree of freedom, which acts as the axis of rotation. The rotation axis of these trackers is typically aligned with the meridian to true north. The most common variants include the horizontal single-axis tracker (HSAT), horizontal single-axis tracker with tilted modules (HTSAT), vertical single-axis tracker (VSAT), tilted single-axis tracker (TSAT) and polar-aligned single-axis tracker (PSAT) [5].
Dual-axis tracker system: This type of tracking system has two degrees of freedom, which act as axes of rotation. The two axes are normal to one another and are classified as the primary and secondary axes: the primary axis is fixed with respect to the ground or earth, whereas the secondary axis is referenced to the primary one.
3 Review of Different Technologies Using Microcontroller

3.1 Using AT89C51 Microcontroller

In this kind of system, four LDRs are used, with the condition that any two LDRs work together at a time. The active LDRs form a pattern of bits according to which the motor works, and the motor shaft, connected to the solar panel, turns it with respect to the sun. The combination of LDRs plays a vital role: each LDR senses the intensity of light, and the fetched output is given to the microcontroller, which controls the motor accordingly. The possible bit patterns are:

LDR Sensor 1   LDR Sensor 2   LDR Sensor 3   LDR Sensor 4
0              1              1              0
0              1              0              1
1              1              0              0
1              0              0              1
The fetched output from the LDR sensors is given to the terminals of the LM324 comparator. The LM324 has four independent operational amplifiers (comparators), to which the four LDR sensors are connected. The output from the first port of the LM324 is fed into Port 1 of the AT89C51, which compares the received bit pattern with the existing one and then sends a signal to the L293D motor driver, which in turn drives the motor to align the panel in the required direction [6]. The block diagram of this system is shown in Fig. 1.
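The bit-pattern decision can be sketched as a lookup table; the mapping from each pattern to a motor action is a hypothetical illustration, since the paper tabulates the patterns but not the actions they trigger.

```python
# Hypothetical mapping from the four thresholded LDR readings to a motor
# action; the four patterns are the ones tabulated in Sect. 3.1, while the
# action labels are illustrative assumptions.
PATTERNS = {
    (0, 1, 1, 0): "hold",
    (0, 1, 0, 1): "rotate east",
    (1, 1, 0, 0): "rotate west",
    (1, 0, 0, 1): "hold",
}

def motor_action(bits):
    # Any pattern outside the table stops the motor as a safe default.
    return PATTERNS.get(tuple(bits), "stop")

print(motor_action([0, 1, 0, 1]))  # rotate east
```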
Fig. 1 Block diagram of solar tracker using AT89C51

3.2 Using Atmega32 Microcontroller

The Atmega32 is a 40-pin microcontroller, and every pin has its own function. In this type of system, two light-dependent resistors (LDRs) are used, one for the east side and the other for the west side, and the LDRs are connected to a comparator. Two operating conditions are considered:
1. Normal condition: the sunlight is captured without interference from clouds. Two sensors are used for the single-axis tracking system, whereas four sensors are used for the dual-axis tracking system. The sensors detect the sunlight, the output voltages of the direction sensors are compared, and based on that the solar panel tracks the sun.
2. Bad condition: the sensors cannot detect the presence of light due to the intervention of clouds, so the LDRs cannot send any output signal and the sun's position cannot be located. This issue is resolved by a timed method of moving the solar (PV) panel. As the sun moves about 360° in 24 h, it covers 360/24 = 15° every hour, i.e., about 3.75° every 15 min, so the panel can be stepped accordingly at 15-min intervals [7].
The block diagram is shown in Fig. 2.
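The 15°-per-hour arithmetic above, turned into a timed fallback for the bad-weather case (a sketch; the paper gives only the angle, not the stepping code):

```python
# Timed fallback for cloudy conditions: rotate by a fixed angle every
# sampling interval instead of sensing the light.
DEG_PER_DAY = 360.0
deg_per_hour = DEG_PER_DAY / 24               # 15 degrees per hour
step_minutes = 15
step_deg = deg_per_hour * step_minutes / 60   # 3.75 degrees per 15 min

def fallback_angle(minutes_since_sunrise):
    """Panel angle when the LDRs see no light (timed stepping, no sensing)."""
    return (minutes_since_sunrise // step_minutes) * step_deg

print(fallback_angle(60))  # 15.0
```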
Fig. 2 Block diagram of solar tracker using ATMEGA32
3.3 Using Arduino

In this kind of system, an Arduino Uno is used as the microcontroller to track the movement of the sun and attain maximum efficiency. Arduino is a single-board microcontroller platform that makes building interactive objects easy and accessible: it can take input from different sensors and control LEDs, lights, motors and other outputs. Here, the system consists of two LDRs (light-dependent resistors), one for each direction (east and west) [8]. Each LDR reports the presence or absence of light intensity to the Arduino. The Arduino then chooses the output for the motor driver (L293D) to drive the motor in the clockwise (CW) or anticlockwise (ACW) direction. When the motor driver receives the output signals, it runs the motor in the corresponding direction at a specific speed; as the motor rotates, the PV panel is adjusted to the direction of the sun [9] (Fig. 3).
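The east/west LDR comparison can be sketched in Python as a stand-in for the Arduino logic; the dead-band threshold is an assumption (the paper gives no value) and prevents the motor chattering when the two readings are nearly equal.

```python
# Two-LDR east/west comparison driving the motor direction; readings are
# assumed to be 10-bit ADC counts, and the dead band is an assumed value.
DEAD_BAND = 20  # ADC counts

def track(east, west):
    diff = east - west
    if diff > DEAD_BAND:
        return "CW"    # rotate clockwise toward the brighter east sensor
    if diff < -DEAD_BAND:
        return "ACW"   # rotate anticlockwise toward the brighter west sensor
    return "STOP"      # balanced: the panel already faces the sun

print(track(600, 400))  # CW
```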
Fig. 3 Block diagram of solar tracker using Arduino Uno
4 Dust Removal System

The solar tracking mechanism must be placed on a roof or in an open place where sufficient sunlight is available to convert solar energy into productive electrical energy. However, not all of this energy can be converted, as the efficiency of the system is decreased by various factors such as dust, clouds, bird droppings and many more [10]. Efficiency is most affected by dust, which can account for a loss of about 50%. The partial covering of some part of the solar panel, termed shadowing or partial shading, distorts the incident sun rays and affects the productiveness of the solar (PV) panel. The various reasons for this shadowing are [11]:
1. Shadow from other buildings and towers
2. Shadow from clouds
3. Dust and dirt collected on the surface of the panel
4. Lack of knowledge about the PV panels
5. Snow
6. Other light-blocking obstacles
7. Bird droppings.
Fig. 4 Proposed dust removal cleaning method
Some of the outcomes of this effect are as follows:
1. The productiveness of the PV panels is decreased
2. The output power also gets reduced
3. Overheating of the solar panel takes place
4. The temperature of the PV panels gets increased.
To overcome this problem, a microcontroller-based system is designed. Earlier, solar panels were cleaned manually using manpower and water, which wastes water and is costly; the proposed system is efficient, and its cost and maintenance requirements are low. The system consists of an Arduino microcontroller, a mechanical (moving) arm, a gear motor and the solar panel. The system is programmed to rotate the arm at certain times to clean the panel: the microcontroller issues commands according to the program, telling the motor to rotate in a specific direction and clean the dust with the help of the moving arm [12]. The working block diagram of the recommended system is shown in Fig. 4. This system works as follows:
1. First, the microcontroller is programmed to rotate the arm at fixed intervals (per minute or per hour) throughout the day and clean the dust on the panel.
2. As per the program, the microcontroller commands the motor driver, which drives the motor in a specific direction, and the arm moves with the motor.
3. With this microcontroller system, the efficiency of the installation is boosted, and output power that would otherwise be wasted is utilized properly and efficiently.
4. Compared with the manual method of cleaning solar panels, this method is more suitable for maintenance: its expenditure is lower, and manual cleaning wastes more water.
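Step 1 of the cleaning routine can be sketched as a simple timer; the 60-min interval is an assumed value, since the paper leaves the schedule to the programmer.

```python
# Timed cleaning sweep: the microcontroller fires the wiper arm every fixed
# interval. The interval length is an assumption, not given in the paper.
CLEAN_INTERVAL_MIN = 60

def due_for_cleaning(now_min, last_clean_min):
    """True when a full interval has elapsed since the last sweep."""
    return now_min - last_clean_min >= CLEAN_INTERVAL_MIN

def run_day(total_min):
    """Count how many arm sweeps occur over a day of operation."""
    last, sweeps = 0, 0
    for t in range(total_min + 1):
        if due_for_cleaning(t, last):
            last, sweeps = t, sweeps + 1
    return sweeps

print(run_day(8 * 60))  # sweeps in an 8-hour day
```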
Solar Tracker With Dust Removal …
5 Result and Discussion
After comparing the three solar tracker methods, the Arduino-based solar tracker is the most efficient and cheapest among them: the other two methods involve complex programming, and assembling those systems is complicated compared with Arduino. Likewise, the proposed dust removal system is more efficient than manual cleaning and other methods, as it is cheap and easily applicable.
6 Conclusion
This system constitutes an efficient way to extract the available light power and make the best use of it. We reviewed the different methods used to construct a solar tracker and chose the most effective and efficient one, while the dust removal system cleans the dust and dirt collected on the solar panel. A solar tracker combined with dust removal greatly improves productivity and helps increase the output power of the PV panels. Keeping the demerits of solar trackers in mind, this system will surely help to increase efficiency. This is an extended perspective on solar trackers with a dust removal system and an effort to increase solar panel efficiency, further using an LDR module instead of individual LDR sensors. We would like to thank all the other authors for their research on solar trackers.
References
1. M. Serhan, L. El-Chaar, Two axes sun tracking system: comparison with a fixed system, in International Conference on Renewable Energies and Power Quality, Granada, Spain, 23–25 March 2010
2. A. Oetzberger, C. Hebling, H. Schock, Photovoltaic materials, history, status and outlook. Mater. Sci. Eng. R (2002)
3. P. Madhu, V. Viswanadha, Design of real time embedded solar tracking system. Int. J. Emerg. Trends Eng. Res. (IJETER) 3(6), 180–185 (2015)
4. C. Adrian, C. Mario, Azimuth-altitude dual axis solar tracker. B.Sc. project at Worcester Polytechnic Institute (2010)
5. D. Cooke, Single versus dual axis solar tracking. Alternate Energy eMagazine (2011)
6. M. Sharma, An efficient low-cost solar tracker using microcontroller. IOSR J. Electr. Electron. Eng. (IOSR-JEEE) 9(4), 37–40 (2014)
7. M.T.A. Khan, S.M.S. Tanzil, R. Rahman, S.M.S. Alam, Design and construction of an automatic solar tracking system, in International Conference on Electrical and Computer Engineering (ICECE), pp. 326–329, 18–20 Dec 2010
8. M. Haryanti, A. Halim, A. Yusuf, Development of two axis solar tracking using five photodiodes, in Electrical Power, Electronics, Communications, Control and Informatics Seminar (EECCIS) (2014)
9. V. Sharma, V.K. Tayal, Hardware implementation of sun tracking solar panel using 8051 micro-controllers, in 6th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (2017)
10. K. Tsamaase, T. Ramasesane, I. Zibani, Automated dust detection and cleaning system of PV module. IOSR J. Electr. Electron. Eng. (IOSR-JEEE) 12(6), 93–98 (2017), e-ISSN: 2278-1676, p-ISSN: 2320-3331
11. J. Isenberg, W. Warta, Realistic evaluation of power losses in solar cells by using thermographic methods. J. Appl. Phys. 95(9), 5200 (2004)
12. E. Skoplaki, J.A. Palyvos, On the temperature dependence of photovoltaic module electrical performance: a review of efficiency/power correlations. Int. J. Sol. Energy 83, 614–624 (2009)
13. H.A. Zondag, D.W. De Vries, W.G.J. Van Helden, R.J.C. Van Zolengen, A.A. Van Steenhoven, The thermal and electrical yield of a PV-module collector. Int. J. Sol. Energy 72(2), 113–128 (2002)
Driver Behavior Profiling Using Machine Learning Soumajit Mullick and Pabitra Mohan Khilar
Abstract The drivers' behavior influences road traffic, which in turn influences the energy consumed by vehicles and the pollutants they emit. It is therefore necessary to identify drivers' characteristics in order to profile their behavior correctly. A large amount of data is needed for this analysis; it is collected by the on-board unit present on the vehicle, which carries sensors for gathering the required measurements. The comparative performance of different machine learning algorithms is evaluated on the data collected by the on-board unit, which in turn helps in profiling drivers' behavior. The experimental results show that the support vector machine gives the highest accuracy, 99.4%, among the evaluated classifiers.
Keywords Vehicular ad hoc networks (VANET) · Machine learning · Intelligent transportation systems (ITSs) · Driver behavior · Safety application
1 Introduction
Nowadays, the number of vehicles on the road is increasing exponentially, and a population of considerable size uses a private car for the daily commute. One of the major drawbacks of such a huge number of vehicles is road accidents, which lead to traffic jams and in turn to high fuel consumption and the emission of dangerous pollutants from vehicles. The dangers and expenses linked to road accidents are treated as a serious problem in today's society. The statistics on road accidents in India released by the government are alarming: in 2017, a total of 464,910 road accidents were reported, which claimed 147,913 lives and injured 470,975 persons. This translates into 1290 people injured and 405 lives lost daily from 1274 accidents. Alarmingly, these are the official numbers and do not include accidents that were not reported [1]. According to the National Highway Traffic Safety Administration (NHTSA) of the United States, about 25% of police-reported crashes involve some form of driver inattention. One of the major reasons for road accidents is the careless nature of a driver. This carelessness hurts not only the driver himself but also the people riding with him and everyone else on the road, taking a major toll on many families, all due to the (known or unknown) negligence of some drivers. Therefore, emerging technologies to sense and alert oblivious drivers are very important to avert vehicular mishaps and to instill disciplined driving (Fig. 1).
S. Mullick (B) · P. M. Khilar National Institute of Technology Rourkela, Rourkela, Odisha, India e-mail: [email protected] P. M. Khilar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_16
Fig. 1 Road side accident statistic in India
The advancement of wireless communication, as applied to mobile computing, has boosted the intelligent transportation system (ITS), whose main focus is the development of road safety applications [2, 3]. The technology that binds this wireless communication with the automobile industry to take the industry to the next level is VANET, the backbone of intelligent transportation systems [4]. VANET establishes connections between vehicles (V2V) as well as between vehicles and the roadside unit (RSU) (V2I). As an ad hoc network, it does not need much infrastructure. The presence of a communication unit (the on-board unit) allows VANET to support several kinds of applications: convenience applications such as toll tax collection [5] and automatic parking service [6]; productive applications such as environmental parameter monitoring [7] and secure transactions through VANET [8]; and commercial applications such as marketing on the wheel [9].
Various optimization techniques for tuning the generic parameters of VANET [10] during application have made VANET popular among researchers. In VANET, nodes communicate with each other using short-range wireless communication (e.g., IEEE 802.11p). The Federal Communications Commission (FCC) has specially allocated 75 MHz of spectrum in the 5.9 GHz band for licensed dedicated short-range communication (DSRC) between vehicles and infrastructure, with the main goals of improving bandwidth utilization and bringing down latency.
The remainder of this paper is organized as follows. Section 2 discusses previous work in this field. Section 3 presents the machine learning techniques used. Section 4 describes the dataset, its preprocessing, and the implementation of the algorithms. Section 5 discusses the experimental results, and finally we conclude the paper and outline future work.
2 Related Work
Over the past few years, research has been conducted on monitoring driver behavior and on automatic road accident detection using various methodologies. Many researchers have ventured into measuring driver fatigue and drunkenness as well as various risky, accident-prone behaviors. Note that many live applications exist in the insurance and fleet-management domains, but they are not publicly accessible for research work; examples include Aviva Drive, Ingenie, Greenroad, Snapshot, and Seeing Machines. Nericell, proposed by Mohan et al. [11], is a Windows-based application to monitor road traffic and road condition. An accelerometer is used to detect potholes and braking events, and GPS/Global System for Mobile (GSM) communications are used to obtain the vehicle's location. Braking, bumps, and potholes are detected using a predefined threshold value; no machine learning algorithms were employed to learn the threshold. Detection results in terms of false positives (FPs) and false negatives (FNs) are given in Table 1. An Android application [12] was proposed by Dai and colleagues for real-time detection of dangerous driving events, alerting the driver; this detection targets driving under the influence (DUI) of alcohol. Smartphone orientation and accelerometer readings are used to detect abnormal curvilinear movements (ACM) and problems in maintaining speed (PMS), which in turn relate to detecting drunken driving behavior. MIROAD is an iPhone-based application created by Johnson and Trivedi [13]. It uses magnetometer, accelerometer, gyroscope, and GPS sensor data from the smartphone to classify whether the driver is aggressive or non-aggressive; it uses the DTW algorithm, processed in real time on the smartphone, and the experiments show 97% accuracy. SenseFleet is an Android-based application, proposed by Castignani et al.
[14], to detect unsafe driving events independently of the vehicle and mobile device. It collects data from the magnetometer, gravity sensor, accelerometer, and GPS of the mobile device; this data is fed to a fuzzy detection system for risky behavior.

Table 1 Nericell application result

Event                    | False negative | False positive
Braking events detection | 4.4%           | 22.2%
Bumps/potholes detection | 23%            | 5%
Honk detection           | 0              | 0

In driver behavior profiling, an on-board unit (OBU) fitted with sensors such as a gyroscope, an accelerometer, and GPS is kept in the vehicle to collect data. To collect data, a driver simulates real-life risky behavior scenarios; this is then modeled to classify genuine driving and helps to profile the driver. In some papers, sensors such as alcohol sensors and eye movement/eye exposure trackers are used to assess the health of the driver.
3 Machine Learning Algorithms
Here, we have used three machine learning algorithms, described below:
3.1 Logistic Regression
Logistic regression [15] is a supervised learning model for classification, borrowed from the field of statistics. The target value can take only discrete values. The model learns to estimate the probability that a given data point, consisting of a number of features, belongs to a particular class, using the sigmoid function g(z) = 1/(1 + e^{-z}).
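As a small illustration of the sigmoid above, the following generic sketch (not tied to the paper's data; the weight and bias values are placeholders) shows how a fitted logistic regression model turns a feature vector into a class probability:

```python
import math

def sigmoid(z):
    """g(z) = 1 / (1 + e^-z): maps a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, bias, features):
    """Probability that a feature vector belongs to the positive class."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)
```

A score of zero (a point on the decision boundary) maps to a probability of 0.5; large negative scores map close to 0 and large positive scores close to 1.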
3.2 SVM
The support vector machine (SVM) [16] is a machine learning model for both classification and regression. It is a supervised learning algorithm that learns to find an optimal hyperplane maximizing the separation (the distance between the margins) of the training data; the training points lying on these margins are known as support vectors.
3.3 Multi-layer Perceptron
The multi-layer perceptron (MLP) is a simple artificial neural network (ANN) consisting of one input layer, one hidden layer, and one output layer. The hidden and output layers consist of nodes connected by weights, which are learned by the algorithm from the training data. The human brain was the inspiration behind the ANN algorithm for learning complex data [17]. In an MLP, the number of nodes in the input layer (first layer) equals the number of features in the dataset, and the number of nodes in the output layer (last layer) equals the number of classes into which we want to classify
the data. A single node computes a sigmoidal transfer of a weighted sum of the outputs of the previous layer.
4 Methodology
For this paper, we have used the driver behavior dataset created by [18]. The dataset is a collection of readings from sensors placed on a vehicle; the sensors used were an accelerometer and a gyroscope. The experiment was done with the help of three drivers to reproduce real-life events on the road. The experimental conditions are as follows:
• Cars: Ford Fiesta 1.25, Ford Fiesta 1.4, Hyundai i20
• Driver population: three drivers of age 26, 27, 28
• Driver behaviors: sudden right turn, sudden brake, sudden left turn, and sudden acceleration
• Sensor used: MPU6050
• Device used: Raspberry Pi 3 Model B.
The purpose of this dataset was to record a set of driving events representing real-world driving behavior such as braking, accelerating, turning, and lane changes. The raw data is stored in a CSV file in row format as "GyroX GyroY GyroZ AccX AccY AccZ," where (GyroX, GyroY, GyroZ) are the 3D coordinates of the gyroscope and (AccX, AccY, AccZ) are the 3D coordinates of the accelerometer. The rows in the data are time dependent. To make each row independent of the others, we used a window of size fifteen. As shown in Fig. 2, the first data point is calculated using the values from rows $F_i$ to $F_o$. For the following data point, we move the sliding window by one row (from $F_{i+1}$ to $F_{o+1}$) and calculate the second data point from the corresponding values; this continues for all rows in the raw dataset to convert it into the feature dataset.

Fig. 2 Sliding window mechanism

For each window, the following statistics are calculated:
1. Sum: $\sum_{i=1}^{N} x_i$
2. Mean: $\frac{1}{N}\sum_{i=1}^{N} x_i$
3. Median: $\mathrm{med}(X)$
4. Variance: $\frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2$
5. Min: $\min(X)$
6. Max: $\max(X)$
7. Skewness: $\gamma = \mu_3 / \mu_2^{3/2}$
8. Kurtosis: $\frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^4 / s^4$

Using the above sliding window, the timestamp-dependent data in the raw dataset is transformed into independent data. We use 8 statistical measures, so the number of features in the processed dataset is 48, i.e., 8 × 3 × 2 (8 statistical measures, 3 axes per sensor, 2 sensors). The data in the above dataset is not normalized; to increase the performance of the models, we normalized it.
5 Result Analysis and Discussion
We applied three machine learning algorithms (logistic regression, SVM, and MLP) to the processed independent data. We divided the data into training, cross-validation, and testing sets in a 3:1:1 ratio. We used the following measures to compare the performance of the algorithms:
• Accuracy
• Precision
• Recall
• F1 score.
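The evaluation pipeline above can be sketched with scikit-learn (the paper does not name a library; synthetic data stands in for the 48-feature driver dataset, and all model settings are illustrative assumptions):

```python
# Sketch of the described pipeline: 3:1:1 split, normalization, and
# comparison of logistic regression, SVM, and MLP on four metrics.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=1000, n_features=48, random_state=0)

# 3:1:1 split into train / cross-validation / test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Normalization fitted on the training data only
scaler = StandardScaler().fit(X_train)
X_train, X_val, X_test = map(scaler.transform, (X_train, X_val, X_test))

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    scores[name] = {
        "accuracy": accuracy_score(y_test, pred),
        "precision": precision_score(y_test, pred),
        "recall": recall_score(y_test, pred),
        "f1": f1_score(y_test, pred),
    }
```

The cross-validation split would be used for model selection (e.g., tuning hyperparameters) before reporting the final test scores.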
The results of the experiment are shown in Tables 2 and 3. From the results, we can see that the support vector machine performs best, with accuracy close to 99.4%. SVM tries to separate the two classes so that the distance between the two margins is maximal, and thus finds a solution that is as reasonable as possible for both groups. This property does not hold for either logistic regression or the multi-layer perceptron, which is why the support vector machine performs better than both models (Figs. 3 and 4).

Table 2 Experiment result of 3 MLAs without normalization

Algorithms          | Accuracy | Precision | Recall | F1 score
Logistic regression | 0.963    | 0.967     | 0.961  | 0.964
SVM                 | 0.984    | 0.989     | 0.988  | 0.988
MLP                 | 0.971    | 0.973     | 0.975  | 0.974

The measurements are highest for the algorithm shown in bold (SVM)

Table 3 Experiment result of 3 MLAs with normalization

Algorithms          | Accuracy | Precision | Recall | F1 score
Logistic regression | 0.984    | 0.986     | 0.984  | 0.985
SVM                 | 0.994    | 0.993     | 0.993  | 0.993
MLP                 | 0.987    | 0.988     | 0.987  | 0.988

The measurements are highest for the algorithm shown in bold (SVM)

Fig. 3 Metrics comparison of ML models with normalization: (a) accuracy, (b) F1 score, (c) recall, (d) precision
6 Conclusion and Future Work
In this paper, we have presented a comparative study of three machine learning algorithms, logistic regression, SVM, and MLP, using an available dataset. From the evaluation metrics, it can be observed that SVM performs better than the other two MLAs. We can conclude that the 3-axis data from the 2 sensors is necessary for classification, that normalization of the data is needed to increase the performance of all the machine learning models, and that all 8 statistical measures were critical in increasing accuracy. For future work, we want to install the sensor devices on all vehicles present in a particular town to get a real dataset covering various weather and road conditions. We can use this large dataset to train a deep learning model such as a long short-term memory (LSTM) network to get better results and gain new insight into driver behavior detection.

Fig. 4 ROC of machine learning models: (a) logistic regression, (b) SVM, (c) multi-layer perceptron
References
1. Road accidents in india claim more than 14 lakh lives in 2017. https://www.autocarindia.com/industry/road-accidents-in-india-claim-more-than-14-lakh-lives-in-2017-410111
2. S. Olariu, M.C. Weigle, Vehicular Networks: From Theory to Practice (Chapman and Hall/CRC, 2009)
3. Y. Qian, N. Moayeri, VTC Spring (2008), pp. 2794–2799
4. S.K. Bhoi, P.M. Khilar, IET Networks 3(3), 204 (2013)
5. B.R. Senapati, P.M. Khilar, N.K. Sabat, 2019 IEEE 1st International Conference on Energy, Systems and Information Processing (ICESIP) (IEEE, 2019), pp. 1–5
6. B.R. Senapati, P.M. Khilar, Automatic parking service through VANET: a convenience application, in Progress in Computing, Analytics and Networking. Advances in Intelligent Systems and Computing, ed. H. Das, P. Pattnaik, S. Rautaray, K.C. Li, vol. 1119 (Springer, Singapore, 2020)
7. B.R. Senapati, R.R. Swain, P.M. Khilar, Smart Intelligent Computing and Applications (Springer, Berlin, 2020), pp. 229–238
8. K.E. Shin, H.K. Choi, J. Jeong, in Proceedings of the 4th ACM Workshop on Performance Monitoring and Measurement of Heterogeneous Wireless and Wired Networks (ACM, 2009), pp. 175–182
9. S.K. Bhoi, D. Puthal, P.M. Khilar, J.J. Rodrigues, S.K. Panda, L.T. Yang, Comput. Netw. 142, 168 (2018)
10. B.R. Senapati, P.M. Khilar, in Nature Inspired Computing for Data Science (Springer, Berlin, 2020), pp. 83–107
11. P. Mohan, V.N. Padmanabhan, R. Ramjee, in Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems (ACM, 2008), pp. 323–336
12. J. Dai, J. Teng, X. Bai, Z. Shen, D. Xuan, in 2010 4th International Conference on Pervasive Computing Technologies for Healthcare (IEEE, 2010), pp. 1–8
13. D.A. Johnson, M.M. Trivedi, in 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC) (IEEE, 2011), pp. 1609–1615
14. G. Castignani, T. Derrmann, R. Frank, T. Engel, IEEE Intell. Transp. Syst. Mag. 7(1), 91 (2015)
15. D.G. Kleinbaum, K. Dietz, M. Gail, M. Klein, M. Klein, Logistic Regression (Springer, Berlin, 2002)
16. V. Vapnik, The Nature of Statistical Learning Theory (Springer Science & Business Media, 2013)
17. C. Zhao, Y. Gao, J. He, J. Lian, Eng. Appl. Artif. Intell. 25(8), 1677 (2012)
18. A. Sinan Yuksel, Driving behavior dataset. http://dx.doi.org/10.17632/jj3tw8kj6h.1file-83a10979-d980-4099-b63f-d3e6f809d8e3
Comparative Analysis of Different Image Classifiers in Machine Learning Ritik Pratap Singh, Saloni Singh, Ragini Nandan Shakya, and Shahid Eqbal
Abstract The classification of an image is a very complicated process that depends on multiple factors. Through image classification, we can discuss modern techniques, challenges, and opportunities of applying machine learning to images. In image classification, the software is given input data, and the acquired information is then used for the classification of new observations. In this survey, we discuss numerous aspects of image classification, including the information acquired from the sensors, the nature of the sample used in classification, the nature of spatial information, the parameters used on the data, the nature of the margin used, the nature of the pixel information used, and different algorithm models. The algorithm models include Artificial Neural Network (ANN), Synthetic Aperture Radar (SAR), Support Vector Machine (SVM), and Decision Tree (DT).
Keywords Machine learning · Support vector machine (SVM) · Artificial neural network (ANN) · Synthetic aperture radar (SAR) · Decision tree (DT)
1 Introduction
Image processing involves identifying what an image depicts. Human vision can effectively recognize and differentiate between a clear and a blurred image; for computers, however, this is a challenging task because they can only manipulate
R. P. Singh (B) · S. Singh · R. N. Shakya · S. Eqbal Galgotias College of Engineering and Technology, Greater Noida, India e-mail: [email protected] S. Singh e-mail: [email protected] R. N. Shakya e-mail: [email protected] S. Eqbal e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_17
digits [1]. Image classification is modeled on human vision so that machines can identify objects. A computer can understand an image if the system extracts high-level features with the help of machine learning. To achieve high accuracy in image classification, deep neural networks serve as one of the most important classes of machine learning models [1].
2 Different Techniques of Image Classification
The most important aspect of image classification is the accuracy with which it recognizes the elements of an image. Image classification uses both supervised and unsupervised approaches [2]: unsupervised classification follows a linear category structure, while supervised classification follows a compact nonlinear category structure. ANN is a very popular machine learning algorithm composed of multiple interconnected simple processing elements arranged in layers [3].
2.1 In Accordance with the Information Acquired from Different Sensors
Remote sensing has proved to be one of the most significant achievements of advanced earth observation technology. It is the process of detecting and observing the reflected and emitted radiation, measured at a distance from the targeted region, to identify and observe the physical attributes of the region. In general, a single classifier is used for the implementation of remote sensing image classification [1]. Hyper-spectral imaging (HSI) can acquire data on ground objects with a larger spectral range and resolution, and is known to be among the most prominent fields of infrared and electro-optical remote sensing. With the development of sensing and processing technology, hyper-spectral imaging has been applied more broadly to remote sensing problems. Feature extraction is mostly performed before classification in hyperspectral remote sensing image analysis [4].
2.2 In Accordance with the Nature of a Sample Used for Training in Classification
2.2.1 Supervised Classification
Supervised classification enables the user not only to choose sample pixels in an image but also to guide the image-processing software to categorize every other pixel in the picture by using these training sites as references [2]. Training sets are chosen according to the user's requirements. To group pixels together, the user sets a limit on how many similarities must be shared with other pixels; these limits are set in accordance with the spectral characteristics of the training area. Supervised classification has distinct applications; for example, in a support vector machine the classifier takes the form of a complex mathematical function, which is rather incomprehensible to humans [5].
2.2.2 Unsupervised Classification
In unsupervised classification, the outcome is the clustering of pixels with similar characteristics. With the help of different techniques, the device determines which pixels are related and groups them into classes. There should, however, be a relation between the pixels grouped together and the actual features on the ground [5].
2.3 In Accordance with the Nature of Spatial Information
2.3.1 Spectral Classifiers
In this type of classifier, image classification uses only pure spectral information. The extreme variation observed in the spatial distribution of the same class generates a certain amount of noise. Similarly, factors such as the presence of unwanted features and the large extent of the data make the classification of hyperspectral images a difficult task [6]. The artificial neural network, for example, is a spectral classifier.
2.3.2 Contextual Classifiers
Contextual classifiers employ information from neighboring pixels for image classification [2]: the classification of a pixel is based on the spectral values of its neighboring pixels [6]. Example: point-to-point contextual correction is a contextual classifier.
2.4 In Accordance with Various Parameters Used on Data
2.4.1 Parametric Classifier
A classifier is an algorithm used for the implementation of classification. The number of parameters in a parametric model is fixed, and the classifier assigns new test data based on these parameters. Example: discriminant analysis classification [7].
2.4.2 Non-parametric Classifier
The number of parameters in non-parametric models is (potentially) infinite; one can say that the complexity of the model grows with the amount of training data. Example: support vector machines [7].
2.5 In Accordance with the Nature of Margin Used
2.5.1 Hard Classification
In hard classification, a pixel can be assigned to only a single category. In urban regions, however, a pixel may belong to more than one category as a result of the multiplicity of land covers composing that pixel; such a pixel is known as a mixed pixel [8].
2.5.2 Soft Classification
In soft classification, a pixel does not depend completely on a single class; rather, it possesses different degrees of membership in various classes. The mixed-pixel issue is most noticeable in images with lower-resolution data. In fuzzy classification, the calculations are based on the proportions of the land-cover classes within a mixed pixel [8].
2.6 In Accordance with Pixel Information Used
2.6.1 Pixel-Based Classification
To differentiate between spectrally similar classes, a pixel-based classifier uses both spectral and spatial information. Both spectral and spatial heterogeneity are also used for the segmentation of an image after pixel-based classification takes place [9]. In pixel-based classification, each available pixel represents a training sample and is represented as a vector with n dimensions, where n is the number of spectral bands present [6].
2.6.2 Object-Based Classification
In object-based approaches, classification is done on a confined group of pixels that have been grouped together according to their spectral properties [2]. This approach relies heavily on the quality of the image segmentation. For classifying the objects, shape characteristics and neighborhood relationships are also used [10].
2.7 In Accordance with Different Algorithm Models
2.7.1 Artificial Neural Networks
Artificial neural networks are statistical algorithms inspired by properties of biological neural networks. ANNs are used for many purposes, from quite simple categorization problems to speech recognition. An ANN consists of interconnected processing elements known as nodes. The links between nodes carry numerical values, known as weights, and by changing these values systematically the neural network learns the desired function. Each node takes many inputs and produces one output. A hierarchical form of artificial neural network is built in which neurons are organized into distinct layers, as depicted below [11] (Fig. 1).
Fig. 1 The structure of artificial neural network [11]
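A minimal forward pass through such a layered network can be sketched as follows. This is an illustrative toy with assumed weights, not a model from any of the cited works; each node sums its weighted inputs and squashes the result through a sigmoid:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each node applies its weights and bias to
    all inputs and passes the sum through the sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, network):
    """network is a list of (weights, biases) pairs, one per layer."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Toy 2-3-1 network (2 inputs, 3 hidden nodes, 1 output) with assumed weights
net = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.2]),                                  # output layer
]
```

Training would consist of adjusting the weights and biases systematically (e.g., by backpropagation) until the output matches the desired function.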
2.7.2 Decision Tree
A decision tree is like a flowchart in which every internal node represents a test on a feature (e.g., whether a coin flip comes up heads or tails), each leaf node represents a class label (the decision taken after computing all features), and the branches represent conjunctions of features that lead to those class labels. The paths from the root to a leaf represent classification rules. The decision tree is an important algorithm among predictive modelling strategies. Constructing a decision tree involves an algorithmic approach that identifies ways to split a record set based on certain conditions. It is among the most widely used and practical methods for supervised classification [12]. Tree models in which the target variable takes discrete values are known as classification trees, and those in which it takes continuous values (generally real numbers) are known as regression trees. The common term for such a tree is Classification and Regression Tree, denoted CART [9].
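A small CART-style example, using scikit-learn's implementation purely as an illustration (the survey does not prescribe a library; the iris dataset and depth limit are assumptions for the demo):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each internal node tests one feature against a threshold; leaves carry
# class labels, and a root-to-leaf path is one classification rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = tree.predict(X[:5])
```

The fitted tree can also be inspected (e.g., with `sklearn.tree.export_text`) to read off the learned splitting rules.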
2.7.3 Support Vector Machines
SVMs are supervised learning models with associated learning algorithms that examine data for classification and regression analysis. The extent to which a classification can mismatch is known as the degree of confidence; hence we define the terms functional margin and geometric margin. The functional margin indicates the accuracy of the classification of an element, while the geometric margin is its normalized version, giving the approximate Euclidean distance from the hyperplane (or linear classifier) [13]. SVM is one of the most remarkable strategies in pattern classification and image classification. It is designed to separate a fixed set of training pixels, (x_1, y_1), (x_2, y_2), …, (x_N, y_N), where x_i ∈ R^d (the d-dimensional feature space) and y_i ∈ {−1, +1} is the class label, with i = 1, …, N [13] (Fig. 2).
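The margin-maximizing separation can be illustrated with a linear SVM on a tiny hand-made dataset (scikit-learn is used purely as an example here, and the four points are assumed toy data):

```python
from sklearn.svm import SVC

# Four separable points in R^2 with labels in {-1, +1}
X = [[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]]
y = [-1, -1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)

# The support vectors are the training points closest to the separating
# hyperplane; here they are (1, 1) and (3, 3).
sv = clf.support_vectors_
```

The learned hyperplane lies midway between the two support vectors, which is exactly the maximum-margin property described above.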
2.7.4 Synthetic Aperture Radar
In synthetic aperture radar (SAR), images are classified according to a specific feature, the grey level, which is unevenly distributed. The phenomenon of speckle noise is observed, and its presence degrades image quality [14]. This noise is signal-dependent, which is why techniques developed for processing Gaussian noise fail. A single feature, the grey-level mean, can be used to specify a target in a SAR image; the grey-level mean is used in accordance with the maximum-likelihood classifier [14] (Table 1).
Comparative Analysis of Different Image Classifiers …
181
Fig. 2 The structure of a support vector machine [13]
3 Conclusion Image classification has established itself as one of the most prominent applications of machine learning. It is a very complex process, and its main objective is the accurate identification of an image. In this paper, different parameters of image classification are discussed, including the information acquired from the sensors, the nature of the training samples used in classification, the nature of spatial information, the various parameters used on the data, the nature of the margin used, the nature of pixel-based information used, and the different algorithm modules. Remote sensing is a very effective method to acquire information from different sensors. Supervised classification and unsupervised classification are the two training approaches used in classification. Classification based on spectral information comprises spectral, contextual, and spectral-contextual classifiers. Parametric and nonparametric classifiers are distinguished by the parameters used on the data. Another classification is based on the pixel information used, which includes pixel-based and object-based classification. This paper also discusses various image classification techniques, including Synthetic Aperture Radar (SAR), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Decision Tree (DT). Our survey also discussed various problems that arise with different image classification techniques. This paper will therefore help in choosing the best classification approach among all the available techniques.
182
R. P. Singh et al.
Table 1 Comparison between different types of classification methods [15]

Artificial neural network
Description: An artificial neural network is a form of artificial intelligence that imitates some capabilities of the human mind. ANN has a natural tendency to store experiential information [11]. An ANN consists of several layers, each made up of a fixed set of neurons. These neurons are interconnected within and across the layers [15].
Characteristics: ANN uses a nonparametric technique. Its performance and accuracy depend upon the network structure and the number of inputs [15].

Decision tree
Description: A decision tree is like a flowchart that involves splitting the data repeatedly. At every level, the hierarchical classifier allows accepting or rejecting class labels [15]. The method includes three elements: the partitioning of nodes, the detection of terminal nodes, and the allotment of class labels [12].
Characteristics: DTs are supervised learning models; based on the output, DTs are categorized into classification and regression trees [12].

Support vector machine
Description: A support vector machine develops a set of hyperplanes in a high- or infinite-dimensional space, used for classification [13]. SVM is a linear model for classification and regression problems that can solve linear and non-linear practical problems. The concept of SVM is simple: the algorithm creates a line or hyperplane that separates the data into classes [15].
Characteristics: SVM uses a non-parametric classifier and can handle larger input data very efficiently [13]. The hyperplane selection determines the accuracy of SVM [15].

Synthetic aperture radar
Description: Synthetic aperture radar offers a way to obtain a detailed high-resolution radar image, normally of ground features from an aircraft. In spotlight mode, the radar beam is focused on one patch of ground as the aircraft flies from point A to point B [14]. Returned signals are collected continuously to create a high-resolution image integrated over a very large aperture [15].
Characteristics: SAR possesses good azimuth resolution while using a small antenna and long wavelengths [15]. Also, in SAR the resolution does not depend upon the slant range to the target [14].
References 1. J. Song, S. Gao, Y. Zhu, C. Ma, A survey of remote sensing image classification based on CNNs. Big Earth Data 3(3), 232–254 (2019) 2. P. Kamavisdar, S. Saluja, S. Agrawal, A survey on image classification approaches and techniques. Int. J. Adv. Res. Comput. Commun. Eng. 2(1), 1005–1009 (2013) 3. L.H. Thai, T.S. Hai, N.T. Thuy, Image classification using support vector machine and artificial neural network. Int. J. Inf. Technol. Comput. Sci. (IJITCS) 4(5), 32–38 (2012) 4. L. Zhang, L. Zhang, D. Tao, X. Huang, On combining multiple features for hyperspectral remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 50(3), 879–893 (2011) 5. B.C. Love, Comparing supervised and unsupervised category learning. Psychon. Bull. Rev. 9(4), 829–835 (2002) 6. C.M. Gurney, J.R. Townshend, The use of contextual information in the classification of remotely sensed data. Photogram. Eng. Remote Sens. 49(1), 55–64 (1983) 7. A.P. Charaniya, R. Manduchi, S.K. Lodha, Supervised parametric classification of aerial lidar data, in Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2004), pp. 30–30 8. G.M. Foody, R.G. Jackson, C.P. Quine, Potential improvements in the characterization of forest canopy gaps caused by windthrow using fine spatial resolution multispectral data. Comparing hard and soft classification techniques. For. Sci. 49(3), 444–454 (2003) 9. D.C. Duro, S.E. Franklin, M.G. Dubé, A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 118, 259–272 (2012) 10. J. Peña, P. Gutiérrez, C. Hervás-Martínez, J. Six, R. Plant, F. López-Granados, Object-based image classification of summer crops with machine learning methods. Remote Sens. 6(6), 5019–5041 (2014) 11. M. Manish, M. 
Srivastava, A view of artificial neural network, in 2014 International Conference on Advances in Engineering & Technology Research (ICAETR-2014) (IEEE, 2014), pp. 1–3 12. B.A. Shepherd, An appraisal of a decision tree approach to image classification, in IJCAI (1983), pp. 473–475 13. A. Tzotsos, D. Argialas, Support vector machine classification for object-based image analysis, in Object-Based Image Analysis (Springer, Berlin, Heidelberg, 2008), pp. 663–677 14. V.S. Frost, L.S. Yurovsky, Maximum likelihood classification of synthetic aperture radar imagery. Comput. Vis. Graph. Image Process. 32(3), 291–313 (1985) 15. S.S. Nath, G. Mishra, J. Kar, S. Chakraborty, N. Dey, A survey of image classification methods and techniques, in International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 554–557
A Survey on Autism Spectrum Disorder in Biomedical Domain Shreyashi Das and Adyasha Dash
Abstract ASD, or autism spectrum disorder, is a behavioral and neurological condition related to the development of the brain. A person suffering from ASD faces difficulties in social communication and interaction. Here we found that better results can be achieved by machine learning techniques. Our objective was to carry out a survey and find the most appropriate classifier for detecting autism accurately. We took a biomedical data set and used classifiers like random tree, random forest, J48, and naive Bayes to observe results such as precision, F-score, and accuracy. Keywords ASD · Machine learning · J48 · Random tree · Naive Bayes · Random forest
1 Introduction Autism spectrum disorder, commonly known as ASD, refers to a class of complex neurodevelopmental disorders that arise in an initial phase of life [1]. Symptoms of ASD are of various types, involving complications in verbal and nonverbal communication, repetitive behaviors, and atypical social interaction [2]. The main traits of ASD are persistent difficulties in social communication and the occurrence of restricted and repetitive behaviors, interests, or activities [3]. It is found that 1 in every 6 children in the USA suffers from developmental disorders, with 1 in every 68 children suffering from ASD. The average cost of care for an individual with ASD can go as high as $2.4 million [4]. ASD screening is the method by which the autistic symptoms in a patient are determined. This is an important stage, since basic clinical methods like blood tests and body checkups cannot identify autism [5]. Therefore, self-administered and parent-administered screening methods help in the recognition S. Das (B) · A. Dash School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_18
of possible autistic traits at a preliminary phase, which in turn helps doctors in the psychiatric health, psychology, and behavioral science fields to detect autism [6]. Machine learning is utilized to develop an algorithm that is efficient and robust and is based on human-coded behaviors from diagnostic instruments [7]. It is an active research topic that develops intelligent techniques and discovers useful concealed patterns, which are utilized to improve decision-making. In order to develop predictive models on data sets related to autism, machine learning techniques such as decision trees and logistic regression are applied [8]. In the present study, we have listed some questions. We have used the Weka tool to obtain results using various classifiers such as ZeroR, naive Bayes, J48, and random forest. We have then compared the classifiers on various performance parameters to determine the best classifier.
2 Literature Survey 2.1 Machine Learning Machine learning is a rapidly growing branch of computational algorithms. These algorithms are designed to emulate human intelligence by learning from the surrounding environment [9].
2.2 Natural Language Processing Natural language processing (NLP) is a theory-motivated range of computational techniques for the automatic analysis of human language [10]. Text of various languages, modes, and genres is used. The texts can be spoken or written; the only requirement is that they be in a language humans use to communicate with each other. The text should be gathered from actual usage rather than constructed solely for the purpose of language analysis [11].
2.3 Machine Learning Classifiers • The classifier named J48 is a simplified version of the C4.5 decision tree used for classification. The decision tree approach is well suited to classification problems. Here, to model the classification process, a tree is built and then applied to each tuple in the database, which results in the classification of the tuples [12]. Decision tree classifiers are popular because designing the tree does
not require any domain expert knowledge and is good for exploratory knowledge discovery [13]. • The naive Bayes classifier is built on conditional probabilities. Here, Bayes' theorem is used to calculate the probability; the theorem uses the frequency and combinations of values in the historical data. It is useful for finding the probability of an event when the probability of a previous event is already given [14]. • The random forest classifier is a collection of CART-like trees that follows fixed rules for growing, combining, testing, and post-processing the trees. Each tree is grown on a different subsample of the training data set. Trees are first divided into a parent node, and then each parent node is split into two children; this process is also known as binary partitioning [15]. The random forest classifies a new object from an input point by examining the input point on each and every tree in the forest. The forest then selects the classification that has the most votes over all the trees in the forest [16]. • ZeroR is the simplest classification method; it relies on the target and ignores all other predictors. ZeroR simply predicts the majority class. Although the predictive power of ZeroR is almost negligible, it is used to determine a baseline performance, which in turn serves as a benchmark for the other classifiers [17].
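The ZeroR baseline described above can be shown in a few lines (toy labels, hypothetical): it always predicts the majority class, so its accuracy equals the majority-class proportion, which is the benchmark the other classifiers must beat.

```python
from collections import Counter

# Sketch of the ZeroR baseline (toy labels, hypothetical): it always
# predicts the majority class, so its accuracy equals the majority-class
# proportion of the data set.

labels = ["yes"] * 7 + ["no"] * 3

def zero_r(train_labels):
    """Return a 'model' that ignores its input and predicts the majority class."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda _instance: majority

model = zero_r(labels)
preds = [model(x) for x in labels]
accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
print(accuracy)  # 0.7, i.e., the 7:3 majority proportion
```

This is why a classifier that cannot beat the ZeroR accuracy has learned nothing beyond the class distribution.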
2.4 Related Works Downs et al. [18] set out to evaluate an NLP approach that could accurately identify suicide symptoms in ASD patients. Various NLP approaches were evaluated with precision, recall, and F1-score. The study used data collected from the anonymized electronic clinical records of a sample of adolescents with ASD. The NLP tools were deployed to filter out documents without any SR mentions from the original cohort. The suicidality outcome data provided by the NLP extraction tool permit analyses of the complex interplay of ASD traits with factors contributing to the onset and recurrence of suicidality. Leroy et al. [19] used electronic health records to develop a decision support system and case classification for automated diagnostic criteria extraction for detecting autism spectrum disorders in children. Each EHR was parsed, and the input features were the counts of terms occurring in lexicons. They then compared a feedforward backpropagation neural network (NN) and decision trees (C5.0 with adaptive boosting) using R. Accuracy was calculated using tenfold cross-validation. The NN achieved between 72 and 77% accuracy using the lexicons that were created manually. The best results were achieved with 10 hidden units and a decay of 0.6, which resulted in 76.92% accuracy. Slightly better performance was shown by C5.0, with overall accuracy between 76 and 85%.
Kosmicki et al. [20] used machine learning techniques on the autism diagnostic observation schedule (ADOS) to find a subset of behaviors that differentiates between children with and without autism. R and the Weka tool were used for the machine learning analyses. There were four test sets, on which the logistic regression classifier misclassified 13 out of 1089 individuals with ASD (98.81% sensitivity) and 7 out of 66 individuals without ASD (89.39% specificity), resulting in 98.27% accuracy. These results may help in developing future ASD screening tools. Vigneshwaran et al. [21] presented a technique for the diagnosis of ASD using magnetic resonance imaging scans with voxel-based morphometry, and they identified features for a meta-cognitive radial basis function network classifier using a projection-based learning algorithm. It was found that 79 of 184 subjects were detected with ASD and 105 were found healthy when an MR report was taken. The PBL-McRBFN was compared with the J48 decision tree, random forest, SVM, naive Bayes, and K-NN algorithms; all the variants were tested using the Weka tool. The mean accuracy of PBL-McRBFN was found to be 97%. Random forest, 1-NN, and the PBL-McRBFN achieved the highest accuracy, that is, 100%. The results clearly indicated the superior performance of the PBL-McRBFN classifier for determining whether a person is suffering from ASD or not. Levy et al. [22] identified groups of predictive characteristics for behavioral detection of autism. Sparse models, which have higher potential to generalize to the clinical population, were created. The best performers included logistic regression with L2 regularization and linear SVM with L1 regularization. Finally, they obtained an area under the ROC curve of 0.95 for ADOS Module 3 and 0.93 for ADOS Module 2 with 10 or fewer features.
The models provided stability while the time complexity for autism detection was minimized. Maenner et al. [23] developed the autism and developmental disabilities monitoring (ADDM) network, which conducts population-based observation of autism spectrum disorder (ASD) among 8-year-old children. NLP techniques, such as a bag-of-words approach and random forests, were used to compare the features of children that were coherently categorized by the machine learning algorithm. They found that a mechanized approach would be able to predict, with high agreement, whether a child would meet ASD criteria or not. While many logistical issues must be considered, these results clearly indicate the potential for using machine learning techniques to identify ASD from unorganized text data. Duda et al. [24] used forward feature selection, undersampling, and tenfold cross-validation. All the analyses were performed in Python using the Scikit-learn package. Algorithms such as logistic regression, LDA, SVC, and categorical lasso performed with high accuracy (greater than 0.96), while algorithms such as decision tree and random forest had comparatively low accuracy. Clark et al. [25] described a natural language computing system that provides a diagnosis for a participant in a conversation who indicates symptoms of autism. To provide the diagnosis, the system included a diagnosis system that performs a
training process to generate a machine learning model, which is then used to evaluate a textual representation of the conversation. The system annotated the baseline conversations and identified features that are used to identify the symptoms of autism. The system generates a machine learning model that weights the features according to whether or not the identified features are an indicator of autism. The method receives the text of a conversation between a plurality of participants and evaluates it to generate features using NLP. It determines a measure of probability that one of the plurality of participants falls on the autism spectrum. Chlebowski et al. [26] examined the childhood autism rating scale (CARS) as a tool for ASD identification in two- and four-year-old children who were referred as autism patients. The cutoff score to differentiate autistic disorder from PDD-NOS was 32 in the two-year-old sample and 30 in the four-year-old sample, with good sensitivity at both ages. Typical development suggested that an ASD cutoff around 25 is valid in common clinical use. Kohane et al. [27] used electronic health records to estimate the presence of additional conditions co-occurring with ASD in children and young adults. Across three general hospitals and one pediatric hospital, a study was carried out using a distributed query system. Over 14,000 individuals under the age of 35 with ASD were categorized by their comorbidities and, conversely, the prevalence of ASD within these comorbidities was measured. The comorbidity prevalence of younger (less than 18 years) and older (18 to 34 years) individuals with ASD was compared. This study across many healthcare systems showed that there are many side effects and additional symptoms with ASD. Bowel disorders were found in over ten percent of patients with ASD, cranial anomalies in almost 5% of patients, and schizophrenia in 2% of patients (Table 1).
3 Framework See Fig. 1.
Table 1 Literature summarization

Downs et al. [18] (2017)
Concepts used: Machine learning and NLP techniques; evaluated with precision and F1-score
Results: Permitted analyses of ASD symptoms on factors contributing to the onset and recurrence of suicidality

Leroy et al. [19] (2017)
Concepts used: Decision trees implemented in R; accuracy calculated using tenfold cross-validation
Results: C5.0 performed better, with accuracy between 76 and 85%

Kosmicki et al. [20] (2015)
Concepts used: R and Weka tools for machine learning analyses
Results: 13 out of 1089 patients with ASD and 7 out of 66 patients without ASD (89.39% specificity) were misclassified, resulting in an accuracy of 98.27%

Vigneshwaran et al. [21] (2013)
Concepts used: Experiments using the Weka tool: J48 decision tree, random forest, SVM, naive Bayes, and K-nearest neighbor (K-NN) algorithms
Results: PBL-McRBFN achieved 78% accuracy on the testing data set

Levy et al. [22] (2017)
Concepts used: Machine learning techniques
Results: An area under the ROC curve of 0.95 for ADOS Module 3 and 0.93 for ADOS Module 2 with 10 or fewer features

Maenner et al. [23] (2016)
Concepts used: NLP techniques, a bag-of-words approach, and random forests
Results: A mechanized approach was able to predict with high agreement whether a child would meet ASD surveillance criteria or not

Duda et al. [24] (2016)
Concepts used: Forward feature selection, undersampling, tenfold cross-validation, and decision trees; programming mainly in Python
Results: Accuracy obtained was 0.92

(continued)
Table 1 (continued)

Clark et al. [25] (2019)
Concepts used: Machine learning and NLP techniques
Results: Determines a measure of probability that one of the plurality of participants falls on the autism spectrum

Chlebowski et al. [26] (2010)
Concepts used: NLP techniques
Results: In terms of clinical use, the results suggested that the ASD cutoff should be around 25

Kohane et al. [27] (2012)
Concepts used: NLP techniques
Results: The study shows the additional health conditions occurring with ASD
Input: prepare a medical data set.
Preprocessing: open the Weka software, open the medical data set, choose proper classifiers, select the test options (cross-validation with the chosen number of folds), and select the response, which should be a categorical variable.
Output: results and prediction information, i.e., prediction error rates, confusion matrix, and model estimation.
Fig. 1 Data flow diagram
4 Data Description and Flowchart
Type of data used: univariate, multivariate, sequential, binary and continuous, time-series, text or domain-theory, nominal/categorical
Job of the data: categorization or classification
Types of attributes used: continuous, categorical, and binary
Field: medical health
Format: non-matrix
Does the data set contain missing values? Yes
No. of instances (records in the data set): 704
No. of attributes (fields within each record): 21 (Table 2 and Fig. 2).
4.1 Flowchart Description 1. First, we prepare a medical data set of questions related to autism and people suffering from autism. 2. Next, we open the Weka tool, which is pre-installed on our machine. 3. From the applications section of the tool, we select Explorer and then choose the medical data set file we want to open in the application.
Table 2 Data description
Attribute | Type | Description
Age | Integer | Age of the patient in years
Gender | String | Male/female
Ethnicity | String | List of countries in text format
Is the patient born with jaundice | Boolean | Whether or not the case was born with jaundice
Family member has PDD | Boolean | Whether or not any family member has PDD
Who is completing the test | String | Parent, self, caregiver, medical staff, clinician, etc.
Residence country | String | List of country names in text format
Screening application used before | Boolean | Whether or not the user has used a screening application
Type of screening method | Integer (0, 1, 2, 3) | The type of screening method used, based on age
Answer to question 1 | Binary (0 and 1) | Answer code
Answer to question 2 | Binary (0 and 1) | Answer code
Answer to question 3 | Binary (0 and 1) | Answer code
Answer to question 4 | Binary (0 and 1) | Answer code
Answer to question 5 | Binary (0 and 1) | Answer code
Answer to question 6 | Binary (0 and 1) | Answer code
Answer to question 7 | Binary (0 and 1) | Answer code
Answer to question 8 | Binary (0 and 1) | Answer code
Answer to question 9 | Binary (0 and 1) | Answer code
Answer to question 10 | Binary (0 and 1) | Answer code
Screening score | Integer | A final score obtained using the scoring algorithms
4. Next, we select the classify option and open the Classify tab, where all the classification takes place. 5. Here we select appropriate classifiers such as ZeroR, naive Bayes, decision trees, and several others. 6. Then we select the test options, such as cross-validation and the number of folds required, along with the responses for our data set. 7. We click the Start button, and the required result is shown in the classifier output window. 8. From the output, we predict and extract information such as the error rate, confusion matrix, and model estimation.
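The tenfold cross-validation selected in step 6 can be sketched outside Weka as well. The fold-splitting logic below is a plain-Python illustration (the classifier itself is omitted; Weka performs this internally), using the 704 instances described above:

```python
# Sketch of tenfold cross-validation fold splitting in plain Python. Only
# the index partitioning is shown; training and evaluating a classifier on
# each (train, test) pair is omitted.

def k_fold_indices(n, k=10):
    """Yield (train, test) index lists; every index is tested exactly once."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(704, 10))
print(len(folds))                       # 10 folds
print(sum(len(te) for _, te in folds))  # 704: each instance tested once
```

In practice the instances would be shuffled (or stratified by class) before splitting, which is what Weka's default cross-validation does.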
Fig. 2 Work flow diagram: the patient data is collected; attribute categories and values are determined; the data is stored in CSV file format; classification algorithms are applied using the Weka tool; performance evaluation and result analysis follow.
5 Result and Analysis
See Tables 3, 4 and Fig. 3.

Table 3 Classification results
Classification | TP rate | FP rate | Precision | Recall | F-measure | ROC area
Naive Bayes | 0.970 | 0.018 | 0.972 | 0.970 | 0.971 | 0.999
Random forest | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000
Random tree | 0.962 | 0.061 | 0.962 | 0.962 | 0.962 | 0.960
ZeroR | 0.732 | 0.732 | – | 0.732 | – | 0.495
J48 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000
Table 4 Accuracy results
Classification | Correctly classified instances (%) | Incorrectly classified instances (%)
Naive Bayes | 97.017 | 2.983
Random forest | 100 | 0
Random tree | 96.1648 | 3.8352
ZeroR | 73.1534 | 26.8466
J48 | 100 | 0
Fig. 3 Accuracy result in graphical representation

Table 5 Confusion matrix for naive Bayes
a | b
515 | 0
0 | 189

Table 6 Confusion matrix for random forest
a | b
502 | 13
14 | 175
5.1 Confusion Matrix
A confusion matrix contains the information about the actual and the predicted classifications made by a classification system [28] (Tables 5, 6, 7, 8 and 9).

Table 7 Confusion matrix for random tree
a | b
515 | 0
189 | 0

Table 8 Confusion matrix for ZeroR
a | b
515 | 0
0 | 189

Table 9 Confusion matrix for J48
a | b
496 | 2
19 | 187
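The Table 3-style metrics can be derived directly from a 2x2 confusion matrix. The sketch below uses illustrative counts (similar in shape to the tables above, but not asserted to match any one of them):

```python
# Sketch: deriving Table 3-style metrics (TP rate, FP rate, precision,
# accuracy) from a 2x2 confusion matrix. The counts are illustrative only.

cm = [[496, 2],    # actual class a: 496 predicted a, 2 predicted b
      [19, 187]]   # actual class b: 19 predicted a, 187 predicted b

tp, fn = cm[0]
fp, tn = cm[1]

tp_rate = tp / (tp + fn)                   # recall for class a
fp_rate = fp / (fp + tn)                   # fraction of class b wrongly called a
precision = tp / (tp + fp)                 # of those predicted a, how many are a
accuracy = (tp + tn) / (tp + fn + fp + tn)
print(round(accuracy, 5))  # 0.97017 on these 704 instances
```

Weka reports these per-class rates (weighted over classes) in its classifier output window, which is where the values in Table 3 come from.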
6 Conclusion and Future Scope We made a detailed study of different types of classifiers using the Weka tool. We took a biomedical data set and used machine learning techniques to determine the accuracy of each classifier individually. We calculated precision, F-score, TP rate, and many other factors. We used classifiers such as random forest, random tree, J48, and naive Bayes. We mainly looked at the accuracy of the classifiers and determined that J48 and random forest are the two classifiers with 100% accuracy, making them suitable for determining the symptoms of ASD correctly. ZeroR has the least accuracy, at 73.15%. In the future, we will pursue the behavioral analysis of ASD and attempt early ASD detection, which will help medical professionals detect ASD in an early stage of life. Other classifiers can be surveyed to see whether they perform better than the ones we have already used. We will continue surveying for the best classifier to detect ASD.
References 1. M.B. Usta, K. Karabekiroglu, B. Sahin, M. Aydin, A. Bozkurt, T. Karaosman, A. Aral, C. Cobanoglu, A.D. Kurt, N. Kesim, ˙I. Sahin, Use of machine learning methods in prediction of short-term outcome in autism spectrum disorders. Psychiatry Clin. Psychopharmacol. 1–6 (2018) 2. J. Yuan, C. Holtz, T. Smith, J. Luo, Autism spectrum disorder detection from semi-structured and unstructured medical data. EURASIP J. Bioinf. Syst. Biol. 2017(1), 1–9 (2017). https:// doi.org/10.1186/s13637-017-0057-1 3. A. Crippa, C. Salvatore, P. Perego, S. Forti, M. Nobile, M. Molteni, I. Castiglioni, Use of machine learning to identify children with autism and their motor abnormalities. J. Autism Dev. Disord. 45(7), 2146–2156 (2015) 4. Y. Jayawardana, M. Jaime, S. Jayarathna, Analysis of temporal relationships between ASD and brain activity through EEG and machine learning 5. M. Al-Diabat, Fuzzy data mining for autism classification of children. Int. J. Adv. Comput. Sci. Appl. 9(7), 11–17 (2018) 6. F. Thabtah, D. Peebles, A new machine learning model based on induction of rules for autism detection. Health Inf. J. 1460458218824711 (2019) 7. D. Bone, M.S. Goodwin, M.P. Black, C.C. Lee, K. Audhkhasi, S. Narayanan, Applying machine learning to facilitate autism diagnostics: pitfalls and promises. J. Autism Dev. Disord. 45(5), 1121–1136 (2015) 8. F. Thabtah, Machine learning in autistic spectrum disorder behavioral research: a review and ways forward. Inf. Health Soc. Care 44(3), 278–297 (2019) 9. I. El Naqa, M.J. Murphy, What is machine learning?, in Machine Learning in Radiation Oncology (Springer, Cham, 2015), pp. 3–11
10. E. Cambria, B. White, Jumping NLP curves: a review of natural language processing research. IEEE Comput. Intell. Mag. 9(2), 48–57 (2014) 11. E.D. Liddy, Natural language processing (2001) 12. T.R. Patil, S.S. Sherekar, Performance analysis of Naive Bayes and J48 classification algorithm for data classification. Int. J. Comput. Sci. Appl. 6(2), 256–261 (2013) 13. A. Rajput, R.P. Aharwal, M. Dubey, S.P. Saxena, M. Raghuvanshi, J48 and JRIP rules for e-governance data. Int. J. Comput. Sci. Secur. (IJCSS) 5(2), 201 (2011) 14. G. Parthiban, A. Rajesh, S.K. Srivatsa, Diagnosis of heart disease for diabetic patients using naive bayes method. Int. J. Comput. Appl. 24(3), 7–11 (2011) 15. J. Coe, Performance comparison of Naïve Bayes and J48 classification algorithms. Int. J. Appl. Eng. Res. 7(11), 2012 (2012) 16. A. Folleco, T.M. Khoshgoftaar, J. Van Hulse, L. Bullard, Software quality modeling: the impact of class noise on the random forest classifier, in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence) (IEEE, 2008 June), pp. 3853–3859 17. L. Guo, Y. Ma, B. Cukic, H. Singh, Robust prediction of fault-proneness by random forests, in 15th International Symposium on Software Reliability Engineering (IEEE, 2004 November), pp. 417–428 18. J. Downs, S. Velupillai, G. George, R. Holden, M. Kikoler, H. Dean, A. Fernandes, R. Dutta, Detection of suicidality in adolescents with autism spectrum disorders: developing a natural language processing approach for use in electronic health records, in AMIA Annual Symposium Proceedings, vol. 2017 (American Medical Informatics Association, 2017), p. 641 19. G. Leroy, Y. Gu, S. Pettygrove, M. Kurzius-Spencer, Automated lexicon and feature construction using word embedding and clustering for classification of ASD diagnoses using EHR, in International Conference on Applications of Natural Language to Information Systems (Springer, Cham, 2017 June), pp. 34–37 20. J.A. Kosmicki, V. Sochat, M. 
Duda, D.P. Wall, Searching for a minimal set of behaviors for autism detection through feature selection-based machine learning. Transl. Psychiatry 5(2), e514–e514 (2015) 21. S. Vigneshwaran, B.S. Mahanand, S. Suresh, R. Savitha, Autism spectrum disorder detection using projection based learning meta-cognitive RBF network, in The 2013 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2013 August), pp. 1–8 22. S. Levy, M. Duda, N. Haber, D.P. Wall, Sparsifying machine learning models identify stable subsets of predictive features for behavioral detection of autism. Mol. Autism 8(1), 65 (2017) 23. M.J. Maenner, M. Yeargin-Allsopp, K.V.N. Braun, D.L. Christensen, L.A. Schieve, Development of a machine learning algorithm for the surveillance of autism spectrum disorder. PloS One 11(12) (2016) 24. M. Duda, R. Ma, N. Haber, D.P. Wall, Use of machine learning for behavioral distinction of autism and ADHD. Transl. Psychiatry 6(2), e732–e732 (2016) 25. A.T. Clark, B.J. Cragun, A.W. Eichenlaub, J.E. Petri, J.C. Unterholzner, International Business Machines Corp, Diagnosing Autism Spectrum Disorder Using Natural Language Processing. U.S. Patent 10,169,323 (2019) 26. C. Chlebowski, J.A. Green, M.L. Barton, D. Fein, Using the childhood autism rating scale to diagnose autism spectrum disorders. J. Autism Dev. Disord. 40(7), 787–799 (2010) 27. I.S. Kohane, A. McMurry, G. Weber, D. MacFadden, L. Rappaport, L. Kunkel, J. Bickel, N. Wattanasin, S. Spence, S. Murphy, S. Churchill, The co-morbidity burden of children and young adults with autism spectrum disorders. PloS One 7(4) (2012) 28. T. Garg, S.S. Khurana, Comparison of classification techniques for intrusion detection dataset using WEKA, in International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014) (IEEE, 2014, May), pp. 1–5
Performance of Photovoltaics in Ground Mount-Floating-Submerged Installation Methods Nallapaneni Manoj Kumar, A. Ajitha, Aneesh A. Chand, and Sonali Goel
Abstract The use of ground-mounted photovoltaic (GMPV) systems for power generation is becoming popular. GMPV systems demand vast land areas for their installation, which has resulted in land-use conflicts. To mitigate these land-use issues, a few novel photovoltaic (PV) installation methods have emerged, including floating photovoltaics (FPV) and submerged photovoltaics (SPV). However, many in the literature have raised concerns over FPV and SPV performance. In this paper, an experimental study is carried out to understand the performance of GMPV, FPV, and SPV systems. Three prototype PV systems with data collection units, one for each installation method, are designed, and an outdoor experiment is conducted on all three at the same time. The recorded results include weather parameters and electrical parameters. The analysis shows that FPV produces higher energy outputs than the GMPV and SPV systems. Based on this investigation, we recommend the FPV installation mode for solar power generation. Large-scale deployment and promotion of FPV systems would ease the land-use conflicts between the solar power and agriculture sectors. Also, the enhanced power outputs from FPV can help in addressing the growing energy crisis.

Keywords Photovoltaics · PV installation · Power conversion efficiency · PV performance · Floating solar · Submerged PV · Ground-mounted PV

N. Manoj Kumar (B) School of Energy and Environment, City University of Hong Kong, Kowloon, Hong Kong, China e-mail: [email protected]

A. Ajitha Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science (BITS) Pilani, Hyderabad Campus, Hyderabad, Telangana, India e-mail: [email protected]; Department of Electrical and Electronics Engineering, Anurag Group of Institutions, Hyderabad, Telangana 500088, India

A. A. Chand School of Engineering and Physics, University of the South Pacific, Suva, Fiji e-mail: [email protected]

S. Goel Department of Electrical Engineering, Siksha ‘O’ Anusandhan (Deemed to be University), Bhubaneswar, Odisha 751030, India e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_19
1 Introduction

Photovoltaics (PV) play a prime role in the modern-day power sector. Combined efforts to promote renewables and to reduce greenhouse gas emissions have accelerated the use of PV systems for power generation, and various initiatives taken by public and private organizations have also supported PV installations [1]. At present, most PV installations across the globe are ground-mounted photovoltaics (GMPV). Installed GMPV systems perform within their design limits and help meet power demands. Recent literature on GMPV systems suggests that their performance varies from location to location, as the systems experience different weather conditions [2, 3]. Studies have also highlighted the negative impacts of these weather conditions on the power generation outputs of GMPV systems [4]; for example, the rise in PV module temperature and the resulting performance degradation are becoming serious challenges [5]. On the other side, many activists point out that PV installations occupy vast land areas (approximately 2.5 to 5 acres per MW of solar PV), which has significantly affected land-use decisions [6]. More specifically, with the push for solar energy, in many places agricultural land has been converted into solar power generating stations, creating severe land-use conflicts between the agriculture sector and the power sector [7]. Keeping the negative effects of PV module temperature and land-use conflicts in mind, a few novel methods of PV installation have been identified: floating photovoltaics (FPV) and submerged photovoltaics (SPV). In the FPV installation method, the PV modules are installed on the water surface with the help of floating devices; FPV appears to be a reliable way of avoiding land use.
Besides, FPV systems act as barriers or water covers that limit evaporation from the water bodies. At the same time, the PV module temperature tends to reduce due to the cooling effect of the water [8]. Similarly, the concept of SPV systems has also become popular; here the PV modules are immersed in water at different depths to harness power for marine applications [9]. Theoretically, the FPV and SPV systems seem to provide solutions, but many have questioned their functional performance in terms of power outputs and power conversion efficiencies. For example, the incoming solar radiation that hits a PV module in deep water differs from that on GMPV and FPV modules. At the same time, the thermal regulation of the PV module differs between the FPV and SPV systems.
The above-highlighted concerns question the power generation capability of each system, leading to serious doubts about the selection of an installation method among GMPV, FPV, and SPV. Hence, a study that reveals the comparative performance of GMPV, FPV, and SPV is needed. The objectives of this paper are as follows:
• To conduct an experimental study on three different PV installation methods (GMPV: ground-mounted photovoltaics, FPV: floating photovoltaics, and SPV: submerged photovoltaics) and to understand the deviation in their performance.
• To monitor and analyze the weather parameters (global radiation, wind speed, ambient temperature, and PV module temperature) and basic electrical parameters (voltage, current).
• To evaluate and compare the power parameters, namely power input, power output, and power conversion efficiency, of GMPV, FPV, and SPV.
2 Materials and Methods

2.1 Power Conversion Efficiency

In the performance assessment of solar photovoltaic modules, power conversion efficiency is one of the most critical parameters. It refers to the amount of electricity produced by the photovoltaic cells when photon energy from the sun, in the form of sunlight, is incident on their surface. Mathematically, it is the ratio of the output electrical power produced by the PV module to the input power available at the module area. Equation (1) is used to evaluate the power conversion efficiency of the photovoltaic module installed in the three different configurations (ground-mounted, floating, and submerged) [9].

ηpce = Pout / Pin (1)

where ηpce is the power conversion efficiency in %; Pout is the output electrical power generated by the PV module in W; and Pin is the power input available at the PV module in W. Pin is estimated using Eq. (2) as the product of the solar irradiance incident on the PV module and its area [9].

Pin = Sr × APV (2)

where Sr is the solar irradiance in W/m²; and APV is the area of the PV module in m².
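As a minimal sketch, Eqs. (1) and (2) can be implemented directly. The module area (0.215 m × 0.191 m = 0.041065 m²) is taken from Table 1, and the irradiance and output-power figures used below are sample values reported in Sect. 3; function and variable names are illustrative.

```python
# Sketch of the power input and conversion efficiency calculations,
# Eqs. (1) and (2). Module dimensions are from Table 1.

A_PV = 0.215 * 0.191  # PV module area in m^2 (= 0.041065 m^2)

def power_input(s_r: float, a_pv: float = A_PV) -> float:
    """Eq. (2): Pin = Sr * APV, with Sr in W/m^2 and APV in m^2."""
    return s_r * a_pv

def conversion_efficiency(p_out: float, p_in: float) -> float:
    """Eq. (1): eta_pce = Pout / Pin, returned in percent."""
    return 100.0 * p_out / p_in

# The maximum irradiance on GMPV/FPV (982 W/m^2) reproduces the reported
# ~40.3 W maximum input power; the SPV minimum (293.91 W/m^2) gives 12.07 W.
print(round(power_input(982.0), 2))    # 40.33 W
print(round(power_input(293.91), 2))   # 12.07 W
# Illustrative efficiency at the FPV peak output of 3 W:
print(round(conversion_efficiency(3.0, power_input(982.0)), 2))  # 7.44 %
```

These two worked values agree with the maximum GMPV/FPV input power and the minimum SPV input power given in Sect. 3, which is a useful sanity check on Eq. (2).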
Fig. 1 Experimental setup showing three different ways of photovoltaic module installation
Table 1 Specifications and components used for the experimentation

Installation methods: Ground-mounted, Floating, Submerged
PV technology: Thin-film (a-Si)
PV module electrical parameters: Open-circuit voltage 21.9 V; short-circuit current 0.32 A; maximum power 10 W; maximum voltage 12 V
PV module dimension and area: 0.215 m × 0.191 m; area 0.041065 m²
Instruments used: Voltmeter 0–30 V; ammeter 0–2 A; rheostat 300 Ω/2 A
2.2 Experimental Setup

An experimental setup, as shown in Fig. 1, is designed. Three thin-film PV modules made of a-Si material are used for the experimentation, with provision for the three installation methods: ground-mounted, floating, and submerged. For monitoring the voltage and current outputs of the three experimental setups, a voltmeter and an ammeter are used, and a rheostat is used to adjust the load. With the rheostat value set, the experiment is performed by noting the currents and voltages from each setup every 15 min from morning to evening. Table 1 gives the detailed specifications and components used for the experimentation.
3 Results and Discussion

Using the designed experimental setup, data on weather parameters and PV module electrical parameters are recorded for the three installation methods (GMPV, FPV, and SPV). The data are then analyzed and presented graphically. The global radiation measured for each installation type is
given in Fig. 2a. The solar radiation incident on GMPV and FPV is observed to be the same, varying between 650 and 982 W/m². For the SPV module, the incident solar radiation varied between 293.91 and 686.49 W/m². Overall, the incident solar radiation on the SPV module is lower than that on GMPV and FPV. Wind speed is another critical parameter that generally influences the efficiency of PV modules; it varied between 1.6 and 4 m/s over the monitored local time, as shown in Fig. 2b. Temperature must also be considered when analyzing the performance of a PV module in any installation configuration. Figure 3 shows the monitored temperatures, which include the temperature at standard testing conditions (STC), the ambient temperature, the water surface temperature, and the water temperature at 12 cm depth. In SPV, water is the medium in which the PV modules are immersed; hence, the water temperature is considered in the performance analysis. The PV module temperatures in the GMPV, FPV, and SPV installation methods are also shown in Fig. 3. In the GMPV installation method, the recorded PV module temperatures range between 44.19 and 50.90 °C for local times between 11:00 a.m. and 4:00 p.m., whereas for the same period the recorded PV module temperatures for FPV and SPV are in the ranges 41.33–46.31 °C and 33.77–42.34 °C, respectively. The recorded maximum PV module temperatures are 58.5, 52.42, and 49.09 °C, and the recorded minimums are 44.19, 41.33, and 33.77 °C, for the GMPV, FPV, and SPV installation methods, respectively. Compared to GMPV, the module temperatures observed in the FPV and SPV methods are lower. In FPV, the cooling effect of water on the PV module results in the temperature reduction.
The recorded PV module electrical parameters, voltage and current, for the three installation methods (GMPV, FPV, and SPV) are represented graphically in Fig. 4a, b. The voltage and current are monitored every 15 min from 11:00 a.m. to 4:00 p.m., and their variation is
Fig. 2 Weather parameters. a Global radiation in w/m2 , b Wind speed in m/s
Fig. 3 Ambient temperatures and photovoltaic module temperatures in GMPV, FPV, and SPV installation methods
dynamic in each monitored installation method. In the GMPV and FPV installation methods, the recorded voltages over the monitored period lie between 5 and 10 V, whereas for SPV the recorded voltage lies between 3.6 and 8 V. The recorded maximum voltages are 11, 12, and 11.5 V for the GMPV, FPV, and SPV installation methods, respectively; the recorded minimum voltages are 5 V (GMPV and FPV) and 3.6 V (SPV). In the GMPV installation method, the recorded current over the monitored period lies between 0.1 and 0.2 A, whereas for FPV and SPV the recorded currents lie between 0.1 and 0.23 A and between 0.12 and 0.13 A, respectively. The recorded maximum currents are 0.24, 0.26, and 0.25 A, and the minimums are 0.1, 0.1, and 0.12 A, for the GMPV, FPV, and SPV installation methods, respectively. Using the PV module electrical parameters discussed above and the methodology of Sect. 2, the performance of the PV module in the GMPV, FPV, and SPV installation methods is studied. The resulting power inputs, power outputs, and power conversion efficiencies are presented graphically in Fig. 5a–c, respectively. The power input, calculated using Eq. (2), is shown in Fig. 5a. The power input for the GMPV and FPV installation methods is observed to be similar, lying between 26.69 and 28.54 W for local times between 11:00 a.m. and 4:00 p.m. The recorded maximum input power is 40.32 W, and the minimum is 26.92 W. For SPV, the observed input power
Fig. 4 Electrical parameters of GMPV, FPV, and SPV. a Voltage in volts, b current in amps
Fig. 5 Performance parameters of GMPV, FPV, and SPV. a Power input in watts, b power output in watts, and c power conversion efficiency in percentage
is lower than that of the GMPV and FPV systems; this is due to the low incident radiation on the SPV module. The recorded maximum and minimum input powers for the SPV installation are 36.73 and 12.07 W, respectively. Figure 5b shows the monitored power outputs of the PV module in the three installation methods. The PV module in the GMPV installation method produced a power output in the range of 0.5–2 W over the period monitored between 11:00 a.m. and 4:00 p.m.; a maximum power output of 2.31 W is observed during the peak sun hours, and the minimum power output is 0.5 W. In the case of FPV, the power outputs produced by the PV module are higher than those of GMPV: the recorded power outputs in FPV are in the range of 0.5–2.3 W, with a maximum power output of 3 W and a minimum of 0.5 W. The power outputs of the PV module in SPV are observed to be much lower than those of GMPV and FPV; in SPV, the maximum power produced is 2.3 W, and the minimum is 0.43 W. Figure 5c shows the power conversion efficiencies calculated using Eq. (1). The power conversion efficiency of the PV module varies across the three installation configurations and tends to increase in the FPV and SPV cases.
4 Conclusion

In this paper, an experimental study of a thin-film amorphous silicon module is carried out in the GMPV, FPV, and SPV installation methods. From the experimental investigation, the following conclusions are made:
• The PV module behaves differently in each of the studied installation methods.
• The temperature of the PV module is lower in the FPV and SPV cases than in GMPV.
• Overall, the output power produced by the PV module in the FPV installation method is higher than in the SPV and GMPV methods.
• Comparing the GMPV and FPV installation methods, the power outputs in FPV are higher by 15–66.66%.
• The SPV system produces less output power than GMPV and FPV, with reductions in the ranges of 0.99–63% and 7.03–63.33%, respectively.
• Overall, the PV module temperature and the incoming solar radiation are the parameters that most affect the performance of FPV and GMPV.
• In the case of SPV, the incoming radiation is much lower, due to which the total produced power is lower.
Based on the above conclusions, we recommend the FPV installation method for PV modules when land area is limited. However, the decision on selecting an installation method depends on multiple criteria, such as the availability of land, the availability
of water surface area, cost, and power demand. We believe this work will serve as useful material for readers.
References

1. R.M. Elavarasan, G.M. Shafiullah, N. Manoj Kumar, S. Padmanaban, A state-of-the-art review on the drive of renewables in Gujarat, state of India: present situation, barriers and future initiatives. Energies 13(1), 40 (2020)
2. P. Rajput, M. Malvoni, N. Manoj Kumar, O.S. Sastry, A. Jayakumar, Operational performance and degradation influenced life cycle environmental-economic metrics of mc-Si, a-Si and HIT photovoltaic arrays in hot semi-arid climates. Sustainability 12(3), 1075 (2020)
3. S. Thotakura, S.C. Kondamudi, J.F. Xavier, M. Quanjin, G.R. Reddy, P. Gangwar, S.L. Davuluri, Operational performance of megawatt-scale grid integrated rooftop solar PV system in tropical wet and dry climates of India. Case Stud. Therm. Eng. 100602 (2020)
4. P. Rajput, M. Malvoni, N.M. Kumar, O.S. Sastry, G.N. Tiwari, Risk priority number for understanding the severity of photovoltaic failure modes and their impacts on performance degradation. Case Stud. Therm. Eng. 16, 100563 (2019)
5. N.C. Park, W.W. Oh, D.H. Kim, Effect of temperature and humidity on the degradation rate of multicrystalline silicon photovoltaic module. Int. J. Photoenergy 2013, 9 (2013)
6. S. Ong, C. Campbell, P. Denholm, R. Margolis, G. Heath, Land-use requirements for solar power plants in the United States (No. NREL/TP-6A20-56290). National Renewable Energy Lab. (NREL), Golden, CO, USA (2013)
7. R.R. Hernandez, M.K. Hoffacker, M.L. Murphy-Mariscal, G.C. Wu, M.F. Allen, Solar energy development impacts on land cover change and protected areas. Proc. Nat. Acad. Sci. 112(44), 13579–13584 (2015)
8. N.M. Kumar, J. Kanchikere, P. Mallikarjun, Floatovoltaics: towards improved energy efficiency, land and water management. Int. J. Civ. Eng. Technol. 9(7), 1089–1096 (2018)
9. A. Ajitha, N.M. Kumar, X.X. Jiang, G.R. Reddy, A. Jayakumar, K. Praveen, T.A. Kumar, Underwater performance of thin-film photovoltaic module immersed in shallow and deep waters along with possible applications. Results Phys. 15, 102768 (2019)
IoT-Based System for Residential Peak Load Management and Monitoring of Connected Load A. Ajitha and Sudha Radhika
Abstract Utilities in the existing power system network face difficulty in handling loads during peak load conditions, which arise due to the accumulation of loads at the same time. This can be addressed through the effective utilization of available power resources, for which load management through demand response is a good option in the future smart grid context. This paper presents an Internet of Things (IoT) based solution for load management at the consumer level with direct load control (DLC). In the proposed system, residential loads are classified as critical and non-critical, and their time of operation is controlled during peak hours. Additional connected loads in the domestic sector increase the burden on distribution transformers, for which an effective monitoring system is proposed in this paper. Mobile alerts are sent to the consumer when a peak load occurs on the grid and also when the energy utilization reaches 75% or more of the maximum agreed load, in order to monitor additional connected loads.

Keywords Load management · Demand response · Direct load control · Internet of things (IoT) · Connected load monitoring · Arduino
A. Ajitha · S. Radhika Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science (BITS) Pilani, Hyderabad Campus, Hyderabad, India e-mail: [email protected] A. Ajitha (B) Department of Electrical and Electronics Engineering, Anurag Group of Institutions, Venkatapur, Hyderabad, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_20
1 Introduction

The power grid is the huge network built to deliver electrical energy to end consumers; it comprises power generation, transmission, and distribution [1]. The increasing demand for electricity has strained the traditional network that has been delivering power for decades, and the network has to be restructured to meet the future requirements of loads while maintaining reliability, resiliency, efficiency, and stability [1–4]. To maintain power system reliability and to minimize operating costs, load management (LM) was introduced by utilities in the 1970s [5]. A sharp peak in the demand curve is caused mainly by the pooling of loads during peak times, which makes it hard for an electrical utility to serve the demand [1, 2, 6]. Power utilities are keen to reduce those peaks to maintain the supply-demand balance [7] through load management techniques such as load shedding, in which a certain amount of load is removed from the grid at instants of peak demand so that the remaining part can operate effectively [8]. However, load shedding greatly influences human life and causes suffering in many ways [9]. A satisfactory alternative to load shedding is to encourage optimized utilization of the available generation capacity instead of increasing power generation [9]. In the existing power system network, the consumer is unaware of peak hours and continues to use electricity without any proper load management; this leads to a sharp rise in demand and affects system stability if not handled in time. In recent times, smart grid technologies that provide bi-directional communication and demand side management (DSM) have been gaining importance for achieving flexible load demand [1, 7]. According to the EU, a smart grid is an intelligent electrical network that integrates the activities of connected users to deliver effective electrical services, including sustainability, economics, and security [10].
In the smart grid, DSM plays a key role in reducing peak loads and enhancing grid reliability [11]. DSM can be accomplished by controlling the load following the utility's time-dependent pricing methods, which is termed demand response (DR) [12]. DR is a set of actions implemented on the consumer side that changes regular consumer power consumption patterns based on dynamic electricity pricing [13]. DR is a basic means of improving smart grid operations, and consumers' participation and interaction with utilities can be achieved through it [14]. Consumers can be engaged in DR programs by offering dynamic price signals or through programs such as direct load control (DLC), in which consumer loads are directly controlled by the utility [8]. DLC is a typical load management method for mitigating peak-hour load [15]. In the conventional distribution network, there is no effective monitoring of the additional connected load (CL) of residential consumers. Additional connected loads are those over and above the connected loads agreed at the first release of service. There is no proper system to regularize such loads, which cause an unexpected increase in load on the system, overloading of distribution transformers, low voltages, and fuse blowouts; at the consumer level, this leads to burning of service mains and, finally, to a less reliable power supply.
There exists literature on different methods adopted for DLC. Huang [16] proposed a fuzzy-based algorithm for integrating DLC with interruptible load management. Vivekananthan et al. [17] presented an algorithm for residential demand response. In [15], Wang et al. proposed an intelligent system for DLC of air conditioners under DSM in a microgrid system. Stenner et al. examined consumer distrust in participating in DLC [18]. Safty et al. [19] proposed a particle swarm optimization algorithm for scheduling controllable devices using DSM. Mahmood et al. presented an overview and comparison of load management techniques in the smart grid [20]. In this paper, a framework is proposed, using the Internet of Things (IoT), to achieve consumer-level load management and to alert consumers about their maximum demand consumption. Based on grid conditions, the operation of domestic loads is controlled so that sharp peaks can be avoided, and by continuously monitoring the consumer's energy demand, the problem of additional connected loads is addressed. The paper is structured as follows. Section 2 explains the proposed system framework and the execution flowchart. Section 3 presents the implementation using a pilot study. Results are illustrated in Sect. 4, and Sect. 5 concludes the paper.
2 Proposed Framework

In this section, the architecture of the proposed system is explained. Figure 1 shows the proposed system architecture for monitoring the loads in each residential home; it contains a processing unit, a microcontroller. The controller manages the operation of each load depending on its status (on/off) and the load condition of the main grid. The GSM network is used both to send peak notifications to the consumer and to alert him when his maximum demand is reached. To implement load control, the home appliances are categorized as critical and non-critical based on consumer comfort. Critical loads are those whose operation cannot be altered because it would affect consumer comfort, for example, lights, fans, and TVs. Non-critical loads are those whose time of operation can be changed, such as a washing machine or a water pump. Figure 2 shows the framework of the load control performed in this paper. A typical domestic house is considered with the above load classification. When peak load conditions prevail on the grid, the consumer gets a notification, and the controller at the house level automatically turns off non-critical loads while critical load operation is unaltered. Once the peak load is over, the non-critical loads are turned on again automatically. At the same time, the energy demand of each consumer is measured; if it is more than the prescribed limit (agreed load demand), he is alerted initially, and if the demand increases further, he is levied a penalty accordingly. IoT technology is used to realize this.
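The controller's decision logic described above can be sketched as follows. This is an illustrative model, not the actual firmware: the load names mirror the critical/non-critical examples given in the text, and the 75%/85% thresholds are taken from the framework, while all function and variable names are assumptions.

```python
# Hedged sketch of the DLC decision logic: critical loads always stay on,
# non-critical loads follow the grid peak status, and demand is compared
# against the 75% (alert) and 85% (penalty) thresholds of the agreed load.

CRITICAL = {"light", "fan", "tv"}                # operation never altered
NON_CRITICAL = {"washing_machine", "water_pump"} # shed during peak hours

def load_commands(peak_on: bool) -> dict:
    """Return the on/off command for every load given the grid peak status."""
    cmds = {load: True for load in CRITICAL}             # critical: always on
    cmds.update({load: not peak_on for load in NON_CRITICAL})
    return cmds

def demand_alert(demand_kw: float, agreed_kw: float) -> str:
    """Alert level based on the consumer's agreed maximum demand."""
    ratio = demand_kw / agreed_kw
    if ratio >= 0.85:
        return "penalty"   # demand at or above 85% of agreed load
    if ratio >= 0.75:
        return "alert"     # first warning at 75%
    return "ok"

print(load_commands(peak_on=True))   # non-critical loads switched off
print(demand_alert(3.5, 4.0))        # 87.5% of agreed demand -> penalty
```

In the real system the same decisions are taken on the microcontroller, with the GSM modem delivering the "alert" and "penalty" notifications.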
Fig. 1 System architecture
2.1 Technology Used

IoT is the network of physical devices that are made smart with embedded sensors, actuators, and networking technologies for data exchange; these components are discussed in Sect. 2.2 [21–28]. Various systems, sensors, and actuators can be connected to the internet using IoT technologies [21, 28]. Using IoT, each load is embedded with sensors whose data is accessed with the help of a microcontroller.
2.2 Materials Used

2.2.1 Arduino Mega
Arduino Mega is a microcontroller board, shown in Fig. 3, based on the ATmega2560 microcontroller. It has 54 digital input-output pins and 16 analog input pins, and it operates at 5 V with a clock speed of 16 MHz. The board is programmed using the Arduino Integrated Development Environment (IDE) and can be powered by either a USB (Type A-B) cable or an external source [22, 23].
2.2.2 Current Sensor
The sensor used for the measurement of load current is the ACS712 current sensor, shown in Fig. 4, which works on the principle of the Hall effect and has three pins: VCC, GND, and OUTPUT [21]. The sensor can be used for the measurement of both DC and AC currents. The conducting terminals are isolated from the sensor leads, so the sensor can be used without optoisolators. It is available in current ratings of 5, 20, and 30 A [24].
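As a hedged sketch of how such a reading is interpreted: the ACS712 outputs a voltage centered at VCC/2 at zero current, with a per-variant sensitivity (185 mV/A for the 5 A part per its datasheet). Assuming the Arduino's 5 V reference and 10-bit ADC described above, the conversion from a raw ADC count to amperes looks like this; the function name is illustrative.

```python
# Illustrative conversion of a raw ACS712 ADC reading to current on a
# 5 V, 10-bit ADC. Sensitivity 0.185 V/A is the datasheet value for the
# 5 A variant; the zero-current output sits at VCC/2 = 2.5 V.

VREF = 5.0           # ADC reference voltage in V
ADC_MAX = 1023       # 10-bit ADC full-scale count
SENSITIVITY = 0.185  # V per A (ACS712, 5 A variant)

def acs712_current(adc_count: int) -> float:
    """Convert a raw ADC count to amperes (signed, since AC swings both ways)."""
    volts = adc_count * VREF / ADC_MAX
    return (volts - VREF / 2) / SENSITIVITY

print(round(acs712_current(512), 3))   # mid-scale reads close to 0 A
```

For AC loads, the firmware would sample this repeatedly over a mains cycle and compute an RMS value rather than use a single instantaneous reading.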
Fig. 2 Flowchart of system framework
2.2.3 Relay Module
It is a switch operated electrically by an electromagnet with a low control voltage of 5 V. The high-voltage side of the relay has three pins, with common in the middle and normally open (NO) and normally closed (NC) on either side. The other side of the module has three pins, namely Vcc, ground, and input, as shown in Fig. 5 [25, 26].
Fig. 3 Arduino mega (Microcontroller)
Fig. 4 ACS712 current sensor
2.2.4 GSM Modem
A GSM modem, shown in Fig. 6, is a wireless modem used for communication over the GSM network. To communicate with the network, it requires a SIM (Subscriber Identity Module) card. It interacts with a controller or processor via AT commands over serial communication [27].
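The AT-command exchange used to send one SMS in text mode can be sketched as below. The commands (AT, AT+CMGF=1, AT+CMGS, and the Ctrl-Z terminator) are the standard GSM SMS command set; the helper only assembles the byte sequence a controller would write to the modem's serial port, and its name and the sample number are illustrative, not from the actual implementation.

```python
# Sketch of the serial byte sequence for sending one SMS in GSM text mode.
# Assembles bytes only; no serial port is opened here.

CTRL_Z = b"\x1a"  # terminates the SMS body in text mode

def sms_command_sequence(number: str, text: str) -> list:
    """Return the ordered byte strings written to the modem for one SMS."""
    return [
        b"AT\r",                                  # attention / liveness check
        b"AT+CMGF=1\r",                           # select SMS text mode
        b'AT+CMGS="' + number.encode() + b'"\r',  # recipient number
        text.encode() + CTRL_Z,                   # message body + Ctrl-Z
    ]

seq = sms_command_sequence("+911234567890", "Peak load: non-critical loads OFF")
for cmd in seq:
    print(cmd)
```

In practice each command is written over the serial link and the firmware waits for the modem's OK (or the ">" prompt after AT+CMGS) before sending the next one.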
3 Real-Time Implementation Using a Pilot Study

In this section, the block diagram (Fig. 7) and the implementation of the project are presented. To implement load control at the consumer level using IoT, a
Fig. 5 Relay module
Fig. 6 GSM modem
microcontroller, different loads such as incandescent lamps of different ratings (40, 60, and 200 W), a DC motor (representing a fan), and a water pump are used. Among the loads, the pump and one of the bulbs (60 W) are treated as non-critical loads, and the others as critical loads. Each load is connected to a relay so it can be controlled through the microcontroller, and each load's condition is monitored by the microcontroller through the connected current sensor and relay. A GSM modem is used to send messages to the consumer under different conditions, such as peak hours. In the developed prototype, the peak load condition on the grid is indicated by a bulb, and the non-critical loads are controlled accordingly. When this bulb is turned on, the instant is treated as a peak hour, and hence
Fig. 7 Block diagram
non-critical loads are turned off automatically, with an alert sent to the consumer. Non-critical loads are turned on again whenever the peak load is off. To check any increase in the consumer's connected load, the maximum energy used by the consumer is monitored continuously. To realize this in the prototype, loads are added in sequence; whenever the demand reaches 75% of the maximum agreed demand, an alert is sent to the consumer, and if further loads are added that cause the demand to exceed 85%, the system sends the consumer a message about the penalty levied.
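The add-loads-in-sequence test above can be simulated as follows. This is a hedged sketch: the agreed demand value and the extra loads beyond the prototype's lamps are assumed for illustration, and each threshold fires a notification only the first time it is crossed.

```python
# Hedged simulation of the pilot-study sequence: loads are added one by
# one, and a notification fires the first time demand crosses 75% and
# then 85% of the agreed demand. AGREED_W and the load list are assumed.

AGREED_W = 500  # assumed agreed maximum demand of the prototype, in W

def run_sequence(load_watts):
    """Yield (demand, message) each time a threshold is first crossed."""
    demand, alerted, penalized = 0, False, False
    for w in load_watts:
        demand += w
        if not penalized and demand > 0.85 * AGREED_W:
            penalized = True
            yield demand, "penalty levied: demand above 85% of agreed load"
        elif not alerted and demand >= 0.75 * AGREED_W:
            alerted = True
            yield demand, "alert: demand reached 75% of agreed load"

# Adding the prototype's lamps (40, 60, 200 W) plus assumed further loads:
for demand, msg in run_sequence([40, 60, 200, 100, 50]):
    print(demand, "W:", msg)
```

With these assumed values, the 75% alert fires at 400 W and the penalty message at 450 W, mirroring the two SMS notifications shown later in Figs. 12 and 13.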
4 Results and Discussion

The hardware developed to realize residential load management using IoT is shown in Fig. 8. The different loads shown in the hardware are controlled by the Arduino Mega microcontroller based on peak load conditions. Figure 9 shows the initial status: all consumer loads, critical and non-critical, are ON, and the peak load indication is off. When the peak load is ON, non-critical loads are turned off, as shown in Fig. 10, and they are turned on again automatically during non-peak load conditions. Figure 11 shows the alert received by the consumer for peak conditions. Figures 12 and 13 show the messages received by the consumer for energy consumption of 75% and 85% of the agreed demand, respectively. Hence, the consumer and the utility can act accordingly to avoid the consequences of additional connected loads.
Fig. 8 Hardware setup
Fig. 9 Initial load status
Fig. 10 Peak load conditions
Fig. 11 Peak load notification
Fig. 12 Mobile notification showing 75% utilization of connected load
Fig. 13 Mobile notification received for penalty level utilization of connected load
5 Conclusion and Future Scope

The prototype developed using IoT for consumer-level load management provides an efficient way to deal with peak load conditions on the main grid and to avoid load shedding. The system also lays a path for effective monitoring of additional connected loads by the utilities, so that they can make the necessary installations for reliable system operation. Realizing the developed prototype on an existing real-time network forms the future scope for enhancing this research work.
References
1. N.M. Kumar, A.A. Chand, M. Malvoni, K.A. Prasad, K.A. Mamun, F. Islam, S.S. Chopra, Distributed energy resources and the application of AI, IoT, and blockchain in smart grids. Energies 13, 5739 (2020)
2. A. Mahmood, M. Ullah, S. Razzaq, A. Basit, U. Mustafa, M. Naeem et al., A new scheme for demand side management in future smart grid networks. Procedia Comput. Sci. 32, 477–484 (2014)
3. T. Samad, S. Kiliccote, Smart grid technologies and applications for the industrial sector. Comput. Chem. Eng. 47, 76–84 (2012)
4. N. Manoj Kumar, A. Ghosh, S.S. Chopra, Power resilience enhancement of a residential electricity user using photovoltaics and a battery energy storage system under uncertainty conditions. Energies 13, 4193 (2020)
5. K.-H. Ng, G.B. Sheble, Direct load control: a profit-based load management using linear programming. IEEE Trans. Power Syst. 13, 688–694 (1998)
6. M. Marwan, F. Kamel, Demand side response to mitigate electrical peak demand in eastern and southern Australia. Energ. Procedia 12, 133–142 (2011)
7. M. Muratori, B.-A. Schuelke-Leech, G. Rizzoni, Role of residential demand response in modern electricity markets. Renew. Sustain. Energ. Rev. 33, 546–553 (2014)
8. H. Mortaji, O.S. Hock, M. Moghavvemi, H.A. Almurib, Smart grid demand response management using internet of things for load shedding and smart-direct load control, in 2016 IEEE Industry Applications Society Annual Meeting (IEEE, 2016), pp. 1–7
9. M. Billah, M.R. Islam, G.S.M. Rana, Design and construction of smart load management system: an effective approach to manage consumer loads during power shortage, in 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT) (IEEE, 2015), pp. 1–4
10. S. Lakshminarayana, Smart grid technology & applications, in 2014 Power and Energy Systems: Towards Sustainable Energy (IEEE, 2014), pp. 1–6
11. A. Safdarian, M. Fotuhi-Firuzabad, M. Lehtonen, A distributed algorithm for managing residential demand response in smart grids. IEEE Trans. Industr. Inf. 10, 2385–2393 (2014)
12. N. Gatsis, G.B. Giannakis, Residential load control: distributed scheduling and convergence with lost AMI messages. IEEE Trans. Smart Grid 3, 770–786 (2012)
13. A. Malik, J. Ravishankar, A review of demand response techniques in smart grids, in 2016 IEEE Electrical Power and Energy Conference (EPEC) (IEEE, 2016), pp. 1–6
14. G.K. Chellamani, P.V. Chandramani, Demand response management system with discrete time window using supervised learning algorithm. Cogn. Syst. Res. 57, 131–138 (2019)
15. P. Wang, J. Huang, Y. Ding, P. Loh, L. Goel, Demand side load management of smart grids using intelligent trading/metering/billing system, in 2011 IEEE Trondheim PowerTech (IEEE, 2011), pp. 1–6
16. K.-Y. Huang, Y.-C. Huang, Integrating direct load control with interruptible load management to provide instantaneous reserves for ancillary services. IEEE Trans. Power Syst. 19, 1626–1634 (2004)
17. C. Vivekananthan, Y. Mishra, G. Ledwich, F. Li, Demand response for residential appliances via customer reward scheme. IEEE Trans. Smart Grid 5, 809–820 (2014)
18. K. Stenner, E.R. Frederiks, E.V. Hobman, S. Cook, Willingness to participate in direct load control: the role of consumer distrust. Appl. Energ. 189, 76–88 (2017)
19. S. El Safty, A. El Zonkoly, O. Hebala, Smart load management in distribution networks incorporating different load sectors using PSO. Int. Conf. Renew. Energies Power Qual. (2015)
20. A. Mahmood, N. Javaid, M.A. Khan, S. Razzaq, An overview of load management techniques in smart grid. Int. J. Energ. Res. 39, 1437–1450 (2015)
21. T.A. Kumar, A. Ajitha, Development of IoT based solution for monitoring and controlling of distribution transformers, in 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT) (IEEE, 2017), pp. 1457–1461
22. https://www.elprocus.com/arduino-mega-2560-board/
23. https://www.arduino.cc/en/Guide/ArduinoMega2560
24. https://pdf1.alldatasheet.com/datasheetpdf/view/168327/ALLEGRO/ACS712ELCTR-05B-T.html
25. https://randomnerdtutorials.com/guide-for-relay-module-with-arduino/
26. https://howtomechatronics.com/tutorials/arduino/control-high-voltage-devices-arduino-relaytutorial/
27. https://www.electronicsforu.com/resources/gsm-module
28. N.M. Kumar, P.K. Mallick, The internet of things: insights into the building blocks, component interactions, and architecture layers. Procedia Comput. Sci. 132, 109–117 (2018)
A New AC Model for Transmission Line Outage Identification Mehebub Alam, Shubhrajyoti Kundu, Siddhartha Sankar Thakur, Anil Kumar, and Sumit Banerje
Abstract Identification of line outages is vital in today's modern power system. Most of the related studies are based on the DC power flow model, which utilizes the phase angle measurements obtained from phasor measurement units (PMUs). In this study, a new AC model is developed to solve the line outage identification (LOI) problem utilizing the complex voltage phasor measurements of PMUs. Further, measurement noise is included in the developed model to make it viable in real power systems. For this purpose, Gaussian noise with zero mean and a standard deviation (SD) of 1–5% is taken into account. The developed model is applied on the benchmark IEEE 5, 14, and 57 bus systems and on the Damodar Valley Corporation (DVC) 38 bus practical Indian network. The simulation results are also compared with two existing methods, and the test results reveal the applicability and validity of the developed model. Keywords Line outage identification · PMU · Noise · Standard deviation
M. Alam (B) · S. Kundu · S. S. Thakur · A. Kumar Electrical Engineering Department, NIT Durgapur, Durgapur, West Bengal 713209, India e-mail: [email protected] S. Kundu e-mail: [email protected] S. S. Thakur e-mail: [email protected] A. Kumar e-mail: [email protected] S. Banerje Electrical Engineering Department, Dr. B.C. Roy Engineering College, Durgapur, West Bengal, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_21
1 Introduction It is well known that transmission line loading is increasing day by day due to the rapid development of society. Further, renewable energy is being integrated into the conventional grid to exploit its environmental benefits and utilize the potential of natural resources. Additionally, attempts are being made to install several Flexible AC Transmission System (FACTS) devices for better power flow control and improvement of the transient as well as steady-state stability of the whole network. In this context, the modern power system is becoming more complex day by day, and a proper monitoring mechanism is needed to ensure the reliability and security of the grid. In the past few years, several blackout events occurred, which drew the attention of researchers to find a suitable solution. As per reports, lack of situational awareness and deficient monitoring have been pinpointed as root causes behind these blackouts. It is well known that PMUs provide synchronized measurements of complex voltage and current phasors. PMU measurements have special features such as a high sampling rate, wide-area monitoring capability, and good accuracy, so PMUs are extensively used for monitoring, control, and protection of the network. Line outage identification (LOI) is an important area of PMU application, and several studies have been suggested in the literature to solve the LOI problem. The authors exploit the compressive sensing (CS) method to develop an LOI model in [1]. A blockwise CS-based scheme is developed in [2] to improve the efficacy of the algorithm over the traditional CS-based method. In [3], a support vector machine (SVM) is applied to detect single line contingencies. Zhu et al. [4] developed a compressive sensing based method to identify multiple line outages; however, the efficiency of this method is affected by the internal noise considered in the mathematical model.
A DC model-based line outage scheme is described in [5] considering the dynamic condition of power systems. In [6], three different methods are compared, i.e., the DC method, the AC method, and the AC method with loss; the AC method with loss proves to be superior to the others in terms of accuracy. An efficient method based on ambiguity group theory considering limited PMUs is presented in [7]. A DC power flow-based model has been adopted in [8] using the susceptance matrix of the full network, and LOI is carried out by comparing the measured bus power mismatch with previously stored bus power mismatches obtained through simulation. Various uncertainties of PMUs regarding line outage are discussed in [9], where a multi-hypothesis test is also carried out to determine the uncertainty of the PMU. Almost all the existing schemes are based on voltage phase angles only and adopt the DC load flow model. In this paper, a new AC model is developed utilizing the complex voltage phasor measurements, and measurement noise is also included in the developed model.
2 Methodology Most of the works related to line outage adopt phase angles and the DC power flow method. The assumptions of the DC power flow method are likely to introduce errors in the mathematical model. The aims of this paper are:
• To develop a new mathematical model for LOI considering the AC load flow method
• To utilize complex voltage phasors instead of phase angles only
• To incorporate noise in the mathematical model.
We consider a network with N nodes and L transmission lines for developing the model. According to the AC load flow model, the real bus power injection vector can be written as

P = real(V I*)   (1)

where V and I are the complex bus voltage phasor vector and the bus current injection vector, respectively. The bus current injection can be expressed by

I = Y V   (2)

Here Y represents the bus admittance matrix, whose elements are defined as

Y_nm = { −y_nm,        if n ≠ m and buses n, m are connected
         Σ_{m≠n} y_nm, if n = m
         0,            if buses n, m are not connected }   (3)

The bus admittance matrix can also be expressed as

Y = M Dz M^T = Σ_{l=1}^{L} (1/z_l) m_l m_l^T   (4)

Here M is the incidence matrix (order N × L), whose columns can be expressed as

M = [m_1, m_2, ..., m_L]   (5)

Further, we can write

M_il = { 1,  if i = n (from bus)
        −1,  if i = m (to bus)
         0,  otherwise }   (6)

During normal conditions, let the complex voltage phasors (V) be obtained from the PMUs placed in the network. From the basic power system model, the bus current injection vector (I) can be written as

I = Y V   (7)

The post-outage current injection vector (I') can be expressed as

I' = Y' V' = I + γ   (8)

where Y' is the post-outage admittance matrix and V' is the post-outage complex voltage phasor vector. The noise vector γ is included to represent the model errors, i.e., PMU measurement errors due to noise. We model γ as a Gaussian random noise vector with zero mean and standard deviation (SD) sigma (σ). In the post-outage network, the admittance matrix and the voltage phasors both change. After the variation due to an outage, the admittance matrix can be written as

Y' = Y − Y_diff   (9)

In a similar way, the voltage phasors of the post-outage network can be expressed by

V' = V + ΔV   (10)

where Y_diff represents the variation of the admittance matrix between the pre-outage and post-outage conditions, and ΔV represents the variation of the complex voltage phasor vector between the pre-outage and post-outage conditions. The difference in the admittance matrix (Y_diff) can be expressed by

Y_diff = Σ_{l∈Lout} (1/z_l) m_l m_l^T = M Dz diag(k_o) M^T   (11)

where Dz represents the diagonal matrix whose lth diagonal element is 1/z_l. Here k_o is the outage vector of dimension L × 1, i.e., k_o = [k_{0,1}, k_{0,2}, ..., k_{0,L}]^T. The elements of k_o are defined as

k_{0,l} = { 1, if l ∈ Lout
            0, otherwise }   (12)

Here Lout represents the set of outaged lines, and it is evident that Lout ⊂ L, where L represents the set of all lines. By substituting (9) and (10) into (8), we obtain

Y ΔV = Y_diff V' + γ = M Dz diag(k_o) M^T V' + γ = M Dz diag(M^T V') k_o + γ   (13)

Now, we denote

w = Y ΔV   (14)

Further, we use the notation

A_V' = M Dz diag(M^T V')   (15)

Now, we can arrive at

w = A_V' k_o + γ   (16)

Finally, the LOI problem can be solved by using the following objective function:

F(k_0; w, A_V') = ‖w − A_V' k_0 − γ‖₂²   (17)

The admittance matrix Y is available from the pre-outage network, and V and V' are available through pre-outage and post-outage measurements, so w can be obtained by w = Y ΔV. From the pre-outage network topology (i.e., M and Dz) and the post-outage voltage phasors (V'), A_V' is also available through (15). The transmission line outages can then be estimated by solving

k̂_0 = index(min F(k_0; w, A_V'))   (18)

2.1 Proposed Algorithm The proposed algorithm to solve the LOI problem is described below:
Step 1: Read system data
Step 2: Take measurements, i.e., pre-outage voltage phasors (V) and post-outage voltage phasors (V')
Step 3: Form the bus incidence matrix M (N × L)
Step 4: Form the Dz matrix and then form the admittance matrix Y using Y = M Dz M^T
Step 5: Form the outage vector (k_o) for the various line outage cases (OCs); each outage vector is associated with a particular line OC
Step 6: Introduce the random noise vector (γ) and compute the objective function F by using (13), (15), and (16)
Step 7: Compute norm values (NVs) for each individual OC
Step 8: Find the minimum norm value (MNV) and the index related to it
Step 9: Form the outage vector and identify the outage
Step 10: End.
The flowchart of the proposed scheme is shown in Fig. 1.
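The steps above can be sketched in NumPy on a hypothetical 3-bus ring network (the impedances, voltages, and simulated outage below are illustrative, not from the paper's test systems; in practice V and V' come from PMU measurements): build Y = M Dz M^T per Eq. (4), form w and A_V' per Eqs. (14)–(15), then pick the single-line outage vector minimizing the norm in Eq. (17).

```python
import numpy as np

# Hypothetical 3-bus, 3-line ring network (illustrative data).
N, L = 3, 3
lines = [(0, 1), (1, 2), (0, 2)]            # (from bus, to bus) per line
z = np.array([0.10j, 0.20j, 0.15j])         # line impedances z_l

# Incidence matrix M (N x L), Eq. (6): +1 at from-bus, -1 at to-bus.
M = np.zeros((N, L))
for l, (f, t) in enumerate(lines):
    M[f, l], M[t, l] = 1.0, -1.0

Dz = np.diag(1.0 / z)                       # lth diagonal element is 1/z_l
Y = M @ Dz @ M.T                            # Eq. (4)

# Simulate an outage of line 1 to generate consistent "measurements".
TRUE_LINE = 1
ko_true = np.zeros(L); ko_true[TRUE_LINE] = 1.0
Ydiff = M @ Dz @ np.diag(ko_true) @ M.T     # Eq. (11)

V  = np.array([1.00, 0.98 * np.exp(-0.05j), 0.97 * np.exp(-0.08j)])
Vp = np.array([1.00, 0.96 * np.exp(-0.09j), 0.95 * np.exp(-0.12j)])
# Delta V consistent with Eq. (13) (pseudo-inverse, since Y is singular);
# in practice it is simply the measured difference V' - V.
dV = np.linalg.pinv(Y) @ (Ydiff @ Vp)

rng = np.random.default_rng(0)
gamma = rng.normal(0.0, 0.01, N)            # Gaussian noise, zero mean
w = Y @ dV + gamma                          # Eq. (14) with noise
A = M @ Dz @ np.diag(M.T @ Vp)              # Eq. (15)

# Enumerate single-line outage vectors and minimize Eq. (17): Steps 5-9.
nvs = [np.linalg.norm(w - A @ np.eye(L)[:, l]) for l in range(L)]
best = int(np.argmin(nvs))                  # Eq. (18)
print("Detected outaged line:", best)
```

For multiple-line cases the enumeration in the last step runs over the candidate double-line outage vectors instead of the identity columns.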
3 Simulation Results The developed LOI algorithm is implemented through simulation on the IEEE 5, 14, and 57 bus systems and on the DVC 38 bus practical Indian network. MATLAB 7.10.0 (R2013a) was used as the simulation platform, on a PC with an Intel Core i3 processor (2.4 GHz) and 4 GB RAM.
3.1 IEEE 5 Bus Case Study This system consists of seven lines. Let us take the example of an outage of line 5 with noise SD 5%. After the simulation, the obtained NVs are given below: NVs = [2.5317 2.3341 1.7688 2.1680 1.2971 2.3507 1.9726] MNV = 1.2971, index = 5. This index 5 corresponds to the outage vector ko = [0000100]^T. For this case, Lout = {5} and L = {1, 2, 3, 4, 5, 6, 7}, where Lout is the set of outaged lines. The detected line outage is represented in Fig. 2. NVs for all seven single-line OCs with noise SD 5% are presented in Table 1. This table shows that all the line outages are detected successfully; the boldface values indicate the minimum value, which corresponds to the line outage. The MNVs obtained are 2.5236, 1.7891, 0.2923, 0.552, 0.7992, 0.3571, and 0.1186 for outages of lines 1, 2, 3, 4, 5, 6, and 7, respectively. For the IEEE 5 bus system, 21 double-line OCs are possible. Among these 21 cases, for the outage of lines 5 and 7 the load flow diverges, and this is termed a diverging case. Let us take the example of an outage of lines 3 and 5 with noise SD 5%. After the simulation test, the obtained NVs are given below:
Fig. 1 Flowchart of the proposed scheme: Start → read pre-outage and post-outage complex voltage phasors → calculate the bus incidence matrix (M) and diagonal matrix (Dz) → calculate the bus admittance matrix of the full network using Y = M Dz M^T → compute the difference ΔV between pre-outage and post-outage complex voltage phasors → compute the objective function F → find the minimum norm value and estimate the line outage → End
NVs = [26.2326 22.5464 23.0207 8.5971 26.0988 24.1278 20.7175 20.6112 5.4378 23.2137 21.2447 17.9700 0.8612 20.1396 17.8484 5.5846 21.2983 17.6361 6.7514 7.0936 21.3405] MNV = 0.8612, index = 13. This index 13 corresponds to the outage of lines 3 and 5, i.e., the outage is detected successfully.
Fig. 2 Outage of line 5 with noise SD 5% (top: estimated outage vector ko over line nos. 1–7; bottom: noise over bus nos. 1–5)
Table 1 NVs for seven OCs of IEEE 5 bus network

Line outage no.:   1        2        3        4        5        6        7
OC 1             2.5236   6.7723   6.835    6.9824   8.8512   7.4133   6.5935
OC 2             4.5874   1.7891   2.0105   2.3034   4.5211   3.746    1.5942
OC 3             6.3373   3.6443   0.2923   0.7797   3.3748   3.0893   0.6669
OC 4             6.3801   3.7808   0.9275   0.552    3.4211   2.6307   0.6651
OC 5             6.7188   3.867    0.9918   1.0529   0.7992   3.1441   1.0452
OC 6             6.4895   3.4058   1.1809   1.2763   4.0798   0.3571   0.3843
OC 7             6.4601   3.4099   1.1492   1.1979   3.7717   2.879    0.1186

The bold value represents the MNV, which corresponds to the actual outage case
3.2 IEEE 14, 57 Bus and DVC 38 Bus Case Study For the IEEE 14 bus system, detection of a line outage (line 10) with noise SD of 5% is considered. The NVs obtained for the various OCs are given in Fig. 3, from which it is clear that the MNV (28.6902) is obtained for OC 10, which corresponds to the outage of line 10. The identified line outage is shown in Fig. 4.

Fig. 3 NVs for various OCs of IEEE 14 bus network (minimum norm value = 28.6902)
Fig. 4 Outage of line 10 with noise SD 5% (top: outage vector ko over line nos. 1–20; bottom: noise over bus nos. 1–14)

The IEEE 57 bus system consists of 80 lines, so 80 single contingencies are possible, but two cases are infeasible. Here, the outage of line 18 with noise SD of 5% is taken into account. The MNV (73.7136) is found for OC 18, which refers to the line 18 outage. The obtained NVs for the various single-line OCs are shown in Fig. 5, and the detected line outage is represented in Fig. 6. The NVs for a few single-line OCs considering noise SD 5% are shown in Table 2, where the boldface values indicate the minimum value corresponding to the line outage. The MNVs obtained, 18.8275, 12.605, 25.5824, 25.6322, 26.1879, 1.0004, 42.354, and 35.8748, correspond to the outage of lines 1, 2, 3, 4, 5, 6, 7, and 9, respectively.
Fig. 5 NVs for various OCs of IEEE 57 bus network (minimum norm value = 73.7136)
Fig. 6 Outage of line 18 with noise SD 5% (top: outage vector ko over line nos. 1–80; bottom: noise over bus nos. 1–57)
Table 2 NVs for few OCs of IEEE 14 bus network

Outaged line no.:    1         2         3         4         5        6        7         9
OC 1              18.8275   33.2477  197.4436   12.2605   57.5831  56.5988  56.6415   33.955
OC 2             114.0264   71.5776   34.8136   34.8421   34.5484   8.9771  102.496   213.3628
OC 3              53.3583   23.6566   25.5824   26.1468   26.2825   6.9167   96.3423   45.8408
OC 4             211.6468   22.6312   25.671    25.6322   26.2536   5.306    84.4599   42.5722
OC 5             210.9525   21.271    25.7416   26.1517   26.1879   3.992    91.4269   39.9462
OC 6             212.2463   20.5221   26.9904   27.23     27.3886   1.0004   91.3595   37.6774
OC 7             212.5187   20.7799   27.4594   27.4418   27.5385   4.5515   42.354    41.3928
OC 8             212.231    20.5595   26.7653   27.1829   27.2904   3.1839   92.6056   36.0571
OC 9             212.2338   20.5295   26.8039   27.1802   27.3168   3.2928   91.397    35.8748
OC 10            212.2869   20.8087   26.8382   27.1747   27.3903   3.5801   83.1278   37.3002
OC 11            212.2366   20.5144   26.8482   27.1884   27.3528   3.476    90.2823   36.7402
OC 12            212.2366   20.5148   26.8483   27.1882   27.353    3.4768   90.2273   36.7187
OC 13            212.2366   20.5158   26.8491   27.1893   27.3537   3.4803   90.2984   36.8172
OC 14            212.2367   20.5146   26.8478   27.1877   27.3525   3.4757   90.2217   36.7119
OC 15            212.2369   20.5215   26.851    27.191    27.3564   3.5067   90.3056   37.8996
OC 16            212.2365   20.5143   26.8476   27.1878   27.3524   3.4769   90.2348   36.6985
OC 17            212.2365   20.5143   26.8477   27.188    27.353    3.4786   90.2147   36.7171
OC 18            212.2365   20.5141   26.8477   27.188    27.3524   3.4762   90.2836   36.7343
OC 19            212.2365   20.5141   26.8476   27.1878   27.3525   3.4759   90.216    36.6999
OC 20            212.2365   20.514    26.8477   27.1879   27.3526   3.4764   90.2611   36.7359

The bold value represents the MNV, which corresponds to the actual outage case
Therefore, from Table 2, it can be inferred that all the line outages are detected successfully. Detailed results of the IEEE 5 bus and 14 bus systems are presented in this article so that readers can easily understand the proposed method. For a large system like the IEEE 57 bus system, a single case study is shown due to space limitations. The IEEE systems have been chosen as the testbed because they are standard systems used by researchers for the validation of developed models. Damodar Valley Corporation (DVC) [10] is a multipurpose power utility under the Ministry of Power, Government of India. The transmission network of DVC spreads over different parts of West Bengal and Jharkhand. The DVC network consists of 38 buses and 48 lines. The proposed model is also applied to this DVC 38 bus practical Indian network.
4 Discussion The developed scheme has been tested for all single-line OCs and a few double-line OCs. A total of 250 OCs (143 single and 97 double) have been checked through a simulation study; however, only a few results are displayed here due to the limitation of space. In this regard, it is worth noting that for a few OCs the power flow solution does not converge, so post-outage measurements are not available. Hence, these cases cannot be determined, and they are termed diverging cases. The diverging cases for the different systems are shown in Table 3, which lists all single-contingency diverging cases and a few double-contingency cases. To check the effectiveness of the suggested model, identification accuracy is calculated. Identification accuracy is defined as the ratio of the number of cases identified successfully to the total number of trials taken. For example, 19 trials have been taken for the 19 feasible single-line OCs of the IEEE 14 bus network, and all 19 cases have been identified successfully, so the identification accuracy (excluding diverging cases) is 19/19 = 100%. In this context, it is to be noted that one case, i.e., the outage of line 14, cannot be identified due to diverging solutions; therefore, the identification accuracy (including diverging cases) is 19/20 = 95%. The identification accuracy of the different test networks is presented in Table 4. The comparison of identification accuracy is demonstrated in Table 5, which clearly implies that the suggested method provides better results than the methods described in [6, 7]. The proposed model is formulated by exploiting the true AC characteristics of the power system, whereas most of the existing models rely on the DC load flow concept.

Table 3 Diverging cases for different test networks

Test bus   1-line outage                      2-lines outage
IEEE 5     NIL                                (5, 7)
IEEE 14    14                                 (14, 20), (17, 20)
IEEE 57    45, 48                             (29, 30), (38, 39), (39, 40), (43, 44), (51, 76), (68, 69), (69, 70)
DVC 38     8, 9, 33, 34, 42, 43, 44, 46, 47   (7, 8), (9, 10), (32, 33), (41, 42), (44, 45), (47, 48)
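The accuracy definition above reduces to a one-line ratio; a minimal Python sketch for the IEEE 14 bus case (20 single-line OCs, 1 diverging, 19 identified):

```python
def identification_accuracy(identified, trials):
    """Ratio of successfully identified cases to trials taken, in percent."""
    return 100.0 * identified / trials

# IEEE 14 bus: 20 single-line OCs, of which 1 (line 14) diverges.
total_ocs, diverging, identified = 20, 1, 19

excl = identification_accuracy(identified, total_ocs - diverging)  # 100.0
incl = identification_accuracy(identified, total_ocs)              # 95.0
print(excl, incl)
```

The same two ratios, computed per test network, yield the columns of Table 4.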
Table 4 Identification accuracy for different test networks

Test bus   Identification accuracy           Identification accuracy
           (diverging cases included) (%)    (diverging cases excluded) (%)
IEEE 5     100                               100
IEEE 14    95                                100
IEEE 57    97.5                              100
DVC 38     81.25                             100
Table 5 Comparison of identification accuracy for different test networks

Method                                 IEEE 5-bus (%)   IEEE 14-bus (%)   IEEE 57-bus (%)   DVC 38 bus (%)
AC method without loss [6]             –                –                 91                –
DC method [6]                          –                –                 81                –
AC method with loss [6]                –                –                 93.5              –
Ref. [7] with 30% PMU coverage         –                80                65                –
Proposed (including diverging cases)   100              95                97.5              81.25
Proposed (excluding diverging cases)   100              100               100               100
5 Conclusion In this paper, a new mathematical model is formulated using the complex voltage phasors of PMUs. The developed scheme is based on the AC power flow model and is hence able to exploit the actual AC nature of the power network. The developed model has also shown its superiority over the two existing methods in terms of identification accuracy. The effectiveness of the developed model is validated on different IEEE systems and the DVC 38 bus practical Indian system. The proposed model can be adopted to identify line outages accurately, so power system planners and researchers can take corrective actions to ensure the reliability and security of the power system. The suggested model will also be helpful for the improvement of situational awareness and smart grid implementation. In the future, the proposed model may be implemented on large networks. Additionally, the effect of bad data on the performance of the algorithm may be examined, which is another scope for further research.
References
1. M. Babakmehr, F. Harirchi, A.A. Durra, S.M. Muyeen, M.G. Simões, Exploiting compressive system identification for multiple line outage detection in smart grids, in 2018 IEEE Industry Applications Society Annual Meeting (IAS) (Portland, OR, 2018), pp. 1–8
2. F. Yang, J. Tan, J. Song, Z. Han, Block-wise compressive sensing based multiple line outage detection for smart grid. IEEE Access 6, 50984–50993 (2018)
3. A.Y. Abdelaziz, S.F. Mekhamer, M. Ezzat, M.E.F. El-Saadany, Line outage detection using support vector machine (SVM) based on the phasor measurement units (PMUs) technology, in IEEE PES General Meeting (2012), pp. 1–8
4. H. Zhu, G.B. Giannakis, Sparse overcomplete representations for efficient identification of power line outages. IEEE Trans. Power Syst. 27(4), 2215–2224 (2012)
5. Q. Huang, L. Shao, N. Li, Dynamic detection of transmission line outages using hidden Markov models. IEEE Trans. Power Syst. 31(3), 2026–2033 (2016)
6. A. Arabali, M. Ghofrani, M. Farasat, A new multiple line outage identification formulation using a sparse vector recovery technique. Electr. Power Syst. Res. 142(3), 237–248 (2017)
7. J. Wu, J. Xiong, Y. Shi, Efficient location identification of multiple line outages with limited PMUs in smart grids. IEEE Trans. Power Syst. 30(4), 1659–1668 (2015)
8. M. Alam, S. Kundu, S.S. Thakur, S. Banerjee, A new algorithm for single line outage estimation, in 2019 Devices for Integrated Circuit (DevIC) (Kalyani, India, 2019), pp. 113–117
9. C. Chen, J. Wang, H. Zhu, Effects of phasor measurement uncertainty on power line outage detection. IEEE J. Sel. Topics Sig. Process. 8(6), 1127–1139 (2014)
10. www.dvc.gov.in
Perspective Analysis of Anti-aging Products Using Voting-Based Ensemble Technique Subarna Mondal, Hrudaya Kumar Tripathy, Sushruta Mishra, and Pradeep Kumar Mallick
Abstract Customer reviews are always beneficial for a company in taking its product to the next level. There are thousands of products and reviews, which are impossible to analyze manually. Perspective analysis, or sentiment analysis, is an automated process that retrieves subjective opinion from a review and categorizes it as positive or negative, through which an organization can apprehend how its product is performing. Living with an uncomfortable appearance has a negative impact on daily life, so the day-by-day increase in the use of anti-aging products is understandable. A major portion of such data is available on Web sites like Twitter, Facebook, and LinkedIn, on e-commerce sites like Amazon and Flipkart, and in various types of blogs. Hence, this research paper presents a sentiment analysis approach for customer reviews of anti-aging products from Amazon, classifying each review as affirmative or negative by adopting several machine learning strategies. This research builds a model using supervised and ensemble machine learning approaches. The ensemble method applied in this analysis is voting, which combines seven different classifiers: Gaussian Naive Bayes, logistic regression, random forest, bagging using the decision tree algorithm, boosting using extreme gradient boosting (XGB), SVM, and the k-nearest neighbor algorithm, and gives output according to the best accuracy. The data have also been analyzed using TextBlob and VADER, and the accuracy scores are compared with this model. The experiments in this research were done using Python and NLTK, and different scenarios were created for the evaluation of the proposed methodology against the classifiers, such as scenarios in the feature extraction step using different types of N-grams
S. Mondal · H. K. Tripathy · S. Mishra (B) · P. K. Mallick School of Computer Engineering, KIIT (Deemed to be University), Bhubaneshwar, India e-mail: [email protected] S. Mondal e-mail: [email protected] H. K. Tripathy e-mail: [email protected] P. K. Mallick e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_22
and analyzing the accuracy for each classifier. The highest accuracy was achieved by logistic regression using unigrams, but for some scenarios the voting classifier also gives an optimal output. Keywords Sentiment analysis · Ensemble machine learning · Gaussian Naive Bayes · Bagging · Logistic regression · Boosting · k-nearest neighbor
1 Introduction Human beings pass through various phases in their life. From a health perspective, the period from childhood to youth is the best part of life; during the youth phase, hormones work in a magnificent way. Aging can be defined as the progressive failing of the human body's inborn and genetic ability to protect, repair, and maintain itself so that every function of the body keeps working efficiently. Anti-aging products mainly help to achieve a healthy and biologically efficient lifestyle, so their sale is increasing day by day. But the rate at which a product sells depends upon its quality too, and customer reviews reveal whether the product is beneficial or not. Nowadays, online shopping sites are popular for purchasing products, and they have a review section where customers can give their opinions about a product. Analysis is easy when there are only a few reviews, but thousands of reviews cannot be analyzed manually, which is why technology is needed. Sentiment analysis allows us to quickly process and extract useful information from huge volumes of text without having to read all of it manually. More specifically, it is useful for measuring how people think about something. The data can be in the form of tweets, reviews from online shopping sites, any kind of social media data, blogs, etc. Sentiment analysis is a tool that enhances an organization's understanding of customer opinions and actions: it is a method of extracting the characteristics of a chunk of text and classifying it as positive, neutral, or negative. This approach mainly uses natural language processing, an important part of data science, with machine learning techniques as a backbone to assign polarity scores to a piece of text. Moreover, sentiment analysis mainly helps data analysts in an organization to understand customer opinion of and experience with the product.
It also supports accurate market research and helps to monitor the brand and reputation of the product. The focus here is mainly on how to reduce the product's shortcomings to increase its efficiency and sales. Sentiment analysis helps to determine whether the product is efficiently doing the job for which it was designed. An increasing number of positive opinions is proof of its efficiency, but if the scenario is otherwise, the negative opinions give guidance on which portion needs to be improved; aspect words are a solution to this problem. This research aims to find the sentiment of anti-aging product reviews collected from Amazon using Web scraping. The reviews are further classified into positive and negative by several supervised algorithms and a voting-based ensemble machine learning model. This voting-based ensemble
Perspective Analysis of Anti-aging …
machine learning strategy uses seven different classifiers: Gaussian Naive Bayes, SVM, k-nearest neighbor, bagging using the decision tree algorithm, boosting using extreme gradient boosting (XGB), logistic regression, and random forest, and it has given the most reliable accuracy. The paper is organized as follows: Sect. 2 presents the literature review. The preprocessing steps are described in Sect. 3. Section 4 presents the proposed model. Section 5 covers the results and discussion. Finally, we conclude and discuss the future work of our paper in Sect. 6.
2 Literature Review Much work has already been done in this field, and much remains to do. Here is a summary of some recent sentiment analysis work on product reviews. Zhao et al. suggest a deep learning approach for sentiment classification of a product review data set in two steps: first the general sentiment is extracted from the review ratings, and then classification layers are built on top of the embedding layer. They focus on two kinds of low-level network structure for modeling review sentences: convolutional feature extractors and long short-term memory [1]. Alrehili and Albalawi classified reviews as positive or negative with the help of an ensemble machine learning model. They integrated five classifiers, used Weka for their experiments, and tested six different cases to evaluate their proposed model against the five individual classifiers [2]. D'souza and Sonawane proposed a model based on dual sentiment analysis (DSA) of mobile device reviews from Amazon to help users choose the right product; DSA also finds a way around difficulties such as the limits of the bag-of-words (BoW) model, polarity-shift classification problems, and accuracy improvement [3]. Wang et al. suggest an emotion classification approach based on LSTM. They analyze the regularization coefficient and learning rate parameters by comparing the SRCONLY, FINETUNE, and LSTM-LON procedures on test data collected from online sources such as Amazon book and DVD reviews; their broader aim is to combine large amounts of data coming from the Internet of things with deep learning [4]. Vanaja and Belwal focus on extracting aspect terms from Amazon customer review data and recognizing parts of speech; by applying classification algorithms, they categorize the data as positive, negative, or neutral [5].
Zhang et al. combined sentiment analysis with a fuzzy Kano model on Amazon data sets in what is called the aspect sentiment collaborative filtering algorithm (ASCF). ASCF improves on the precision of Item-CF and opinion-enhanced collaborative filtering, providing markedly higher precision or, at similar precision, fewer product recommendations [6]. Bouazizi and Ohtsuki worked on multi-purpose sentiment analysis of a Twitter data set, detecting the precise sentiment of
S. Mondal et al.
the user rather than the general sentiment of the post. They refer to this technique as 'quantification', a method that automatically attributes a score to each sentiment present in a tweet and selects the sentiment with the highest score [7]. Kanakaraj and Guddeti implemented an ensemble machine learning model on Twitter data and showed that it gives 3–5% better accuracy than traditional machine learning classifiers; they also used Word Sense Disambiguation and WordNet synsets to increase accuracy [8].
3 Preprocessing Steps for the Data Set 3.1 Data Collection Since the purpose of this research is to build a model that performs sentiment analysis on anti-aging products as accurately as possible, a customer review data set was collected for analysis. The data are customer reviews of anti-aging products such as anti_face_amra_beauty and turskine-vitamin [9], gathered from Amazon through Web scraping; the data set contains a total of 1000–3400 records. This research applies supervised classification and a voting-based ensemble learning model. Creating such a model requires labeled data, but the review data are not labeled, so each review is labeled from its rating: if the rating is greater than 3 out of 5, the review is labeled positive, and if it is less than 3, it is labeled negative.
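The rating-to-label rule can be sketched in a few lines of Python. The review texts below are hypothetical, and the handling of 3-star reviews is an assumption, since the text only defines the greater-than-3 and less-than-3 cases:

```python
def label_review(rating):
    """Map a 1-5 star rating to a sentiment label, as described above.

    Reviews rated above 3 are treated as positive, below 3 as negative;
    3-star reviews carry no clear polarity and are dropped here (an
    assumption -- the paper defines only the >3 and <3 cases).
    """
    if rating > 3:
        return "positive"
    if rating < 3:
        return "negative"
    return None  # neutral 3-star reviews are left unlabeled

# Hypothetical scraped records: (review text, star rating)
reviews = [("Visibly reduced my wrinkles", 5),
           ("No effect at all", 1),
           ("Okay, nothing special", 3)]
labeled = [(text, label_review(r)) for text, r in reviews
           if label_review(r) is not None]
```

After this step, `labeled` keeps only the two polarized reviews, ready for supervised training.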
3.2 Data Cleaning For data cleaning and preprocessing, several steps are applied: removing unwanted attributes from the data set, tokenizing and removing punctuation, removing stop words, lemmatization or stemming, and part-of-speech (POS) tagging. For the Amazon review data set, we first identified which attributes are useful for analysis and removed all the others, and then applied the preprocessing steps. The first preprocessing step converts the text into tokens, a process called tokenization [10]; this research uses a word tokenizer. After that, all punctuation is removed from the text. Stop words are then removed from the data set, because stop words are the most common, high-frequency words in a sentence, such as 'a', 'an', and 'the'; keeping only the useful words increases model performance. Next, stemming and lemmatization [10] are applied; both mainly generate the root form of inflected words. Stemming normally removes only the inflections, e.g., 'caring' and 'cars' are converted to 'car', whereas lemmatization reduces a word to its base form, e.g.,
it reduces the previous examples to 'care' and 'car'. Finally, part-of-speech (POS) tagging [10], a technique that has received particular attention from NLP researchers for product feature extraction, assigns each word a category such as noun, verb, or adjective; we use it to extract the adjectives from each sentence.
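The cleaning steps above can be sketched as follows. This is a stdlib-only illustration: the tiny stop-word list and the crude suffix-stripping stemmer stand in for the NLTK tokenizer, stop-word list, and stemmers/lemmatizers the paper actually uses:

```python
import re

# Small illustrative stop-word list; NLTK's English list is much larger.
STOP_WORDS = {"a", "an", "the", "is", "it", "and", "this", "my"}

def stem(word):
    """Crude suffix-stripping stemmer standing in for NLTK's stemmers."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def preprocess(text):
    # 1. Tokenize and drop punctuation: keep lowercase alphabetic runs only.
    tokens = re.findall(r"[a-z]+", text.lower())
    # 2. Remove stop words.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # 3. Reduce inflected forms toward a root.
    return [stem(t) for t in tokens]

print(preprocess("This cream is amazing, my skin loved it!"))
# -> ['cream', 'amaz', 'skin', 'lov']
```

The truncated stems show why real pipelines prefer lemmatization when readable base forms ('love', 'amazing') are needed downstream, e.g., for POS tagging.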
3.3 Feature Extraction This is the most important step in building the model. For a machine learning model to work with text, all the text must be converted into a numerical representation, a step called vectorization. To make the research more effective, N-grams [11] were also added. The N-gram technique uses lists of sequential word collections of size 1, 2, or N words. A unigram is a single-word feature; adding longer word sequences adds predictive power to the model. A bigram pairs each word with its preceding word, for example, 'nice quality'; a trigram takes each word with the two preceding words, for example, 'very nice quality'. After feature extraction, the data are divided into training and test sets.
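A minimal, dependency-free sketch of count-based vectorization with unigram and bigram features; the paper presumably uses a library vectorizer, so this only illustrates the mechanics:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-word sequences in a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def vectorize(docs, n_values=(1, 2)):
    """Count vectorization over unigrams and bigrams (the '1, 2' setting)."""
    counts = [Counter(g for n in n_values for g in ngrams(doc, n))
              for doc in docs]
    vocab = sorted({g for c in counts for g in c})
    # Each document becomes a fixed-length count vector over the shared vocabulary.
    return vocab, [[c[term] for term in vocab] for c in counts]

docs = [["very", "nice", "quality"], ["nice", "product"]]
vocab, X = vectorize(docs)
# vocab mixes single words ('nice') and word pairs ('very nice');
# X holds one count vector per review.
```

Extending `n_values` to `(1, 2, 3)` adds the trigram features described above.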
4 Proposed Model Ensemble learning is an efficient technique for improving model performance, because it strategically combines multiple machine learning approaches to solve a particular problem. One powerful ensemble technique is voting-based [12] ensemble learning, where separate models are built on the same data set and their predictions are combined by a voting process: the class that receives the most votes becomes the final result. For instance, if a review is judged by five classifiers and three of them label it positive while two label it negative, the review is classified as positive. Other techniques include averaging and weighted averaging, as well as advanced techniques such as stacking, blending, bagging, and boosting. This model builds a voting-based ensemble from seven different supervised classification algorithms: Gaussian Naive Bayes, logistic regression, k-nearest neighbor, bagging using the decision tree algorithm, boosting using extreme gradient boosting (XGB) [13], SVM, and random forest [11]. The aim is to achieve higher classification accuracy. The proposed model classifies each review into one of two categories, positive or negative, as shown in Fig. 1.
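The voting step can be sketched as plain hard (majority) voting over the per-review predictions of the base classifiers; the predictions below are made-up examples, not results from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting combiner: each classifier casts one vote per sample,
    and the label with the most votes wins."""
    combined = []
    for votes_for_sample in zip(*predictions):
        combined.append(Counter(votes_for_sample).most_common(1)[0][0])
    return combined

# Hypothetical outputs of seven classifiers on three reviews.
preds = [
    ["pos", "neg", "pos"],  # e.g. Gaussian Naive Bayes
    ["pos", "neg", "neg"],  # e.g. SVM
    ["pos", "pos", "pos"],  # e.g. logistic regression
    ["neg", "neg", "pos"],  # e.g. k-nearest neighbor
    ["pos", "neg", "pos"],  # e.g. random forest
    ["pos", "neg", "pos"],  # e.g. bagged decision trees
    ["pos", "neg", "neg"],  # e.g. XGB-style boosting
]
print(majority_vote(preds))  # -> ['pos', 'neg', 'pos']
```

An odd number of voters (seven here) conveniently avoids ties in two-class voting.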
Fig. 1 Recommended replica
5 Result Discussion This research implements the proposed methodology in Python, a powerful programming language, using Anaconda Navigator, a desktop graphical user interface (GUI) included in the Anaconda distribution that lets applications be launched in a user-friendly environment. We use NLTK, a popular Python library for NLP; natural language processing (NLP) is about developing applications and services that can understand human languages. Several scenarios were tested to find the optimum results. First, for the N-gram choice in the preprocessing step, Table 1 shows that the model gives its highest accuracy when using unigrams only and when using all three (unigram, bigram, trigram) (Fig. 2).

Table 1 Accuracy analysis for all the classification techniques by applying N-grams

| N-gram | Gaussian Naive Bayes | SVM | Logistic regression | k-nearest neighbor | Random forest | Bagging using decision tree | Boosting using XGB | Voting |
|---|---|---|---|---|---|---|---|---|
| Unigram | 0.608 | 0.86 | 0.86 | 0.795 | 0.84 | 0.820 | 0.841 | 0.854 |
| Bigram | 0.77 | 0.8 | 0.8 | 0.795 | 0.804 | 0.795 | 0.783 | 0.8 |
| Trigram | 0.525 | 0.795 | 0.795 | 0.795 | 0.79 | 0.795 | 0.795 | 0.795 |
| Unigram, Bigram | 0.76 | 0.84 | 0.85 | 0.79 | 0.82 | 0.82 | 0.81 | 0.85 |
| Unigram, Bigram, Trigram | 0.762 | 0.841 | 0.84583 | 0.78333 | 0.82083 | 0.82083 | 0.8375 | 0.82 |
| Bigram, Trigram | 0.79 | 0.79 | 0.79 | 0.81 | 0.78 | 0.79 | 0.76 | 0.79 |
Fig. 2 Accuracy result analysis by applying N-gram for all the classification techniques
The model was also tested with various training sizes for the scenarios that gave the most accurate results. Table 2 shows that the voting-based ensemble learning model gives its highest accuracy when using unigrams with a training size of 0.75, i.e., 75% of the data as training data and 25% as test data. Figure 3 shows the precision, recall, F1-score, and support values at the optimum point found in this research. Figure 4 compares the proposed model's accuracy with two predefined Python tools, TextBlob and VADER. TextBlob is a Python library that provides an API for common natural language processing tasks such as part-of-speech tagging, sentiment analysis, and classification [11]. Valence Aware Dictionary and Sentiment Reasoner (VADER) is a lexicon- and rule-based sentiment analysis tool designed specifically for social media text.
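The lexicon-and-rules idea behind tools like VADER can be illustrated with a toy scorer. The word scores and the single negation rule here are invented for illustration; the real tool ships a large curated lexicon plus rules for intensifiers, punctuation, capitalization, and emoticons:

```python
# Toy polarity lexicon; real tools score thousands of words.
LEXICON = {"good": 1.0, "great": 1.5, "love": 1.7, "bad": -1.0, "terrible": -2.0}
NEGATIONS = {"not", "no", "never"}

def polarity(text):
    """Average word polarity, flipping the sign after a negation word."""
    words = text.lower().split()
    scores = []
    for i, w in enumerate(words):
        if w in LEXICON:
            s = LEXICON[w]
            if i > 0 and words[i - 1] in NEGATIONS:
                s = -s  # "not good" -> negative
            scores.append(s)
    return sum(scores) / len(scores) if scores else 0.0

score_pos = polarity("great product love it")  # positive overall
score_neg = polarity("not good")               # negation flips the sign
```

Because such scorers need no training data, they make a natural baseline for the supervised ensemble compared in Fig. 4.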
6 Conclusion and Future Work This research proposed a sentiment analysis architecture for anti-aging product review data, in which reviews are categorized as positive or negative by a voting-based ensemble machine learning approach using seven different classifiers: Gaussian Naive Bayes, k-nearest neighbor, random forest, bagging using the decision tree algorithm, SVM, logistic regression, and boosting using extreme gradient boosting (XGB) [14]; different types of scenarios were tested.
Table 2 Accuracy result analysis according to different types of training data size

| N-gram | Training data size | Gaussian Naive Bayes | SVM | Logistic regression | k-nearest neighbor | Random forest | Bagging using decision tree | Boosting using XGB | Voting |
|---|---|---|---|---|---|---|---|---|---|
| Unigram | 0.90 | 0.522 | 0.86 | 0.864 | 0.802 | 0.843 | 0.843 | 0.833 | 0.864 |
| Unigram | 0.75 | 0.608 | 0.86 | 0.86 | 0.795 | 0.84 | 0.820 | 0.841 | 0.865 |
| Unigram | 0.50 | 0.802 | 0.81 | 0.829 | 0.787 | 0.820 | 0.806 | 0.814 | 0.812 |
| Unigram | 0.20 | 0.702 | 0.81 | 0.822 | 0.792 | 0.808 | 0.790 | 0.817 | 0.812 |
| Unigram, Bigram, Trigram | 0.90 | 0.658 | 0.82 | 0.8541 | 0.812 | 0.843 | 0.843 | 0.833 | 0.833 |
| Unigram, Bigram, Trigram | 0.75 | 0.762 | 0.84 | 0.84583 | 0.78333 | 0.82083 | 0.82083 | 0.8375 | 0.82 |
| Unigram, Bigram, Trigram | 0.50 | 0.802 | 0.81 | 0.818 | 0.783 | 0.806 | 0.804 | 0.812 | 0.808 |
| Unigram, Bigram, Trigram | 0.20 | 0.777 | 0.80 | 0.805 | 0.808 | 0.804 | 0.799 | 0.815 | 0.807 |
Fig. 3 Performance measure for all the classification techniques
Fig. 4 Accuracy comparison between recommended model with sentiment analysis tools
We obtained optimum accuracy with the SVM and logistic regression classifiers for unigrams, close to 86%, while in the other scenarios the ensemble model performs well. Using this model, we can thus estimate how efficient an anti-aging product is for its users.
In the future, we will try other approaches to make our model more efficient, and we will also apply aspect-based analysis to the negative review data set so that we can find the unsatisfactory features of the products; organizations can then improve their product quality accordingly and increase sales.
References

1. W. Zhao et al., Weakly-supervised deep embedding for product review sentiment analysis. IEEE Trans. Knowl. Data Eng. 30(1), 185–197 (2018)
2. A. Alrehili, K. Albalawi, Sentiment analysis of customer reviews using ensemble method, in International Conference on Computer and Information Sciences (ICCIS) (2019), pp. 1–6
3. S.R. D’souza, K. Sonawane, Sentiment analysis based on multiple reviews by using machine learning approaches, in Proceedings of the 3rd International Conference on Computing Methodologies and Communication (ICCMC) (2019), pp. 188–193
4. H.B. Wang, Y. Xue, X. Zhen, X. Tu, Domain specific learning for sentiment classification and activity recognition. IEEE Access 6, 53611–53619 (2018)
5. S. Vanaja, M. Belwal, Aspect-level sentiment analysis on e-commerce data, in Proceedings of the International Conference on Inventive Research in Computing Applications (ICIRCA) (2018) (IEEE Xplore Compliant Part Number: CFP18N67-ART; ISBN: 978-1-5386-2456-2)
6. J. Zhang, D. Chen, M. Lu, Combining sentiment analysis with a fuzzy kano model for product aspect preference recommendation. IEEE Access 6, 59163–59172 (2018)
7. M. Bouazizi, T. Ohtsuki, Multi-class sentiment analysis in twitter: what if classification is not the answer. IEEE Access 6, 64486–64502 (2018)
8. M. Kanakaraj, R.M.R. Guddeti, Performance analysis of ensemble methods on twitter sentiment analysis using NLP techniques, in Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015) (2015)
9. The Best Anti-Aging Products, According to Thousands | Real Simple, https://www.realsimple.com/beauty-fashion/best-anti-aging-products-amazon. Accessed 28 Dec 2019
10. NLP for Beginners: Cleaning and Pre-processing Text Data, https://towardsdatascience.com/nlp-for-beginners-cleaning-preprocessing-text-data-ae8e306bef0f. Accessed 28 Dec 2019
11. Sentiment Analysis with Python (Part 2)—Towards Data Science, https://towardsdatascience.com/sentiment-analysis-with-python-part-2-4f71e7bde59a. Accessed 28 Dec 2019
12. A Primer to Ensemble Learning—Bagging and Boosting, https://analyticsindiamag.com/primer-ensemble-learning-bagging-boosting/. Accessed 17 Dec 2019
13. A Comprehensive Guide to Ensemble Learning (with Python codes), https://www.analyticsvidhya.com/blog/2018/06/comprehensive-guide-for-ensemble-models/. Accessed 28 Dec 2019
14. M.G. Huddar, S.S. Sannakki, V.S. Rajpurohit, An ensemble approach to utterance level multimodal sentiment analysis, in Proceedings of International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS) (2018), pp. 145–150
Analysis of a Career Prediction Framework Using Decision Tree Ankit Kumar, Rittika Baksi, Sushruta Mishra, Sourav Mishra, and Sagnik Rudra
Abstract In today's competitive world, attention mostly goes to students who are focused on their careers and have their ideologies and goals set; in that long run, the students who have no aim yet, and no ideologies or goals, but who have great potential to perform well in certain areas are forgotten. Our paper is entirely for them. The research has been conducted so that a platform can be provided for students to interact with and set up their goals, exploring their strengths and weaknesses in a way that lets them achieve their goals in their respective fields without feeling left out. Here a decision tree model is proposed to help determine students' career goals efficiently. Performance evaluation is then carried out to compute the effectiveness of the classification. Keywords Career counselling · Machine learning · Decision tree · Classification
1 Introduction In the twenty-first century, it is found that most students score 97 or 99% in their yearly, secondary, or primary exams. Students have become very competitive. The hunger to achieve, and the competition, is somewhere eating up some A. Kumar · R. Baksi · S. Mishra (B) · S. Mishra · S. Rudra School of Computer Engineering, KIIT (Deemed to be University), Bhubaneshwar, India e-mail: [email protected] A. Kumar e-mail: [email protected] R. Baksi e-mail: [email protected] S. Mishra e-mail: [email protected] S. Rudra e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_23
A. Kumar et al.
terms like "loyalty" and "friendship." But a quick look shows that such students are few in number. Most students are aimless or without hobbies, yet they too participate in the long race for academic achievement under pressure from elders and relatives. This Web site is built to tell students that being aimless is not a crime: they have a right to dream and a right to achieve success, but perhaps in other fields, which the Web site helps them explore through counselling. The Web site tells students that beyond academics there is a big unexplored world; it helps them explore that world and counsels them so that they need not consider themselves less than others, because, as they say, "Every Child Is Special." Our study mainly focuses on:
• A career-related questionnaire.
• Random quiz-based analysis.
Most importantly, the outcome consists of the counselled career for the user. Before machine learning, career prediction was not even a concern: students with higher marks went for science and those with lower marks went for humanities. Later the situation changed, and people started aiming for bigger opportunities in life. In this race of dreams, we missed a huge mass of our population who surely had the potential but were unaware of what they actually wanted from life. Initially, career predictors were family, friends, and relatives, but this was considered extremely unprofessional. To make it more professional, career prediction centers arose, charging large fees for career prediction. We therefore came up with the idea of a cost-free online career prediction Web site using ML: users simply visit the site and answer a set of questions about themselves, their likes, dislikes, and hobbies, and within a few seconds an appropriate career is suggested.
Machine learning has many advantages in the field of career prediction. First and most important, based on the answers it can offer plenty of options in a single pass, including options that might never have come to mind. Second, there is no need to face anyone for advice: only the user and the algorithm interact with each other. A set of questions based on interests, hobbies, and past experiences acts as a data set for the machine learning algorithm, which refines it to predict possible outcomes; on the basis of these outcomes, suitable careers are suggested.
2 Literature Survey Many applications and Web sites on the Internet suggest suitable careers for students, but most use personality attributes as the only feature for suggesting a career, so the results may be inconsistent or incorrect. Certain career-suggestion Web sites focus mainly on the student's interests; they do not try to gather more information about the student or to understand whether the student can
survive in that field or not. The paper by Beth and Janet [1] indicates the importance of learning analytics in predicting and improving student performance, advising that a student's interests, strengths, abilities, disabilities, etc., matter to their performance. In the paper by Lokesh et al. [2], the accuracy of career prediction was determined using 12 student features and different classifiers, with C4.5 achieving the highest accuracy of 86%. According to another paper, Roshani and Deshmukh [3] built an incremental group of classifiers whose final results were determined by the "majority voting rule"; the accuracy observed by the proposed algorithm was 90.8%. The work by Mustafer [4] highlighted the importance of different attributes in evaluating faculty performance; comparison with other classifiers such as ANNQ2H, CART, and SVM proved that the most accurate classifier was C5.0, which used the maximum number of attributes. Also, the suggestions such systems provide are very generalized and not specific to any university or country/state; for example, some systems suggest a group of courses including data analyst, law, doctor, and accountant, which can confuse students since the specified courses belong to different streams. A study in [5] discussed the importance of big data and its constant rise in the educational domain. A prediction study computing students' overall grades was undertaken in [6], generating 76% prediction accuracy. A work in [7] presents a study that forecasts the number of admissions to an educational institute. In [8], Sushruta et al. performed a detailed analysis of some vital bio-inspired optimization methods and applied them to efficient classification of tumors. Sushruta et al. [9] presented a prediction model evaluating students' academic performance using a decision tree approach.
3 Working Model In the first stage of Fig. 1 (collection of the career data set), a questionnaire based on the likes, dislikes, and talents of the individual was prepared and circulated among friends, family, and relatives working or studying in different areas. Minor research through some Web sites and the words of experienced people helped prepare a good set of questions to reach the goal; after analysis, a set of around 26 questions was made, keeping the specified requirements in mind. With the help of Google Forms, a survey was conducted of a few people successful in their careers, asking about their mindset toward that particular career 5–10 years earlier. In stage 2 (feature selection using a correlation matrix), the questions, treated as features, were selected and refined using techniques such as the correlation matrix and null-value removal. Typical questions were: Do you love reading or playing in your free time? If you had to spend your Sunday with only two options, would you prefer a sports game or a science museum visit? In the third stage (split into training and testing data sets at an 80:20 ratio), the whole career data set was split after applying the feature selection techniques: 80% of the career data set was used for
Fig. 1 Career prediction machine learning model
training the model and 20% for testing it. In the fourth stage (use of different algorithms), four algorithms, i.e., decision tree, logistic regression, SVC, and KNN, were implemented, and the results were analyzed in the fifth stage (result analysis). Classification in machine learning is a systematic approach that builds models from an input data set and then reaches a conclusion. The decision tree classifier is a well-known and easy-to-use classification technique with a straightforward approach to solving a classification problem: it creates a series of questions about the attributes of the data, and whenever it gets an answer, a follow-up question is asked until a conclusion about that record is reached. The conditions are kept in a tree structure; the root and internal nodes hold the feature test conditions that separate records with different characteristics, and a class label (e.g., Yes or No) is assigned at the terminal node of each branch.
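The choice of which question to ask at the root and at each internal node is typically driven by a purity measure; a minimal sketch using entropy and information gain on yes/no questionnaire features (the questions, answers, and career labels below are hypothetical, and real decision tree learners add pruning and recursion):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting on a yes/no (1/0) feature --
    the test the tree evaluates when choosing a question to ask."""
    total = entropy(labels)
    for value in (0, 1):
        subset = [l for r, l in zip(rows, labels) if r[feature] == value]
        if subset:
            total -= len(subset) / len(labels) * entropy(subset)
    return total

# Hypothetical answers to two questionnaire items, with career labels.
rows = [{"likes_sports": 1, "likes_reading": 0},
        {"likes_sports": 1, "likes_reading": 1},
        {"likes_sports": 0, "likes_reading": 1},
        {"likes_sports": 0, "likes_reading": 1}]
labels = ["athlete", "athlete", "writer", "writer"]

best = max(rows[0], key=lambda f: information_gain(rows, labels, f))
print(best)  # -> likes_sports (it separates the labels perfectly)
```

The highest-gain question becomes the node's test, and the procedure repeats on each answer branch until the labels are pure.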
3.1 Result Analysis Four machine learning algorithms best suited to this kind of problem statement were used, and the performance metrics, accuracy, precision, recall, and F-score, which show the correct predictions out of 100 trials, were compared across the algorithms. The train and test data were divided in an 80:20 ratio, with the following results. Figure 2a compares the accuracy rates observed with the different algorithms; Fig. 2b, c compare the precision and recall observed during the study, and Fig. 2d analyzes the F-scores of the machine learning algorithms. After comparing the different performance metrics, i.e., accuracy rate, precision, recall, and F-score, the decision tree was found to work best for this study. It uses the technique of dividing decisions across different levels, needs less data-preparation effort during preprocessing, and requires no normalization or scaling of the data. The decision tree achieved an accuracy rate of 93.33%, precision of 91.5, recall of 89.0, and F-score of 45.12 (Table 1).
Fig. 2 Performance parameters evaluation of our study: a accuracy (%), b precision (%), c recall (%), and d F-score (%) for logistic regression, decision tree, SVC, and KNN
Table 1 Analysis of accuracy rate, precision, recall, and F-score

| Classifier/Parameter | Accuracy rate (%) | Precision | Recall | F-score |
|---|---|---|---|---|
| Logistic regression | 89.33 | 86.0 | 84.5 | 42.65 |
| Decision tree | 93.0 | 91.5 | 89.0 | 45.12 |
| SVC | 78.0 | 72.33 | 71.0 | 35.82 |
| KNN | 84.5 | 83.0 | 81.5 | 41.12 |
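As a reference point for the metrics compared above, the standard definitions can be computed from confusion-matrix counts; the counts below are hypothetical, and note that the F-scores reported in Table 1 appear to be on a different scale from the usual harmonic mean of precision and recall:

```python
def performance_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts,
    using the standard definitions."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for one classifier on a 20% held-out test set.
acc, p, r, f1 = performance_metrics(tp=89, fp=8, fn=11, tn=92)
```

With these counts, accuracy is 0.905, recall is 0.89, and F1 sits between precision and recall, as the harmonic mean always does.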
4 Conclusion and Future Scope The basic idea was to create a career counselling Web site, so after all the data collection and refinement, an analysis was performed on the data set, converting it into rows and columns. The data set was a set of 26 questions answered in Yes/No form. Decision tree classification was used to classify and study the data, which were put together and deployed in pickle format. As soon as an individual answers the questions, he or she is given a chart-based output with a satisfying suggestion of the ideal career choice for the long run of life. The future cannot be anybody's guess, but the Web site has been kept simple for its users, much like a psychometric test, with the outcome shown as visualizations. The system is based on the ideology of consulting the future, where it advises on the long-term future of the individual, with exclusively new ideas updated as soon as they are invented. Computer is the king. So in the long run, it is expected that visiting a counsellor, spending hours with him, booking another appointment, waiting in the queue for your turn, and paying a large sum of money will be left far behind by Web sites enabling hassle-free career counselling, where users solve a set of questions in just an hour of their valuable time and get the perfect career for the long run, free of cost, relaxing at home.
References

1. U.D. Beth, H.E. Janet, Using learning analytics to predict (and improve) student success: a faculty perspective. J. Interact. Online Learn. 12, 17–26 (2013)
2. K.S. Lokesh, R.S. Bhakti et al., Novel professional career prediction and recommendation method for individual through analytics on personal traits using C4.5 algorithm, in IEEE Communication Technology (GCCT), 3 December 2015
3. A. Roshani, P.R. Deshmukh, An incremental ensemble of classifiers as a technique for prediction of student’s career choice, in IEEE Networks and Soft Computing (ICNSC), 25 September 2015
4. A. Mustafer, Predicting instructor performance using data mining technique in higher education. IEEE 4, 2379–2387 (2016)
5. C. Ling, R. Dymitr et al., Big data: opportunities for big data analytics, in IEEE Digital Signal Processing (DSP), 10 September 2015
6. M. Yannick, X. Jie et al., Predicting grades, in IEEE Transactions on Signal Processing, 15 February 2016
7. S. Mishra, S. Sahoo, B.K. Mishra, S. Satapathy, A quality based automated admission system for educational domain, in 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES) (2016), pp. 221–223
8. S. Mishra, H.K. Tripathy, B.K. Mishra, Implementation of biologically motivated optimisation approach for tumour categorisation. Int. J. Comput. Aided Eng. Technol. 10(3), 244–256 (2018)
9. S. Mishra, A. Panda, Predictive evaluation of student’s performance using decision tree approach. J. Adv. Res. Dynam. Control Syst. 10(14), 511–516 (2018)
Machine Learning Approach in Crime Records Evaluation Sushruta Mishra, Soumya Sahoo, Piyush Ranjan, and Amiya Ranjan Panda
Abstract With the increase in the number of criminal records, research using data analytics is ongoing to track criminal nature and behavior, for better understanding and to secure our society from criminals. There are many ways to analyze criminal records, but few models focus on prediction. In this paper, we discuss models used for predicting criminal behavior. We use four types of model for crime prediction: the LWL algorithm, linear regression, naive Bayes, and decision trees. The performance of the LWL algorithm shows that it can reliably be used for general crime prediction with respect to the frequency of antisocial behavior. Keywords Machine learning · Crime prediction · Regression · Decision trees · Locally weighted learning (LWL) algorithm
1 Introduction With the increase in criminal activity all over the world, fear is being created in society. Law enforcement organizations generate a huge amount of criminal data each year, and the main challenge is to analyze these data so that they can be used in the future to prevent criminal activity. Such data have multiple attributes and include details regarding residents, sexual assault, demography, and similar information. S. Mishra (B) · P. Ranjan · A. R. Panda School of Computer Engineering, KIIT (Deemed to be University), Bhubaneswar, Odisha, India e-mail: [email protected] P. Ranjan e-mail: [email protected] A. R. Panda e-mail: [email protected] S. Sahoo C.V Raman College of Engineering, Bhubaneswar, Odisha, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_24
By analyzing this data, we can not only identify the features associated with high crime but also take essential steps or actions to prevent crime. To extract important information from criminal data, we use machine learning techniques and algorithms. Machine learning techniques are used to predict useful information and to find significant patterns in large amounts of crime data. Through knowledge discovery with machine learning, we can obtain output that explains the criminal activity in the crime data. Case-based reasoning techniques are used by police who investigate crimes and relate them to machine learning; police data is therefore well suited for data mining. To predict future crimes, predictive analysis, a statistical technique, is used. These models generally produce scores, from which features such as accuracy, precision, and recall can be derived. To achieve this, data samples must be trained with a relevant mining technique, and a separate data record is then used to test the model’s accuracy. In this paper, various classification models are applied to a criminal dataset, and the refined criminal information can help law and order personnel take active measures against criminals in a particular locality. In the next section, a review of important work undertaken by different researchers is discussed. Then, the proposed crime analysis methodology is presented and described. Finally, the implementation results are explained.
2 Literature Survey

Several issues, such as storage of information, processing and analysis of data, and data warehousing, are critical challenges arising with the growth of criminal-related data records. Popular machine learning approaches used in the study of criminal data include classification, clustering, and frequent pattern analysis. Various methods have been developed and used in recent times to solve the issue of knowledge extraction from heaps of data using these machine learning algorithms. In [1], Malathi and Baboo developed a decision tree classification approach that was useful in predicting criminal trends for the subsequent time period. Using eight years of previous crime data, they predicted the frequency of criminal activities for the subsequent year. Thongtae [2] presented a succinct study of various effective machine learning techniques applied in criminal data research based on historical data in previous work. Various feasible sources of crime information mining were studied by Ozgul [3] by describing the suitability of knowledge discovery based on various methodologies. He analyzed the CRISP-DM method used in prediction and cluster analysis tasks. A general architectural framework for criminal data records was developed by Chen [4], which showed the interrelationship between machine learning methods applicable to intelligent analysis tasks and determining crimes at the national and international level. Xue and Brown [5] presented an approach for predicting future crime sites on the basis of discrete choice theory and cluster analysis. In another study carried out by Oatley and Ewart [6], the focus was on identifying the probability of repeated crimes for a specific
entity and property; it used a Bayesian belief network for the analysis. A classification technique implemented by Yu et al. [7] categorized regions into hot spots and cold spots and predicted whether an area would most likely be a hot spot for residential burglary. In [8], Mishra et al. discussed the significance and impact of data redundancy and replication on the energy level of wireless constrained networks. Sahoo et al. [9] used the LVQ technique to illustrate and analyze the clustering deviation issue on a breast cancer dataset. Mishra et al. [10] proposed a new adaptive feature optimization model that enhanced dengue prediction with higher accuracy.
3 Dataset Used

The samples used in our investigation come from data.police.uk, an open data repository on crime and policing in England. This information was first made public in December 2010 and is regularly updated. The United Kingdom Police Department also documents the known problems with the data and how they are being resolved, such as the accuracy of locations, matching of judicial outcomes, double counting of anti-social behavior and crime, constant data changes, and missing outcome data. Table 1 describes the attributes/characteristics of the dataset. In all of our samples, the attributes “Reported by” and “Falls within” hold the value “Hampshire Constabulary.” The documentation suggests that the “Falls within” attribute may diverge over time, although these two attributes are currently identical.

Table 1 Crime analysis dataset

Name               Data type  Description
Crime ID           Nominal    Crime identification
Month              –          Crime date in yyyy-mm format
Reported by        –          The agency which supplied the crime data
Falls within       –          Similar to “Reported by”
Longitude          Interval   Anonymized longitude coordinate of the crime
Latitude           –          Anonymized latitude coordinate of the crime
Location           Nominal    Specific or close to the place of the crime
LSOA code          –          Lower layer super output area (LSOA) code where the crime occurred
LSOA name          –          LSOA name where the crime occurred
Crime type         –          16 kinds of crime based on data.police.uk
Last outcome type  –          The most recent of the relevant outcomes recorded for the crime
Context            –          Extraneous data

Table 2 Crime dataset summary

Data objects       196,374
Features           12
Cell values        5,899,464
Missing data       1,453,544
% of missing data  16%

The “Location” attribute gives an
explanation of the crime site with respect to a point of reference such as a street (e.g., A2030, Andover Way) or a specific point of interest (e.g., shopping area, supermarket, parking lot). The LSOA attributes refer to the Lower Layer Super Output Area (LSOA) in which the anonymized point is located, in accordance with the LSOA boundaries provided by the United Kingdom Office for National Statistics. There are 1454 unique LSOAs for Hampshire Constabulary. The crime type is one of the 16 categories used by the UK police in the Hampshire Constabulary data used in our investigation. The “Last outcome” category offers options such as: under investigation; unable to pursue suspects; investigation completed, no suspect identified; perpetrator warned; fines for offenders. The “Context” attribute is a textual description of the context of a crime; in recently published data, it is always empty. An instance is a data object or data record identified by the attributes described above. The data is published in monthly records, each row being an instance, i.e., a crime (Table 2).
4 Proposed Crime Prediction Model

The initial stage of the crime prediction workflow is to aggregate the monthly files into a single dataset. Data understanding involves familiarization with the data, including detecting quality problems and the data properties used for building the model. Data preparation covers formatting the data as required for modeling, which includes, for example, selecting features, transforming or creating attributes, and eliminating noisy data. Model building applies the various model-building techniques or algorithms. Evaluation refers to assessing the quality of the model built in the previous phase. Deployment depends on the requirements of the first phase; it varies from brief documentation of the output to a complex execution based on the developed model. Here, 80% of the data acts as training data, and 20% acts as testing samples. The objective is to identify, with a precise mining framework, the features that most affect a higher crime rate, which will gradually help the law and order department take appropriate measures. For this reason, a specific execution of the following predictive data mining algorithms has been adopted (Fig. 1).
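As an illustrative sketch of the 80/20 train/test split and the evaluation metrics reported later (mean absolute error and correlation coefficient), the following self-contained example uses synthetic data and a plain least-squares fit; the function names and data are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def train_test_split(X, y, train_frac=0.8, seed=0):
    """Shuffle, then split features X and target y into train/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

def mean_absolute_error(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def correlation_coefficient(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Toy stand-in for monthly crime frequencies: one numeric feature,
# fitted with ordinary least squares.
X = np.arange(50, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 5.0
Xtr, ytr, Xte, yte = train_test_split(X, y)

A = np.hstack([Xtr, np.ones((len(Xtr), 1))])   # design matrix with bias column
w, *_ = np.linalg.lstsq(A, ytr, rcond=None)    # least-squares fit
pred = np.hstack([Xte, np.ones((len(Xte), 1))]) @ w

print(round(mean_absolute_error(yte, pred), 4))
print(round(correlation_coefficient(yte, pred), 4))
```

With perfectly linear synthetic data the MAE is near zero and the correlation coefficient near 1; on the real crime data these metrics separate the four classifiers, as shown in Sect. 5.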
Fig. 1 Crime prediction model using machine learning (raw data → data integration and feature analysis → dimensionality optimization, attribute creation, and data aggregation → LWL algorithm, naive Bayes, linear regression, and decision tree, each followed by evaluation)
The algorithms used in this research for the crime analysis task include:
4.1 Decision Tree

A decision tree is a machine learning algorithm with a flowchart-like structure in which each internal node represents a “test” on a feature, each branch represents an outcome of the test, and each leaf node denotes a class label.
4.2 Naive Bayes

Naive Bayes is a machine learning technique for classification that computes the probability of an instance belonging to each class using Bayes’ rule, based on the idea of conditional probability.
4.3 Linear Regression

Linear regression is a data mining technique based on predictive modeling in which the target variable to be estimated is continuous.
4.4 Locally Weighted Learning Algorithm

The locally weighted learning (LWL) algorithm is a machine learning model that performs regression around a query point: it averages, interpolates, extrapolates, or otherwise combines the training data using local weights. For forecasting, it uses distance-weighted regression to adjust the fitted surface.
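A minimal sketch of this idea in NumPy (Gaussian distance weighting with a weighted least-squares fit around the query point; the bandwidth parameter `tau` and the data are illustrative assumptions, not from the paper):

```python
import numpy as np

def lwl_predict(X, y, x_query, tau=1.0):
    """Locally weighted linear regression: fit a weighted least-squares
    line around x_query, weighting nearby training points more heavily."""
    A = np.hstack([X, np.ones((len(X), 1))])           # add bias column
    q = np.append(x_query, 1.0)
    d2 = np.sum((X - x_query) ** 2, axis=1)            # squared distances
    w = np.exp(-d2 / (2.0 * tau ** 2))                 # Gaussian weights
    W = np.diag(w)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted normal equations
    return float(q @ theta)

X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0                              # noiseless line y = 2x + 1
print(round(lwl_predict(X, y, np.array([4.0])), 3))    # ≈ 9.0
```

Because the model is refit for every query, LWL trains almost instantly but pays the cost at prediction time, which is consistent with the training/testing times reported in Sect. 5.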
5 Result and Analysis

In this research, we carefully built prediction models to detect the type and frequency of each crime using the official dataset published by the UK police. Our main aim was to investigate the performance level obtained from 50 months of criminal records. Another purpose was to compare a global framework that includes all kinds of crime with a special prototype that detects a specific crime. For the specific prototype, we chose the anti-social behavior type, as it covers the most frequent crimes among the 16 types. Four different classifiers are implemented in this study: linear regression, the LWL algorithm, naive Bayes, and decision tree. These algorithms are evaluated with different evaluation metrics such as accuracy rate, precision, recall, and F-score. Accuracy, given as a percentage, measures the performance of each model as the proportion of correctly predicted instances. Performance evaluation on the crime data is done with the four classifiers mentioned above. It is observed that the LWL algorithm has the highest MAE of 3.35, while naive Bayes records the lowest MAE of 2.26; as far as the correlation coefficient metric is concerned, LWL records the lowest value of 0.75. The training time is observed to be the least with the LWL algorithm: while its training time is only 0.02 s, the time to test the model is measured to be 486.8 s (Table 3).

Table 3 Evaluation result of classifiers on crime dataset

Evaluation metric          LWL    Decision tree  Linear regression  Naive Bayes
Mean absolute error (MAE)  3.35   2.86           2.33               2.26
Correlation coefficient    0.75   0.83           0.85               0.85
Training time (s)          0.02   145.4          2639.4             4674.56
Testing time (s)           486.8  343.76         2.19               1.52

The classification accuracy rate for the LWL algorithm was found to be the highest at 93%, while it is lowest for naive Bayes at 81.7%. The precision metric is optimal for the LWL algorithm, and the least recall value is also recorded for the LWL algorithm (Fig. 2).

Fig. 2 Performance evaluation of accuracy, precision, and recall metrics on the crime dataset
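As an illustration of how the accuracy, precision, and recall figures above are computed, the following sketch derives them from confusion counts; the labels and predictions are made-up examples, not the paper's data:

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive):
    """Accuracy, precision, and recall with one class treated as positive."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    accuracy = float(np.mean(y_true == y_pred))
    precision = float(tp / (tp + fp)) if tp + fp else 0.0
    recall = float(tp / (tp + fn)) if tp + fn else 0.0
    return accuracy, precision, recall

# Example: "asb" = anti-social behavior, "other" = any other crime type.
y_true = ["asb", "asb", "other", "asb", "other", "other"]
y_pred = ["asb", "other", "other", "asb", "asb", "other"]
acc, prec, rec = classification_metrics(y_true, y_pred, positive="asb")
print(acc, prec, rec)
```

The F-score mentioned above is then the harmonic mean of precision and recall.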
6 Conclusion

In this paper, we carefully built prediction models to detect the type and frequency of each crime using the official dataset published by the UK police. Our main aim was to investigate the performance level obtained from 40 months of criminal records. In terms of timestamps, we focus on the per-month update level because the UK police update the data once a month. LWL is a quick instance-based learning algorithm, but it does not produce an explicitly understandable model.
We found that this LWL algorithm’s performance is poor in this respect, and it may be a better fit for a specific model. Such models predict all the crimes in a specific area, or just a specific criminal activity, and can be used in the decision-making process for police resource allocation. Additional resources are needed to deal with crime where the crime rate is increasing. The prediction results indicate where the crime rate increases and where it decreases, which helps transfer personnel and resources from one place to another and in turn helps decrease the crime rate.
References

1. A. Malathi, S.S. Baboo, An enhanced algorithm to predict a future crime using data mining. Int. J. Comput. Appl. 21(1), 1–6 (2011)
2. P. Thongtae, S. Srisuk, An analysis of data mining applications in crime domain, in IEEE 8th International Conference on Computer and IT Workshops (2008)
3. F. Ozgul et al., Incorporating data sources and methodologies for crime data mining, in IEEE Proceedings of 2011 International Conference on Intelligence and Security Informatics (2011)
4. H. Chen et al., Crime data mining: a general framework and some examples. IEEE J. Mag. 37(4) (2004)
5. Y. Xue, D.E. Brown, Spatial analysis with preference specification of latent decision makers for criminal event prediction. Decis. Support Syst. 41(3), 560–573 (2006). http://www.sciencedirect.com/science/article/pii/S0167923604001319
6. G.C. Oatley, B.W. Ewart, Crimes analysis software: pins in maps, clustering and Bayes net prediction. Expert Syst. Appl. 25(4), 569–588 (2003)
7. C.-H. Yu, M.W. Ward, M. Morabito, W. Ding, Crime forecasting using data mining techniques, in 2011 IEEE 11th International Conference on Data Mining Workshops (ICDMW) (2011), pp. 779–786
8. S. Mishra, S. Sahoo, B.K. Mishra, S. Satapathy, Data replication in clustered WSNs: a nonoptimal energy retention criterion. Int. J. Control Theor. Appl. 9(17), 8579–8592 (2016)
9. S. Sahoo, S. Mishra, S.K. Mohapatra, B.K. Mishra, Clustering deviation analysis on breast cancer using linear vector quantization technique. Int. J. Control Theor. Appl. 9(23), 311–322 (2016)
10. S. Mishra, H.K. Tripathy, A.R. Panda, An improved and adaptive attribute selection technique to optimize dengue fever prediction. Int. J. Eng. Technol. 7, 480–486 (2018)
German News Article Classification: A Multichannel CNN Approach Shantipriya Parida, Petr Motlicek, and Satya Ranjan Dash
Abstract Nowadays, more and more people are taking an interest in news and social media networks and are freely sharing their opinions in different languages. Such activities lead to interesting research topics that scientists are working on. News, in particular, must be classified so that users can easily access the information of interest to them. In comparison with traditional machine learning techniques, deep learning approaches have achieved superior results on natural language processing tasks. Convolutional neural networks (CNNs), which extract n-grams as features to represent the input, have shown promising performance. In this work, we build a multi-channel CNN for German news article classification. The model can classify different categories of news articles with an accuracy of 99.2% on the training and 81.4% on the test dataset. We also perform a comparative study with a single-channel CNN and find that the multi-channel approach outperforms the single-channel one by +6.3% absolute on the test set. Keywords Convolution neural network · Multi-channel CNN · News article classification
S. Parida · P. Motlicek Idiap Research Institute, Martigny, Switzerland e-mail: [email protected] P. Motlicek e-mail: [email protected] S. R. Dash (B) School of Computer Applications, KIIT University, Bhubaneswar, Odisha, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_27
1 Introduction

The objective of text classification is to automatically classify documents by assigning one or more predefined tags/categories based on their content. In the past few years, deep learning methods have been found effective for natural language processing (NLP) tasks. In several NLP application domains, including text classification, deep learning techniques yield better results than traditional machine learning approaches [1, 2]. Two main types of deep neural network architectures that are widely explored for handling various NLP tasks and provide competitive results are the CNN and the recurrent neural network (RNN) [3]. CNNs are generally used in computer vision and have been shown to achieve good performance on text classification tasks [4–6]. The convolutional layers of a CNN extract features using geometrically fixed filters and can be regarded as an implementation of the n-gram language model [7]. CNNs generally outperform RNNs in capturing high-level features in short text [8]. A series of experiments with CNNs for sentence classification built on top of “word2vec” (with little hyper-parameter tuning) has shown excellent results over multiple benchmarks [4]. Even a one-layer CNN performs excellently for sentence classification [9]. Deep CNN architectures operating directly on character-level representations/input have shown improvements in text classification tasks [10]; however, for inputs such as patent text full of technical and legal terminology, the performance may not be adequate [11]. The paper is organized as follows. Section 1 describes related work on text classification. Section 2 explains the proposed model architecture. Section 3 explains the dataset used in our experiment. Section 4 explains the experimental settings: pre-processing and hyper-parameters. Section 5 provides evaluation results with analysis and discussion. The paper is concluded in Sect. 6.
2 Model

When applying a CNN to text rather than an image, the input is a one-dimensional array representing the text, and the architecture changes to 1D convolution-and-pooling operations [12]. We define single- and multi-channel models. The multi-channel CNN is a combination of several versions of the standard model. The standard model contains an embedding layer as input, followed by a one-dimensional CNN and a pooling layer, and then a prediction output layer, with kernels of different sizes. This enables the text to be processed with various n-grams (groups of words) at a time, while the model learns how to best integrate these interpretations [4]. We define a model containing three input channels for processing different n-grams (4-, 6-, and 8-grams) of the input text as shown in Fig. 1. Each channel consists of the following elements:
Fig. 1 Model architecture with three channels
• Input layer defines the input sequence length.
• Embedding layer is set to the vocabulary size, with dimension 100 to store real-valued representations.
• One-dimensional convolutional layer has 32 filters, with the kernel size equal to the number of words read at once.
• Max pooling layer consolidates the output from the convolutional layer.
• Flatten layer maps the three-dimensional output to a two-dimensional one, as required for concatenation.

The output obtained from the three channels is concatenated into a single vector. It is then processed by a dense layer and an output layer, respectively. The architecture of the multi-channel network is shown in Fig. 2.
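To make the three-channel design concrete, here is a NumPy sketch of a single forward pass with random weights; the sizes match the text (embedding dimension 100, 32 filters, kernel sizes 4, 6, and 8), but the actual model would be built and trained in a deep learning framework, so this is only an illustration of the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, emb_dim, n_filters = 1000, 100, 32
seq = rng.integers(0, vocab, size=50)      # one tokenized article (50 word ids)
E = rng.normal(size=(vocab, emb_dim))      # embedding matrix
x = E[seq]                                 # embedded article, shape (50, 100)

def conv_channel(x, kernel_size, n_filters, rng):
    """One channel: 1D convolution over n-grams + ReLU + global max pooling."""
    W = rng.normal(size=(kernel_size * x.shape[1], n_filters))
    windows = np.stack([x[i:i + kernel_size].ravel()
                        for i in range(len(x) - kernel_size + 1)])
    feat = np.maximum(windows @ W, 0.0)    # ReLU activation
    return feat.max(axis=0)                # max pool over all positions

# Three channels reading 4-, 6-, and 8-grams, concatenated into one vector
# that a dense layer and softmax output layer would then consume.
merged = np.concatenate([conv_channel(x, k, n_filters, rng) for k in (4, 6, 8)])
print(merged.shape)                        # (96,) = 3 channels x 32 filters
```

Each channel sees the same embedded article but at a different n-gram granularity, which is the core idea behind the multi-channel design.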
2.1 Regularization

We use dropout for regularization, which randomly drops out a proportion of the hidden units [13].
Fig. 2 Proposed multi-channel CNN architecture
3 Dataset

We use the 10k German News Articles Dataset (10kGNAD)1 in our experiment. It contains 10,273 German-language news articles from an Austrian online newspaper, categorized into nine topics [14]. The per-category article distribution is shown in Fig. 3. As can be seen, the class distribution of 10kGNAD is not balanced.
4 Experimental Setup This section describes the proposed setup for the experiments.
4.1 Pre-processing

The downloaded 10kGNAD training and test datasets contain articles along with their respective categories (i.e., Web, panorama, etc.). We mapped each article to one of nine (0–8) categories for the training and test datasets. The statistics of the dataset are given in Table 1. 1 https://github.com/tblock/10kGNAD.
Fig. 3 Articles per class [image taken from the 10k German News Articles Dataset website, https://tblock.github.io/10kGNAD]
Table 1 Statistics of the experimental (10kGNAD) data

Dataset  #Articles  #Category
Train    9245       9
Test     1028       9
During pre-processing, we performed the following text-normalization operations:

• Split the text into tokens on white-space occurrences.
• Remove punctuation present in the words.
• Remove words that are not fully alphabetic.
• Remove German stop words.
• Remove words with a length of ≤1 character.
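These steps can be sketched in a few lines of Python; the stop-word set below is a tiny illustrative stand-in for the full German stop-word list the paper would use:

```python
import re

# Illustrative subset; a real run would load a full German stop-word resource.
GERMAN_STOPS = {"der", "die", "das", "und", "ist", "ein", "eine", "in", "zu"}

def normalize(text):
    tokens = text.split()                                 # split on white space
    tokens = [re.sub(r"[^\w]", "", t) for t in tokens]    # strip punctuation
    tokens = [t for t in tokens if t.isalpha()]           # fully alphabetic only
    tokens = [t for t in tokens if t.lower() not in GERMAN_STOPS]
    return [t for t in tokens if len(t) > 1]              # drop 1-char tokens

print(normalize("Die Wirtschaft wächst, und das ist gut!"))
# ['Wirtschaft', 'wächst', 'gut']
```

Note that Python's `\w` and `str.isalpha` handle German umlauts correctly, so tokens like "wächst" survive the cleanup.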
4.2 Hyper-parameters The configuration parameters are given in Table 2. The parameters are similar for the single- and multi-channel experiments. We manually explored and tuned the hyperparameters such as dropout (for avoiding over-fitting), and batch size (for improving performance). The training also follows an early termination if validation loss does not improve for three epochs [15].
Table 2 Configuration parameters: “ReLU” refers to the rectified linear unit, a common CNN activation function [16]

Description          Values
Filter               32
Feature maps         100
Activation function  ReLU
Pooling              1-max pooling
Dropout rate         0.5
Loss                 Categorical cross-entropy
Optimizer            Adam
Epoch                15
Batch size           16
5 Evaluation and Discussion

We evaluate the proposed text classification system on the test dataset (10kGNAD) and report the results in Table 3. In addition to the classification accuracies, we also show the training and test accuracy and loss w.r.t. each training epoch in Figs. 4 and 5. The maximum article length (in words) is 1761, and the total number of unique words in the training dataset is 197,762. We observe a performance improvement of the multi-channel CNN over the single-channel CNN. We also experimented with different kernel sizes (2-, 4-, and 6-grams) for the multi-channel CNN, but did not observe any further improvement. As our experiments are limited to a German news article dataset, the performance of our model in other domains needs investigation.

Table 3 Evaluation results of text classification on the test dataset (10kGNAD)

Model               Dataset  Accuracy
CNN-single-channel  Train    99.5
                    Test     75.1
CNN-multi-channel   Train    99.2
                    Test     81.4

Fig. 4 Classification accuracies of news-article categories on training and test datasets: (a) single-channel accuracy, (b) multi-channel accuracy

Fig. 5 Loss computed using the categorical cross-entropy function, measured on training and test datasets: (a) single-channel loss, (b) multi-channel loss
6 Conclusion

We propose a multi-channel CNN approach for German news article classification. We found that CNNs can capture textual feature information well for classifying text in multilingual scenarios [17]. The proposed model uses multiple parallel CNNs that read the German news articles using different n-gram sizes (4-, 6-, and 8-grams); the multi-channel CNN boosts text classification accuracy and classifies the different news categories better than a single-channel CNN. Even with a dataset size of 10k, the proposed model achieves good validation accuracy. As the next step, we plan to investigate: (i) applying different configurations (e.g., different n-grams, varying channels, deeper networks, varying dropout rates); (ii) experimenting with our model on other languages and across domains; (iii) comparing the performance of our model with other supervised and hybrid approaches [18–22]. Acknowledgements The work was supported by an innovation project (under an InnoSuisse grant) oriented to improve the automatic speech recognition and natural language understanding technologies for German, title: “SM2: Extracting Semantic Meaning from Spoken Material,” funding application no. 29814.1 IP-ICT, and also supported by the EU H2020 project “Real-time network, text, and speaker analytics for combating organized crime” (ROXANNE), grant agreement: 833635.
References

1. S. Lai, L. Xu, K. Liu, J. Zhao, Recurrent convolutional neural networks for text classification, in Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)
2. A.L. Maas, A.Y. Hannun, A.Y. Ng, Rectifier nonlinearities improve neural network acoustic models, in Proceedings of ICML, vol. 30 (2013), p. 3
3. M. Hughes, I. Li, S. Kotoulas, T. Suzumura, Medical text classification using convolutional neural networks. Stud. Health Technol. Inf. 235, 246 (2017)
4. L. Xiao, H. Zhang, W. Chen, Y. Wang, Y. Jin, Transformable convolutional neural network for text classification, in IJCAI (2018), pp. 4496–4502
5. W. Li, P. Liu, Q. Zhang, W. Liu, An improved approach for text sentiment classification based on a deep neural network via a sentiment attention mechanism. Fut. Internet 11(4), 96 (2019)
6. Y. Goldberg, A primer on neural network models for natural language processing. J. Artif. Intell. Res. 57, 345–420 (2016)
7. J. Hu, S. Li, J. Hu, G. Yang, A hierarchical feature extraction model for multilabel mechanical patent classification. Sustainability 10(1), 219 (2018)
8. S.T. Hsu, C. Moon, P. Jones, N. Samatova, A hybrid CNN-RNN alignment model for phrase-aware sentence classification, in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, vol. 2, Short Papers (2017), pp. 443–449
9. A. Conneau, H. Schwenk, L. Barrault, Y. Lecun, Very deep convolutional networks for text classification, in European Chapter of the Association for Computational Linguistics (EACL'17) (2017)
10. J. Yoon, H. Kim, Multi-channel lexicon integrated CNN-BiLSTM models for sentiment analysis, in Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017) (2017), pp. 244–253
11. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
12. K. Shimura, J. Li, F. Fukumoto, HFT-CNN: learning hierarchical category structure for multi-label short text categorization, in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (2018), pp. 811–816
13. S. Wang, M. Huang, Z. Deng, Densely connected CNN with multi-scale feature attention for text classification, in IJCAI (2018), pp. 4468–4474
14. D. Schabus, M. Skowron, M. Trapp, One million posts: a data set of German online discussions, in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Tokyo, Japan, 2017), pp. 1241–1244. https://doi.org/10.1145/3077136.3080711
15. L. Prechelt, Early stopping-but when? in Neural Networks: Tricks of the Trade (Springer, 1998), pp. 55–69
16. Y. Zhang, B. Wallace, A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification, in Proceedings of the Eighth International Joint Conference on Natural Language Processing, vol. 1, Long Papers (2017), pp. 253–263
17. D. Mahata, J. Friedrichs, R. Ratn Shah et al., #phramacovigilance-exploring deep learning techniques for identifying mentions of medication intake from Twitter. arXiv preprint arXiv:1805.06375 (2018)
18. W. Yin, K. Kann, M. Yu, H. Schütze, Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923 (2017)
19. A. Jacovi, O.S. Shalom, Y. Goldberg, Understanding convolutional neural networks for text classification. EMNLP 2018, 56 (2018)
20. C. Guggilla, T. Miller, I. Gurevych, CNN- and LSTM-based claim classification in online user comments, in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers (2016), pp. 2740–2751
21. Y. Kim, Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014)
22. K. Kowsari, K. Jafari Meimandi, M. Heidarysafa, S. Mendu, L. Barnes, D. Brown, Text classification algorithms: a survey. Information 10(4), 150 (2019)
MMAS Algorithm and Nash Equilibrium to Solve Multi-round Procurement Problem Dac-Nhuong Le, Gia Nhu Nguyen, Trinh Ngoc Bao, Nguyen Ngoc Tuan, Huynh Quyet Thang, and Suresh Chandra Satapathy
Abstract In this study, we combine a novel Nash equilibrium formulation with the Min-Max Ant System (MMAS) algorithm to resolve the difficulty of choosing appropriate bidders in multi-round procurement. We focus on the balance point of the Nash equilibrium to find an appropriate solution that satisfies all participants in multi-round procurement and is most beneficial for both investors and selected tenderers. The computational results of our proposed algorithm show that this approach provides a scientifically grounded optimization for choosing bidders and ensures a path toward a win-win relationship among all participants in the procurement process. Keywords Multi-round procurement · Nash equilibrium · Min-max ant system · Software engineering · Software scheduling
D.-N. Le (B) Hai Phong University, Hai Phong 18000, Vietnam e-mail: [email protected] G. N. Nguyen Duy Tan University, Danang 55000, Vietnam e-mail: [email protected] T. N. Bao Hanoi University, Hanoi, Vietnam e-mail: [email protected] N. N. Tuan Department of ICT, Ministry of Education and Training, Hanoi, Vietnam e-mail: [email protected] H. Q. Thang Hanoi University of Science and Technology, Hanoi, Vietnam e-mail: [email protected] S. C. Satapathy School of Computer Engineering, KIIT DU, Bhubaneswar, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_28
1 Introduction

Multi-round procurement is a process in which both the investor and the bidders pursue their own advantage. Several participants negotiate and persuade to gain the most benefit while preserving their relationships. Specifically, the investor wants to reap a high yield while choosing trustworthy bidders at minimum prices, making the project cost as little as possible without losing the faith of other providers. Conversely, the suppliers' first target is to be chosen, so they need to offer suitable terms at an acceptable cost. Their goal is the profit obtained at the end of the project after being selected [1–4]. The problem is that if each participant always tries to get the best advantage, an endless loop is created. The stakeholder will keep pressing contractors to lower their prices. Contractors who want the contract and long-term business will lower their product prices to compete with each other. They may suffer losses or reduce product quality to sign the deal at the moment, but later increase the next product's cost to erase the loss. Over time, trustworthy, high-quality suppliers may give up, and only cheaper suppliers with a high risk of decreased product quality remain. In that case, the project owner has lost all worthy suppliers. Bidders may accept losses, or they can collude to push prices higher. Thus, it benefits no one if the investor is too greedy or the bidders are too competitive [5, 6]. The problems faced can be listed as follows:

1. Project time: The longer the project takes, the higher the cost rises. A dollar at the moment is worth more than a dollar in the future: as inflation increases, changes in the exchange rate decrease the value of the original money, so material prices will be higher or lower over time.
2. Material cost: The quantity of needed material also affects the profit of both sides.
3. Discount rate: This depends on the strategies of the contractors. Will they give a discount or raise the price?
4. Selection: Usually, we tend to choose the cheapest offer. The question is how to ensure this is not risky: the quality may be low, and bidders who suffer losses can demand a higher price next time to cover them.
2 Problem Formulation

The multi-round procurement problem can be formulated as a formal game with complete information, in which each member takes part and offers strategies to locate the most fitting answer that keeps up their relationships and yields the most benefit.

• There is a list of necessary materials, divided into many packages according to the plan for the bidders.
MMAS Algorithm and Nash Equilibrium …
• The project will occur over a certain period. During that time, several rounds will be held; in each round the investor will buy one or many packages necessary for the project.
• The project has many participating bidders, including long-term partners and new collaborators.
• Each supplier is capable of providing certain materials based on its ability, and each has its own tactics, with information about prices and discounts over time.

In summary, the solution to the multi-round procurement problem is to add parameters that confirm a beneficial answer for all members of the project; specifically, it must answer the problems listed above. In most cases, any decision made to gain profit for one side results in the disagreement of the other. According to Nash equilibrium [5, 7], solving this problem brings a balanced result for all contestants. The strategies to reach this point are modeled as a group of tactics:

$G = \{S_o, S_c, F_o, F_c\}$    (1)
in which $S_o$ and $S_c$ are the sets of strategies of the project owner and the project bidders, respectively, and $F_o$ and $F_c$ are the corresponding benefits of the project owner and the project bidders. The net benefit $\Pi_i$ is calculated by

$\Pi_i = B_i - C_i$ at time $t_i$    (2)
in which $B_i$ denotes the net revenue and $C_i$ the cost of each phase of the project over a period of time $t_i$. At the beginning $t_0$, it is necessary to use the discount rate to calculate the decrement of the cash:

$\Pi_i^0 = \dfrac{B_i - C_i}{(1+r)^{t_i - t_0}}$    (3)

in which $r$ denotes the discount rate and $\Pi_i^0$ the net benefit from $t_0$ to $t_i$. Setting $t_0 = 0$, we have:

$\Pi_i^0 = \dfrac{B_i - C_i}{(1+r)^{t_i}}$    (4)

When the project is separated into many stages, the net benefit is calculated by:

$F = \sum_{i=0}^{N} \dfrac{B_i - C_i}{(1+r)^{t_i}} = \sum_{i=0}^{N} \dfrac{B_i}{(1+r)^{t_i}} - \sum_{i=0}^{N} \dfrac{C_i}{(1+r)^{t_i}}$    (5)
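As a concrete illustration of the discounting in Eqs. (4) and (5), the total net benefit can be computed as below. This is a minimal Python sketch (the paper's implementation is in Java); the phase revenues, costs, times, and the 10% discount rate are made-up numbers, not data from the paper.

```python
def discounted_net_benefit(phases, r):
    """Total net benefit F of a multi-stage project, as in Eq. (5):
    F = sum_i (B_i - C_i) / (1 + r)**t_i, with t_0 = 0 as in Eq. (4)."""
    return sum((b - c) / (1.0 + r) ** t for b, c, t in phases)

# Hypothetical project phases: (revenue B_i, cost C_i, time t_i in years).
phases = [(0.0, 100.0, 0), (80.0, 20.0, 1), (90.0, 10.0, 2)]
print(discounted_net_benefit(phases, r=0.10))  # net benefit at a 10% discount rate
```

Note that the up-front cost at $t_0$ enters undiscounted, while later phases shrink by $(1+r)^{t_i}$, which is exactly why long projects erode profit as discussed in the Introduction.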
The equilibrium solution is searched for by the MMAS algorithm, whose main loop can be summarized as follows:

Repeat
    Each ant constructs a solution; determine the iteration-best solution I_best;
    for each edge (i, j) do
        Update the pheromone trail following I_best: τ_ij ← (1 − ρ) τ_ij + Δτ_ij^best;
        Enforce the bounds τ_min and τ_max:
            τ_ij = τ_max if τ_ij > τ_max;
            τ_ij unchanged if τ_ij ∈ [τ_min, τ_max];
            τ_ij = τ_min if τ_ij < τ_min;
    end for
    If G_best = ∅ then G_best ⇐ I_best;
    If f(I_best) < f(G_best) then G_best ⇐ I_best;
    Update the pheromone trails following G_best: τ_ij ← (1 − ρ) τ_ij + Δτ_ij^best, enforcing the bounds τ_min and τ_max as above;
Until (i > N_max) or (optimal solution found);
s* ⇐ G_best;
Compute the fitness function f(s*) according to |A·F_c − B·F_o| + S²_{F_c} (in which A and B are constants, F_c is the total profit of the bidders, F_o is the benefit of the project owner, and S²_{F_c} is the variance of the bidders' benefits);
End
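The clamped pheromone update and the fitness f(s*) can be sketched as follows. This is a hedged Python illustration, not the authors' Java implementation; the trail matrix, the Δτ amounts, and the two-bidder profits are hypothetical, while ρ, τ_min, and τ_max follow Table 2.

```python
from statistics import pvariance

def update_pheromone(tau, delta_best, rho=0.5, tau_min=0.01, tau_max=0.5):
    """MMAS trail update: evaporate, reinforce along the best solution,
    then clamp, i.e. tau_ij <- (1 - rho)*tau_ij + delta_ij^best,
    kept inside [tau_min, tau_max]."""
    return [[min(tau_max, max(tau_min, (1 - rho) * t + d))
             for t, d in zip(row, drow)]
            for row, drow in zip(tau, delta_best)]

def fitness(bidder_profits, owner_benefit, A=1.0, B=1.0):
    """f(s) = |A*Fc - B*Fo| + S^2_Fc, where Fc is the bidders' total profit,
    Fo the owner's benefit, and S^2_Fc the variance of the bidders' profits.
    Small values mean the owner's and bidders' gains are balanced."""
    Fc = sum(bidder_profits)
    return abs(A * Fc - B * owner_benefit) + pvariance(bidder_profits)

tau = [[0.30, 0.45], [0.02, 0.50]]
delta = [[0.10, 0.00], [0.00, 0.30]]
print(update_pheromone(tau, delta))   # evaporated, reinforced, clamped trails
print(fitness([10.0, 14.0], 20.0))    # hypothetical two-bidder solution
```

The variance term penalizes solutions that favor one bidder over another, which is how the fitness pushes the search toward the balanced, Nash-equilibrium-style outcome described in Sect. 2.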
5 Experimental Evaluations

5.1 Datasets

We developed a Java program to solve the multi-stage procurement issue in the experiment. A sample data set was searched for and gathered as the material. Because of some troubles in collecting data, the project does not include statistics of the contractors; therefore, the author simulated contractors with figures estimated from data provided by some local companies. Experimental limits are set, and prolonged cases or issues that can delay projects are not considered. Table 1 shows the summary of the datasets, which are organized in a .json file with the structure given in the Appendix.
D.-N. Le et al.
Table 1 Datasets description

| Dataset | Time      | Budget       | Equipment category | Bidding information |
|---------|-----------|--------------|--------------------|---------------------|
| 1       | 2015–2020 | 39.8 Billion | 46 (H01–H46)       | 6 bidders           |
| 2       | 2018–2020 | 168 Billion  | 100 (H01–H100)     | 15 bidders          |
5.2 Algorithm Parameters

To evaluate the effect of our algorithm on the two case studies, we implemented it in Java with the parameters given in Table 2.
5.3 Case Studies

In the experiment, we installed and compared the performance of six algorithms: NSGA-III [15], ε-MOEA (Multi-Objective Evolutionary Algorithm) [16], GDE3 (the third evolution step of Generalized Differential Evolution) [17], PESA2 (Pareto Envelope-based Selection Algorithm 2) [18], ε-NSGA-II (Non-dominated Sorting Genetic Algorithm) [19, 20], and our MMAS algorithm on datasets 1 and 2. We compare them on two criteria: the payoff value of the solution and the runtime. The payoff of a solution is calculated by summing the "goodness" at each corresponding weighted goal; thus, the payoff value lies within [0, 1], and the closer to 0, the better the solution. The program was tested 10 times; the populations were randomly initialized with 100 individuals, and the maximum number of individuals created in each run is 10,000. The comparison of the payoff parameter and the runtime on datasets 1 and 2 is given in Tables 3 and 4. From the experimental results, the ε-NSGA-II algorithm gives the best payoff on both datasets (see Fig. 1), while PESA2, NSGA-III, and MMAS have the fastest running times. In addition, we have some comments and evaluations as follows: the order of runtimes is similar on the two datasets (see Fig. 2); in terms of runtime, both ε-MOEA and ε-NSGA-II are the worst-performing algorithms because ε-dominance calculations take much more time than conventional dominance checks.
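The payoff aggregation described above can be sketched as below. This is a minimal Python illustration under stated assumptions: the per-goal "goodness" values and the weights are hypothetical, since the paper does not publish them.

```python
def payoff(goodness, weights):
    """Weighted sum of per-goal 'goodness' values, each in [0, 1].
    With weights summing to 1 the payoff also stays in [0, 1];
    values closer to 0 indicate a better solution."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(g * w for g, w in zip(goodness, weights))

# Hypothetical goodness of one solution on three weighted goals.
print(payoff([0.30, 0.25, 0.28], [0.5, 0.3, 0.2]))
```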
Table 2 Parameters for MMAS algorithm

| Parameters                                 | Values                    |
|--------------------------------------------|---------------------------|
| Population size (number of ants)           | K = 100                   |
| Number of iterations                       | N_max = 500               |
| Heuristic constants                        | α = 1 and β = 10          |
| The trail evaporation rate                 | ρ = 0.5                   |
| Lower and upper bounds on pheromone values | τ_min = 0.01, τ_max = 0.5 |
Table 3 The comparison of payoff parameter and the runtime (s) on dataset 1

| Ord | NSGA-III (Payoff / Time) | ε-MOEA (Payoff / Time) | GDE3 (Payoff / Time) | PESA2 (Payoff / Time) | ε-NSGA-II (Payoff / Time) | MMAS (Payoff / Time) |
|-----|--------------------------|------------------------|----------------------|-----------------------|---------------------------|----------------------|
| 1   | 0.2961 / 4.197           | 0.2755 / 6.602         | 0.2805 / 5.981       | 0.2741 / 3.630        | 0.2744 / 8.954            | 0.2807 / 4.150       |
| 2   | 0.3322 / 4.178           | 0.2714 / 6.633         | 0.2799 / 5.992       | 0.2743 / 3.051        | 0.2765 / 8.921            | 0.2803 / 4.178       |
| 3   | 0.2911 / 4.209           | 0.2711 / 6.590         | 0.2799 / 5.974       | 0.2747 / 3.555        | 0.2801 / 8.933            | 0.2812 / 4.271       |
| 4   | 0.2966 / 4.166           | 0.2706 / 6.602         | 0.2811 / 6.000       | 0.2751 / 3.525        | 0.2809 / 9.102            | 0.2789 / 4.223       |
| 5   | 0.2979 / 4.178           | 0.2754 / 6.613         | 0.2821 / 5.981       | 0.2714 / 3.617        | 0.2712 / 8.990            | 0.2801 / 4.251       |
| 6   | 0.3025 / 4.184           | 0.2755 / 6.595         | 0.2797 / 6.112       | 0.2777 / 3.504        | 0.2708 / 9.051            | 0.2825 / 4.305       |
| 7   | 0.3105 / 4.189           | 0.2723 / 6.621         | 0.2797 / 5.969       | 0.2750 / 3.550        | 0.2714 / 8.915            | 0.2838 / 4.312       |
| 8   | 0.2923 / 4.171           | 0.2746 / 6.608         | 0.2810 / 5.980       | 0.2707 / 3.544        | 0.2721 / 8.956            | 0.2805 / 4.227       |
| 9   | 0.2940 / 4.190           | 0.2704 / 6.615         | 0.2807 / 5.987       | 0.2721 / 3.601        | 0.2725 / 8.937            | 0.2795 / 4.208       |
| 10  | 0.3025 / 4.178           | 0.2705 / 6.601         | 0.2805 / 5.994       | 0.2733 / 3.539        | 0.2708 / 8.929            | 0.2798 / 4.347       |
Table 4 The comparison of payoff parameter and the runtime (s) on dataset 2

| Ord | NSGA-III (Payoff / Time) | ε-MOEA (Payoff / Time) | GDE3 (Payoff / Time) | PESA2 (Payoff / Time) | ε-NSGA-II (Payoff / Time) | MMAS (Payoff / Time) |
|-----|--------------------------|------------------------|----------------------|-----------------------|---------------------------|----------------------|
| 1   | 0.2674 / 5.6390          | 0.2590 / 7.6990        | 0.2555 / 7.5150      | 0.2671 / 5.6690       | 0.2555 / 12.4990          | 0.2556 / 5.4451      |
| 2   | 0.2654 / 5.6360          | 0.2596 / 7.6960        | 0.2545 / 7.5170      | 0.2702 / 5.6470       | 0.2569 / 12.5010          | 0.2565 / 5.4512      |
| 3   | 0.2666 / 5.6330          | 0.2610 / 7.7210        | 0.2569 / 7.4860      | 0.2666 / 5.6210       | 0.2518 / 12.4200          | 0.2519 / 5.4723      |
| 4   | 0.2668 / 5.6230          | 0.2591 / 7.7100        | 0.2518 / 7.4970      | 0.2697 / 5.5870       | 0.2525 / 12.4250          | 0.2527 / 5.5031      |
| 5   | 0.2697 / 5.6990          | 0.2610 / 7.6550        | 0.2581 / 7.5250      | 0.2653 / 5.5920       | 0.2522 / 12.3870          | 0.2522 / 5.4680      |
| 6   | 0.2701 / 5.5870          | 0.2586 / 7.6470        | 0.2525 / 7.5290      | 0.2666 / 5.6210       | 0.2557 / 12.4650          | 0.2558 / 5.4125      |
| 7   | 0.2670 / 5.6210          | 0.2589 / 7.6960        | 0.2533 / 7.5330      | 0.2701 / 5.5880       | 0.2525 / 12.4690          | 0.2526 / 5.4211      |
| 8   | 0.2655 / 5.6520          | 0.2587 / 7.7100        | 0.2529 / 7.5290      | 0.2655 / 5.6490       | 0.2551 / 12.4700          | 0.2553 / 5.4275      |
| 9   | 0.2654 / 5.6270          | 0.2590 / 7.6990        | 0.2519 / 7.6000      | 0.2654 / 5.6760       | 0.2543 / 12.4340          | 0.2547 / 5.4112      |
| 10  | 0.2680 / 5.6190          | 0.2599 / 7.6870        | 0.2520 / 7.5550      | 0.2680 / 5.6960       | 0.2569 / 12.4530          | 0.2570 / 5.4421      |
Fig. 1 The comparison of payoff value between algorithms on the dataset 2
Fig. 2 The comparison of runtime value between algorithms on the dataset 2
The NSGA-III algorithm gives the worst results because it easily converges to local solutions. The ε-NSGA-II algorithm has good results on both datasets; using ε-dominance instead of conventional dominance is often costly in terms of time, but it is significantly more effective. Both ε-MOEA and PESA2 have in common that they divide the objective space into hyperboxes, so the algorithm efficiency depends on the width of the objective space; this is why their result order differs from the other algorithms across the two datasets. Similarly, MMAS and GDE3 have in common that selection is not performed through mating but is based on individual transformation, so their result order is similar on the two datasets.
6 Conclusions

In this paper, we combined a novel MMAS algorithm with Nash equilibrium theory to solve the problem of decision-making in the many stages of an auction. The performance of our algorithm is evaluated through numerical studies and compared with NSGA-III, ε-MOEA, GDE3, PESA2, and ε-NSGA-II with respect to payoff and runtime. Our solution supports decision-making in the multi-round procurement issue, and the approach proves the potential of the methodology for practical application in multi-stage procurement. Our approach is currently among the best-performing algorithms for this problem.

Acknowledgements This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.03-2019.10.
7 Appendix

The .json file data structure:

{
  "project_id": "",
  "inflation_rate": "",
  "start_date": "",
  "packages": [{
    "package_id": "",
    "execution_time": "",
    "joined_contractors": [x, xx, xxx],
    "products": [{"product_id": "", "quantity": ""}],
    "estimated_cost": ""
  }],
  "product": [{
    "product_id": "",
    "description": "",
    "unit": ""
  }],
  "contractors": [{
    "contractor_id": "",
    "description": "",
    "capacity": "",
    "relationship": "",
    "products": [{
      "product_id": "",
      "sell_price": "",
      "buy_price": ""
    }],
    "strategies": [{
      "strategy_id": "",
      "products": [{
        "product_id": "",
        "discounts": [{"from": "", "to": "", "rate": ""}]
      }]
    }]
  }]
}
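For illustration, a dataset file in this structure could be loaded and summarized as below. This is a Python sketch; the helper names and the sample values are assumptions for illustration, not part of the published datasets.

```python
import json

def load_dataset(path):
    """Load a procurement dataset stored in the .json structure above."""
    with open(path) as fh:
        return json.load(fh)

def summarize(data):
    """Count top-level entries, mirroring the columns of Table 1."""
    return {
        "packages": len(data.get("packages", [])),
        "products": len(data.get("product", [])),
        "contractors": len(data.get("contractors", [])),
    }

# A tiny in-memory sample following the appendix structure (made-up values).
sample = {
    "project_id": "P01", "inflation_rate": "0.03", "start_date": "2015-01-01",
    "packages": [{"package_id": "G1", "execution_time": "90",
                  "joined_contractors": [1, 2],
                  "products": [{"product_id": "H01", "quantity": "10"}],
                  "estimated_cost": "1.2"}],
    "product": [{"product_id": "H01", "description": "", "unit": "pcs"}],
    "contractors": [{"contractor_id": 1, "products": [], "strategies": []}],
}
print(summarize(sample))  # {'packages': 1, 'products': 1, 'contractors': 1}
```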
References

1. S.S. Nagabhushan, Design of two-stage bidding model for supplier selection. Int. J. Innovative Res. Sci. Eng. Technol. 2(7), 2676–2681 (2013)
2. L. Ji, T. Li, Multiround procurement auctions with secret reserve prices: theory and evidence. J. Appl. Econometrics 23(7), 897–923 (2008)
3. D. Wu, Estimation of procurement auctions with secret reserve prices (2015)
4. L. Ji, Three Essays on Multi-round Procurement Auctions (Doctoral dissertation) (2006)
5. C. Rao, Y. Zhao, S. Ma, Procurement decision making mechanism of divisible goods based on multi-attribute auction. Electron. Commer. Res. Appl. 11(4), 397–406 (2012)
6. D.C. Parkes, J. Kalagnanam, Models for iterative multiattribute procurement auctions. Manage. Sci. 51(3), 435–451 (2005)
7. N. Yang, X. Liao, W.W. Huang, Decision support for preference elicitation in multi-attribute electronic procurement auctions through an agent-based intermediary. Decis. Supp. Syst. 57, 127–138 (2014)
8. M. Bichler, K. Guler, S. Mayer, Split award procurement auctions: can Bayesian equilibrium strategies predict human bidding behavior in multi-object auctions? Prod. Oper. Manage. 24(6), 1012–1027 (2015)
9. B.N. Trinh, H.Q. Thang, T.L. Nguyen, Research on genetic algorithm and Nash equilibrium in multi-round procurement, in SoMeT (2017), pp. 51–64
10. T.Q. Huynh, N.B. Trinh, T.X. Nguyen, Nash equilibrium model for conflicts in project management. J. Comput. Sci. Cybern. 35(2), 167–184 (2019)
11. L. Gronbak, M. Lindroos, G. Munro, P. Pintassilgo, Basic concepts in game theory, in Game Theory and Fisheries Management (Springer, Cham, 2020), pp. 19–30
12. G. Schuh, S. Runge, Applying game theory in procurement: an approach for coping with dynamic conditions in supply chains. Contrib. Game Theory Manage. 7(0), 326–340 (2014)
13. D.N. Le, A new ant algorithm for optimal service selection with end-to-end QoS constraints. J. Int. Technol. 18(5), 1017–1030 (2017)
14. D.N. Le, G.N. Nguyen, V. Bhateja, S.C. Satapathy, Optimizing feature selection in video-based recognition using Max-Min Ant System for the online video contextual advertisement user-oriented system. J. Comput. Sci. 21, 361–370 (2017)
15. W. Mkaouer, M. Kessentini, A. Shaout, P. Koligheu, S. Bechikh, K. Deb, A. Ouni, Many-objective software remodularization using NSGA-III. ACM Trans. Softw. Eng. Methodol. 24(3), 17 (2015)
16. H. Li, Q. Zhang, Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II. IEEE Trans. Evol. Comput. 13(2), 284 (2009)
17. S. Kukkonen, J. Lampinen, GDE3: the third evolution step of generalized differential evolution, in 2005 IEEE Congress on Evolutionary Computation, vol. 1 (IEEE, 2005), pp. 443–450
18. K. Deb, A. Pratap, S. Agarwal, T.A.M.T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
19. K. Deb, M. Mohan, S. Mishra, Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evol. Comput. 13(4), 501–525 (2005)
20. D. Hadka, MOEA Framework: a free and open source Java framework for multiobjective optimization, version 2.11 (2015). http://www.moeaframework.org
Technology and Body Art: An Appraisal of Tattoo Renaissance Across Cultures Swati Samantaray and Amlan Mohanty
Abstract People across the globe have tattooed human skin as a means of communicating diverse ontological, psychological, and socio-cultural conceptions, including cultural identity, status and position, medicine, beauty, and supernatural protection. As a scheme for the communication of knowledge, tattooing persists as a visual lingo of the skin whereby culture is inscribed, experienced, and preserved in a myriad of ways. This paper evaluates the attitudes various cultures have had toward tattooing and how individuals have affected today's burgeoning acceptance of tattoos. It also tries to appraise the influence of technology on the tattooing industry: the way technology is redefining the art of tattooing in strange, new ways and how smart tattoos are used for practical applications in daily life.

Keywords Technology · Body art · Tattoos · Communication · Culture · Ornamentation
S. Samantaray (B) School of Humanities, Kalinga Institute of Industrial Technology Deemed to be University, Bhubaneswar 751024, India e-mail: [email protected]
A. Mohanty IIIT Bhubaneswar, Bhubaneswar, India e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_29

1 Correlation Between Culture, Communication, and Technology

Culture and communication are not isolated entities or areas; each is created in the course of a dynamic relationship with the other. Communication is a process by which a message is disseminated between persons and is a way of getting in touch with others with facts, ideas, thoughts as well as values. Furthermore, it serves as a bridge of understanding among people so that they can share what they feel and know. The way of life of a society or social group is denoted by culture, including aspects of society such as language, customs, dress, and the like. Stated simply, everything
which is socially learned, imbibed, and shared by the members of a society is culture. Any culture is as ‘exotic’ as any other. As Gunther Kress writes: Every cultural practice is a communicative event. Every act of communication is a cultural event. The structures, processes and communication are given by culture. Culture sets the ground entirely for communication, for what can be communicated, what is communicable, and for how it is communicated…anything outside the scope of communication is noncultural. [1]
Conceptually, culture is of two types—implicit and explicit cultures. Implicit culture constitutes the basic assumptions which define the meaning that a person attaches to things around him. The implicit culture produces norms and values, which then show in manifestations and visible artifacts (the explicit culture) such as language, dress, music, architecture, food, and the like. Culture varies from place to place and it also varies over time. Culture shapes the human experience. Jawaharlal Nehru in his ‘Visit to America’ says: If we seek to understand a people, we have to try to put ourselves, as far as we can, in that particular historical and cultural background. … It is not easy for a person of one country to enter into the background of another country. So there is great irritation, because one fact that seems obvious to us is not immediately accepted by the other party or does not seem obvious to him at all…. But that extreme irritation will go when we think … that he is just differently conditioned and simply can’t get out of that condition. One has to recognize that whatever the future may hold, countries and people differ … in their approach to life and their ways of living and thinking. In order to understand them, we have to understand their way of life and approach. If we wish to convince them, we have to use their language as far as we can, not language in the narrow sense of the word, but the language of the mind. That is one necessity. Something that goes even much further than that is not the appeal to logic and reason, but some kind of emotional awareness of other people. [2]
The relationship between communication and culture may be perceived in three different ways: communication as an expression of culture, culture as a communication phenomenon, and culture and communication as a reciprocal or mutual relationship. Cultures are fashioned through communication, which is the vehicle of human synergy or reciprocal action by which cultural components are shared; that is, cultures are shaped through the mode of human interaction in the course of which cultural characteristics (like customs, rituals, roles, laws, or any other pattern) are conceived and mutually shared. In other words, we may say that culture is formed, shaped, passed on, and learned through communication. The reverse is also the case: communication practices are largely constituted, shaped, and channeled by culture. Technology is rapidly revolutionizing the way we communicate. Technology and culture also significantly influence each other. When new technology is introduced into a society, the culture responds either in an affirmative or negative way and is consequently altered. Accordingly, as cultures change, so does the technology they develop. In other words, progress in technology has an effect on how cultures evolve. Hence, as cultures progress, they are likely to fashion new technology. "Technology is regarded as a superb equalizer and a mode
of getting in touch with people that may not have been possible ever before. Technology condones our skin color, age, disabilities, looks, size, economics, and even our gender” [3]. Modern technology has taken body art to new heights.
2 Prelude to the Origin and Application of Tattoos

With the changes in society, the body has become a focus of study in anthropology, history, and philosophy as well as visual media. Temporary or permanent beautification of the human body (or body art) is found in all countries. All societies are concerned with bodily appearance, and all take certain steps to embellish, or substantially alter, the natural bodily condition. For thousands of years, people across the world have engaged in body art, in other words, modifying the human body with tattoos, piercings, circumcision, subdermal implants, body painting, foot binding, and even altering the shape of the skull. In addition, modern medical technology has provided a new range of surgical procedures aimed at this goal. Tattoos signify a complex communication system (both literal and symbolic), embedded in a complex cultural and social system. Tattooing, a permanent form of body modification, designates the injection of coloring matter into the skin using needles or other equipment to create an ornamental design gratifying to the eyes. People have been etching tattoos into their skin since the beginning of time. These designs are used as amulets, status symbols, declarations of love, signs of religious beliefs, adornments, and even forms of punishment. The term tattoo probably has two main etymological origins: the first is the Polynesian word 'ta', meaning 'striking something', and the second is the Tahitian tattow, where ta refers to drawing on the skin or marking something. Tattooing in general is perceived and exercised by an assortment of people from the orient as well as the occident, and by both men and women belonging to all ages. While the historical record of tattoos can be mapped out in writings, physical proof of tattoos on humans predates written history.
In the book Prison Tattoos, Efrat states: Excavations in Europe have revealed tools from the Late Paleolithic era, between 38,000 and 10,000 B.C., that apparently were used for tattooing…Next to these tools, human silhouettes with engravings, which are assumed to represent the tattoos, were found…Mummies, above 7000 years old, found in Russian Steppe, were also tattooed and the famous mummy Otzi the Iceman, which dates from approximately 3000 B.C.E., was also tattooed. [4]
Tattoos have deep roots in folk art. Native people around the Pacific Rim have used tattoos for centuries. Decorative scarification is practiced across many areas of Africa. It is also said that tattooing appeared in ancient Greece and Persia, among the ancient Britons and Gauls, in America, and throughout Asia. The word 'tattow' initially appeared in Captain Cook's journal (July 1769). There are different types of tattoos: permanent tattoos (where a needle is used to insert colored ink), permanent make-up tattoos (everlasting tattoos which look like make-up
such as blush or eyeliner or lip liner), henna (also known as mehendi, a type of short-term tattoo), and temporary tattoo stickers (which last for a few days and contain colors permitted for use in cosmetics applied to the skin). Beyond being a fashion statement, tattoos serve many purposes. The designs may depend on the passion and sentiment of the person tattooed or on the convenience and familiarity of the tattoo specialist. Tattoos are done for decoration, personal satisfaction, attention-seeking behavior, identification with groups, and personal proclamations, including love. In ancient times, the Greeks and Romans used tattoos to brand slaves and criminals. "The institutional imposition of tattoos as a means of commodifying and objectifying people … the Nazi practice of tattooing Jews and other prisoners at the Auschwitz death camp with identification number" [5] is another exemplar. People also get tattooed to memorialize events; after 9/11, a number of people had images of the Twin Towers tattooed on their bodies. In several cultures, conventionally, the most attractive, good-looking, and gorgeous girls and women were tattooed to demonstrate for the rest of their lives that they were once of exceptional beauty (considered to be a boost in status). The Maori (native people of New Zealand) are famous for the Moko [6], a facial tattoo (Fig. 1). One of the best examples of the Moko is reflected in the Lee Tamahori-directed film Once Were Warriors. Many of the characters in that film were covered in ritualistic tattoos, and affection for the body art subsequently spread. The traditional tattooing in Hawaii, called Kakau, was used for physical and spiritual protection. A tattooed tongue (in Hawaii) was a sign of grief. In India, tattooing is a part of folk and tribal art. In the northern and north-western regions, the tradition of tattooing has been
Fig. 1 ‘Moko’. Source Photo courtesy Nicolas Morlet [6]
prevalent among the Santhals and the Bhils, the Kanbis and the Warlis in the Gujarat region, the Banjaras of Rajasthan, and the Kondhs of Odisha. Tattooing is also an integral part of the Baiga tribes. The tattoos of native or aboriginal cultures were customarily designed using only black ink and were executed with a hollow needle prepared from objects like bamboo, bone, or porcupine quill, along with other natural substances. Medicinal tattoos have been documented in communities for the treatment of joint-related conditions such as rheumatism. Nomadic communities used tattoos as tags of identity, ensuring their recognition since they had to wander from place to place. The Singpho community (belonging to Assam and Arunachal Pradesh) had a definite set of laws meant for each gender: the women who were married tattooed their legs (usually from the ankles to the knees), while the married men tattooed their hands; the unwed Singpho girls were barred from having tattoos. The Samoan Tatau portrays social class. In some Polynesian societies, tattoos identified clan or familial connections. Claude Lévi-Strauss, a French anthropologist, mentions in his work The Raw and the Cooked that tattoos transform us from 'raw' animals into 'cooked' cultural beings [7]. Irezumi, a style of Japanese body tattoo, was inspired by Edo-period woodblock artists (1600–1868); this method of tattooing considered human flesh as a canvas and used many woodblocking tools (like chisels and gouges) for transferring designs to bodies. The artists used a typical ink named Nara ink, which changes into a blue-green color beneath the skin. In many cultures, tattoos are regarded as protective amulets. Jeanne Nagle, in the book Why People Get Tattoos and Other Body Arts, writes: …tattooing is not only about personal expression - it can also be a part of people's faith. These tattoos can be a religious symbol or even quotes from sacred texts.
In some faith cultures, tattooing is discouraged, while in others it is a vital part of a person’s spiritual journey. [8]
In modern-day Indian culture, people may bear a tattoo of spiritual symbols such as a cross or a quote from scriptures in order to identify with other members of the same religion. Yudit Kornberg Greenberg (in The Body in Religion: Cross-Cultural Perspectives) writes about Buddhism and tattoos thus: … there is an annual Buddhist tattoo festival in Thailand, where Thai Monks pray while performing the ritual of tattooing using their tools to mark the images into the flesh. This ritual renders tattoos as amulets, protecting the wearer from demons and providing spiritual strength in the face of crisis. [9]
Psychologically speaking, in today’s culture, it represents a person taking control and ownership over his or her own body, making a personal visual statement. Fashion trends and tastes have changed, and today, tattoos have become a mainstream fashion accessory. Tongue tattoos are also becoming a craze among body art fans, as it is less painful. A type of contemporary tattooing known as ‘macabre’ (or horror tattoos connected to demonology) features demons, zombies, and monsters, visually inspired by urban folklore, horror movies, and books connected to dark and depressive stories, presented in a morbid way by and large.
Our skin has a biological and cultural life; it becomes a site for the projection and exposure of cultural investments. Mentioning the merits of tattooing, Bonnie Graves, in the book Tattooing and Body Piercing, writes: Recently, tattoos have come into use for both medical and nonmedical cosmetic reasons. Tattooing is used to cover up port-wine stains on a person's face…Tattooing also is used to color the skin of people with vitiligo…Tattooing also can be used for permanent makeup such as eyeliner. [10]
Tattoos are artworks which can be used as a medium to flaunt our unique sense of individuality to the world. Ancient cultures employed tools like rose thorns, shark's teeth, and pelican bones for tattooing. Natural pigments such as red ochre as well as soot were used to provide the color. Conventional Thai tattoo tools were prepared from bamboo needles. In 1876 Thomas Edison invented the electric pen; thereafter, Samuel O'Reilly created the first electric tattoo machine, taking ideas from Edison's blueprints. O'Reilly improved the device (as compared to the Edison version), and the new thing he did was the addition of an ink reservoir. The tattoo machine used today, a dual-coil reciprocating engraver, was first patented by Charlie Wagner. In the recent past, tattoo artists used 3D printers to create tattoos. Presently, artists are using complex robotics to ink human skin: robots are now making their way into the tattoo industry, and skin art is getting done by robots too! Moreover, tattoo artists are on an expedition to explore new connections with sound and tattooing from a distance. By means of digital tools and with the changes in society, we find the body has become a focus of visual media. A Berlin-based technology startup, 'Dalia Research' [11], conducted a survey on tattoos by age in April 2018, taking into account 9054 Internet-connected respondents from countries including the UK, USA, Italy, Canada, Argentina, Australia, Brazil, Germany, Denmark, Spain, France, Greece, Israel, Mexico, Russia, Sweden, Turkey, and South Africa. The survey revealed that the largest share of respondents having tattoos (45%) are in the age group 30–49, compared to 32% of respondents in the age group 14–29 years and 28% of respondents over 50 years of age (Fig. 2).
3 Technology and Skin Research: Duoskin and Biomechanical Tattoos

Looking at the history and classical anthropological interpretations of tattoos, it is said that tattoos have been interpreted as signs of backwardness, subordination, and ugliness; at the same time, they have been evaluated as progressive, emancipated, and beautiful too: tattooing has reinvented itself. Moving into the twenty-first century, we find that tattoo culture is being integrated with technology; modernization has majorly changed the essence of tattooing. Max Belkin, a relational psychoanalyst and psychologist, opines:
Fig. 2 ‘Tattoos by Age’. Source Dalia research [11]
Tattoos blossom at the crossroads of bodies and art, the physical and the imaginary. Their colors, shapes, and symbols pulsate with memories, meanings, and emotions. Above all, body art captures and reveals unspoken aspects of human relationships, both past and present. [12]
Digital Humanities uses digital tools and software for understanding and analyzing research questions related to humanities and interprets how digital methodologies are capable of enhancing disciplines such as Literature, History, Classical Studies, Art, Music, and many additional disciplines. The digital presence and the prevalence of tattoos on social media and social networking sites (including Facebook, Instagram, Pinterest) take tattoos to new heights, connecting to a vast audience. The Director of the Museum of Archeology and Anthropology (at Cambridge University), and the author of the book Body Art, Professor Nicholas Thomas has reported that tattoos have made a powerful and influential comeback: Because of advances in technology and medical science, people no longer understand the body as something natural that you’re born with and live with. Instead, we understand it much more as something that is changeable and mutable. [13]
Technology is redefining the art of tattooing in strange, new ways. Contemporary scientists have developed ‘soundwave tattoos’ (based on waveforms), ‘duoskin’ (a touch-based interface for controlling computer, TV, or smart device), and tattoos with wearable sensor technologies which are needle-free (Fig. 3).
292
S. Samantaray and A. Mohanty
Fig. 3 ‘DuoSkin’. Source Photo courtesy Cindy Hsin-Liu Kao [14]
“We are in the midst of exploring computer and drone circuitry that could be woven into ‘tattoos’ and used to open car doors … or inject insulin”, writes Allan Dayhoff [15]. Margo DeMello discusses ‘biomechanical tattoos’ (developed in the 1980s), stating that these tattoos use “fine black and gray lines to create the appearance of circuitry or mechanical parts, often blended with organic material” [5]. DeMello also shares information about ‘blacklight tattoos’ (also known as UV tattoos, which are visible under blacklight). Andrew Reilley, in his work Key Concepts for the Fashion Industry, opines that as “body modifications are becoming more fashionable and technology is advancing, some people are becoming cyborgs” [16]. He goes on to describe ‘QR tattoos’, where “the rectangular QR code is tattooed onto the skin and when read by a smartphone’s QR code reader, it links to a website. The contents of the website can be changed…so that the tattoo is continually evolving and updated” [16].
Data source tattoos can be loaded with information, somewhat comparable to a flash drive; they work with all sorts of technologies and are used for booking movie tickets, shopping, and similar activities. A trackpad tattoo is used with a computer to move the cursor, change slides during a PowerPoint presentation, or control drones for the military. Digital art is a type of DuoSkin tattoo that can display emotion based on biometrics; digital art tattoos can be shaped into anything. Michael Irving, in his article ‘Color-changing tattoos monitor blood glucose at a glance’, writes: Tattoos are fast becoming more than just a means of self-expression: soon they could be used for more practical applications, like tracking blood alcohol levels or turning the skin into a touchscreen. Now, a team from Harvard and MIT has developed a smart ink that could make for tattoos that monitor biometrics like glucose levels, and change color as a result. [17]
A special type of ink, termed ‘Dermal Abyss’, is augmented with colorimetric and fluorescent biosensors that change color depending on the pH and on the sodium, glucose, and hydrogen ions available in the interstitial fluid of our bodies. Still, there is the issue of commercial feasibility: the general public is probably not yet eager about the thought of an electronic tattoo imprinted
on their skin. Electronic tattoos are also not easy to design, as they are costly to fabricate. A key benefit of electronic tattoos is their flexibility, but this requires all the segments of the circuit to be flexible, which makes their design complex and hence time-consuming. However, scientists are trying to make the fabrication of flexible digital tattoos effortless, practical, handy, and economical. Products like Wacom tablets (hardware input devices used for digital sketching), Photoshop (photo-editing software), and other digital design tools assist tattoo artists and their clients. The software is used to convert tattoo drawings and patterns into digital files that can be downloaded to the device; the limb of the user is then inserted into the printer, and the needle draws the design into the skin. Contemporary tattoo artists also use digital 3D models of the body part where a customer wants the design to be applied: they digitally superimpose the tattoo design over the body part, edit it as required (to make the tattoo look exactly the way the customer desires), and show the customer how the finished tattoo would look once done. The media certainly plays a role in boosting the tattoo trend and in shaping the social reaction to the increased use of tattoos; it often regulates how much attention tattoos receive and how viewers process and evaluate that information. Modern tattoo artists can showcase their designs through social media, and the more people see tattoo artists and their work through digital media, the more socially accepted tattoos become. Tattoo artists are no longer restricted to sketching designs on the skin; they can now use digital technology to create new forms of tattoo art. New technology is endeavoring to fuse tattoos and computers.
Since the 1990s, tattoos have appeared in every sector of the media, including fashionable periodicals and magazines, scholarly literature, movies, and the entertainment industry at large. Biomechanical tattoos (which developed in the 1980s and were influenced by the art of H. R. Giger, who created the set designs for the 1979 film Alien) make the wearer appear like a cyborg, that is, a human–machine hybrid, as if there were machinery beneath the skin. These tattoos are crafted so that they resemble circuitry, often mingled with organic material.
4 Environmental Impacts of Tattoos
If the skin, the natural barrier of the body, is marred or wounded, there is a risk of contamination. Many people are still not conscious that the ink they permanently inject into their dermis is not eco-friendly. Professional tattoos use a variety of colors and a range of pigments. Tattoo ink is a solution comprising a carrier and a colorant. The carrier is a fluid, containing liquids such as glycerine, water, and isopropyl alcohol, that transports the colorant to the injection site. Other carrier ingredients may include dangerous substances such as formaldehyde, methanol, and other aldehydes, and the inks used for tattoos may contain cancer-producing substances. Health risks potentially arising from tattoo injections
include inflammation, granulomas, keloids, and blood-borne diseases such as methicillin-resistant Staphylococcus aureus infection, hepatitis B, and hepatitis C; the American Academy of Dermatology has further confirmed this. Tattooed persons may also develop hypertrophic and keloid scars. Moreover, tattoo inks and aftercare products are often made with animal products: bone char (burnt animal bones), glycerin from animal fat, gelatin from hooves (made from the boiled connective tissue of cows and pigs), or shellac (a type of resin secreted by beetles). Stencil papers are often made with lanolin, a substance derived from sheep’s wool. Permanent tattooing has significant disadvantages, being associated with infection, pain, and social stigma, and such tattoos may no longer seem cool when fashion changes; they have to be surgically removed. Tattoo dyes should therefore be subject to testing or licensing by health authorities. Nature, as we know, has always been a source of inspiration for art. Contemporary body art professionals are now turning to animal-free ink to create beautiful vegan tattoos, and tattoo lovers are seeking vegetable-based non-toxic inks. Ministries of health worldwide are developing strict regulations on tattoo ink manufacturing in an attempt to eliminate carcinogenic ingredients from tattoo inks.
5 Conclusion
Tattooing has traveled through the shadowy fringes of society and re-emerged in the radiance of the mainstream. Technological innovation is not only responsible for progress in society; it has also brought freshness to the art of tattooing, whose essence, mysterious duality, and permanence have continued to mesmerize people throughout the globe. Body art (such as tattoos, body piercings, branding, scarification, dermal anchors, and three-dimensional art or beading) needs to be preserved as text on the skin. Tattooing, with its resurgence in contemporary culture, has steadily remained a long tradition whose persistence appears inevitable. Tattooing is a very personal choice, and there are numerous reasons why people choose it. It not only makes a genuine statement about who we are; it is also imperative to understand the reasons for getting a body tattoo and to know how to obtain body art that does not impair our health. Owing to the influence of digital technologies, the skin has become a link between the self and the outer world.
References
1. G. Kress, Communication and Culture: An Introduction (UNSW Press, Australia, 1988)
2. R.L. Ahuja, Nehru: His Philosophy of Life and Education (Vishva Vidyalaya Prakashan, Delhi, 1965)
3. J.R. Mohanty, S. Samantaray, Cyber Feminism: Unleashing Women Power through Technology. Rupkatha J. Interdisc. Stud. in Humanities. 9(2), 328–336 (2017)
4. E. Shoham, Prison Tattoos: A Study of Russian Inmates in Israel (Springer International Publishing, Heidelberg, 2015)
5. M. DeMello, Inked: Tattoos and Body Art Around the World (ABC-CLIO LLC, California, 2014)
6. N. Morlet, Moko. https://nicolasmorlet.artstation.com/projects/3dGdY
7. C. Levi-Strauss, The Raw and the Cooked (Harper & Row, New York, 1969)
8. J. Nagle, Why People Get Tattoos and Other Body Arts (The Rosen Publishing Group Inc, New York, 2011)
9. Y.K. Greenberg, The Body in Religion: Cross-cultural Perspectives (Bloomsbury Publishing, New York, 2017)
10. B. Graves, Tattooing and Body Piercing (Capstone Press, Minnesota, 2000)
11. Dalia Research, Who has the Most Tattoos? It’s not who you’d expect. https://medium.com/daliaresearch/who-has-the-most-tattoos-its-not-who-you-d-expect-1d5ffff660f8 (2018)
12. M. Belkin, What do Tattoos Mean? https://www.psychologytoday.com/blog/contemporary-psychoanalysis-in-action/201311/what-do-tattoos-mean (2013)
13. J.W. Simons, Ink with Meaning: What We Can Learn from the Tattoos of Our Ancestors. https://edition.cnn.com/style/article/what-we-can-learn-from-the-tattoos-of-our-ancestors/index.html (2015)
14. H.L.C. Kao, DuoSkin: Rapidly Prototyping On-Skin User Interfaces Using Skin Friendly Materials. http://duoskin.media.mit.edu/duoskin_iswc16.pdf (2016)
15. A. Dayhoff, God & Tattoos: Why Are People Writing on Themselves? Lulu.com (2016)
16. A. Reilley, Key Concepts for the Fashion Industry (Bloomsbury Publishing, London, 2014)
17. M. Irving, Color-changing Tattoos Monitor Blood Glucose at a Glance. https://newatlas.com/dermal-abyss-smart-tattoo/51572/ (2017)
Protection of End User’s Data in Cloud Environment K. Chandrasekaran, B. Kalyan Kumar, R. M. Gomathi, T. Anandhi, E. Brumancia, and K. Indira
Abstract Reserved and on-demand instances are generally offered by cloud service providers. Hybridizing the two options can cost considerably more when renting resources from the cloud. Nevertheless, it is a notable challenge to choose the fitting proportion of reserved and on-demand resources with respect to customers’ requirements. In this paper, the workflow scheduling problem with both reserved and on-demand models is considered. The objective is to minimize the total rental cost under deadline constraints. The considered problem is illustrated numerically. An iterative earliest-completion-time procedure is proposed to build schedules for the workflows. Four distinct rules are used to create initial task-allocation plans. Types and amounts of resources are determined by an available-time-block-based schedule-improvement component. New sequences are generated by a variable neighborhood search strategy. Experimental and real-case comparisons show that the proposed algorithm produces considerable cost savings compared with algorithms using only on-demand or only reserved models. Keywords Advanced encryption standard · Safety in cloud storage · GPGPUs · DCT
1 Introduction
With the growth and improvement of personal computers and cloud computing technology [1], the recent trend is to outsource information storage and processing to cloud-based services. Cloud-centered services for individual end users are gaining popularity, particularly for data storage [2]. Relying on huge storage space and strong communication channels, cloud-based providers such as Dropbox, Salesforce, or SAP are offering individual customers, for all
K. Chandrasekaran · B. Kalyan Kumar · R. M. Gomathi (B) · T. Anandhi · E. Brumancia · K. Indira
Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_30
intents and purposes, unlimited and low-cost storage space. This situation raises issues such as the trustworthiness of cloud service providers. Various information safety and privacy incidents have been found in present cloud services [3, 4]. On the one hand, cloud service providers face a tremendous number of external attacks: in 2018, a total of 1.5 million SingHealth patients’ non-medical personal data were stolen from the health system in Singapore [5]. On the other hand, cloud service providers cannot be totally trusted either; personal information may be abused in a malicious manner, as in the Facebook and Cambridge Analytica data scandal, which affected 87 million users in 2018 [6]. Thus, it becomes increasingly important for end users to efficiently protect their data (documents, pictures, or videos) independently of cloud service providers. One reasonable course of action is to protect data on a secure end-user device before outsourcing it to the cloud, which ordinarily means standard ciphers such as AES. However, encryption algorithms move the protection of the information onto safeguarding the keys and thus introduce key-management issues: once the keys are revealed, data safety is undermined. Worse still, end users often have no good cryptographic practice and tend to reuse the same keys for different pieces of data, so a single key exposure can lead to a massive amount of information leakage. Thus, besides ciphers, additional data-protection mechanisms remain critical to support such circumstances.
2 Related Work
Some additional criteria will likewise be used to compare our outcomes with existing approaches. For multimedia, two basic criteria for selective encryption strategies are histogram analysis and correlation analysis. However, the criteria for assessing information-protection frameworks should be relaxed according to the practical use case, such as the secure storage of data from end users to the cloud described here. For example, performance speed needs to be evaluated on practical hardware configurations for the different encryption algorithms. The security level is likewise evaluated with respect to the purpose of the system. Data integrity, a crucial requirement in practice, is also essential to assess. For the secure data storage scenario from end users to the cloud, optimizing the storage distribution and protecting against error propagation are also crucial. For assessing execution speed, it is important to first consider whether, at the system level, there are extra pre-processing steps, such as the DCT technique. For that approach, the DCT-based pre-processing step alone is slower than using AES on the whole data on a modern CPU [7], leading to performance issues that are often not considered. Such issues are frequently overlooked by transform-based selective encryption (SE) techniques [8, 9]. General-purpose GPUs (GPGPUs) are used to accelerate computing tasks, and execution times are measured to demonstrate the efficiency compared with the advanced encryption standard (AES). In this paper, an intensive
security analysis is carried out to show the high security level achieved by protecting both the private parts and the public parts.
3 Literature Survey
As a rapidly developing technology, smart grid networks (SGNs) need to be widely adopted by present power-generation infrastructure to achieve high-performance power-control schemes. Wireless smart grid networks (WSGNs) have enabled many flexible power-administration solutions without the limitations of a wired infrastructure. The cognitive radio network (CRN) is one of the most widely deployed wireless networking strategies, but communication security is a main concern when CRNs are used in wireless smart grid networks. At present, jamming and spoofing are two common attack methods that are active in deployments of wireless SGNs using cognitive radio networks [18]. One work proposes an attack approach, “maximum attacking strategy using spoofing and jamming” (MASS-SJ), which uses an optimal power supply to maximize the adversarial effects; spoofing and jamming attacks are launched in a dynamic manner in order to interfere with the maximum number of signal channels [17].
In another work, a contrast-media diffusion simulation algorithm is proposed for minimally invasive vascular interventions. Based on smoothed particle hydrodynamics (SPH), the algorithm can be divided into two parts: fluid interaction and fluid–solid interaction. In the fluid interaction, an adsorption model of iodine elements is built, and the neighboring blood-flow elements can be captured rapidly by a Eulerian framework and spatial sparse hashing. In the fluid–solid interaction, a Coulomb friction model is used to handle the frictional contact between the fluid particles and the inner walls of the vessel. To improve computational speed [10], the algorithm uses multi-thread matching technology to address the irregular diffusion of contrast media on the compute unified device architecture (CUDA).
The experimental results show that this algorithm is feasible and can significantly improve the rendering effect of vessels in real time, particularly the small vessels [16]. Experimental data have been presented that clearly show the extent to which peak signal-to-noise ratio (PSNR) is used as a video quality measure [19]. However, when the content is changed, the correlation between perceptual quality and PSNR is greatly diminished; thus, PSNR cannot be a reliable procedure for assessing video quality across diverse video content, as stated by M. Ghanbari in “Scope of Validity of PSNR in Image/Video Quality Assessment” [11]. Another work reviews some of the leading-edge practices in these fields and presents opportunities for the future growth of cloud computing for emerging mobile cloud apps [12]. The current cloud auction system often overlooks resource sharing and fails to deliver dynamic resources to cloud users. An online auction-based job model is
proposed to reduce the number of auction rounds and improve social welfare with the same competitive ratio and truthfulness [13]. A novel energy-efficient secured routing protocol for the software-defined Internet of Vehicles, using a restricted Boltzmann algorithm, detects hostile routes with lower delay, less energy, and higher performance compared with traditional routing protocols [14]. The static sensor node gathers the accumulated information from the other nodes in its zone of reference (ZOR) and sends this gathered information to an AUV when it enters the 3-D ZOR, thereby reducing the energy used for transmission [15, 20].
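The PSNR measure questioned in the survey above is straightforward to compute. The sketch below is a minimal illustration, assuming flat 8-bit pixel sequences (peak value 255); the function name is ours, not from any cited work.

```python
import math

def psnr(orig, recon, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    if mse == 0:
        return float("inf")  # identical signals: PSNR is unbounded
    return 10 * math.log10(max_val ** 2 / mse)

assert psnr([0, 0, 0], [0, 0, 0]) == float("inf")
assert abs(psnr([255, 0], [254, 1]) - 48.1308) < 0.01  # MSE = 1
```

As the survey notes, a high PSNR does not guarantee high perceptual quality across different content, which is why it is criticized as a cross-content quality measure.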
4 Existing System
One practical solution for securing data on the trusted end user’s device before outsourcing to the cloud is the use of traditional ciphers such as the advanced encryption standard (AES). However, encryption relocates the protection of the information to safeguarding the key, which in turn motivates selective encryption (SE). SE is an innovative trend in image and audio-visual content security: it consists of encrypting only a subsection of the data. The intention of SE is to shrink the volume of data to encrypt while preserving an adequate level of protection.
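The idea of selective encryption, encrypting only a subset of the data, can be sketched as follows. This is a toy illustration only: the SHA-256-based keystream stands in for a real cipher such as the AES used in the paper, and the function names and the 25% ratio are our assumptions.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key by hashing a counter (toy, NOT AES)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def selective_encrypt(data: bytes, key: bytes, ratio: float = 0.25) -> bytes:
    """Encrypt only the leading `ratio` fraction of the payload; XOR is its own inverse."""
    cut = int(len(data) * ratio)
    head = bytes(b ^ k for b, k in zip(data[:cut], keystream(key, cut)))
    return head + data[cut:]

msg = b"confidential header | bulk body that stays plaintext"
enc = selective_encrypt(msg, b"secret-key")
assert enc[13:] == msg[13:]                          # tail left in the clear
assert selective_encrypt(enc, b"secret-key") == msg  # decryption = re-encryption
```

The design choice SE rests on is visible here: only the sensitive prefix pays the encryption cost, while the bulk of the payload is stored untouched.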
5 Proposed System
The proposed technique is implemented in order to establish the desired level of security. The private fragment of a data chunk is intended to stay securely protected. In this paper, encryption is done using advanced encryption standard 128 (AES-128), which can, however, be substituted with other encryption procedures for flexibility, as shown in Fig. 1. AES is supported by the National Institute of Standards and Technology. The system configuration is the conceptual model that describes the structure, behavior, and further views of a system. An architecture description is a formal delineation and depiction of a system, organized in a way that supports reasoning about the structures and behaviors of the system. A system configuration can include system components and sub-systems [20] that collaborate to execute the overall system. There have been attempts to formalize languages to describe system architecture; collectively these are called architecture description languages.
Fig. 1 Protection and cloud storage architecture
6 Module Descriptions
6.1 UI Design
The UI design is the user interface, which is the first module of this paper. This segment is designed for safety purposes. In this module, the graphical user interface (GUI) widget toolkit Swing is used for creating the screens. At this point, we support client login and server confirmation.
6.2 File Upload
Users log in to their account and upload a file or picture, and those files and pictures are encrypted and stored on the admin side, as shown in Figs. 2 and 3. Even the uploading user cannot access them before the admin approves.
6.3 Store Data in Public and Private Cloud
In this part, the uploaded file is stored under two clouds: a public cloud and a private cloud. The file is split and the parts are stored under the public and private clouds, as shown in Fig. 4. In the public cloud, we can view the file that was uploaded, but in the private cloud we cannot access the file, because the uploaded file is encrypted there.
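A minimal sketch of the splitting step this module describes. The 50/50 split ratio, the function names, and the integrity digest are illustrative assumptions, not the paper's actual procedure; in the real system the private fragment would additionally be AES-encrypted.

```python
import hashlib

def split_file(data: bytes, private_ratio: float = 0.5):
    """Split a payload into a private fragment (destined for the private cloud)
    and a public fragment, plus a digest so reassembly can be verified."""
    cut = int(len(data) * private_ratio)
    digest = hashlib.sha256(data).hexdigest()
    return data[:cut], data[cut:], digest

def reassemble(private_part: bytes, public_part: bytes, digest: str) -> bytes:
    """Rejoin the fragments, refusing if they do not match the recorded digest."""
    data = private_part + public_part
    if hashlib.sha256(data).hexdigest() != digest:
        raise ValueError("fragments do not match the original file")
    return data

blob = b"0123456789abcdef"
priv, pub, tag = split_file(blob)
assert priv == b"01234567" and pub == b"89abcdef"
assert reassemble(priv, pub, tag) == blob
```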
Fig. 2 File upload module: register
Fig. 3 File upload login
6.4 User Requesting File from Cloud
In this module, a user requests a file that was uploaded by another user (the owner); the user cannot know whether the file is under the private or the public cloud, as shown in Fig. 5.
Fig. 4 Store data in public and private cloud module
Fig. 5 User requesting file module
6.5 Response for the Requested File
In this part, the owner gives a response to the file request made by the user. The owner knows whether the file is stored under the private or the public cloud.
6.6 View/Read File
For reading, the uploaded file is divided into four fragments. One must be the proprietor of the file, or else one must know the four distinct keys that have been distributed by a random algorithm. After viewing the file, it can also be downloaded; with an incorrect key the content cannot be exposed.
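The four-fragment, four-key idea can be illustrated with simple XOR secret sharing, in which all shares are required to reconstruct the secret. This is an illustrative stand-in for the paper's "random algorithm", which is not specified in detail; function names are ours.

```python
import os

def make_shares(secret: bytes, n: int = 4):
    """Split a secret into n shares; all n are required to reconstruct it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:                      # last share = secret XOR all random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares):
    """XOR all shares together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = b"16-byte-demo-key"
parts = make_shares(key, 4)
assert combine(parts) == key   # with any share missing, the key is unrecoverable
```

Because n − 1 shares are uniformly random, any subset smaller than n reveals nothing about the secret, matching the requirement that a viewer must hold all four keys.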
7 Conclusion
In this paper, we presented a solution that lets end users exploit low-cost cloud storage services while keeping their data secure. Our technique can be applied to many diverse data types and meaningfully upgrades the idea of selective encryption by introducing fragmentation and dispersion procedures. The experimental and theoretical outcomes have shown that our technique can deliver a high level of security with resistance against propagation errors. We also demonstrated fast runtimes on different personal computer platforms, with practical strategies and applications based on GPGPU acceleration. In summary, we proposed a protected and capable data security technique for end users to safely store their data in the cloud.
8 Future Enhancement
We know that the encryption used a shared key, but we do not know exactly how it works, which limits practical applications. However, encryption based on machine learning is enough to make us wonder where encryption may go in the coming years. As technology progresses, so does our capability to encrypt data, with neural networks now capable of learning how to keep data secure. With so much innovation at our fingertips, “Davey Winder discovers where else encryption might go in the future.”
References
1. F. Hu, M. Qiu, J. Li, T. Grant, D. Taylor, S. McCaleb, L. Butler, R. Hamner, A review on cloud computing: design challenges in architecture and security. J. Comput. Inf. Technol. 19(1), 25–55 (2011)
2. H. Li, K. Ota, M. Dong, Virtual network recognition and optimization in SDN-enabled cloud environment. IEEE Trans. Cloud Comput. (2018)
3. Y. Li, W. Dai, Z. Ming, M. Qiu, Privacy protection for preventing data over-collection in smart city. IEEE Trans. Comput. 65(5), 1339–1350 (2016)
4. K. Gai, K.-K.R. Choo, M. Qiu, L. Zhu, Privacy-preserving content-oriented wireless communication in internet-of-things. IEEE Internet of Things J. 5(4), 3059–3067 (2018)
5. S. Hambleton et al., A glimpse of 21st century care. Austral. J. General Pract. 47(10), 670–673 (2018)
6. O. Solon, O. Laughland, Cambridge Analytica closing after Facebook data harvesting scandal. The Guardian (2018)
7. H. Qiu, G. Memmi, Fast selective encryption methods for bitmap images. Int. J. Multimedia Data Eng. Manage. (IJMDEM) 6(3), 51–69 (2015)
8. A. Pommer, A. Uhl, Selective encryption of wavelet-packet encoded image data: efficiency and security. Multimedia Syst. 9(3), 279–287 (2003)
9. N. Taneja, B. Raman, I. Gupta, Selective image encryption in fractional wavelet domain. AEU-Int. J. Electron. Commun. 65(4), 338–344 (2011)
10. A. Massoudi, F. Lefebvre, C. De Vleeschouwer, B. Macq, J.-J. Quisquater, Real-time simulation of contrast media diffusion based on GPU. EURASIP J. Inf. Secur. 2008(1) (2008)
11. Q. Huynh-Thu, M. Ghanbari, Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44(13), 800–801 (2008)
12. H. Li, K. Ota, M. Dong, A. Vasilakos, K. Nagano, Cloud computing for emerging mobile cloud apps. IEEE Trans. Cloud Comput. (2017)
13. H. Natarajan, P. Ajitha, An adaptive approach for dynamic resource allocation in cloud service. Int. J. Control Theo. Appl. 9(10), 4871–4878 (2016)
14. K. Indira, P. Ajitha, V. Reshma, A. Tamizhselvi, An efficient secured routing protocol for software defined internet of vehicles, in 2nd International Conference on Computational Intelligence in Data Science, Proceedings (2019)
15. R.M. Gomathi, J. Martin Leo Manickam, Energy efficient static node selection in underwater acoustic wireless sensor network. Wireless Pers. Commun. 107(2), 709–727 (2019). https://doi.org/10.1007/s11277-019-06277-2
16. A. Velmurugan, T. Ravi, Alleviate the parental stress in neonatal intensive care unit using ontology. Ind. J. Sci. Technol. 9, 28 (2016)
17. N. Srinivasan, C. Lakshmi, Stock price prediction using rule based genetic algorithm approach. Res. J. Pharm. Technol. 10(1), 87–90 (2017)
18. V. Vijeya Kaveri, V. Maheswari, A framework for recommending health-related topics based on topic modeling in conversational data (Twitter). Cluster Comput. 22(5), 10963–10968 (2017). https://doi.org/10.1007/s10586-017-1263-z
19. K. Rathan, S.E. Roslin, E. Brumancia, MO-CSO-based load-balanced routing in MRMC WMN. IET Commun. 13(16), 2453–2460 (2019)
20. P.K. Rajendran, B. Muthukumar, G. Nagarajan, Hybrid intrusion detection system for private cloud: a systematic approach. Procedia Comput. Sci. 48(C), 325–329 (2015)
Privacy Preserving and Loss Data Retrieval in Cloud Computing Using Bucket Algorithm T. Durga Nagarjuna, T. Anil Kumar, and A. C. Santha Sheela
Abstract Cloud data is maintained, stored, and managed across the global network. The cloud provider gives no guarantees about the information. Encryption-based technology is used to protect schemes such as privacy, and there are many different types of privacy-preserving methods to prevent leakage of data from the cloud. To overcome their drawbacks, we propose a framework based on fog computing known as three-layer storage. The proposed framework takes full advantage of cloud storage while protecting the privacy of data. We use the Hash-Solomon code algorithm, which is designed to partition data into equal, separate parts; if any part of the data is missing, the remaining parts are insufficient to reconstruct the whole. We also use algorithms based on the bucket concept in this framework to secure the information and provide security and efficiency in our scheme. Then, based on computational intelligence, the algorithm can determine the distribution of parts stored on the cloud server, the fog server, and the local device. With software as a service on a hosting environment, a client releases an application that can be accessed through the network by application users from various clients; except for limited user-specific application configuration settings, the client cannot manage or control the cloud structure and data. Keywords Cloud computing · Bucket concept · Security · Privacy · Three-layer storage
T. Durga Nagarjuna · T. Anil Kumar · A. C. Santha Sheela (B) Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] T. Durga Nagarjuna e-mail: [email protected] T. Anil Kumar e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_32
1 Introduction
Cloud computing describes a kind of extended computer service, similar to an outsourced electricity supply in modern computer science: it can simply be used by consumers, who do not need to worry about factors such as where the electricity comes from, how it is made, or how it is transported, and who pay for their consumption every month. The thought behind cloud computing is that anyone can simply store data, obtain computing power, or use ready-made development environments without having to worry about how these work inside the system. Cloud computing is basically an Internet-based computing and storage system. The cloud is a data store for users on the Internet network, and, as commonly depicted in computer network pictographs, it is exactly the abstraction that hides the complex facilities of the Internet. It offers computing systems provided as a service, which allows users to access the technology without control over the servers behind it. Fog computing is likewise a storage system that can be used with both large cloud-system data and big data. This can lead to a decrease in the quality of the obtained content, an issue that has been attacked by creating metrics that increase efficiency and accuracy [1]. The effects of fog computing on cloud and big data may or may not differ [2]; however, a common observation is the limited role of exact content distribution [3]. Fog computing refers to the growing difficulties in accessing information based on subjects. A fog network has a control plane and a data plane; on the data plane, fog computing places computational services at the edge of the network, as opposed to servers in a data center [4, 5].
Compared with cloud computing, fog computing emphasizes proximity to end users and clients, dense server distribution, local resource pooling, latency reduction, and bandwidth savings, yielding better quality of service, a superior user experience, and redundancy in case of failure [6, 7].
2 Literature Survey In this paper, L. Malinga, J. Hajny, and P. Dzurenda presented a privacy-preserving security solution for cloud services. The related work briefly explains the construction of the project, together with the modules required to implement the techniques, extending what was discussed in the introduction. As discussed above, data from different resources, including data grouped by geographical location, environment and season, casual and registered users, and the other columns required for the project, is merged into one component; the content then available is raw
Privacy Preserving and Loss Data Retrieval in Cloud …
data containing missing values, duplicate values, and other records not required here, so data cleaning must be applied to overcome this [8]. Users can then use cloud services without causing any threats that reveal their usual behaviors [9]. Chandramohan and Dhasarathan, in work published after their thesis and experiments, described cleaning methods, or data-processing techniques, that remove unwanted data: where multiple records contain the same data, the duplicates are removed; boundary missing values, i.e., records containing empty fields, are filled with related random values; and categorical variables are handled in two ways, first by label encoding and second by the one-hot encoder [10]. These two encoders are needed because the machine cannot understand data given in human-readable form, so the data must be converted into a form the machine can understand [11]. The encoders convert text or categorical data into numbers that our predictive models or methods can understand better [12, 13]. The scikit-learn library provides both of these encoders [14, 15]. Once the data is ready to train the machine, machine learning algorithms follow: first prediction, second classification. The prediction module involves two divisions, unsupervised learning algorithms and regressions [16]. Similarly, the classification methods involve supervised learning algorithms and classifiers. The supervised learning algorithms include linear models for regression problems and random forests or support vector machines for classification and regression problems [17].
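The two encodings mentioned above can be illustrated without any particular library; a minimal sketch follows, in which the category values are hypothetical example data (scikit-learn's LabelEncoder and OneHotEncoder would be used in practice).

```python
# Minimal illustration of label encoding and one-hot encoding.
# The 'seasons' values below are hypothetical example data.
def label_encode(values):
    """Map each distinct category to an integer (0, 1, 2, ...)."""
    categories = sorted(set(values))
    index = {cat: i for i, cat in enumerate(categories)}
    return [index[v] for v in values], categories

def one_hot_encode(values):
    """Turn each category into a binary indicator vector."""
    encoded, categories = label_encode(values)
    return [[1 if i == code else 0 for i in range(len(categories))]
            for code in encoded], categories

seasons = ["summer", "winter", "summer", "monsoon"]
codes, cats = label_encode(seasons)      # integer code per record
vectors, _ = one_hot_encode(seasons)     # one indicator column per category
```

Label encoding suits ordinal categories, while one-hot encoding avoids implying a false ordering, which is why both appear in the pipeline described above.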
Popular unsupervised learning algorithms include K-means clustering and similar methods. Based on all these algorithms and the data we have cleaned, the machine is trained so that, whenever a new record is given as input, it predicts a result from previous experience. The authors then considered burst-error interleaving codes as an efficient burst-error-correcting technique, together with verifiable secret sharing schemes and a distributed pairwise checking protocol [18–21]. By finishing this experiment, they proved the error-correcting capability of the error-set-correcting interleaving codes [22].
3 Proposed Work The framework we use takes total control over the cloud data and protects it. Cloud computing attracts great attention from different parts of society. The data is partitioned into three dissimilar parts stored in different locations, known as three-layer cloud storage. If any one of the data parts is missing, the information in the data is lost; but in this proposed framework, using bucket-concept-based algorithms, the lost data parts can be retrieved by the software. In our system, we use a bucket concept to reduce
Fig. 1 System architecture
data wastage and to reduce processing time. We use the BCH code algorithm, which is highly flexible; BCH codes are used in many communication applications and add only a small amount of redundancy (Fig. 1).
3.1 Proposed Algorithm Bucket Everything stored in cloud storage must be contained in a bucket. Cloud storage provides access control lists for buckets, which specify who has access to the data. The contents of a bucket are unsorted, the bucket has a fixed size, and a unique name must be specified when creating it. The three-layer cloud storage stores the data in three different parts at different locations; if any data part is missing, the information in the data is lost. A bucket-concept-based algorithm is used in this proposed framework to recover such losses.
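The paper's BCH-based recovery is not reproduced here; a simpler parity scheme sketches the same idea under stated assumptions: split the data into two halves plus an XOR parity block, store the three blocks in different locations, and reconstruct any single lost block from the other two.

```python
def split_with_parity(data: bytes):
    """Split data into two equal halves plus an XOR parity block.
    A real system would also store the original length to strip padding."""
    if len(data) % 2:                 # pad to even length for equal halves
        data += b"\x00"
    half = len(data) // 2
    p1, p2 = data[:half], data[half:]
    parity = bytes(a ^ b for a, b in zip(p1, p2))
    return p1, p2, parity

def recover(p1, p2, parity):
    """Rebuild a missing block (passed as None) from the other two."""
    if p1 is None:
        p1 = bytes(a ^ b for a, b in zip(p2, parity))
    if p2 is None:
        p2 = bytes(a ^ b for a, b in zip(p1, parity))
    return p1 + p2

p1, p2, parity = split_with_parity(b"secret-data!")
restored = recover(None, p2, parity)   # first block lost, still recoverable
```

This tolerates the loss of one block out of three, matching the three-layer storage goal; a BCH or Reed-Solomon code generalizes this to more blocks and more losses.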
4 Results and Discussion The results start with the designed Web page. The client must have an account or register for one. After the account is created with the correct details, authentication is completed and the user can log in. The data to be stored is divided into three parts, encrypted, and stored in different locations, namely a cloud server, a fog server, and a local server. To solve the privacy and security problems of cloud technology, we propose a framework known as the TLS framework, which is based on the fog computing model (Fig. 2).
Fig. 2 Encryption of data
The data stored in the three locations is reassembled for the user using the Hash-Solomon algorithm. The Advanced Encryption Standard (AES) algorithm is used to encrypt and decrypt the data; it operates on 128-bit blocks with keys of up to 256 bits, which is an effective way to secure the data. The client can download the data whenever needed (Fig. 3). The above description covers the overall process of the project. If any of the three data parts is deleted or modified unknowingly or by a third party, the data can be retrieved from the server using the bucket algorithm. The project was developed to secure the data, provide additional protection, and enable data retrieval after any intrusion. The procedure reliably retrieves the data back from the server, which is very helpful to cloud data users.
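The encrypt-then-split flow can be sketched with standard-library code only. AES itself requires a third-party library, so this sketch substitutes a keyed SHA-256 keystream as a stand-in cipher; the key and message are hypothetical, and a real deployment should use a vetted AES implementation.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream. This is a stand-in
    for AES used only to illustrate the encrypt/decrypt round trip;
    do not use it in place of a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

key = b"demo-key"                               # hypothetical key
ciphertext = keystream_xor(key, b"client file contents")
plaintext = keystream_xor(key, ciphertext)      # same operation decrypts
```

The symmetric round trip mirrors the described flow: encrypt before splitting across the cloud, fog, and local servers, and decrypt after the parts are recombined.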
Fig. 3 Decryption and combining the data
5 Conclusion Cloud storage is an adaptable storage technology that helps users enlarge their storage capacity online. Upgrades in cloud computing technology provide many benefits, but cloud data storage also raises continuing security problems. When using cloud storage, users cannot physically control where their data is stored, which separates ownership of the data from access to it. To solve the privacy and security problems of cloud storage technology, we propose the TLS framework, based on the fog computing model, and design an algorithm based on BCH codes to encrypt the data. Through the safety analysis in this thesis, the scheme is shown to be feasible. By allocating the sizes of the data parts saved on the different servers, the privacy of the data on each server is kept very high, and cracking the code is theoretically impossible. Experimental tests show that hash transformation additionally protects the fragmentary information. Acknowledgements We gratefully acknowledge the Board of Management of SATHYABAMA for their amiable motivation toward the successful completion of this project. We send our gratitude to Dr. T. Sasikala, M.E., Ph.D., Dean, School of Computing, and to Dr. S. Vigneshwari, M.E., Ph.D., and Dr. L. Lakshmanan, M.E., Ph.D., Heads of the Department, Department of Computer Science and Engineering, for giving us vital assistance and information at the right time during the continuous assessments. We convey our heartfelt thanks to our project mentor, A. C. Santha Sheela, M.E., Assistant Professor, Department of Computer Science and Engineering, for her precious advice, ideas, and continuous support toward the prosperous accomplishment of our project work.
We would also like to send our gratitude to all teaching and non-teaching staff members of the Department of Computer Science and Engineering who supported us in many ways toward the fulfillment of this project.
References 1. Z. Xia, X. Wang, L. Zhang, Z. Qin, X. Sun, K. Ren, A privacy preserving and copy-deterrence content-based image retrieval scheme in cloud computing. IEEE Trans. Inf. Forensics Secur. 11(11), 2594–2608 (2016) 2. T.P. Jacob, T. Ravi, An optimal technique for reducing the effort of regression test. Indian J. Sci. Technol. 6(8), 5065–5069 (2013) 3. D. Narmadha, A. Pravin, An intelligent computer-aided approach for target protein prediction in infectious diseases. Soft Comput. 1–14 (2020) 4. J.S. Plank, T1: erasure codes for storage applications, in Proceedings of 4th USENIX Conference on File Storage Technology (2005), pp. 1–74 5. R. Kulkarni, A. Forster, G. Venayagamoorthy, Computational intelligence in wireless sensor networks: a survey. IEEE Commun. Surv. Tutorials 13(1), 68–96 (First Quarter 2011) 6. L. Xiao, Q. Li, J. Liu, Survey on secure cloud storage. J. Data Acquis. Process. 31(3), 464–472 (2016) 7. R.J. McEliece, D.V. Sarwate, On sharing secrets and Reed-Solomon codes. Commun. ACM 24(9), 583–584 (1981)
8. G. Nagarajan, R.I. Minu, V. Vedanarayanan, S.S. Jebaseelan, K. Vasanth, CIMTEL-mining algorithm for big data in telecommunication. Int. J. Eng. Technol. (IJET) 7(5), 1709–1715 (2015) 9. S.P. Mary, E. Baburaj, Genetic based approach to improve E-commerce web site usability, in 2013 Fifth International Conference on Advanced Computing (ICoAC) (IEEE, 2013), pp. 395–399 10. S. Jancy, Discovering unpredictable changes in business process. J. Comput. Theor. Nanosci. 16(8), 3228–3231 (2019) 11. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 12. V.R. Krishna, R. Subhashini, Mimicking attack by botnet and detection at gateway. Peer-to-Peer Networking Appl. 13, 1–1 (2020). https://doi.org/10.1007/s12083-019-00854-9 13. R. Neha, J.Y. Bevish, A survey on security challenges and malicious vehicle detection in vehicular ad hoc networks. Contemp. Eng. Sci. 8(5), 235–240 (2015) 14. A. Sivasangari, S. Poonguzhali, M.I. Rajkumar, Face photo recognition using sketch image for security system, 8(9S2) (2019). ISSN: 2278-3075 15. R. Subhashini, R. Sethuraman, V. Milani, Providing mother and child care telemedicine through interactive voice response, in Intelligent Computing, Communication and Devices (Springer, New Delhi, 2015), pp. 771–778 16. M. Selvi, P.M. Joe Prathap, WSN data aggregation of dynamic routing by QoS analysis. J. Adv. Res. Dyn. Control Syst. 9(18), 2900–2908 (2017) 17. J. Refonaa, M. Lakshmi, Cognitive computing techniques based rainfall prediction—a study, in 2017 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC) (IEEE, 2017), pp. 142–144 18. H.T. Dinh, C. Lee, D. Niyato, P. Wang, A survey of mobile cloud computing. Mobile Comput. 13(18), 1587–1611 (2013) 19. J. Chase, R. Kaewpuang, W. Yonggang, D.
Niyato, Joint virtual machine and bandwidth allocation in software defined network (SDN) and cloud computing environments, in Proceedings of IEEE International Conference on Communications (2014), pp. 2969–2974 20. H. Li, W. Sun, F. Li, B. Wang, Secure and privacy-preserving data storage service in public cloud. J. Comput. Res. Dev. 51(7), 1397–1409 (2014) 21. Y. Li, T. Wang, G. Wang, J. Liang, H. Chen, Efficient data collection in sensor-cloud system with multiple mobile sinks, in Proceedings of Advances in Services Computing, 10th Asia-Pacific Services Computing Conference (2016), pp. 130–143 22. P. Mell, T. Grance, The NIST definition of cloud computing. Nat. Inst. Stand. Technol. 53(6), 50 (2009)
Question Paper Generator and Result Analyzer R. Sathya Bama Krishna, Talupula Rahila, and Thummala Jahnavi
Abstract An examination is a crucial activity for any educational establishment, used to assess an individual's performance and growth. This online examination method is implemented to improve students' knowledge thoroughly in all aspects. Preparing questions manually is an extremely difficult, hard, and time-consuming process for the staff. Therefore, we introduce a process for conducting examinations online, so that the staff's task of preparing questions is reduced and the machine takes over all responsibilities once the questions are loaded. The technique includes many modules, such as login, subject, questions, and ranking. These modules assist the teacher in generating question papers and in analyzing the results, and the entire process is implemented in a user-friendly way. Automated question paper generation makes teachers' work simple, and the automated result evaluation and analyzer saves much of their time, which can then be used to improve their skills. Keywords Automated · Question paper generation · Randomization · Shuffling algorithms
R. Sathya Bama Krishna (B) · T. Rahila · T. Jahnavi Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] T. Rahila e-mail: [email protected] T. Jahnavi e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_33
R. Sathya Bama Krishna et al.
1 Introduction This work removes the complexities that staff otherwise have to bear to create a question paper. A random generation rule is used by different sources to generate questions randomly for different systems, and different systems use different randomization techniques to produce question papers efficiently [1, 2]. Software is used to speed up these operations; its best feature is that it makes smart choices, eliminating repeated questions and checking the sources for replacements [3]. Not only can the question paper be formed, it is also possible to find questions that have never been used, or that have been used fewer than a nominated number of times. This is greatly advantageous because the difficulty of avoiding repetition without automation is completely removed; the automatic system offers time savings and work efficiency. An overall outline of the scheme is delineated in this project [4]. The most important goal of this project is to elucidate the generation of question papers; a collapse rule is used in this process [5, 6]. The technique is personal-computer-friendly and has many options, chiefly producing realistic sets of examination question papers. Results show the unique benefit of this kind of rule for sorting such systems [7, 8]. Efficiency comes from using different randomization techniques; in addition to generating questions, equivalent software can be created with a feature to produce questions from simple online text, achieved through random generation processes [9]. The main goal of QPGRA is that it cuts back the duties of lecturers in the examination room. The purpose of non-manual generation systems is to compose question papers.
In keeping with the features of the data management system, the system works in two modes, user and administrator [10]. This project enables faculty to automatically generate exam papers from an already existing question bank [11, 12]. The machine can produce completely distinct paper sets automatically [13, 14]. The system takes over the whole hard task and performs the otherwise manual, toilsome process swiftly and efficiently. The user can organize the question bank according to subject name, class number/name, section name, and marks for a particular subject. The software does all subject-related work, from question-guide preparation to paper generation, and is extremely helpful for all educational institutions [15, 16]. It provides an institute with a robust way to obtain exam papers in very little time, and authorities have the flexibility to generate class tests, terminal tests, public tests, and revision tests [17, 18].
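The question-bank organization described above (by subject, class, section, and marks) can be sketched as a simple record structure; the field names and sample entries here are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Question:
    """One entry in the question bank; fields mirror the organization
    described in the text: subject, class, section, and marks."""
    text: str
    subject: str
    class_name: str
    section: str
    marks: int

bank = [
    Question("Define cloud storage.", "CS", "IV", "A", 2),
    Question("Explain BCH codes.", "CS", "IV", "B", 10),
]

# Filter the bank by subject and marks when composing a paper.
two_mark = [q for q in bank if q.subject == "CS" and q.marks == 2]
```

In a deployed system these records would live in the MySQL database mentioned later, with the same fields as table columns.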
2 Related Works There are many previous works on examination systems in educational institutions [19]. The existing system allows students to take tests online, but its major problem was correction: in some projects the correction was not automatic [20]. The existing system is very time-consuming, and the chances of paper leakage are higher than with the online method of conducting examinations. The major task after an exam is correction; the existing methodology takes more time for correcting papers and also involves human risk, for example, students cheating by changing marks [21]. An automated question paper generation system was introduced in September 2014, through which the examination is conducted online and question papers are generated once the student clicks the start-test button. A question paper generator system was put forward in October 2015. A test paper generation system using an ant colony algorithm was put forward in a journal in 2017. The same kind of system was also introduced by students of Wuhan University, but using different algorithms and methodologies.
3 Proposed System QPGRA is special and distinctive software that can be used by schools, establishments, colleges, and test paper setters, and by any educational organization that wishes to maintain an enormous bank of questions for frequent question generation. The software can be deployed in various medical, engineering, and coaching institutes for theory papers. A random question paper can be produced with this software within seconds, at any time, and unlimited units and chapters can be entered, depending on system storage, capacity, and need. Generating a question paper requires some parameters, and the process is divided into several parts, namely the result instrument, parameter selection, guide, and the QPG itself. The skeleton paper is generated by the admin (staff). Colleges are set up to know the questions through the information, together with difficulty level and importance. The prepared questions are sent for generation; the chosen questions will be unbiased and follow the rules. The question paper is then reviewed by the admin, and once analyzed, the generated exam paper can be mailed to the institute. Once a student has taken the test, staff are able to read the result and analyze it. The Framework for Automated Test Paper Creation "supplied a radical perception of the method of non-manual paper generation." A three-type model is furnished in the framework: generation of question papers is done using the syllabus, a complex composer, and a query aggregator. The generated exam paper is predicated on the produced patterns. Other issues, like the bank system, take care of rights and privileged tasks. The QPG selects a
question of suitable difficulty using simple methods. The system additionally allows a chosen question to be marked for removal, preventing repetition of questions across papers. This QPG project uses a randomization technique: it "describes a system which uses a collapsed algorithm as a choice technique." The system consists of various modules: consumer, admin, difficulty, subject-level identity, query, query management, paper creation, and paper organization. The machine introduces a highly efficient algorithm: exam questions are chosen through this array method, ensuring completely randomly generated question papers. However, the technique stops short of using the highly efficient pointing system; once exam questions are selected, repetition is not a problem. The QPG provides the capability to use a built-in question bank. This project accurately describes the kinds of questions that are complex yet highly efficient and algorithm-friendly; it notes the transition from manual structures to paperless systems, and it sincerely puts forward the importance of data for educational organizations.
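The selection-and-removal behaviour described above, picking questions per difficulty level while skipping ones already used, can be sketched as follows; the question bank and per-level quotas are hypothetical example data.

```python
import random

def generate_paper(bank, quota, used):
    """Pick `quota[level]` questions per difficulty level from `bank`,
    skipping questions already in `used`, and record the new picks."""
    paper = []
    for level, count in quota.items():
        candidates = [q for q in bank[level] if q not in used]
        if len(candidates) < count:
            raise ValueError(f"not enough unused questions at level {level}")
        chosen = random.sample(candidates, count)  # unbiased selection
        used.update(chosen)                        # prevent future repeats
        paper.extend(chosen)
    return paper

bank = {"easy": ["E1", "E2", "E3"], "hard": ["H1", "H2"]}
used = set()
paper = generate_paper(bank, {"easy": 2, "hard": 1}, used)
```

Calling `generate_paper` again with the same `used` set yields a paper that shares no questions with the first one, which is the repetition-avoidance property the text emphasizes.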
3.1 Feasibility Study 1. Economic Feasibility: This question paper generator system is economically feasible, as it saves money by saving paper; the proposed system saves not only money but also time. 2. Technical Feasibility: The system is also technically feasible, as it has no complications in any of its hardware or software architecture. 3. Operational Feasibility: The system is operationally feasible, as it is designed so that users do not find any task difficult to complete.
Characteristics
• Time saving
• Autocorrect process
• Autoresult generation
• Result analysis
• Paperless activity
• Easy student report generation.
4 Architecture This question paper generator architecture is built so that anyone can access it very easily, as its software has no complications; the Web page is designed so that its operations can be performed easily by anyone. It generally has two phases of testing: in the first phase, risk analysis is performed, where the system is tested so that it does not face problems in the future; in the second phase, system monitoring is done to check speed, time complexity, and many other factors (Fig. 1). The major steps involved in generating the question paper are:
Fig. 1 Architecture diagram
Fig. 2 Block diagram
Input is taken from users, i.e., the questions to be stored. Next is skeleton generation, i.e., raw data generation [22], and then the test is conducted. Finally, the result is analyzed (Fig. 2).
4.1 Modules and Description This question paper generator system is implemented using several modules: Admin, User, Quiz, Ranking, and Feedback. Admin—Employed by the staff to log in, add or remove question papers, and check the students' result analysis. User—Employed by the students to log in, write their exams, and check their results. Quiz—This module has two sections, add quiz and remove quiz, and is employed by the staff to add, remove, or change questions. Ranking—Employed by both students and teachers, who have the same functionality: checking results and rankings. Feedback—Also employed by both: students can give feedback, and teachers can read the feedback given by students. Algorithm The shuffling algorithm is an extremely familiar and effective method for randomly generating n different question papers; it tests for unoriginal and repeated questions in the generated paper. The process for a set of N numbers is as follows: 1. Number the questions 1 to n in the database. 2. Select a number k at random from those remaining. 3. Record the selected number and remove it from the pool, repeating until no numbers remain.
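The paper names its steps only informally, so as an assumption about the intended algorithm, the shuffling step can be implemented as the standard Fisher-Yates shuffle:

```python
import random

def fisher_yates(items):
    """Return an unbiased random permutation of items: working from the
    end of the list, swap each position with a randomly chosen position
    at or before it."""
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        k = random.randint(0, i)          # pick from the remaining prefix
        items[i], items[k] = items[k], items[i]
    return items

question_ids = list(range(1, 11))
paper = fisher_yates(question_ids)        # same questions, random order
```

Running this once per generated paper gives each paper the same question pool in a different order, which is one common way to produce n distinct papers.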
4.2 System Requirements
4.2.1 Software Requirements
• Microsoft Windows (Windows 7/Windows 10)
• MySQL
• PHP
• CSS
• Any browser (Google Chrome, Mozilla Firefox, etc.)
• LAMPP/WAMPP/XAMPP local servers.
4.2.2 Hardware Requirements
• 20 GB hard disk
• 1 GB/2 GB RAM
• i3/i5 processor-based system.
5 Conclusion This project demonstrates the use of our system online. It also describes a prototype of the algorithm used to generate different questions randomly. The system is primarily PC-based software. The project reduces human effort and mistakes while generating the question paper and also reduces human errors in the paper itself. Through this online system, educational institutions can save large amounts of paper. Through this system, students can learn what the subject really covers, rather than just preparing the expected important questions, and they have less chance to copy in the examination. Finally, the system reduces stress for teachers by doing autocorrection, and it reduces students' strain, as they do not need to write on paper for hours when the exam is conducted online.
References 1. G. Nagarajan, R.I. Minu, V. Vedanarayanan, S.S. Jebaseelan, K. Vasanth, CIMTEL-mining algorithm for big data in telecommunication. Int. J. Eng. Technol. (IJET) 7(5), 1709–1715 (2015) 2. B.K. Samhitha, S.C. Mana, J. Jose, M. Mohith, L. Siva Chandhrahasa Reddy, An efficient implementation of a method to detect sybil attacks in vehicular ad hoc networks using received signal strength indicator. Int. J. Innovative Technol. Exploring Eng. (IJITEE) 9(1) (2019). ISSN: 2278-3075
3. B. Jinila, Cluster oriented ID based multi-signature scheme for traffic congestion warning in vehicular ad hoc networks, in Emerging ICT for Bridging the Future-Proceedings of the 49th Annual Convention of the Computer Society of India, CSI, vol. 2 (Springer, Cham, 2015), pp. 337–345 4. R. Sathyabama Krishna, M. Aramudhan, Unsupervised spectral sparse regression feature selection using social media datasets, in ICIA-16: Proceedings of the International Conference on Informatics and Analytics, Article No. 34 (2016), pp. 1–5. https://doi.org/10.1145/2980258.2980323 5. Y. Yu, H. Wang, Adaptive Online Exam Questions Supported Systematic Analysis and Style (Wuhan University of Technology, 2008) 6. D. Liu, J. Wang, L. Zheng, Automatic test paper generation based on ant colony algorithm. J. Softw. 8 (2013). https://doi.org/10.4304/jsw.8.10.2600-2606 7. S. Choudhary, B. Rias, Question Paper Generator System. Int. J. Comput. Sci. Trends Technol. (IJCST), 3(5) (2015) 8. R. Bhirangi, S. Bhoir, Automated Question Paper Generation System, vol. 5, Issue 4 (Computer Engineering Department, Ramarao Adik Institute of Technology, Navi Mumbai, 2016). ISSN: 2278-9359 9. D. Narmadha, A. Pravin, An intelligent computer-aided approach for target protein prediction in infectious diseases. Soft Comput. 1–14 (2020) 10. G. Kalaiarasi, K.K. Thyagharajan, Clustering of near duplicate images using bundled features. Cluster Comput. 22(5), 11997–12007 (2019) 11. P. Asha, S. Srinivasan, Hash algorithm for finding associations between genes. J. Biosci. Biotechnol. Res. Asia ‘BBRA’ 12(1), 401–410 (2015). ISSN: 0973-1245 12. R. Sathya Bama Krishna, D. Usha Nandini, S. Prince Mary, A study on unsupervised feature selection. J. Adv. Res. Dyn. Control Syst. 11(08), 1252–1257 (2019) 13. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp.
1–5 14. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 15. P.V. Raja, K. Sangeetha, D. Deepa, Extractive text summarization system using fuzzy clustering algorithm for mobile devices. Asian J. Inf. Technol. 15(5), 933–939 (2016) 16. A. Ponraj, Optimistic virtual machine placement in cloud data centers using queuing approach. Future Gener. Comput. Syst. 93, 338–344 (2019) 17. B. Jinila, Cluster oriented ID based multi-signature scheme for traffic congestion warning in vehicular ad hoc networks, in Emerging ICT for Bridging the Future-Proceedings of the 49th Annual Convention of the Computer Society of India CSI, vol. 2 (Springer, Cham, 2015), pp. 337–345 18. R.S.B. Krishna, B. Bharathi, M.U. Ahamed, B. Ankayarkanni, Hybrid method for moving object exploration in video surveillance, in 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates (2019), pp. 773– 778. https://doi.org/10.1109/ICCIKE47802.2019.9004330 19. T.P. Jacob, T. Ravi, An optimal technique for reducing the effort of regression test. Indian J. Sci. Technol. 6(8), 5065–5069 (2013) 20. K. Vasanth, V. Elanangai, S. Saravanan, G. Nagarajan, FSM-based VLSI architecture for the 3×3 window-based DBUTMPF algorithm, in Proceedings of the International Conference on Soft Computing Systems (Springer, New Delhi, 2016), pp. 235–247 21. S.P. Mary, E. Baburaj, Performance enhancement in session identification, in 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 837–840
22. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015)
Online Crime Reporting System—A Model Mriganka Debnath, Suvam Chakraborty, and R. Sathya Bama Krishna
Abstract Crime, an unlawful act, is increasing in our society day by day. With the advancement of technology, criminals are also finding new ways of committing crimes. Crime takes place for a few common reasons: money, an imbalanced mentality, and emotions. After a crime takes place, victims must go through a very complicated and lengthy process to report it at a police station, and it is also a very hectic process for the crime branch to handle and maintain the records manually. The crime reporting system is the solution for all the victims and for the crime department. It not only makes the work easy but also gives users access to features such as a news feed and updates on crimes taking place in their locality. It brings the police and the victims closer, increasing security, and makes FIR registration simple, easy, and time-efficient. The system helps the crime department take action as quickly as possible and maintain the database efficiently. The police can even push alerts to citizens regarding most-wanted persons, lost belongings, and any kind of emergency through this system. As a result, this system gives users, police, and victims a sustainable solution for managing crime in a better and more structured way. Keywords Crime report · Security · Victims · Criminals · Police
M. Debnath · S. Chakraborty · R. Sathya Bama Krishna (B) Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] M. Debnath e-mail: [email protected] S. Chakraborty e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_34
1 Introduction

In the twenty-first century, with the advancement of technology, crimes are taking place in huge numbers and in renewed forms. Criminals are redefining crime with the use of technology and different tactics. People engage in crime because they want to earn large amounts of money in an easier way; they want to harm others for their own benefit and get involved in unlawful activities. Sometimes crime also arises from sentimental and emotional issues such as domestic violence, property disputes, and unwanted relations. A crime occurs at a particular place, at a particular time, and against particular persons. Cyber-crimes such as hacking and mining of unauthorized data are also taking place. Nowadays, people feel insecure about increasing crime and about the process of reporting it, so the crime branch needs to go hand in hand with the acts of criminals, or stay one step ahead; only then can the offenses taking place be controlled. In the traditional, manual way of reporting crime, the victim needs to go to the police station and file a complaint, which leads to chaos in the police station and makes it a tough job for the crime branch to control the crowd [1]. The victim then needs to visit the police station regularly for updates and to present proof and documents for further action to be taken [2]. If the victim is old or handicapped, it becomes difficult to travel to the police station, which is sometimes located far from home. In some cases, such as victims of acid attacks or rape, the victim hesitates to travel to the police station and file a complaint. For them, the proposed system will be a blessing and will make them feel at ease and comfortable [3, 4]. The human resources required by the crime branch will also be reduced, leading to more efficient expenditure of the budget [5].
This system will make offense reporting digitalized, in line with the vision of our honorable prime minister [6]. It can even be used as a mobile app, so users can access it easily. All information will be kept confidential and secure. Users also get features such as alerts and updates about crimes taking place around them, a world news feed, and the details and positions of all persons working in the crime branch [7]. The police maintain the database and make the required updates; they verify the documents and the complaints filed, take action against them, and contact the victim [8, 9]. The admin provides authorization and the required access to the crime branch for making changes to the database [10].
1.1 Problem Statement

In the present world, crime is a universal phenomenon. It has become a serious concern for citizens and has been rising in different parts of the world and across all sectors of society; it has accompanied civilization throughout its history. The recent growth of the crime rate has not only been unprecedented but has also been
accelerating. New sorts of crime are emerging and old forms are assuming new dimensions. Tremendous advances in technology have resulted in rapid urbanization and industrialization. The social organization influenced by techno-industrial-urban complexes offers a setting conducive to crime, so much so that a number of scholars, such as Baldwin and Bottoms (1976), consider crime largely an urban phenomenon. However, this does not mean that rural areas are free from crime; it only indicates that the urban community is more susceptible to it. The incidence of crime not only varies between urban and rural areas but also from one area to another, rural or urban, depending on a variety of factors. This leads to a demand for a sustainable, flexible, and user-friendly solution that will help everyone related to the system.
1.2 Research Objectives and Significance

The primary aim of the proposed system, the Online Crime Reporting System (OCRS), is to assist the crime branch by giving it all the necessary, detailed information about an incident that took place in a particular area of society, helping the crime branch department start the investigation much faster [11–13]. The proposed system also provides the status of the investigation of the present case to both the users and the police personnel [14]. The system has several intents; the important ones are as follows:

1. The model provides detailed information on the crime to both the victim and the police.
2. The system has a database that helps the crime branch retrieve information whenever necessary.
3. It makes the whole process efficient.
4. The process is based on digitalization.
5. It provides authentication and keeps the system secure.

The proposed Online Crime Reporting System (OCRS) is based on new technologies that make the whole process efficient. It is going to be very important and profitable for the whole society, especially for the victims and the investigating officers. The system provides more accurate information, such as the time, date, and place of the incident, which is very helpful to the investigating officers and reduces the time needed to complete the process. The system also shows the user and the victim the status of the investigation and what action has been taken against a particular incident. Thus, the system makes the whole process very easy and increases its flexibility. It will help to decrease crime to an extent and make citizens aware of their society and country.
This model will help in building strategies to prevent crime, which will be helpful for society, especially for women. OCRS will surely help bring down crime rates and gradually, in the future, make the whole country a safe place to reside.
2 Related Works

The previous system for crime reporting was very hectic and complex: it required the victim to go to the police station and give a statement to the police to register a complaint or FIR, which resulted in chaos in the police station. Moreover, it was time-consuming and required manual labor. The current system, in contrast, is based on new technologies that do not require manual labor, and it is time-efficient: the victim does not need to go to the police station and can instead register a complaint directly from home or any other place, since it is an e-crime reporting system. Cutting-edge technology is used to build this project because no compromise can be considered when it comes to the security of the country [15].
2.1 Management of Crime

Crime management deals with managing, controlling, and processing actions against crime. Crime is a situation where a person violates the articles or sections of the constitution and goes against the rules, regulations, and ethics of society. To get rid of crime, the police follow many processes such as patrolling, CCTV surveillance, and collecting information from informants and localities. OCRS will help report crime on a website and manage crime records in a computerized manner in a database. It will act as a best friend to the police and to citizens alike. The police will be able to track a person violating the law and continue the required process. Maintenance of the database is one of the important factors in crime management: the database helps the crime branch retrieve an old crime record or any other information whenever it is required. It provides authentication to keep the database safe and to prevent any loss or damage to it. Only an authorized person can access the database by giving login credentials, which are provided to police officers by the admin so that confidentiality and security are maintained.
2.2 Citizen Patrol's Scenario

Patrolling is a part of crime report investigation. Citizen patrols are people or volunteers who work for the crime department or police officers but are not themselves police officers or other authorized officials. They are ordinary people acting as volunteers whose task is to survey an area of society, report crimes or other illegal incidents to the police officers or crime branch, and supply clear, detailed information about criminals, making it easier for the police to reach them. Citizen patrols do not possess any official power, nor do they carry weapons. They are ordinary citizens who simply work for the police officials and report crimes to them. A citizen patrol can cover a small area such as a particular locality, village, society, or apartment complex, or a larger area such as a city, depending on the requirements of the police. Citizen patrols are connected to the police through cell phones, radio, etc. To become a citizen patrol, they should:

• Have gone through training in police law.
• Work together.
• Not carry weapons.
• Always be connected to the police through radios or cell phones.
2.3 Reduction in Crime Through Education

Observing the current scenario, it has become necessary to give proper education to students from the root level. They must be given knowledge of all the rights of a citizen, the constitution, criminal law, the laws of evidence, and the criminal justice system, so that they become psychologically prepared to judge what is right and what is wrong. They must be taught how to abide by the constitution and how to maintain the ethics of society. The public must also be made aware of the laws, of the crimes happening, and of how to avoid them. The main reasons behind crime are as follows:

• First, the lack of proper education
• Second, the increasing rate of unemployment
• Third, various psychological issues.

People must be educated and made well aware of the technologies that can help them get rid of crime in their society. Everyone in society must be made aware of OCRS so that they can use it properly when needed. The public must be informed that they can help a lot in reducing crime by remaining alert and informing the police of any kind of suspicion. People should also be made aware of the punishments they will face after committing a crime. In developed countries such as the USA, Russia, Austria, and the UAE, crime rates are much lower due to high rates of employment and education.
2.4 Reporting of Crime

In the traditional method of crime reporting, the victim or citizen needs to go to the police station to report the crime. Sometimes they must wait in a long queue to file a complaint, and the environment becomes chaotic. The police are also unable to perform their work properly due to the rush in the police station; they become distracted and cannot concentrate on a single case. The police need to contact the victim several times to ask them to visit the police station, and the victim reports there multiple times, making the process a very hectic one. The victim needs to provide evidence and documents manually, after which the police must keep them very safe so that the data does not get lost. The police maintain a register where all complaints are filed, and it must be kept safely until the investigation is done and even after the case is solved. The victim cannot even see the status of the filed complaint and its processing, and does not get any information about offenses taking place in the surroundings.

Ways of Reporting: Crime can be reported as follows:

• Through phone calls
• By walking into the police station to file a complaint
• Through email or messaging.

Citizens can report a crime to the crime branch by calling the toll-free helpline number; the victim needs to explain the situation, the incident, their identity, and the location, and register the complaint. In the offline method, the victim needs to walk into the police station and file the complaint with evidence and proper documents, make the payment for the FIR, and receive an acknowledgement; the victim then needs to visit the police station frequently to provide evidence and required documents, and even for any update on the case. The other way to lodge a complaint is to mail it to the department's mail ID with evidence, the required documents, and all the details of the victim and the crime.
2.5 Emerging Technology for Preventing Crime

Through innovation, the world is reaching new heights day by day. Technology has changed people's lives and made them easier, so we must take advantage of it and apply it to the well-being of society. We have therefore decided to make use of cutting-edge technologies to build this application, which will help the crime branch and the citizens and make them feel secure and comfortable. In our system, CSS, HTML, JavaScript, MySQL, Bootstrap, and PHP have been used to make the application dynamic and user-friendly. Keeping the security of the application in mind, we have assigned the authentication and authorization work to the admin, who will be able to make
changes and provide access to the users; a user needs to register first in order to log in and then use the system. We have made a dashboard that gives every piece of information regarding the number of reported crimes, missing vehicles, missing persons, and application details such as certificates and abstracts. Even the details of the persons and vehicles, along with their images and IDs, are shown and displayed to the admin. These technologies will help maintain law and order in society. Technology is a blessing if used in the right way, while its misuse leads to destruction and harm to society. Technology can be used to fight crimes such as child trafficking, sex rackets, smuggling, and kidnapping, which are ruining the youth and misleading them.
2.6 Technical Training by Police

The police must be trained technically and made aware of the advanced ways of committing crime, since crime is increasing and the ways it takes place are a challenge to the police. The police must monitor CCTV cameras, speed cameras, and Google Maps tracking, which will help them keep an eye on persons breaking law and order. The police must also be trained to tackle cyber-crime. They must be able to monitor social media so that they can observe and act on any offensive act; social media is one of the main sources of rumors, which lead to many unwanted acts such as lynching, torture, and violent protest. The police must also be able to track phone calls and messages as regulated, so that they can act on them and avoid any unwanted situations.
3 System Design and Its Methods

In the current, traditional system, the victim needs to visit the police station to lodge a complaint. The police maintain the data in a register that contains all the information regarding the complaint and must maintain it manually. It is a very lengthy process in which the police investigate, collect evidence piece by piece, and discuss with the victim again and again. OCRS will overcome all of this. It is designed and developed to be user-friendly, with an attractive, professional user interface. The system gives access to the crime branch, victims, and citizens. Only the admin of the system can control, make changes to, and maintain the databases; the admin provides the required security to the users so that data does not get deleted or leaked. At the top, the OCRS portal has the options home, alert, apply, and report a crime. The left side contains all the options that give the user information regarding laws and regulations
related to crime, information about the officials of the crime branch and their positions, terrorism, the Indian constitution, and the defense system of India. The right side of the portal contains all the information regarding the police forces, their assets, and the technology they use. In this system, the police can see all the information, i.e., the number of crimes reported, missing vehicles, and missing persons, with details and photos. The police can post an alert in case of any emergency or a wanted person, and can withdraw the alert when the case is solved. The police update the status of a reported crime so that the victim gets information about everything; they maintain the database and can make changes when required. Citizens using the system need to get access from the admin by registering and logging in. A user can go through the news feed and the important information provided. To report a crime, the user needs to go to the applicable option, fill in all the required data, and upload the related documents or evidence. Citizens can become aware of crimes taking place in their society and stay alert, and they get options to contact the police in case of an emergency. The user is updated on the status as the investigation proceeds, but cannot make any changes: users can only access the portal. In this system, we have used MySQL for building the database; the user interface is built using CSS, HTML, and Bootstrap; the backend and the main framework are written in JavaScript and PHP; and the system needs XAMPP to run.
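With MySQL for storage and a JavaScript/PHP backend, a crime report would normally be validated server-side before it reaches the database. The following is a minimal JavaScript sketch of such validation; the field names (complainantName, crimeType, location, description) and the list of crime types are illustrative assumptions, not taken from the paper.

```javascript
// Sketch: server-side validation for an FIR submission.
// Field names and accepted crime types are illustrative assumptions.
function validateFIR(fir) {
  const errors = [];
  if (!fir.complainantName || fir.complainantName.trim() === '') {
    errors.push('Complainant name is required');
  }
  // Hypothetical category list; a real deployment would define its own.
  const crimeTypes = ['theft', 'assault', 'cybercrime', 'missing-person', 'other'];
  if (!crimeTypes.includes(fir.crimeType)) {
    errors.push('Unknown crime type');
  }
  if (!fir.location || fir.location.trim() === '') {
    errors.push('Location of the incident is required');
  }
  if (!fir.description || fir.description.length < 20) {
    errors.push('Description must give enough detail (at least 20 characters)');
  }
  return { valid: errors.length === 0, errors };
}

// Example: a complete report passes validation.
const ok = validateFIR({
  complainantName: 'A. Citizen',
  crimeType: 'theft',
  location: 'Market Road, Ward 4',
  description: 'Two-wheeler stolen from the parking area around 9 pm.',
});
console.log(ok.valid); // true
```

Only after such a check would the record be inserted into the MySQL table, keeping incomplete complaints out of the database.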
3.1 Features of the System

The proposed Online Crime Reporting System (OCRS) consists of various features that make the model efficient for the user and the police and distinguish it from the traditional method. The proposed model has different sections, as follows:

Admin This section, the admin page, controls the portal, maintains the database, and makes necessary changes whenever required. It gives access, authentication, and authorization to the users. An FIR form is also provided on this page to help the user report crimes, and the reported data can be edited or deleted as required. Separate sections have been made for the user, the police, and the admin. The login credentials are also provided and maintained by the admin. The admin can also extend the model by providing multilingual features, and cloud access can likewise be provided by the admin.

Public This section is for the public or any users, and access is given to the public here. Registration and login forms are provided, where users need to give
their login credentials (for registered users), while new users need to register first. After logging in, the user can report a crime, and the information reaches the police. Users can also get information about their surroundings and the details of the crime branch.

Police This section, the police section, contains detailed information regarding different types of crime. Here, police officials can see the number of crimes reported and the time and date each case was registered. The section also lets police officials update the status of the current investigation so that the user can see how far the investigation has progressed, simply by logging in to the crime reporting website. Using this section, the police can take action, gather evidence, and post alerts to make citizens aware.
3.2 Functions

The crime reporting system model has different types of functions; a few are listed below:

• The user can report any crime by filling in the FIR form provided on the website.
• The user can elaborate and provide necessary information such as the type of crime, place of the crime, gender of the criminals, etc., which is very helpful for the police.
• Missing-person and missing-vehicle forms are also provided in this model.
• The model is secure, as login credentials are needed to access the website.
• The admin can edit or delete information if necessary.
• It delivers a news feed to the users and necessary information about laws and regulations.
3.3 Requirements for the System

This model is based on new technologies, so some basic requirements are needed to run it. The requirements are categorized into two parts: software and hardware. The software and hardware used are advanced and up to date.

Required Software: The minimum software needed to run the model is:

• Windows XP, Windows 7, or Windows 10
• Structured Query Language
• PHPMyAdmin, or HTML, Cascading Style Sheets, and JavaScript
• XAMPP.

Required Hardware: The minimum hardware needed to run the model is:

• Hard disk: 20 GB
• Random access memory: 512 MB
• An i3 processor-based computer or higher.

Fig. 1 Block diagram: the user logs in or registers, fills in the FIR form, and submits the report to the e-crime reporting system
4 Software Development Methodology

4.1 Architecture of the Proposed System

See Fig. 1.
4.2 Architectural Design

In the block diagram, the user has to log in or register on the website in order to file a complaint; the user interface contains the login and registration forms. A user files a complaint by filling in the FIR (First Information Report) form. The proposed system is divided into three sections: admin, user, and police. It provides accurate and detailed information on the crime to the crime branch, and it also shows the status of the current case, which can be seen by the user as well. In the admin module, information can be edited or deleted as per requirement. Along with the FIR or complaint file, the images can also be seen, which helps provide more detailed information about the crime.
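The three-section split described above (admin, user, police) amounts to role-based access control. The JavaScript sketch below shows one way such a check might look; the permission names are assumptions for illustration, not the authors' actual design.

```javascript
// Sketch: role-based access for the three sections of the portal.
// Permission names are hypothetical, derived loosely from the paper's
// description of what each actor can do.
const PERMISSIONS = {
  admin:  ['manage-database', 'edit-record', 'delete-record', 'grant-access'],
  police: ['view-reports', 'update-status', 'post-alert', 'withdraw-alert'],
  user:   ['file-fir', 'view-status', 'view-newsfeed'],
};

// Returns true only if the role exists and grants the requested action.
function can(role, action) {
  return (PERMISSIONS[role] || []).includes(action);
}

console.log(can('police', 'update-status')); // true
console.log(can('user', 'delete-record'));   // false
```

Keeping the permission table in one place makes the rule "only the admin can edit or delete records" easy to enforce on every request.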
4.3 Description of the Model

• The victim or a volunteer reports the FIR.
• Every FIR has its own unique ID number.
• The victim or a volunteer reports the crime.
• Further investigation and authentication are handled by the police.
• The important areas or points are marked by the police, which helps in proceeding further with the investigation.
4.4 Modules

The crime reporting system model consists of three main actors: the admin, the police, and the user. The primary task of the admin is to maintain the database records and provide authentication to the police and the user. The user uses the credentials to access the website so that they can file a complaint or FIR about the crime. The police see the reported crime, start the investigation, and keep updating the status of the investigation, which can be seen by the user (Fig. 2).
4.5 Use Cases

The users of the proposed system are victims or volunteers. A user files a complaint or FIR, and a notification alert is sent to the police. The police proceed with the investigation based on the information provided by the user or victim, and keep updating the status of the process on the website, where it can be seen by the user or victim. The duties are:

1. Report crimes.
2. Provide detailed information to the police to start the investigation.
3. The police keep updating the status of the investigation on the website.

Figure 3 describes the steps of the crime reporting use case.

Police role: The steps involved are:

• The police log in.
• They analyze and verify the FIR lodged by a victim or volunteer.
• They document the FIR for faster action.
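The status updates the police post after verifying an FIR can be modeled as a small state machine. The JavaScript sketch below is illustrative only; the status names and allowed transitions are assumptions, since the paper does not enumerate them.

```javascript
// Sketch: lifecycle of an FIR as the police update its status.
// Status names and transitions are hypothetical.
const TRANSITIONS = {
  reported: ['verified'],
  verified: ['investigating'],
  investigating: ['closed'],
  closed: [],
};

// Returns a new record with the updated status, rejecting illegal jumps
// (e.g. closing a case that was never verified).
function updateStatus(fir, next) {
  if (!TRANSITIONS[fir.status].includes(next)) {
    throw new Error(`Cannot move from ${fir.status} to ${next}`);
  }
  return { ...fir, status: next, updatedAt: new Date().toISOString() };
}

let fir = { id: 'FIR-2020-0001', status: 'reported' };
fir = updateStatus(fir, 'verified');
fir = updateStatus(fir, 'investigating');
console.log(fir.status); // investigating
```

Because every update produces a timestamped record, the user-facing status page simply displays the latest state of the complaint.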
Fig. 2 Activity diagram
5 Application Software Modules

Password Module: In this section, the user enters a password and the portal checks its validity. This module checks authentication: if the login credentials are correct, the user can log in and report crimes. It keeps the whole system secure and does not allow unauthorized users to access the system.

Search: This module helps in searching for records in the database and retrieving information from it whenever necessary.

Progress: All progress reports and special remarks can be entered into the system by the admin.

Registration: In this section, a user who is new to the website has to register first. After successful registration, they get their login credentials, which are
Fig. 3 Use case diagram of the crime reporting process: the victim visits the application and fills in the FIR form; the police process the FIR, send an alert to the victim, fetch the information, and start the investigation (counter-file; immediate action)
provided by the admin. They use these login credentials to log in and report crimes on the website.
6 Result and Conclusion

Once this system is introduced into our defense system, it will act as a bridge covering the communication gap between the police and the victim. The police will be able to understand the victim better, which will result in quicker action; the victim will not be harassed and will be delivered justice in time. It will make the world a better place to live in peace and togetherness. In conclusion, this system will make crime reporting more efficient and flexible for its users. Users need not worry about the security and confidentiality of the documents they submit, and the police need not worry about storing the information, because it is all maintained in a database and can be edited or retrieved whenever needed. The portal is secured, monitored, and maintained by the admin from time to time.
References

1. D. Lal, A. Abidin, N. Garg, V. Deep, Advanced immediate crime reporting to police in India. Procedia Comput. Sci. 85, 543–549 (2016). https://doi.org/10.1016/j.procs.2016.05.216
2. A. Khan, A. Singh, A. Chauhan, A. Gupta, Crime management system. Int. Res. J. Eng. Technol. (IRJET) 6(4) (2019)
3. R.S.B. Krishna, B. Bharathi, M.U. Ahamed, B. Ankayarkanni, Hybrid method for moving object exploration in video surveillance, in 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates (2019), pp. 773–778. https://doi.org/10.1109/ICCIKE47802.2019.9004330
4. G. Kalaiarasi, K.K. Thyagharajan, Clustering of near duplicate images using bundled features. Cluster Comput. 22 (2019). https://doi.org/10.1007/s10586-017-1539-3
5. N.G.M. Reddy, G.R. Sai, A. Pravin, Web camera primarily based face recognition victimization VIOLA JONE'S rule with arduino uno, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0667–0671
6. T. Prem Jacob, Implementation of randomized test pattern generation strategy. J. Theor. Appl. Inf. Technol. 73(1), 59–64 (2015)
7. G.V.K. Sai, P.S. Kumar, A.V.A. Mary, Incremental frequent mining human activity patterns for health care applications, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012050
8. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5
9. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005
10. A. Sivasangari, S. Poonguzhali, M. Immanuel Rajkumar, Face photo recognition using sketch image for security system. Int. J. Innovative Technol. Exploring Eng. 8(9S2)
11. P. Yugandhar, B. Muni Archana, Crime reporting system. Int. J. Innovative Res. Technol. 4(11), 1745–1748 (2020)
12. S. Dhamodaran, J. Albert Mayan, N. Saibharath, N. Nagendra, M. Sundarrajan, Spatial interpolation of meteorological data and forecasting rainfall using ensemble techniques, in AIP Conference Proceedings, vol. 2207, no. 1 (2020). https://doi.org/10.1063/5.0000059
13. S.M. Raza, A proposed solution for crime reporting and crime updates on maps in android mobile application. Int. J. Comput. Appl. 975, 8887 (2015)
14. J. Refonaa, G.G. Sebastian, D. Ramanan, M. Lakshmi, Effective identification of black money and fake currency using NFC, IoT and android, in 2018 International Conference on Communication, Computing and Internet of Things (IC3IoT) (IEEE, 2018), pp. 275–278
15. K.K. Thyagharajan, G. Kalaiarasi, Pulse coupled neural network based near-duplicate detection of images (PCNN–NDD). Adv. Electr. Comput. Eng. 18(3), 87–97 (2018)
An Efficient Predictive Framework System for Health Monitoring

K. Praveen, K. V. Rama Reddy, S. Jancy, and Viji Amutha Mary
Abstract An electronic medical record (EMR) is an expert document that contains all data created during the treatment procedure. The EMR can use different data groups, for example, numerical data, text, and pictures. Mining the data and knowledge hidden in the enormous amount of medical record data is a basic necessity for clinical decision support, for example, clinical pathway definition and evidence-based medical research. In this work, we propose a machine-learning-based system to mine the hidden prescription patterns in EMR content. The framework systematically integrates Jaccard similarity assessment, spectral clustering, an adjusted latent Dirichlet allocation, and cross-matching among numerous features to discover the residuals that describe extra knowledge and clusters hidden in various aspects of highly complex medication patterns. These techniques cooperate, step by step, to uncover the hidden prescription patterns.

Keywords Components · Formatting · Style · Text · Insert
1 Introduction

An algorithm is an artificial arrangement of specifically defined procedures that executes a defined data-driven plan of analysis, generally represented in a predefined format by symbols and numbers [1]. An algorithm addresses a particular inquiry based on a hypothesis, or creates knowledge to form
K. Praveen (B) · K. V. Rama Reddy · S. Jancy · V. A. Mary Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] K. V. Rama Reddy e-mail: [email protected] S. Jancy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_35
339
K. Praveen et al.
a hypothesis for further assessment [2, 3]. The word algorithm itself has a very long history [4, 5]. Data use and management practices are divided into five types: data registry, new data, more new data, processed data, and data warehouse [6, 7]. Data registry practices create data to support specific e-health workflow structures [8, 9]. New data, more new data, and processed data comprise ongoing data activities, and a data warehouse aggregates data for specific applied methods [10–12]. When reviewing an e-health environment, the first step is to create a flowchart from an internal-control point of view, identifying how one procedure leads to the next [13]. The second step is to examine how the e-health system communicates electronically with external parties [14]. Professional associations initiated market standards detailing how electronic exchanges should be conducted [15–17].
2 Related Work See Table 1.
3 Existing System The purpose of principal component analysis (PCA) in data mining is to derive a reduced number of explanatory features that are linear combinations of the observed variables [18–20]. The largest eigenvalue (source of variation) is represented by the first principal component (eigenvector), with subsequent principal components being orthogonal to the first principal component and to one another [21, 22]. PCA transforms the original data space into a feature space where the features are uncorrelated [23, 24]. Drawbacks of the existing system:
• No reliability in the data
• High computational complexity
• Becomes very complex as the amount of variance grows
• Not scalable
• High storage requirement.
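The PCA transformation described above can be sketched in a few lines of NumPy (a generic illustration; the function and variable names are ours, not the paper's):

```python
import numpy as np

def pca(X, k):
    """Minimal PCA sketch: project X onto its k leading principal components.

    Illustrative only; this is the textbook eigendecomposition route,
    not necessarily the implementation the existing system uses.
    """
    Xc = X - X.mean(axis=0)                 # center each observed variable
    cov = np.cov(Xc, rowvar=False)          # covariance of the variables
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # largest eigenvalue first
    components = vecs[:, order]             # orthogonal principal components
    return Xc @ components                  # uncorrelated feature space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(np.round(np.cov(Z, rowvar=False), 2))  # off-diagonals are ~0: uncorrelated
```

The near-zero off-diagonal covariance of `Z` illustrates the "uncorrelated feature space" property cited in [23, 24].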
4 Proposed System The proposed system is classified into five general categories of techniques used by specialists for managing complex clinical decision tasks:
1. Decision conflict
2. Mental projection
3. Decision trade-offs
4. Managing uncertainty
5. Generating rules of thumb

Table 1 Comparison of various optimization and allocation techniques

| Paper title | Author | Year | Description |
| Can cluster-boosted regression improve prediction of death and length of stay in the ICU? | Mahsa Rouzbahman | 2016 | Sharing of individual health data is subject to various requirements, which may deter some organizations from sharing their data |
| Disease prediction by machine learning over big data from healthcare communities | Lin Wang | 2017 | With big-data growth in biomedical and healthcare communities, accurate analysis of clinical data benefits early disease detection, patient care, and community services |
| Clinical text classification with word embedding features versus bag-of-words features | Nell Marshall | 2018 | Word embeddings inspired by deep learning have shown promising results over traditional bag-of-words features for natural language processing |
| Predictive modeling of hospital mortality for patients with heart failure using an improved random survival forest | Yu-Xiao Zhang | 2018 | Identification of various risk factors and early prediction of mortality for patients with heart failure is important for guiding clinical decision making in intensive care unit cohorts |
Working of Proposed System: The effectiveness of word segmentation determines the accuracy of the information extracted from EMRs [17, 25]. In the second step, the extracted information in the form of numbers, words, and phrases is converted into a format that the computer can process, for example a word matrix. In the third step, a medication information system organizes the extracted drugs into various groups. However, in order to reveal the clinical significance of the clustered data, the fourth step recognizes the prescription patterns.
5 Existing Algorithm
6 Algorithm Used Step 1: Begin with n clusters, each containing one object, numbered 1 through n. Step 2: Compute the between-cluster distance D(r, s) as the between-object distance of the two objects in clusters r and s, respectively, for r, s = 1, 2, …, n. Step 3: If the objects are represented by quantitative vectors, Euclidean distance can be used. Step 4: Find the most similar pair of clusters r and s such that the distance D(r, s) is minimum among all the pairwise distances. Step 5: Merge r and s into a new cluster t and compute the between-cluster distance D(t, k) for every existing cluster k. Step 6: Once the distances are obtained, delete the rows and columns of D corresponding to the old clusters r and s, which no longer exist, and add a new row and column in D corresponding to cluster t. Step 7: Repeat Steps 4–6 a total of n − 1 times, until only one cluster is left (Figs. 1 and 2).
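The steps above amount to standard agglomerative hierarchical clustering. A minimal sketch follows, assuming single linkage for the merged-cluster distance D(t, k), since the text does not fix a linkage rule; the function name and demo data are ours:

```python
import numpy as np

def agglomerative_cluster(X):
    """Agglomerative hierarchical clustering following Steps 1-7 above.

    Returns the merge history as (cluster_r, cluster_s) pairs; each merged
    cluster t receives a fresh id, as in Step 5.
    """
    n = len(X)
    clusters = {i: [i] for i in range(n)}        # Step 1: one object per cluster
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # Steps 2-3
    merges, next_id = [], n
    for _ in range(n - 1):                       # Step 7: n - 1 merges in total
        keys = list(clusters)
        # Step 4: most similar pair of current clusters (single linkage assumed)
        _dist, (r, s) = min(
            (min(d[i, j] for i in clusters[a] for j in clusters[b]), (a, b))
            for idx, a in enumerate(keys) for b in keys[idx + 1:]
        )
        # Steps 5-6: merge r and s into new cluster t; dropping r and s from
        # the dict plays the role of deleting their rows/columns from D
        clusters[next_id] = clusters.pop(r) + clusters.pop(s)
        merges.append((r, s))
        next_id += 1
    return merges

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
print(agglomerative_cluster(X))  # nearby points merge first
```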
Fig. 1 Proposed algorithm procedure
Proposed Algorithm

Input: sequence of N examples {(x_1, y_1), …, (x_N, y_N)}
Initialize: distribution D_1(i) = 1/N, i = 1, 2, …, N
1. Compute the nearest neighbours of each example.
2. Select the subset of the training data.
3. Update the distribution D_2.
4. Train the base classifier and receive the hypothesis h_t: X → Y.
Fig. 2 Proposed workflow: input → textual feature vector preparation → feature hiding → TF–IDF term weighting → feature extraction → out-of-score scaling → prediction
7 Conclusion The proposed model reveals specific health-focused insights in order to determine the vulnerability of the health state efficiently. In addition, the layered structure of the paradigm is expected to accomplish the assorted seeded tasks in a synchronized manner to achieve overall effectiveness. First, IoT technology is used to acquire data values for various attributes during exercise sessions. These data values are securely transmitted to the associated cloud storage for in-depth analysis.
8 Results See Figs. 3 and 4.
Fig. 3 Normal or dementia for men and women
Fig. 4 Count and distribution of group
References 1. L. Catarinucci, D. De Donno, M.L. Stefanizzi, L. Tarricone, An internet of things-aware architecture for smart health care systems. IEEE Internet Things J. 2, 515–526 (2015) 2. https://doi.org/10.1109/JIOT.2015.2417684 3. A.J. Jara, M.A. Zamora-Izquierdo, A.F. Skarmeta, Interconnection framework for mHealth and remote monitoring based on the internet of things. IEEE J. Sel. Areas Commun. 31(9), 47–65 (2013). https://doi.org/10.1109/JSAC.2013.SUP.0513005 4. G. Nagarajan, R.I. Minu, A. Jayanthiladevi, Brain computer interface for smart hardware device. Int. J. RF Technol. 10(3–4), 131–139 (2019) 5. J. Refonaa, G.G. Sebastian, D. Ramanan, M. Lakshmi, Effective identification of black money and fake currency using NFC, IoT and android, in 2018 International Conference on Communication, Computing and Internet of Things (IC3IoT) (IEEE, 2018), pp. 275–278 6. S.P. Mary, E. Baburaj, Performance enhancement in session identification, in 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 837–840 7. P. Asha, S. Srinivasan, Distributed association rule mining with load balancing in grid environment. J. Comput. Theor. Nanosci. 13(1), 33–42 (2016) 8. C.H. Kumar, A.S. Sangari, An efficient distributed data processing method for smart environment. Indian J. Sci. Technol. 9, 31 (2016) 9. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 10. B. Xu, L. Da Xu, H. Cai, C. Xie, J. Hu, F. Bu, Ubiquitous data accessing method in IOT-based information system for emergency medical services. IEEE Trans. Ind. Inf. 10(2), 1578–1586 (2014). https://doi.org/10.1109/TH2014.230638 11. B. Xu, L. Xu, H. Cai, L. Jing, Y. Luo, Y. Gu, The design of an m-Health monitoring system based on a cloud computing platform. Enterp. Inf. Syst. 
11(1), 17–36 (2017). https://doi.org/10.1080/17517575.2015.1053416
12. N.K. Choudhry, R.H. Fletcher, S.B. Soumerai, Systematic review: the relationship between clinical experience and quality of health care. Ann. Intern. Med. 142(4), 260–273 (2005) 13. T.P. Jacob, T. Ravi, An optimal technique for reducing the effort of regression test. Indian J. Sci. Technol. 6(8), 5065–5069 (2013) 14. D. Narmadha, A. Pravin, An intelligent computer-aided approach for target protein prediction in infectious diseases. Soft Comput. 1–14 (2020) 15. M. Chignell, M. Rouzbahman, R. Kealey, R. Samavi, E. Yu, T. Sieminowski, Nonconfidential patient types in emergency clinical decision support. IEEE Secur. Priv. 11(6), 12–18 (2013) 16. C.A. Alvarez et al., Predicting out of intensive care unit cardiopulmonary arrest or death using electronic medical record data. BMC Med. Inform. Decis. Making 13(1), 28 (2013) 17. S. Jancy, C. Jayakumar, Sequence statistical code-based data compression algorithm for wireless sensor network. Wirel. Pers. Commun. 106, 971–985 (2019) 18. M. Rouzbahman, A. Jovicic, M. Chignell, Can cluster-boosted regression improve prediction of death and length of stay in the ICU? IEEE J. Biomed. Health Inform. 21(3), 851–858 (2017) 19. E.F. Tjong Kim Sang, F. De Meulder, Introduction to the CoNLL-2003 shared task: language-independent named entity recognition (2003) 20. J. Jose, S.C. Mana, B.K. Samhita, Efficient system to predict and analyse stock data using Hadoop techniques. Int. J. Recent Technol. Eng. (IJRTE) 8(2) (2019). ISSN: 2277-3878 21. D.D. Lewis, Text representation for intelligent text retrieval: a classification-oriented view, in Proceedings of Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval (1992), pp. 179–197 22. D.U. Nandini, E.S. Leni, Efficient shadow detection by using PSO segmentation and region-based boundary detection technique. J. Supercomputing 75(7), 3522–3533 (2019) 23. T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space (2013) 24. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 25. S. Jancy, C. Jayakumar, Pivot variables location-based clustering algorithm for reducing dead nodes in wireless sensor network. Neural Comput. Appl. 31, 1467–1480 (2019)
Identification of Diabetic Retinopathy and Myopia Using Local Binary Pattern with Machine Learning Kranthi, Sai Kiran, A. Sivasangari, P. Ajitha, T. Anandhi, and K. Indira
Abstract This work investigates the discrimination capability of texture in the background of retina images to distinguish between pathological and healthy fundus images. Hence, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors, for example LBP filtering (LBPF) and local phase quantization (LPQ). The goal is to distinguish diabetic retinopathy (DR) from normal retina images by analyzing the texture of the retina background while ignoring the lesion segmentation step. Experiments were arranged and validated with the proposed structure, obtaining promising results in terms of accuracy, precision, recall, and specificity. Keywords Diabetic retinopathy · Local binary pattern · Texture features · Retina
Kranthi · S. Kiran · A. Sivasangari (B) · P. Ajitha · T. Anandhi · K. Indira Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] Kranthi e-mail: [email protected] S. Kiran e-mail: [email protected] P. Ajitha e-mail: [email protected] T. Anandhi e-mail: [email protected] K. Indira e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_36
347
Kranthi et al.
1 Introduction The World Health Organization (WHO) estimates that 285 million people were visually impaired worldwide in 2010. Although the number of blindness cases has been substantially reduced lately [1], eye diseases will see strong growth in the coming years due to the increase in diabetes and the aging of the population [2, 3]. This reality justifies screening campaigns [4, 5]. However, a screening campaign imposes a heavy workload on trained specialists who must inspect the atypical patterns of each disease, which, added to the growth of the at-risk population, makes these campaigns economically infeasible [6]. Hence, the need for automatic screening systems is highlighted. In view of these facts, a computer-aided diagnosis software capable of distinguishing, through image processing, between a healthy fundus (with no pathology) and DR and AMD patients was built [7]. This paper presents a novel method for identification of diabetic retinopathy based on local binary patterns and random forest [8, 9]. The paper continues as follows: Sect. 2 describes the related work; Sect. 3 presents the proposed system; Sect. 4 covers the experimental results and analysis; and Sect. 5 presents the conclusion.
2 Related Work An automated procedure for measuring the arteriolar-to-venular width ratio (AVR) was proposed by Muramatsu et al. [10]. The procedure includes optic disc segmentation to determine the AVR measurement zone, segmentation of retinal vessels, classification of vessels into arteries and veins, selection of major vessel pairs, and measurement of AVRs. The sensitivity in the measurement zone was 87% for the major vessels, while 93% of these were correctly classified into arteries or veins [11]. In 36 out of 40 vessel pairs, the widths of the matched vessels were measured correctly to the pixel. The method can be useful for objective assessment of AVRs and can detect focal arteriolar narrowing on macula-centered fundus images. Ojala et al. proposed a multiresolution grayscale texture classification method based on prototype distributions [12, 13]. Ahonen et al. proposed a novel and efficient facial image representation using local binary pattern texture classifiers [14]. The work of Vijayamadheswaran et al. [15] mostly focused on segmenting the diabetic retinopathy image and characterizing the exudates. Segmentation was performed using appropriate clustering, and exudate classification was done using a radial basis function (RBF) method. All the fundus images in that work are transformed to a common image style, which reduces the effect of lighting on the images. Identification of exudates became more reliable precisely when the fundus image was taken with adequate quality. Joshi et al. [16] reported that the predominant cause of vision impairment in metropolitan communities was diabetic retinopathy. Prompt diagnosis by routine
evaluation and prompt care has also been proven to reduce visual impairment and vision loss. Improvements to fundus image analysis have shown promise in addressing the lack of qualified staff for large-scale screening. The article aims to build a solution that applies the advances made in fundus image analysis to the basic issues in traditional telescreening. Merlin Sheeba et al. [17] asserted that past strategies for the segmentation of vessels in retinal photographs may be divided into rule-based and supervised systems. Their investigation proposes a supervised approach of the latter class. The method presents a newly implemented tool for segmentation of the vessels in digital retinal images. The novel arrangement used a pixel-level extreme learning machine (ELM) technique, which computes a 7D vector that contains gray-level and moment-invariant-based pixel features. The performance of the system was tested on the DRIVE and STARE databases. It was observed that the proposed arrangement gives striking output in terms of accuracy.
3 Proposed System This paper explores the ability to discriminate between abnormal (diabetic retinopathy) and normal fundus images using the texture of the retina. The primary focus, in particular, is to analyze the performance of local binary patterns (LBP) as a texture descriptor for retinal images. The objective of this paper is to recognize DR and normal retina images directly, ignoring the different segmentation models for retinal lesion segmentation, that is, avoiding the segmentation stage [18, 19]. The texture of the retina background is directly examined by means of LBP, and only this information is used to separate normal patients from abnormal patients. Figure 1 shows the overall proposed work model. The proposed model uses four steps for classification of DR and normal retina images, i.e., image acquisition, preprocessing, feature extraction, and classification. The four steps are as follows:
1. Retrieve the image from the database.
2. Perform preprocessing and remove noise using a median filter.
3. Apply the local binary pattern for feature extraction; from the LBP features, some standard texture features are extracted, such as mean, standard deviation, entropy, kurtosis, and skewness.
4. Finally, use a random forest classifier to classify the DR and normal images.
3.1 Preprocessing The goal of the preprocessing step is to apply image enhancing techniques to achieve the required visual quality of the images. The image enhancement steps are RGB channel extraction, filtration, and contrast enhancement. Channel extraction and filtration are performed to reduce unwanted noise. The contrast enhancement of the image is obtained using an adaptive histogram equalization model. Figure 2 shows the structure of the retinal image.

Fig. 1 Overall proposed system

Fig. 2 Structure of retinal image
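The channel-extraction and filtering steps of Sect. 3.1 can be sketched as follows. This is a minimal stand-in assuming a 3 × 3 median filter on the green channel (the channel with the best vessel contrast); the paper does not name a library, and the contrast-enhancement step is omitted here:

```python
import numpy as np

def preprocess(rgb):
    """Green-channel extraction plus a pure-NumPy 3x3 median filter."""
    g = rgb[:, :, 1].astype(float)           # green channel of the RGB image
    padded = np.pad(g, 1, mode="edge")       # replicate the border pixels
    # stack the nine shifted views of the image and take per-pixel medians
    stack = np.stack([padded[i:i + g.shape[0], j:j + g.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 2, 1] = 255                           # a single noisy green pixel
print(preprocess(img)[2, 2])                 # 0.0: the median filter removes it
```

The isolated bright pixel disappears because eight of its nine neighborhood values are zero, which is exactly why median filtering suits salt-and-pepper noise.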
3.2 Feature Extraction In the feature extraction phase, the local binary pattern is applied separately to each channel to extract features. The statistical features are then computed for each LBP image. Finally, an average of all channel features is taken to
produce the final feature vector. The principal statistical measures are mean, standard deviation, entropy, kurtosis, and skewness.
3.3 Local Binary Pattern LBP is one of the most powerful texture descriptors in computer vision, owing to its operational simplicity. The first step of LBP is assigning a label to every pixel in the image based on its local neighborhood, defined by a radius r and a total number of points p. Next, a binary pattern is generated for the neighborhood: the neighbor pixels are thresholded against the center pixel. The LBP label value of each pixel is obtained by summing the binary string weighted with powers of two as follows:

LBP_{p,r} = Σ_{a=0}^{p−1} k(h_a − h_b) · 2^a    (1)

k(y) = 1 if y ≥ 0; k(y) = 0 if y < 0    (2)
where h_a and h_b denote the neighborhood gray values and the center pixel value, respectively.
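Equations (1) and (2) for the common 8-neighbour, radius-1 case can be sketched as below (a generic implementation; the function name and demo image are ours, not necessarily the authors' exact code):

```python
import numpy as np

def lbp_8_1(img):
    """Basic LBP with p = 8 neighbours at radius r = 1, per Eqs. (1)-(2)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]                          # h_b in Eq. (1)
    # clockwise neighbour offsets; position a carries weight 2^a (Eq. 1)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for a, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]   # h_a
        out += (neigh >= center).astype(np.uint8) << a      # k(h_a - h_b) * 2^a
    return out

img = np.array([[10, 10, 10],
                [10, 20, 10],
                [10, 10, 10]], dtype=np.int32)
print(lbp_8_1(img))  # [[0]]: every neighbour is below the centre pixel
```

Each pixel therefore gets an 8-bit code in [0, 255] summarizing its local texture, and the histogram of these codes is what feeds the statistical features of the next sections.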
3.4 Statistical Texture Feature In statistical texture analysis, texture attributes are computed from the statistical distribution of observed combinations of intensities at defined locations relative [19] to one another in the image. (A) Mean: the average pixel value.
3.5 Standard Deviation Standard deviation is a measure reflecting how much pixel intensities vary from the mean pixel intensity of the ROI.
3.6 Entropy Entropy indicates the amount of information in the image.
3.7 Skewness Skewness is a measure of symmetry or, more precisely, of the lack of symmetry.
3.8 Kurtosis Kurtosis is a measure of whether the data is peaked or flat relative to a normal distribution.
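The five statistical texture features of Sects. 3.4–3.8 can be computed as below. The paper names the features without giving formulas, so the definitions here are standard textbook choices (Shannon entropy over a 16-bin histogram, excess kurtosis), not necessarily the authors':

```python
import numpy as np

def texture_features(x):
    """Mean, standard deviation, entropy, skewness, kurtosis of an intensity array."""
    x = np.asarray(x, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins before log
    m, s = x.mean(), x.std()
    z = (x - m) / s                                # standardized intensities
    return {
        "mean": m,
        "std": s,
        "entropy": float(-(p * np.log2(p)).sum()),  # information content
        "skewness": float((z ** 3).mean()),         # asymmetry of distribution
        "kurtosis": float((z ** 4).mean() - 3.0),   # peakedness vs normal
    }

f = texture_features([1, 2, 3, 4, 5])
print(f["mean"], round(f["skewness"], 6))  # 3.0 0.0: symmetric data
```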
3.9 Classification Classification is the main step in the identification of diabetic retinopathy. The classification process operates on the final extracted features. The principal innovation here is the adoption of random forest. The features are passed to the RF classifier and the classification is performed.
3.10 Random Forest The random forest is one of the important classification models, able to classify large data with high precision. Random forest is an ensemble model that performs both classification and regression. In training, a number of decision trees are used; for the output class, the individual trees produce class votes. A combination of tree predictors is called a random forest. The basic principle of random forest is combining weak learners into a strong learner. Many classification trees are grown in a random forest. Each tree is grown as follows:
1. Given that the training dataset includes n instances, sub-samples are selected randomly with replacement from those n instances. These random sub-samples drawn from the training dataset are used to build the individual trees.
2. Assuming there are k input variables, a number m is chosen such that m < k. At each node, m variables are randomly selected from the k variables, and the split of the node is chosen as the strongest split among these m variables. The value of m stays unchanged while the forest is grown.
3. Every tree is grown to maximum size without pruning.
4. The class of a new object is estimated based on the majority of votes received from all of the decision trees together.
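A sketch of the Sect. 3.10 classifier using scikit-learn's RandomForestClassifier on synthetic five-dimensional "texture features" (the data, class separation, and hyperparameters are illustrative assumptions, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Two synthetic classes of 5-D feature vectors, well separated for illustration
# (the paper's real features would come from Sects. 3.4-3.8).
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(50, 5))
dr = rng.normal(3.0, 1.0, size=(50, 5))
X = np.vstack([normal, dr])
y = np.array([0] * 50 + [1] * 50)            # 0 = normal, 1 = DR

# bootstrap=True and max_features="sqrt" mirror steps 1-2 of the procedure
# above: per-tree resampling and per-node random feature subsets.
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                             bootstrap=True, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on this easy synthetic set
```

Prediction with `clf.predict` implements step 4: each tree votes and the majority class wins.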
4 Experimental Results In our work, we used the STARE database, which contains 402 images in total, each of size 700 × 605 pixels. From this dataset, we generate three subsets, i.e., AMD (t = 47), normal (t = 37), and DR (t = 89). Figures 3 and 4 show the original image, the RGB channel extraction, and the contrast enhancement. Figures 5 and 6 present the LBP feature extraction and the average statistical feature analysis. Accuracy, precision, recall, and specificity are evaluated and reported in Figs. 7 and 8 (Fig. 9).
Fig. 3 Image acquisition
Fig. 4 RGB channel extraction, noise removal, and contrast enhancement
Fig. 5 LBP feature extraction and statistical texture features
Fig. 6 Average of the statistical texture features
5 Conclusion A new technique for assessing DR was demonstrated in this study. Differentiating normal patients from DR images relies on applying the texture discrimination capabilities of retinal pictures. The LBP descriptor is used and compared with the best classifier and other texture descriptors. The most important result is that the suggested technique is able to separate the groups by examining the underlying texture of the retina alone, skipping the lesion segmentation phase. These lesion segmentation steps can be tedious, so it is helpful to avoid
Fig. 7 Classification result
Fig. 8 Performance metrics calculation
the segmentation. The findings demonstrate that using LBP as a texture descriptor for fundus pictures provides useful features for retinal disease detection.
Fig. 9 Proposed method performance analysis
References 1. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015) 2. World Health Organization (WHO), Universal eye health: a global action plan 2014–2019 (2013) 3. World Health Organization (WHO), Action plan for the prevention of avoidable blindness and visual impairment 2009–2013 (2010) 4. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 5. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 6. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019) 7. T.P. Jacob, T. Ravi, An optimal technique for reducing the effort of regression test. Indian J. Sci. Technol. 6(8), 5065–5069 (2013) 8. S.P. Mary, E. Baburaj, Performance enhancement in session identification, in 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 837–840 9. P. Asha, S. Srinivasan, Distributed association rule mining with load balancing in grid environment. J. Comput. Theor. Nanosci. 13(1), 33–42 (2016) 10. C. Muramatsu, Y. Hatanaka, T. Iwase, T. Hara, H. Fujita, Automated selection of major arteries and veins for measurement of arteriolar-to-venular diameter ratio on retinal fundus images. Comput. Med. Imaging Graph. 35, 472–480 (2011) 11. C.H. Kumar, A.S. Sangari, An efficient distributed data processing method for smart environment. Indian J. Sci. Technol. 9, 31 (2016)
12. T. Ojala, M. Pietikäinen, T. Mäenpää, A generalized local binary pattern operator for multiresolution gray scale and rotation invariant texture classification, in 2nd International Conference on Advances in Pattern Recognition (2001), pp. 397–406 13. T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002) 14. T. Ahonen, A. Hadid, M. Pietikäinen, Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 28(12), 2037–2041 (2006) 15. R. Vijayamadheswaran, M. Arthanari, M. Sivakumar, Detection of diabetic retinopathy using radial basis function. Int. J. Innovative Technol. Creative Eng. 1(1), 40–47 (2011) 16. G.D. Joshi, J. Sivaswamy, DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis. J. Diabetes Sci. Technol. 5(1), 1–9 (2011) 17. X. Merlin Sheeba, S. Vasanthi, An efficient ELM approach for blood vessel segmentation in retinal images. Int. J. Man Mach. Interface 1, 15–21 (2011) 18. J. Refonaa, G.G. Sebastian, D. Ramanan, M. Lakshmi, Effective identification of black money and fake currency using NFC, IoT and android, in 2018 International Conference on Communication, Computing and Internet of Things (IC3IoT) (IEEE, 2018), pp. 275–278 19. G. Nagarajan, R.I. Minu, A.J. Devi, Optimal nonparametric Bayesian model-based multimodal BoVW creation using multilayer pLSA. Circ. Syst. Sig. Process. 39(2), 1123–1132 (2020)
Social Network Mental Disorders Detection Using Machine Learning Yarrapureddy Harshavardhan Reddy, Yeruva Nithin, and V. Maria Anu
Abstract An increasing number of social network mental disorders (SNMDs), for example cyber-relationship addiction, information overload, and net compulsion, are being observed. Signs of these mental disorders are generally noticed only passively these days. By mining the data available through Web services, it is possible to detect SNMDs at an early stage. It is challenging to detect SNMDs because the mental factors considered in standard diagnostic criteria (questionnaires) cannot be observed from online social activity logs. Our approach, new and innovative to the practice of SNMD detection, does not rely on self-reporting of those mental factors via questionnaires. Instead, we propose a machine learning framework, namely Social Network Mental Disorder Detection (SNMDD), that exploits features extracted from social network data to accurately identify potential cases of SNMDs. We also exploit multi-source learning and propose a new SNMD-based Tensor Model (STM) to improve performance. Our framework is evaluated via a user study with 3126 online social network users. We conduct a feature analysis, apply SNMDD to large-scale datasets, and analyze the characteristics of the three SNMD types. The results show that SNMDD is promising for identifying online social network users with potential SNMDs. Keywords Online social network · Mental disorder detection · Feature extraction · Social network services · Tensor factorization acceleration
Y. H. Reddy (B) · Y. Nithin · V. Maria Anu Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] Y. Nithin e-mail: [email protected] V. Maria Anu e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_37
359
Y. H. Reddy et al.
1 Introduction Social media is described as "a website that supports meeting people, finding like minds, communicating and sharing content, and building community"; such websites enable many kinds of activity, for instance business, or a blend of both [1–3]. Online social networking categories include digital libraries, electronic commerce, entertainment, discussion, geolocation, collaborative document editing, social reviews, online gaming, and social networks [4]. A social network is the subdivision of social media built around a social structure of people who come together over common interests [5]. Online social networks are social channels of communication using Web technologies, workstations, and mobile devices [6–9]. These technologies create highly interactive platforms through which individuals, communities, and organizations can share and broadcast data, discuss, rate, comment on, and edit user-generated online content [10]. These advances enable communication among businesses, organizations, communities, and individuals; they change the way individuals and large organizations communicate, and they are increasingly being developed [11, 12]. A wide range of applications in business and the public domain use sentiment analysis [13]. Sentiment analysis is now being used for everything from specific product marketing to individual behavior recognition [14, 15]. Companies and organizations have always been concerned with how they are perceived by the public. This concern results from a variety of motivations, including marketing and publicity [16].
2 Literature Survey 2.1 A Google Wave-Based Fuzzy Recommender Framework in University Digital Libraries 2.0 Nowadays, Digital Libraries 2.0 rely heavily on collaboration between users through community-oriented applications, for instance wikis and blogs, or new paradigms like the waves proposed by Google. This new concept, the wave, represents a shared space where resources and users can collaborate. The problem arises when the number of resources and users is high; then tools for helping users with their information needs are essential. In this work, a soft linguistic recommender framework based on the Google Wave facilities is proposed as a tool for connecting researchers interested in common research lines [17]. The framework permits the creation of a shared space by defining a wave as a means of collaborating and exchanging ideas between several researchers interested in a common topic. Likewise, the system recommends, in an automated way, several experts and relevant resources for each wave. These recommendations are computed from previously declared preferences and characteristics by means of fuzzy linguistic
Social Network Mental Disorders Detection Using Machine Learning
labels [18]. Thus, the framework supports potential collaborations between multidisciplinary researchers and highlights the central resources relevant to each partnership.
2.2 A Hybrid Fuzzy-Based Personalized Recommender for Telecom Products or Services The Internet creates great opportunities for businesses to offer personalized online services to their customers [19]. Recommender systems are designed to automatically generate personalized recommendations of products or services. Since various uncertainties exist within both product and customer data, achieving high recommendation accuracy is a challenge. This work develops a hybrid recommendation approach which combines user-based and item-based collaborative filtering techniques with fuzzy set methods and applies it to mobile product and service recommendation [20]. In particular, it implements the proposed approach in an intelligent recommender.
2.3 Recommender Systems Based on Social Networks Traditional recommender systems, particularly collaborative filtering recommenders, have been studied by many researchers over the past decade. However, they ignore the social relations among users, even though these relations can improve recommendation accuracy. Recently, the study of social-based recommender systems has become an active research topic. This work proposes a social regularization approach that incorporates social network information to benefit recommender systems. Both users' friendships and rating records (labels) are used to predict the missing values (labels) in the user–item matrix. In particular, a bi-clustering algorithm is used to identify the most suitable group of friends for generating different final recommendations.
Y. H. Reddy et al.
2.4 A Hybrid Trust-Based Recommender System for Social Media or Online Communities of Practice The need for lifelong learning and the rapid development of information technologies promote the growth of various kinds of online communities of practice (CoPs). In online CoPs, bounded rationality and metacognition are two critical issues, especially when learners face information overload and there is no information authority within the learning environment. This work proposes a hybrid, trust-based recommender system to mitigate these learning issues in online CoPs. A case study was conducted using Stack Overflow data to test the recommender system. Key findings include: (1) compared with other social network platforms, learners in online CoPs have stronger social relations and tend to interact with a smaller group of people; (2) the hybrid algorithm can provide more accurate suggestions than popularity-based and content-based computation; and (3) the proposed recommender system can support the development of personalized learning strategies.
2.5 RecomMetz: A Context-Aware Knowledge-Based Mobile Recommender Framework for Movie Showtimes Recommender systems are used to extract filtered information from a large set of elements and to provide personalized suggestions of products or services to users. The suggestions are intended to present interesting items to users. Recommender systems can be developed using various techniques and algorithms, and the choice of technique depends on the domain in which it will be applied. This paper proposes a recommender system in the leisure domain, specifically for movie showtimes. The proposed system, called RecomMetz, is a context-aware mobile recommender system based on Semantic Web technologies. In detail, a domain ontology mainly serving a semantic similarity metric adapted to the notion of "packages of single items" was developed. In addition, location, crowd, and time were considered as three different types of contextual information in RecomMetz. RecomMetz has unique features: (1) the items to be recommended have a composite structure (movie theater + movie + showtime), (2) the integration of the time and crowd factors into a context-aware model, (3) the implementation of an ontology-based context-modeling approach, and (4) the development of a multi-platform native mobile user interface designed to use the hardware capabilities (sensors) of mobile devices.
2.6 A Novel Hybrid Approach for Improving Performance Recommender systems support users by generating potentially interesting suggestions about relevant products and information. The growing attention toward such tools is reflected both in the large number of powerful and sophisticated recommender algorithms developed in recent years and in their adoption on many popular Web platforms. However, the performance of recommender systems can be affected by several fundamental issues, for example over-specialization, attribute selection, and scalability. To mitigate some of these negative effects, a hybrid recommender framework, called Relevance-Based Recommender, is proposed here.
2.7 Recommender Framework for Researchers Based on Bibliometrics Recommender systems (RSs) exploit past behaviors and user similarities to provide personalized suggestions. There are several precedents of their use in academic settings to help users find relevant information, based on assumptions about the characteristics of the items and users. Although quality has been considered as a property of items in previous works, it has never been given a key role in the re-ranking process for both items and users. This work presents REFORE, a quality-based fuzzy linguistic recommender framework for researchers. It proposes the use of bibliometric measures as the way to assess the quality of both items and users without the intervention of experts, as well as the use of the 2-tuple linguistic approach to represent the linguistic information. The framework takes the measured quality as the primary factor for re-ranking the top-N recommendation list, in order to point researchers to the most recent and best papers in their research fields.
3 System Architecture See Fig. 1.
Fig. 1 System architecture
3.1 Modules
• Data collection
• Preprocessing
• Training
• Classification
3.1.1 Modules Description
Data Collection: Data collection is the process of gathering and measuring information from a variety of sources to obtain a complete and accurate picture of an area of interest. The following features are extracted for each user. Parasocial relationship (PR): This feature is computed as a ratio, where |aout| and |ain| denote the number of actions a user takes toward friends and the number of actions friends take toward the user, respectively. Online and offline interaction ratio (ON/OFF): As observed by mental health professionals, people who over-indulge in OSNs tend to neglect their friends in real life. We extract the number of check-in logs with friends and the number of "going" events as an indicator of offline activity, to estimate the online (|aon|)/offline (|aoff|) interaction ratio. Social capital (SC): Two kinds of friendship ties are commonly associated with social capital theory: (i) bond-strengthening (strong ties), which concerns the use of OSNs to reinforce existing relationships; and (ii) information seeking (weak ties), which concerns the use of social media to find relevant information. Social searching versus browsing (SSB): The human appetitive system is responsible for addictive behavior. Studies show that social searching (actively browsing news feeds from friends' walls) produces more pleasure than social browsing (passively reading personal news feeds).
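As an illustrative sketch (not the authors' implementation), two of the features described above can be computed from per-user activity counts; the field names in the sample record are invented for the example.

```python
# Hypothetical per-user counts; in practice these come from OSN data logs.

def parasocial_ratio(actions_to_friends, actions_from_friends):
    """PR: |aout| / |ain| -- actions a user takes toward friends vs.
    actions friends take toward the user (guarding against division by 0)."""
    return actions_to_friends / max(actions_from_friends, 1)

def online_offline_ratio(online_actions, checkins, going_events):
    """ON/OFF: online interactions vs. offline proxies (check-ins plus
    'going' events), as described in the feature list above."""
    offline = checkins + going_events
    return online_actions / max(offline, 1)

user = {"to_friends": 120, "from_friends": 40,
        "online": 300, "checkins": 5, "going": 3}
pr = parasocial_ratio(user["to_friends"], user["from_friends"])
on_off = online_offline_ratio(user["online"], user["checkins"], user["going"])
print(pr, on_off)  # 3.0 37.5
```

A high PR (many outgoing actions, few incoming) and a high ON/OFF ratio are the kinds of signals the detection pipeline feeds into the classifier.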
3.2 Training Given the SNMD features of N users extracted from M OSN sources, we build a three-mode tensor T ∈ R^(N×D×M) and then apply Tucker decomposition, a well-known tensor decomposition technique, on T to extract a latent feature matrix U, which represents the latent features of each individual summarized over all OSNs. We intend to feed these latent features into SNMD detection. The matrix U effectively estimates a missing feature of one OSN (e.g., a feature value unavailable due to a privacy setting) from the corresponding feature in the other OSNs, together with the features of other users with similar behavior. Building on Tucker decomposition, we present a new SNMD-based Tensor Model (STM), which enables U to incorporate important characteristics of SNMDs, for instance the correlation of similar SNMDs shared among close friends.
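The Tucker decomposition step can be sketched with a truncated higher-order SVD (HOSVD), a standard way to compute Tucker factors; this is a minimal NumPy illustration of the idea, not the STM model itself, and the tensor here is random stand-in data.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring the given mode to the rows of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the SVD of each unfolding,
    then the core tensor by multiplying each mode with the factor transpose."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # mode-n product of `core` with U^T
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((20, 6, 3))      # N users x D features x M OSNs
core, factors = hosvd(T, (5, 4, 2))
print(factors[0].shape, core.shape)      # (20, 5) (5, 4, 2)
```

The first factor matrix plays the role of U above: one row of latent features per user, summarized across all OSN sources.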
3.3 Classification Mini-batch Gradient Descent: This is a gradient descent algorithm that splits the training dataset into small batches, which are used to compute the model error and update the model coefficients. Mini-batch gradient descent seeks to strike a balance between the robustness of stochastic gradient descent and the efficiency of batch gradient descent. Finally, the model predicts the kind of disorder. Naive Bayes Algorithm: This is a technique based on Bayes' theorem with an assumption of independence among predictors. In simple terms, a naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
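A minimal sketch of the mini-batch gradient descent update, applied here to logistic regression on invented toy data (the chapter does not give its exact model or hyperparameters):

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch=8, epochs=200, seed=0):
    """Mini-batch gradient descent for logistic regression: each update
    uses the gradient of the loss on one small batch, balancing the noise
    of pure SGD against the cost of full-batch descent."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)             # reshuffle each epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            p = 1.0 / (1.0 + np.exp(-X[b] @ w))      # sigmoid predictions
            w -= lr * X[b].T @ (p - y[b]) / len(b)   # batch gradient step
    return w

# Toy separable data: label = 1 when the feature sum is positive.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = (X.sum(axis=1) > 0).astype(float)
w = minibatch_gd(X, y)
acc = ((X @ w > 0) == (y == 1)).mean()   # typically well above 0.9 here
```

The same training loop applies unchanged to the SNMD features once they are stacked into X with disorder labels in y.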
3.4 Data Flow Diagram See Fig. 2.
Fig. 2 Data flow diagram
3.5 Program Code
4 Results See Figs. 3, 4, 5 and 6.
5 Conclusion In this project, we conclude that potential online users with SNMDs can be detected automatically through the SNMDD framework, which derives discriminative features from data logs in several ways and uses a new tensor technique for extracting latent features from multiple OSNs for SNMD detection.
Fig. 3 Data collection
Fig. 4 Preprocessing
Fig. 5 Training
5.1 Future Work We have shown that there are interacting effects between social media use and users' mental health that must be distinguished in order to explain these issues. We also plan to research extracting features from multimedia content by using techniques from NLP and computer vision. The next step is to identify new issues that arise for social network service providers such as Facebook and Instagram.
Fig. 6 Classification and prediction
References 1. K. Young, M. Pistner, J. O’Mara, J. Buchanan, Cyber-disorders: The mental health concern for the new millennium. Cyberpsychol. Behav. (1999) 2. J. Block, Issues of DSM-V: internet addiction. Am. J. Psychiatry (2008) 3. K. Young, Internet addiction: the emergence of a new clinical disorder. Cyberpsychol. Behav. (1998) 4. I.-H. Lin, C.-H. Ko, Y.-P. Chang, T.-L. Liu, P.-W. Wang, H.-C. Lin, et al. The association between suicidality and Internet addiction and activities in Taiwanese adolescents. Compr. Psychiat (2014) 5. Y. Baek, Y. Bae, H. Jang, Social and parasocial relationships on social network sites and their differential relationships with users’ psychological well-being. Cyberpsychol. Behav. Soc. Netw. (2013) 6. D. La Barbera, F. La Paglia, R. Valsavoia, Social network and addiction. Cyberpsychol. Behav. (2009) 7. K. Chak, L. Leung, Modesty and locus of control as indicators of web enslavement and web use. Cyberpsychol. Behav. (2004) 8. K. Caballero, R. Akella, Progressively displaying patients wellbeing state from electronic clinical records: a period arrangement approach. KDD (2016) 9. L. Zhao, J. Ye, F. Chen, C.-T. Lu, N. Ramakrishnan, Hierarchical Incomplete multi-source feature learning for Spatiotemporal Event Forecasting. KDD (2016) 10. E. Baumer, P. Adams, V. Khovanskaya, T. Liao, M. Smith, V. Sosik, K. Williams. Limiting, leaving, and (re)lapsing: an exploration of Facebook non-use practices and experiences. CHI (2013)
11. S.P. Mary, E. Baburaj, Performance enhancement in session identification, in 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 837–840 12. P. Asha, S. Srinivasan, Distributed association rule mining with load balancing in grid environment. J. Comput. Theor. Nanosci. 13(1), 33–42 (2016) 13. C.H. Kumar, A.S. Sangari, An efficient distributed data processing method for smart environment. Indian J. Sci. Technol. 9, 31 (2016) 14. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 15. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 16. L. Zhao, J. Ye, F. Chen, C.-T. Lu, N. Ramakrishnan, Various leveled incomplete multi-source include learning for spatiotemporal event forecasting. KDD (2016) 17. J. Refonaa, G.G. Sebastian, D. Ramanan, M. Lakshmi, Effective identification of black money and fake currency using NFC, IoT and android, in 2018 International Conference on Communication, Computing and Internet of Things (IC3IoT) (IEEE, 2018), pp. 275–278 18. E. Baumer, P. Adams, V. Khovanskaya, T. Liao, M. Smith, V. Sosik, K. Williams, Who is based on constraining, leaving, and relapsing: an investigation of Facebook non-use practices and encounters. CHI (2013) 19. R. Surendran, B. Keerthi Samhitha, Energy aware grid resource allocation by using a novel negotiation model. J. Theor. Appl. Inf. Technol. (2014) 20. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015)
Multi-layer Security in Cloud Storage Using Cryptography Mukesh Kalyan Varma, Monesh Venkul Vommi, and Ramya G. Franklin
Abstract Cloud technology has revolutionized the way we interact with our machines and added a new perspective to our lives, in which we tend to work without worrying about the actual physical location of our data on drives. It acts like a backend that we need not understand in detail, only know how to use. With the growth of communication, storage and data have become a primary concern for consumers, which has raised the need for cloud storage. At the same time, security has become a major concern for that data, since many attacks are possible and data leakage is a real risk. Privacy preservation in the cloud has become a major concern in the area of cloud storage, as there are many possible ways to alter data. Cryptography can be a way to solve this major issue. Keywords Cloud storage · Privacy · Cryptography · Security · Multi-layer
1 Introduction In the field of computing, cloud computing has an analogy similar to the electricity we use in our daily life: we simply go on using it without thinking about where it is produced or what goes into producing it. The concept of cloud computing is quite similar [1–3]. Users are free to use cloud storage, computing power, or integrated development environments without having to worry about how it all works in the background [4]. Cloud computing is basically Internet-based computing; the term cloud is used as a figure of speech. The M. K. Varma · M. V. Vommi · R. G. Franklin (B) Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] M. K. Varma e-mail: [email protected] M. V. Vommi e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_38
M. K. Varma et al.
complicated infrastructure of the cloud is concealed by the Web [5]. It provides users with the facility to access technology-enabled Web services without having to worry about managing that technology. Since cloud storage is booming as a technology but still has security problems, this project is implemented to provide security using cryptographic methods such as AES, DES, and Hash-Solomon techniques [6]. We also collect the user's data and store it for future use, build a community of trusted people, log their actions, and detect any suspicious activity [7, 8]. No attacker should be able to bypass the security, even in the event of trespassing, eavesdropping, or man-in-the-middle attacks [9, 10]. For downloading, the file is decrypted using the same algorithms and finally joined together to recover the original file for the user [11]. The downloaded file will always be equal to the original file. (A) Existing system In the existing system, the user can send data to the cloud, but there is no extra layer of security: the data flows to the cloud as it is [12]. The user can simply upload data to the cloud and download data from the cloud. Even where some security exists, the file itself is treated as a standalone, single file. (B) Existing system disadvantages • Attackers can mount man-in-the-middle attacks or eavesdrop on files during upload to and download from the cloud. • Security can be breached if the user's password is compromised or by other means.
2 Related Work A novel privacy-preserving security solution for cloud services is based on an efficient non-bilinear group signature scheme providing anonymous access to cloud services and shared storage servers [13]. The solution offers anonymous authentication for registered users: users' personal attributes (age, valid registration, successful payment) can be proven without revealing their identity, and users can consume cloud services without any threat of behavior profiling [14]. However, if a user breaks the provider's rules, his access right is revoked. The solution provides anonymous access, unlinkability, and confidentiality of transmitted data [15, 16]. It is implemented as a proof-of-concept application, and experimental results are presented [17]. Further, current privacy-preserving solutions for cloud services and group signature schemes are analyzed as basic building blocks of privacy-enhancing solutions in cloud services [18], and the solution's performance is compared with related solutions and schemes. Another line of work spotlights privacy and its obfuscation issues for intellectual, confidential information owned by the insurance and finance
sectors: privacy is at risk when authorities misuse secret information, software intrusions steal digital data in the name of third-party services [14, 19], and liability for digital secrecy, business-continuity isolation, and mishandling that breaches privacy all become pressing in the cloud, where a huge amount of data is stored and maintained. As the IT world moves toward the cloud, users' privacy protection is becoming a big question. Although cloud computing has changed the computing field by increasing effectiveness, efficiency, and optimization of the service environment, cloud users' data and their identity, reliability, maintainability, and privacy may vary across different cloud providers (CPs). A CP ensures that the user's proprietary information is maintained secretly with current technologies [20]. More remarkably, even the cloud provider may not know where the digital data is stored and maintained globally in the cloud. One proposed answer to this research issue is the Privacy Preserving Model to Prevent Digital Data Loss in the Cloud (PPM–DDLC), which helps cloud requesters/users (CRs) trust the cloud with their proprietary information and data. Cloud computing is an emerging technology based purely on the Internet and its environment. It provides different services to users, such as Software-as-a-Service (SaaS), PaaS, IaaS, and Storage-as-a-Service. Using Storage-as-a-Service, users and organizations can store their data remotely, which poses new security risks concerning the correctness of data in the cloud. To achieve secure cloud storage, several techniques exist, such as flexible distributed storage integrity auditing mechanisms, distributed erasure-coded data, and Merkle Hash Tree (MHT) construction. These techniques support secure and efficient dynamic data storage in the cloud. This paper also deals with architectures for security and privacy management in the cloud storage environment.
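The Merkle Hash Tree construction mentioned above can be sketched in a few lines of standard-library Python; this is a generic illustration of the technique, not the scheme of any particular cited work.

```python
import hashlib

def merkle_root(blocks):
    """Root hash of a Merkle tree over data blocks: a client that keeps
    only the 32-byte root can later verify any stored block against a
    logarithmic-size proof, which is what makes MHTs useful for storage
    integrity auditing."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block-%d" % i for i in range(5)])
print(root.hex())
```

Changing any single block changes the root, so the data owner can detect tampering by the storage provider without holding the data itself.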
3 Methodology The system proposed here secures the data stored in the cloud using cryptographic methods. It also stores the user's data in a separate database, which can itself be placed in the cloud. The system encrypts and splits the data into three parts, which are joined and decrypted again when the data needs to be accessed. Storing data in the cloud while providing security for it has been a major, in-demand problem, but tackling it with technologies such as blockchain or full cybersecurity tooling costs a lot of both time and money. So, as a solution, we use cryptography together with multi-layer storage. During upload, the data is encrypted using three algorithms, AES, Triple DES, and Hash-Solomon, which are lightweight and inexpensive, and is then stored in three different locations, so that even if one part is accessed, it is of no use on its own. Later, when the user wants to download the data, the data itself gets
Fig. 1 System architecture
joined and then decrypted for the user. We also log the locations of the data for later access. The goal is a product that adds a layer of security to the data and scatters it across three different locations, so that even if one part is compromised, the rest of the data remains safe and secure. The data is encrypted and scattered, a log is kept for the data, and during download the file is rejoined as one single file. The encryption uses algorithms such as AES, DES, and Hash-Solomon to strengthen security.
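The split-into-three / join-back step of the pipeline can be sketched with standard-library Python. Note this is a hedged stand-in: the chapter's cipher layer (AES/Triple DES/Hash-Solomon) needs third-party libraries, so the sketch below shows only the three-way split using XOR secret sharing, under which no single stored part reveals anything about the file.

```python
import os

def split_three(data):
    """Split a file's bytes into three shares such that all three are
    required to reconstruct it (XOR secret sharing): two shares are
    random pads, the third is the data XORed with both pads."""
    r1, r2 = os.urandom(len(data)), os.urandom(len(data))
    share3 = bytes(a ^ b ^ c for a, b, c in zip(data, r1, r2))
    return r1, r2, share3        # store each share in a different location

def join_three(s1, s2, s3):
    """Recombine the shares; XOR of all three recovers the original."""
    return bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))

original = b"confidential report"
shares = split_three(original)
assert join_three(*shares) == original   # downloaded file equals the original
```

In the system described above, each of the three shares would go to cloud, cloud1, and the local system respectively, with the storage paths written to the log.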
3.1 System Architecture See Fig. 1.
4 Experimental Results 4.1 User Module In this module, the user first registers, and that data is saved in a database. Later, he can log in to the application using the same credentials he provided at the time of registration (Figs. 2 and 3).
Fig. 2 Registration page
Fig. 3 Login page
4.2 File Upload Module The user uploads a file, which is then divided into multiple parts and stored in the respective file paths; here, the paths are cloud, cloud1, and the local system. The system also logs the location of each divided file so that it can be used later (Fig. 4).
Fig. 4 Upload page
4.3 File View Module • The uploaded file is stored in a specific path. In the file view, we can see all the multi-path files, which appear in encrypted form only. • Then, all the encrypted parts are collected in one place and combined into the final encrypted file, which is then decrypted (Fig. 5).
Fig. 5 File page
5 Conclusion Providing security and privacy to the user is the main concern of this project. By implementing and integrating the latest technologies in the future, this project will stay alive and will not become outdated. There may be a few deficiencies in the project, but it works efficiently and performs its function properly. The project will be made open source, and anyone can build on it. We were able to implement this project only with the help of our guide, Mrs. Ramya G. Franklin, and we believe it could improve the security of consumers' lives and build trust in the system. Our future work includes implementing and integrating this project with the latest technologies, such as blockchains and artificial intelligence, which can help counter attackers.
References 1. T. Wang, J. Zhou, X. Chen, G. Wang, A. Liu, Y. Liu, A three-layer privacy preserving cloud storage scheme based on computational intelligence in fog computing. IEEE Trans. Emerg. Top. Comput. Intell. 2(1) (2018) 2. H.T. Dinh, C. Lee, D. Niyato, P. Wang, A survey of mobile cloud computing: architecture, applications, and approaches. Wireless Commun. Mobile Comput. 13(18), 1587–1611 (2013) 3. J. Chase, R. Kaewpuang, W. Yonggang, D. Niyato, Joint virtual machine and bandwidth allocation in software defined network (sdn) and cloud computing environments, in Proceedings of IEEE International Conference on Communications (2014), pp. 2969–2974 4. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019) 5. T.P. Jacob, T. Ravi, An optimal technique for reducing the effort of regression test. Indian J. Sci. Technol. 6(8), 5065–5069 (2013) 6. P.K. Rajendran, B. Muthukumar, G. Nagarajan, Hybrid intrusion detection system for private cloud: a systematic approach. Procedia Comput. Sci. 48(C), 325–329 (2015) 7. H. Li, W. Sun, F. Li, B. Wang, Secure and privacy-preserving data storage service in public cloud. J. Comput. Res. Dev. 51(7), 1397–1409 (2014) 8. Y. Li, T.Wang, G.Wang, J. Liang, H. Chen, Efficient data collection in sensor-cloud system with multiple mobile sinks, in Proceedings of Advanced Services Computing, 10th Asia-Pacific Services Computing Conference (2016), pp. 130–143 9. S. Thamizhselvi, P.S. Mary, A survey about data prediction in wireless sensor networks with improved energy efficiency. Res. J. Pharm. Biol. Chem. Sci. 7(2), 2118–2120 (2016) 10. A. Uthirakumari, P. Asha, Hybrid scheduler to overcome the negative impact of job preemption for heterogeneous Hadoop systems, in 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT) (IEEE, 2016), pp. 1–5 11. L. Xiao, Q. Li, J. Liu, Survey on secure cloud storage. J. Data Acquis. Process. 
31(3), 464–472 (2016) 12. R.J. McEliece, D.V. Sarwate, On sharing secrets and Reed-Solomon codes. Commun. ACM 24(9), 583–584 (1981) 13. J.S. Plank, T1: erasure codes for storage applications, in Proceedings of 4th USENIX Conference on File Storage Technology (2005), pp. 1–74 14. R. Kulkarni, A. Forster, G. Venayagamoorthy, Computational intelligence in wireless sensor networks: a survey. IEEE Commun. Surv. Tutorials 13(1), 68–96 (First Quarter 2011) 15. C.H. Kumar, A.S. Sangari, An efficient distributed data processing method for smart environment. Indian J. Sci. Technol. 9, 31 (2016)
16. R. Sethuraman, G. Sneha, D.S. Bhargavi, A semantic web services for medical analysis in health care domain, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 17. A. Jesudoss, M.J. Daniel, J.J. Richard, Intelligent medicine management system and surveillance in IoT environment, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, UK, 2019), p. 012005 18. A. Velmurugan, T. Ravi, Enabling secure data transaction in bio medical engineering using CCart approach. J. Theor. Appl. Inf. Technol. 92(1), 37 (2016) 19. Z. Xia, X. Wang, L. Zhang, Z. Qin, X. Sun, K. Ren, A privacy preserving and copy-deterrence content-based image retrieval scheme in cloud computing. IEEE Trans. Inf. Forensics Secur. 11(11), 2594–2608 (2016) 20. P. Asha, S. Srinivasan, Distributed association rule mining with load balancing in grid environment. J. Comput. Theor. Nanosci. 13(1), (2016), pp. 33–42(10)
Web-Based Automatic Irrigation System Bajjurla Uma Satya Yuvan, J. A. BalaVineesh Reddy Pentareddy, and S. Prince Mary
Abstract Food is the most essential part of our livelihood, without which an enormous number of people would face famine. To improve the cultivation of food crops in the fields, there is an urgent need for the rightful utilization of resources and for efficient technology that could sustain the entire world that relies upon them. Many methodologies have been proposed for agriculture, but there are some major faults in their conceptual design that must be removed. These systems therefore need to be improved to reduce wastage on farms by predicting the types of crops that are suitable for a given type of soil, thereby supporting better irrigation management. This system differs from existing methods in its aim of reducing human intervention in the fields. Keywords Intervention · Analyzing · Globalization · IoT · KNN
1 Introduction Local people regard crop prediction as a piece of cake because they only notice the outer facts that hide the inner mechanized methodology, and this makes them pay higher prices than they can literally afford, for many reasons [1–3]. But by efficient utilization of technology, we can eradicate this issue from the roots by digging a little deeper into the technological view B. U. S. Yuvan · J. A. BalaVineesh Reddy Pentareddy · S. Prince Mary (B) Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] B. U. S. Yuvan e-mail: [email protected] J. A. BalaVineesh Reddy Pentareddy e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_39
B. U. S. Yuvan et al.
of soil testing [4, 5]. The observed results are not accurate enough to guide crop growth, because the machine only tests soil parameters such as acidity, alkalinity, and pH, while simply ignoring the core of soil testing: weather prediction, soil moisture level, and humidity [6–8]. Because of rapidly changing climatic conditions, there is a serious need to monitor soil parameters such as temperature, humidity, and soil moisture [9, 10]. This prototype was developed to mitigate these swift changes at the core of agriculture. Countless mechanisms have brought rapid developments in crop farming, led by research into areas such as biomagnification and algal blooms that had been left unexplored for many years [11]. There were also countless small errors carried over from past agricultural practice [12], and a lack of knowledge about which crops to cultivate for enhanced yield, caused by these human-intervention-based methods of the past [13–15]. The integration of IoT with packet-switched mobile communication uplifts the economy of agriculture by enabling assured mechanical automation [16]. Mobiles act as edge-computing nodes that facilitate communication overseas via satellites [17, 18]. Through this collection of technologies, our lives can be improved by mobile phones [19]. For example, a farmer may have recently ploughed an agricultural field but be thoroughly confused about which crops to sow in the soil [20].
To help such a farmer, IoT takes a big step forward with a sensor kit that regulates the water supply across the units of the field; with the help of these sensors the farmer can efficiently determine which types of plants can be grown in the soil, diminishing the need for human intervention. The main algorithm behind this methodology is a neural network, which can precisely discriminate among huge volumes of data and thereby drive the data analytics and prediction techniques made possible by deep learning [21].
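The sensor-driven irrigation decision described above can be sketched as a simple threshold rule. The moisture thresholds and the hysteresis band below are illustrative assumptions, not values taken from the paper's hardware:

```python
# Minimal sketch of a moisture-based pump decision with a hysteresis band,
# so the pump does not rapidly toggle near a single cutoff value.
# Threshold values are illustrative assumptions.

def pump_command(moisture_pct, dry_threshold=30.0, wet_threshold=60.0):
    """Return 'ON' when the soil is too dry, 'OFF' when wet enough,
    and 'HOLD' (keep current state) in the band in between."""
    if moisture_pct < dry_threshold:
        return "ON"
    if moisture_pct > wet_threshold:
        return "OFF"
    return "HOLD"
```

The hysteresis band is a common design choice for actuator control: without it, sensor noise around a single threshold would switch the pump on and off continuously.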
2 Existing System In the existing approach, the merging of resources is not transparent to the user. The earlier routines were not thorough, which caused many format-specific warnings on devices: either the electronic devices are only partially configured, or their capabilities are limited [22, 23]. The algorithms intended to benefit human users are not fully allocated, which gives rise to errors that make the methodologies deviate from the expected results. These methods were power efficient but costlier than comparable approaches, which in turn consume more network resources because the algorithms are not fast [24].
Web-Based Automatic Irrigation System
Components of Existing Methods:
Microcontroller A microcontroller integrates, on a single chip, the essential elements of a rapid-prototyping system: a processor, memory, and input/output (I/O) peripherals. This single-chip arrangement simplifies what would otherwise be complex embedded circuitry.
Advantages:
• Low time required for performing operations.
• Easy to interface with additional RAM, ROM, and I/O ports.
Disadvantages:
• Easily damaged.
• Interfacing issues.
ZigBee ZigBee supports wireless node-management applications that accomplish many tasks, linking devices through addressable nodes in the network.
Advantages:
• Long shelf life.
• Power can be saved efficiently.
• Simply affordable.
Disadvantages:
• Less reliable network.
• Heavy maintenance charges.
• Substitution is costlier.
Wireless Sensor Networks A WSN consists of miniature mechanized nodes that can autonomously form an edge-based network and operate in any kind of circumstances, including environments degraded by physical or human-caused hazards that would otherwise lead to mechanical destruction. It is a widely economical method that anyone can deploy via a base station, which clearly interprets the observations on the board.
Advantages:
• Can perform well in challenging environments.
• Sensor nodes can easily be deployed without any hassle.
Disadvantages:
• Low available resources.
• Very minimal battery power.
• Offers storage in a limited manner.
GSM GSM was the first widely adopted form of mobile communication and remains a foundation of telecommunication standards for carrying signals from various nodes to the receiver. It uses simple protocols to communicate with base stations and satellites. It cleaves a transmission into shorter signals following the TDMA methodology: an internal multiplexer/demultiplexer procedure distinguishes the inputs by the timing of the transferred waves and reassembles the output into an understandable form.
Advantages:
• Higher data transfer rate.
• Compatible with existing methods.
• More economical.
Disadvantages:
• Limited data-rate capability.
• Interference with existing signals.
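The TDMA slot-sharing idea described for GSM can be illustrated with a toy multiplexer/demultiplexer. The frame layout below (one slot per user per frame) is a simplified assumption for illustration, not the actual GSM burst structure:

```python
# Illustrative sketch of TDMA-style time-slot multiplexing: several users
# share one channel by transmitting in fixed, repeating time slots, and the
# receiver demultiplexes them by slot position.

def build_frames(streams):
    """Interleave per-user data: frame t carries streams[u][t] in slot u."""
    length = max(len(s) for s in streams)
    return [[s[t] if t < len(s) else None for s in streams]
            for t in range(length)]

def demux(frames, user):
    """Receiver side: pick one user's slots back out of the frames."""
    return [frame[user] for frame in frames if frame[user] is not None]

frames = build_frames([["a1", "a2"], ["b1"]])
```

Each frame carries one slot per user, so user 0's data always occupies slot 0; the receiver recovers a stream simply by reading its fixed slot in every frame.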
3 Proposed System The proposed system consists of several phases, shown in Fig. 1:
• Data collection.
• Transmitting the data to the server.
• Analytics phase.
The data-collection phase deals with the large amount of data that has to be gathered to classify the raw readings into analog and digital, based on the data
Fig. 1 System block diagram
received from the sensors. The collected data passes through several phases to make it clearer, more precise, and more accurate.
Components Used:
• Sensor Unit The Arduino board depends on the sensors, which provide the functionality of the individual components in the unit. Sensors are electronic devices that measure physical parameters of the real world. Each sensor's output can be prioritized and displayed, distinguishing the kinds and categories of results obtained as either analog curves or digital values. Analog sensors produce results as sinusoidal waveforms, whereas digital sensors produce unit values of 1 or 0, where 1 indicates ON and 0 indicates OFF. The components are shown in Fig. 2.
• Temperature Sensor The temperature sensor, the LM35, has long been a household friend of the villagers. Here it covers a range of 23–50 °C, handling the analysis phase and data collection from the environment through the A1 and A0 pins. It can be tested simply with a common electrical bulb.
• Humidity Sensor Humidity values increase with the volume of water droplets in the air. In general, high values are noticed when there is no point of contact between the sensor and intermediate sensors. It
Fig. 2 System architecture
has small bulbs attached to the sensor body. Humidity sensors work by detecting changes in electrical current or temperature in the air. There are three basic types of humidity sensor:
• Capacitive
• Resistive
• Thermal.
• Soil Moisture Sensor The soil moisture IoT sensor has two rod-shaped probes whose reading depends on the moisture level of the sample soils, namely red soil, black soil, and loamy soil. It can withstand high moisture levels in the samples, making it very useful for research that requires careful interpretation of readings across the varied soils found in the villages.
• Arduino Board The Arduino board is the transmitter side of the electronic assembly, comprising the LCD screen, the ADC unit, and the sensors. The ADC unit converts the data from analog form to digital form. The pin configuration is defined manually according to the components on the electronic board.
• IoT Board The IoT board relays the data from the receiver to the server with the aid of server requests and REST API methodologies. The graphs can then be generated through the Wi-Fi module, which has its full configuration set up with at most
Fig. 3 IoT board
32 pins, which can easily be configured following the fundamental data-transmission mechanism shown in Fig. 3.
• Server The server acts as an intermediary between the Arduino board and the display unit, controlling the functionality of both components. It handles inputs and outputs simultaneously without delaying requests and responses. A server provides data to other computers, storing and scaling the data across the nodes of the embedded board. It collects the data transmitted by the sensor unit to the computer via the IoT and Arduino boards. It is typically used to predict the crops that can be grown in the given soil from the data recorded by the sensor unit. The data collected by the server also serves as a reference for other calculations when a prediction goes wrong. The diverse nature of the data helps us analyze the results in a better way, and newly discovered patterns in the data are the secret behind faster pattern matching and searching.
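The server role described above — ingesting readings relayed from the IoT board, keeping the full history for reference, and answering queries from the display unit — can be sketched as a minimal in-memory model. The field names (`node`, `temp`, `humidity`) are illustrative assumptions, not the paper's actual schema:

```python
# Minimal in-memory model of the server: stores every reading for later
# reference and tracks the latest reading per sensor node.

class SensorServer:
    def __init__(self):
        self.history = []    # every reading ever received, kept as reference
        self.latest = {}     # most recent reading per node

    def ingest(self, reading):
        """Accept one reading relayed from the IoT board."""
        self.history.append(reading)
        self.latest[reading["node"]] = reading

    def query(self, node):
        """Answer the display unit: latest reading for a node, or None."""
        return self.latest.get(node)

server = SensorServer()
server.ingest({"node": "field-1", "temp": 31.2, "humidity": 58})
server.ingest({"node": "field-1", "temp": 30.8, "humidity": 61})
```

Keeping the full history alongside the latest snapshot mirrors the text's point that earlier recorded data acts as a reference when a prediction goes wrong.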
4 Results and Discussion To predict the crops suitable for an agricultural field, it is advised to keep the sample soil with germinated seeds for at least 5 days to obtain the desired results. The sensor unit then begins its work by sensing changes in the temperature, humidity, and soil moisture of the soil. Based on the obtained results, the server records the given data. This data is thoroughly cleaned and corrected for errors, then analyzed to produce accurate results. The algorithm used for crop prediction in this project is a neural network. It is a fast and effective algorithm that helps predict the output by increasing the efficiency of the machine-learning model. The model makes
Fig. 4 Soil moisture sensor graph
appropriate inferences by fitting a function to the data through careful analysis and extrapolation of patterns from the raw data, as shown in Fig. 4. The layers analyze the data periodically, and the graphs assist in examining the observed data. The server acts as an arbitrator, relaying the data from the IoT transmitter to the receiving Arduino kit. Whenever a new pattern is observed, the adjacent hidden layer is activated and triggered. The picture of the soil sample is processed by the hidden layers in a stepwise manner: first the edges of the given sample are detected in a single hidden layer, and then all the hidden layers are combined to recognize the image. This implies that the next time the network comes across an image of a sample it has already seen, it will have learned the relevant features of that sample, as shown in Figs. 5 and 6. Fig. 5 Humidity sensor graph
Fig. 6 Temperature sensor graph
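The hidden-layer combination described above can be illustrated with the forward pass of a toy fully connected network mapping the three sensor readings (temperature, humidity, soil moisture) to per-crop scores. The weights and the crop labels ("paddy", "millet") below are illustrative placeholders, not trained values from the paper:

```python
import math

# Toy forward pass of a small neural network: one hidden layer of two
# sigmoid units, then a linear output layer scoring two crop classes.
# All weights and labels are made-up illustrations.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

W_H = [[0.05, 0.02, 0.10],      # hidden unit 1 weights
       [-0.04, 0.03, -0.08]]    # hidden unit 2 weights
W_O = [[1.0, -1.0],             # score for class "paddy"
       [-1.0, 1.0]]             # score for class "millet"

x = [30.0, 60.0, 45.0]          # temperature °C, humidity %RH, moisture %
scores = forward(x, W_H, W_O)
crop = ["paddy", "millet"][scores.index(max(scores))]
```

In a real deployment the weights would come from training on the server's recorded sensor history; the sketch only shows how hidden units combine the raw readings into class scores.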
5 Conclusion To make agriculture more beneficial to farmers, it is combined with technology for predicting uncertain weather patterns and detecting improper soil conditions in the agricultural field caused by many environmental factors. Agriculture and technology should therefore go hand in hand. Humans have even set foot on the moon, yet we are still unable to solve the problems faced by farmers. Our aim is to make agriculture sustainable and scalable under any kind of environmental condition. We achieve this by using IoT and machine learning together: IoT alone can serve smart-home automation and other purposes, but when combined with machine learning it can produce results of real benefit to society.
References 1. P. Tripicchio, M. Satler, G. Dabisias, E. Ruffaldi, C.A. Avizzano, Towards smart farming and sustainable agriculture with drones, in Proceedings of IEEE International Conference on Intelligent Environment (IE), Prague, Czech Republic (2015), pp. 140–143 2. J. Gozalvez, New 3GPP standard for IoT [mobile radio]. IEEE Veh. Technol. Mag. 11(1), 14–20 (2016) 3. Z. Yu, L. Xugang, G. Xue, L. Dan, IoT forest environmental factors collection platform based on zigbee. IEEE Sensors J. 14(5), 51–62 (2014) 4. J.G. Jaguey, J.F. Villa-Medina, A. Lopez-Guzman, M.A. Porta-Gandara, Smartphone irrigation sensor. Cybern. Inf. Technol. 15(9), 5122–5127 (2015) 5. Y.D. Beyeneetal, NB-IoT technology overview and experience from cloud-RAN implementation. IEEE Wireless Commun. 24(3), 26–32 (2017) 6. A.H. Ngu, M. Gutierrez, V. Metsis, S. Nepal, Q.Z. Sheng, IoT middleware: a survey on issues and enabling technologies. IEEE Internet Things J. 4(1), 1–20 (2017) 7. T.A. Shinde, J.R. Prasad, IoT based animal health monitoring with naïve Bayes classification. Int. J. Emerg. Trends Technol. 1(2), 252–257 (2017)
8. O. Chieochan, A. Saokaew, E. Boonchieng, IoT for smart farm: a case study of the Lingzhi mushroom farm at Maejo University, in Proceedings of 14th International Joint Conference on Computer Science and Software Engineering (JCSSE) (2017), pp. 1–6 9. T. Baranwal, Nitika, P.K. Pateriya, Development of IoT based smart security and monitoring devices for agriculture, in 6th International Conference—Cloud System and Big Data Engineering (IEEE, 2016). 978-1-4673-820-8/16 10. S. Prince Mary, S. Vijaya Lakshmi, S. Anuhya, Color detection and sorting using internet of things machine. J. Comput. Theor. Nanosci. 16(8), 3276–3280 (2019) 11. G. Nagarajan, R.I. Minu, Wireless soil monitoring sensor for sprinkler irrigation automation system. Wireless Pers. Commun. 98(2), 1835–1851 (2018) 12. S. Thamizhselvi, P.S. Mary, A survey about data prediction in wireless sensor networks with improved energy efficiency. Res. J. Pharm. Biol. Chem. Sci. 7(2), 2118–2120 (2016) 13. Sheela, A.C. Santha, C. Kumar, Duplicate web pages detection with the support of 2D table approach. J. Theor. Appl. Inf. Technol. 67(1) (2014) 14. J. Refonaa, M. Lakshmi, Accurate prediction of the rainfall using convolutional neural network and parameters optimization using improved particle swarm optimization. J. Adv. Res. Dyn. Control Syst. 11(2), 7275–7285 (2019). ISSN 1943-023x 15. L. Jany Shabu, C. Jayakumar, Comparison of multimodal image fusion in brain tumor detection by ABC optimization and genetic algorithm. J. Adv. Res. Dyn. Control Syst. 11(2) 16. P. Asha, S. Srinivasan, Hash algorithm for finding associations between genes. J. Biosci. Biotechnol. Res. Asia ‘BBRA’ 12(1), 401–410 (2015). ISSN: 0973-1245 17. M. Selvi, P.M. Joe Prathap, Analysis & classification of secure data aggregation in wireless sensor networks. Int. J. Eng. Adv. Technol. 8(4), 1404–1407 (2019) 18. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 
10(2), 12 (2019) 19. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0033–0036 20. C.H. Kumar, A.S. Sangari, An efficient distributed data processing method for smart environment. Indian J. Sci. Technol. 9, 31 (2016) 21. R. Surendran, B. Keerthi Samhitha, K.P.R.C Primson, D. Anita, T. Sridevi, K. Shyamala, A. Murugan, Energy aware grid resource allocation by using a novel negotiation model. J. Theor. Appl. Inf. Technol. 68(3), (2014) 22. R. Sethuraman, E. Sathish, Intelligent transport planning system using GIS. Int. J. Appl. Eng. Res. 10(3), 5887–5892 (2015) 23. A. Jesudoss, N.P. Subramaniam, Enhanced certificate-based authentication for distributed environment, in Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (Springer, New Delhi, 2015), pp. 675–685 24. A. Velmurugan, T. Ravi, Enabling secure data transaction in bio medical engineering using CCart approach. J. Theor. Appl. Inf. Technol. 92(1), 37 (2016)
Student Location Tracking Inside College Infrastructure K. Yedukondalu, K. Chaitanya Nag, and S. Jancy
Abstract This work covers the Android application development of a GPS-based student location tracker, with which, from any mobile phone, any other GPS-enabled handset can be located. Although the target user may be anywhere inside the college infrastructure, they must have network connectivity and GPS enabled. In past literature, this is typically done based on predefined rules, which have been confirmed to be valid. However, these rule-based methods largely depend on researchers' own insight, which is inevitably subjective and arbitrary. Moreover, they are not effective enough to process the massive amount of data in the era of big data. Here, smartphone-based GPS tracking data are targeted, and the student attendance system is maintained using RFID. A group of attributes, such as campus GPS mapping, is derived to describe the phone holders' movement status. In other words, the tracking points can be recognized as being in a traveling or non-traveling state, based on which student violations during school hours are easily detected. Keywords AVD · Android device · GPS · JSON · LAMP · Location tracking discover
K. Yedukondalu (B) · K. Chaitanya Nag · S. Jancy Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] K. Chaitanya Nag e-mail: [email protected] S. Jancy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_40
K. Yedukondalu et al.
1 Introduction These days, most people use mobile phones for their daily purposes, since Android phones offer good memory capacity, processing speed, and high data transfer rates [1, 2]. Android is a Linux-based operating system with Java support, and it comes with open-source software [3, 4]. Many map-based Android applications are available in the Google Play store. Maps guide users from one place to another; with Google Maps, GPS is used to find a specific location in outdoor conditions [5]. Using such applications, people can easily find airports, shopping malls, and so on [6]. GPS (Global Positioning System) is one of the most popular navigation systems in the world [7]. However, it provides high accuracy only for outdoor conditions, not indoors [8–11]. Many college campuses, malls, and organizations are large, so people find it difficult to locate places inside them [12]. There are no effective features for finding locations inside buildings [13–15]. In this application, indoor location-based services are used to find the current location of mobile clients [16]. Indoor location-based services are an extension of location-based services [17, 18], used for tracking locations inside buildings or campuses. The IndoorAtlas Android SDK is used for indoor navigation [19, 20]. The SDK offers features such as indoor positioning with high accuracy and floor-level detection [21, 22]. In IndoorAtlas, to track a desired location, the floor details for that location are updated and the route inside the building is then fixed [23, 24].
2 Related Work See Table 1.
3 Existing System In the contemporary framework, India has a large number of academies, and education is one of the widespread endeavors engaging many entities who wish to provide material to individuals. Today, many rural colleges face basic problems such as students bunking lectures and meeting with accidents. India's department of education raises queries to the institutions about such irregularities. The education office also seeks records of the sizable number of absentees, which are difficult to maintain. Figure 1 depicts a prototype of the existing procedure (Fig. 1).
Table 1 Literature survey for various algorithms
• "An automated student attendance tracking system based on voice print and location" — Siyu Yang, Xinxing Huang (2016). Identifying a pupil is collaboratively verified by means of voice print and real-time location.
• "Transport system using integrated GPS" — K. Radha, A. Kumari, M. Priyanka (2016). Development of a semi-autonomous intimation system to inform parents about the presence of their wards on the university campus.
• "Real-time transportation system and location tracking using android application" — C. Harish Prasad, K. Rahul, R. Sathya Narayanan (2017). Provides basic information about the location of the bus and its empty seats, helping passengers determine the time of arrival at the next destination point.
• "Location tracking system based on GPS via android device" — Uddin, Md. Nadim, Md. Palash (2013). With the latest technological improvements of modern science, people now expect statistics about the location of any object for tracking purposes; more location-based services are needed, both to be advanced and to save money and time.
Disadvantages of existing system
• There are numerous inconveniences in the conventional framework.
• This system is less organized as well as less flexible.
• The system uses a lot of paper, which is wasted at the end of the day. It is time consuming as well as less user friendly.
• The system is poorly integrated, as it does not bring together the various participants in the system — students, teachers, and so on. It also has low accessibility, since students cannot access their attendance easily.
4 System Architecture Phones constantly record a series of GPS tracking-point data including user identity, latitude, longitude, altitude, timestamp, and so on. Specifically, the user identity is the number assigned to each respondent, while the timestamp, latitude, and longitude are the temporal
Algorithm used
1. Initialize the LCD display.
2. Initialize the baud rate at 38400.
3. Use AT commands to read the SMS data.
4. Send AT+CMGR=1.
5. Read the data from the GSM modem.
6. Scroll the message string on the bottom line of the display.
7. Display the phone number on the top line.
8. Repeat this action for all 30 SMS slots.
9. Once an SMS has been read, delete it.
10. If an SMS is important, scroll it as many times as declared.
Fig. 1 Procedure for existing algorithm
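Steps 4–5 of the routine above read a message with `AT+CMGR=1` and then parse the modem's reply. A sketch of that parsing is shown below; the header layout (`+CMGR: <status>,<sender>,,<timestamp>` followed by the message body) follows the usual text-mode convention, and the sample reply itself is made up for illustration:

```python
# Sketch of parsing a GSM modem's text-mode reply to AT+CMGR=1.

def parse_cmgr(reply):
    lines = [ln for ln in reply.splitlines() if ln.strip()]
    header = lines[0]
    if not header.startswith("+CMGR:"):
        raise ValueError("not a +CMGR reply")
    # A naive comma split is safe only for the first two fields we use here,
    # because the timestamp field itself contains a comma.
    fields = header[len("+CMGR:"):].split(",")
    return {
        "status": fields[0].strip().strip('"'),
        "sender": fields[1].strip().strip('"'),
        "body": lines[1],   # the message text follows the header line
    }

msg = parse_cmgr('+CMGR: "REC UNREAD","+911234567890",,"21/03/15,10:02:33"\r\n'
                 'Pump ON\r\n\r\nOK\r\n')
```

The parsed sender and body map directly to steps 6–7 of the routine: the phone number goes to the top line of the LCD and the message text scrolls on the bottom line.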
Fig. 2 Block diagram for location tracking system
coordinate and the spatial coordinates of the GPS tracking points. If the location of a respondent exceeds the prefixed mapping of the school area, an intimation is sent to the respective staff that the student is outside the school area (Fig. 2).
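The out-of-bounds check just described can be sketched as a geofence test of each tracking point against the prefixed campus mapping. The rectangular bounding box and the field names below are hypothetical illustrations:

```python
# Sketch of a geofence check: a tracking point outside the prefixed campus
# bounding box triggers an intimation to staff. Coordinates are made up.

CAMPUS = {"lat_min": 12.870, "lat_max": 12.875,
          "lon_min": 80.218, "lon_max": 80.224}

def outside_campus(lat, lon, box=CAMPUS):
    return not (box["lat_min"] <= lat <= box["lat_max"]
                and box["lon_min"] <= lon <= box["lon_max"])

def check_point(point):
    """Return an alert message when a student leaves campus, else None."""
    if outside_campus(point["lat"], point["lon"]):
        return "ALERT: student %s outside campus at %s" % (
            point["id"], point["time"])
    return None
```

A real campus boundary is rarely a rectangle; a production system would test the point against a polygon of the campus outline, but the box keeps the idea of the check visible.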
5 Proposed System Steps for proposed algorithm Step 1: The decision tree starts with all training instances associated with the root node. Step 2: The dataset is split into a training set and a testing set.
Fig. 3 Procedure for proposed algorithm
1. Drawing inference from decision trees
2. Input: classical and spatial decision trees
3. Output: implicit accident patterns
4. for each node do
5. ni = number of tuples in the node
6. pi = number of tuples in the parent node
7. support = pi / N
8. confidence = ni / pi
9. if support > thresh and confidence > thresh
10. then
11. inference drawn end
12. else discard end
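The inference rule in the pseudocode above keeps a tree node as a pattern only when both its support and its confidence exceed a threshold. A direct sketch, with illustrative counts and an assumed threshold value:

```python
# Support/confidence test from the decision-tree inference pseudocode.
# n_i: tuples in the node, p_i: tuples in its parent, N: total tuples.
# The threshold 0.1 is an illustrative assumption.

def draw_inference(n_i, p_i, N, thresh=0.1):
    support = p_i / N         # fraction of all tuples reaching the parent
    confidence = n_i / p_i    # fraction of the parent's tuples in this node
    keep = support > thresh and confidence > thresh
    return keep, support, confidence

keep, sup, conf = draw_inference(n_i=30, p_i=50, N=200)
```

A node with high confidence but negligible support (a pattern seen in almost no tuples) is discarded, which is what prevents the tree from reporting spurious violation patterns.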
Step 3: Information gain is used to choose the attribute that labels each node; the subsets created contain instances with similar attribute values (Fig. 3).
Advantages of proposed system
• Low power consumption.
• Flexible and reliable.
• More dependable than manual operation.
• Automatically controlled and simple to use.
6 Conclusion Smartphone-based GPS tracking records are the focus of this paper. A group of attributes, such as campus GPS mapping, is derived to represent the phone holders' travel status. In other words, the tracking points can be recognized as being in a traveling or non-traveling state, based on which student violations during college hours are easily detected.
7 Results and Discussion See Figs. 4, 5 and 6.
Fig. 4 Registration form
Fig. 5 Login form
Fig. 6 Location from GPS
References 1. V. Vineetha, P. Asha, A symptom based cancer diagnosis to assist patients using naive Bayesian classification. Res. J. Pharm. Biol. Chem. Sci. 7(3), 444–451 (2016) 2. A. Sivasangari, S. Bhowal, R. Subhashini, Secure encryption in wireless body sensor networks, in Emerging Technologies in Data Mining and Information Security (Springer, Singapore, 2019), pp. 679–686 3. S. Jancy, C. Jayakumar, Pivot variables location based clustering algorithm for reducing dead nodes in wireless sensor network. Neural Comput. Appl. 31, 1467–1480 (2019) 4. S. Jancy, C. Jayakumar, Sequence statistical code based data compression algorithm for wireless sensor network. Wirel. Pers. Commun. 106, 971–985 (2019) 5. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019) 6. J. Jose, S.C. Mana, B. Keerthi Samhitha, An efficient system to predict and analyze stock data using Hadoop techniques. Int. J. Recent Technol. Eng. (IJRTE). 8(2) (2019). ISSN: 2277-3878 7. Nandini, D. Usha, E.S. Leni, Efficient shadow detection by using PSO segmentation and regionbased boundary detection technique. J. Supercomput. 75(7), 3522–3533 (2019) 8. N. Schuessler, K. Axhausen, Processing raw data from global positioning systems without additional information. Transp. Res. Rec. J. Transp. Res. Board (2105), 28–36 (2009) 9. J. Ogle, R. Guensler, W. Bachman, M. Koutsak, J. Wolf, Accuracy of global positioning system for determining driver performance parameters. Transp. Res. Rec. J. Transp. Res. Board (1818), 12–24 (2002) 10. Shirehjini, A.A. Nazari, Equipment location in hospitals using RFID-based positioning system. IEEE Trans. Inf. Technol. Biomed. 16(6) (2012) 11. A.N. Shirehjini, A. Yassin, S. Shirmohammadi, An RFID based position and orientation measurement system for mobile objects in intelligent environments. IEEE Trans. Inst. Meas. 61(6), 1664–1675 (2012) 12. V. Kanimozhi, T.P. 
Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0033–0036 13. D.-T. Pham, B.A.M. Hoang, S.N. Thanh, H. Nguyen, V. Duong, A constructive intelligent transportation system for urban traffic network in developing countries via GPS data from multiple transportation modes, in Proceedings of IEEE 18th International Conference on Intelligent Transportation Systems (2015), pp. 1729–1734 14. C. Parsuvanathan, Big data and transport modelling: opportunities and challenges. Int. J. Appl. Eng. Res. (17), 38038–38044 (2015) 15. Y. Asakura, E. Hato, Tracking survey for individual travel behaviour using mobile communication instruments. Transp. Res. C Emerg. Technol. 12(3–4), 273–291 (2004) 16. P. Kumari, S. Jancy, Privacy preserving similarity based text retrieval through blind storage. Am. Eurasian J. Sci. Res. 11(5), 398–404 (2016) 17. A. Velmurugan, T. Ravi, Enabling secure data transaction in bio medical engineering using CCart approach. J. Theor. Appl. Inf. Technol. 92(1), 37 (2016) 18. V.V. Kaveri, S. Gopal, Notifying and filtering undesirable messages from online social network (OSN), in International Conference on Innovation Information in Computing Technologies (IEEE, 2015), pp. 1–8 19. S. Itsubo, E. Hato, Effectiveness of household travel survey using GPS-equipped cell phones and web diary: comparative study with paper based travel survey, in Proceedings of Transportation Research Board 85th Annual Meeting (2006), p. 13 20. P. Stopher, C. FitzGerald, J. Zhang, Search for a global positioning system device to measure person travel. Transp. Res. C Emerg. Technol. 16(3), 350–369 (2009) 21. R. Sethuraman, E. Sathish, Intelligent transport planning system using GIS. Int. J. Appl. Eng. Res. 10(3), 5887–5892 (2015)
22. A. Jesudoss, N.P. Subramaniam, Enhanced certificate-based authentication for distributed environment, in Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (Springer, New Delhi, 2015), pp. 675–685 23. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015) 24. S. Thamizhselvi, P.S. Mary, A survey about data prediction in wireless sensor networks with improved energy efficiency. Res. J. Pharm. Biol. Chem. Sci. 7(2), 2118–2120 (2016)
Finding Smelly or Non-smelly Using Injected and Revision Method B. Suresh and A. C. Santha Sheela
Abstract Code smells are simple programmatic qualities that can indicate a code or design problem, making software difficult to evolve and maintain and motivating code refactoring. Recent research is active in defining automatic detection tools to help humans find smells when code size becomes unmanageable for manual review. Since the definitions of code smells are informal and subjective, assessing how effective code smell detection tools are is both important and difficult to accomplish. This paper reviews the current landscape of tools for automatic code smell detection. It defines research questions about the consistency of their responses, their ability to reveal the regions of code most affected by structural decay, and the relevance of their responses with respect to future software evolution. It answers them by analyzing the output of four representative code smell detectors applied to six different versions of GanttProject, an open-source system written in Java. The results of these experiments illuminate what current code smell detection tools can do and what the relevant areas for further improvement are. Keywords Perplexing · Unwavering · Discrepancy
1 Introduction These days there is an expanding number of programming investigation instruments accessible for identifying terrible programming works on, featuring abnormalities, and when all is said in done expanding the familiarity with the product engineer about the basic highlights of the program being worked on [1]. As these apparatuses B. Suresh (B) · A. C. Santha Sheela Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. C. Santha Sheela e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_41
B. Suresh and A. C. Santha Sheela
are picking up acknowledgment by and by, an inquiry emerges on the most proficient method to survey their adequacy and select the “best” one [2]. For this angle, we need to confront the normal and exceptionally troublesome issue of hardware correlation and approval of the outcomes. Right now concentrate on code smells and on programmed instruments created for their identification [3]. Code smells are basic qualities of programming that may show a code or plan issue and can make programming hard to advance and keep up [4]. The idea was presented by Fowler, who characterized 22 various types of scents. Afterward, different creators (e.g., Mäntylä) recognized more scents, and new ones can be found. Code smells are carefully identified with the act of refactoring programming to improve its inside quality [5]. As designers recognize awful stenches in code, they ought to assess whether their essence alludes to some pertinent debasement in the structure of the code, and if positive, choose which refactoring ought to be applied. Utilizing a representation, smells resemble the manifestations of potential sicknesses, and refactoring activities may recuperate the related infections and expel their side effects [6]. Since a similar indication might be brought about by various infections, or even by no ailment by any means, human judgment is essential for evaluating smells with regards to the venture where they are found. Programmed apparatuses, then again, can assume an important job in reducing the errand of discovering code smells in huge code bases [7]. Not really all the code smells must be expelled: it relies upon the framework. At the point when they must be evacuated, it is smarter to expel them as ahead of schedule as could be expected under the circumstances. 
If we want to remove smells from the code, we first need to locate and identify them; tool support for their detection is particularly valuable, since many code smells can go unnoticed while programmers are working [8]. However, since the notion of a code smell is vague and prone to subjective interpretation, assessing the effectiveness of code smell detection tools is especially challenging [9, 10]. Different tools may give different results when they analyze the same system, for several reasons [11]. The ambiguity of the smell definitions, and hence their possibly different interpretations, is resolved by the tool implementers [12, 13]. Several words and constraints are routinely used to characterize the essence of many smells, for example the words "few," "some," "enough," "large," "intimate," which are clearly ambiguous. In other cases, the definitions are too vague and must be refined and improved to enhance the detection tools [14]. Different detection techniques are used by the tools [15]. They are typically based on the computation of a particular set of combined metrics, either standard object-oriented metrics or metrics defined ad hoc for the smell detection purpose [16]. Different threshold values are used for the metrics, even when the techniques are analogous or identical. To establish these values, several factors must be considered, for example the system domain and size, organization best practices, and the expertise and understanding of the software engineers and developers who define these values [17]. Changing thresholds clearly
Finding Smelly or Non-smelly …
greatly affects the number of detected smells, e.g., too many or too few. In general, validation of the results of existing approaches is rare, done only on small systems and for few smells [18]. Validation of the detection results is complex, both for the reasons cited above and because of problems related to manual validation [19]. The manual validation of the results of a tool may not be completely correct, even if the manual validators are given the same criteria used by the tool (e.g., smell definition and detection rule). In fact, the values of certain metrics, like those used for example to measure method complexity, can differ between a manual evaluation done by a programmer and the automatic computation of the same metric done by a tool [20]. Moreover, the manual computation can differ from one developer to another: it is subjective. However, the results of manual and automatic detection of smells identified through simple metrics, such as counting entities, tend to have greater agreement [21].
1.1 Bad Smells: Definitions and Their Detection

Fowler identifies 22 code smells and associates each of them with refactoring actions that could improve the code structure. He defines code smells informally, holding that only human intuition can decide whether a refactoring should be performed [22]. Some smells indicate genuine code problems (e.g., long parameter lists make methods harder to invoke). Others are only symptoms of a possible issue: for example, a method exhibits feature envy when it uses the features of a class different from the one where it is declared, which may indicate that the method is misplaced, or that some pattern such as Visitor is being applied. As Zhang et al. observe, Fowler's definitions are too informal to implement directly in a smell detection tool [23]. The vague definitions of the smells greatly affect their detection. We give here the definitions of the code smells on which we focus in this work.
1.1.1 Smells Definition
Duplicated code means that the same code structure appears in more than one place. Feature envy means that a method is more interested in other class(es) than in the one where it is currently located [24]; such a method is in the wrong place, since it is more tightly coupled to the other class than to the one where it currently resides. Large class means that a class is trying to do too much; these classes have too many instance variables or methods. Long method is an excessively long method, which is hard to understand, change, or extend [25]. The long method smell is similar to the brain method smell of Lanza et al., which tends to centralize the functionality of a class, just as a God class centralizes the functionality of a whole subsystem, sometimes even a whole system. Long parameter list is an excessively long, and therefore difficult to understand, list of parameters.
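To make the definitions above concrete, the sketch below shows how a detector might flag large class, long method, and long parameter list from simple size metrics. The thresholds and the input format are illustrative assumptions, not values prescribed by Fowler or by any of the tools discussed later.

```python
# Illustrative metric-based smell detection; all thresholds are example values.
def detect_smells(cls, max_methods=20, max_fields=15, max_method_loc=50, max_params=5):
    """cls: dict with 'name', 'fields' (list of names) and 'methods'
    (list of dicts with 'name', 'loc', 'params'). Returns (smell, entity) pairs."""
    smells = []
    # Large class: too many methods or instance variables.
    if len(cls["methods"]) > max_methods or len(cls["fields"]) > max_fields:
        smells.append(("large_class", cls["name"]))
    for m in cls["methods"]:
        # Long method: body exceeds a line-count threshold.
        if m["loc"] > max_method_loc:
            smells.append(("long_method", m["name"]))
        # Long parameter list: too many formal parameters.
        if len(m["params"]) > max_params:
            smells.append(("long_parameter_list", m["name"]))
    return smells
```

Real detectors compute such metrics from parsed source code; the point here is only that each informal definition reduces to a metric plus a threshold, which is exactly where the tools' differing threshold choices come from.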
1.1.2 Smells Detection
This subsection outlines how the smells can be detected and discusses some of the relevant issues that must be faced for their automatic detection. Some of the considerations we report here have been described by Mäntylä. We outline below how some tools detect the above smells, and we refer to the following section for the descriptions of, and references to, the tools.
1.1.3 Duplicated Code
Finding duplicates can be done by measuring the percentage of duplicated code lines in the system. The difficulty in measuring this smell lies in the various possible kinds of duplication. Exact duplication is simple to detect using diff-like techniques. Other kinds of duplication involve entity renaming or reordering, so the detection technique must be able to handle the syntax of the programming language and needs more computational time to handle all the possible combinations of candidate renamings. Another issue is that duplicated code is often slightly changed and mixed with different code. The detection strategy of iPlasma and inFusion is to identify duplicated code through a set of metrics about exact code duplication, taking into account also the length of the duplication and the distance between two duplications. Checkstyle detects this smell by simply counting 12 consecutive duplicated lines in the text of the program; the piece of duplicated code can also span different methods or classes. PMD considers as an occurrence of the smell a piece of code that is duplicated at least once and that is made of at least 25 tokens.
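A diff-like detector for exact duplication can be sketched by hashing windows of consecutive, whitespace-normalized lines. The default window of 12 lines borrows Checkstyle's figure mentioned above; everything else (function name, input format) is an illustrative assumption.

```python
from collections import defaultdict

def find_duplicated_windows(lines, window=12):
    """Return groups of starting indices whose `window` consecutive
    (whitespace-normalized) lines are identical."""
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        # Normalize leading/trailing whitespace so indentation changes don't hide clones.
        key = tuple(l.strip() for l in lines[i:i + window])
        seen[key].append(i)
    return [locs for locs in seen.values() if len(locs) > 1]
```

Handling renamed or reordered clones, as discussed above, requires token- or AST-based comparison rather than raw line hashing, which is why those detectors need to understand the language syntax and cost more to run.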
1.1.4 Feature Envy
The detection of feature envy can be achieved by measuring the strength of coupling that a method has to methods (or data) belonging to foreign classes. No other common measures are exploited for the detection of this smell.

Taxonomies or classifications of smells: Fowler, in his book on code smells and refactoring, does not propose any smell classification or taxonomy. The first taxonomy we are aware of was proposed by Mäntylä et al. with the aim of identifying possible correlations among smells. They identify six disjoint categories, which include all the smells identified by Fowler. Wake proposes a different classification in his book. He first distinguishes smells within classes and smells between classes, defines new smells, and classifies all the smells of Fowler together with the new ones into nine categories. In his book, Wake also describes the refactorings needed to remove all the described smells and how the classification suggests possible smell relations. Lanza et al. define 11 smells in three different categories, 7 from Fowler and 6 new smells. Their classification is particularly oriented to the identification of relations existing among smells. Moha et al. propose a generic classification of the smells as intraclass and interclass smells, as in the classification of Wake. They then identify three categories (structural, lexical, measurable) in which they classify the smells of Fowler, with some overlap since they place some smells in more than one category. They propose this kind of classification with the aim of simplifying the detection process of the different smells. Indeed, they state that "for instance, the detection of a structural smell may essentially be based on static analyses; the detection of a lexical smell may rely on natural language processing; the detection of a measurable smell may use metrics."
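The coupling-based feature envy heuristic described above can be sketched by comparing how many accesses a method makes to foreign classes versus its own class. The input format and the "more foreign than local accesses" rule are illustrative assumptions; real tools use calibrated metrics and thresholds for this comparison.

```python
def has_feature_envy(own_class, accesses):
    """accesses: list of (class_name, member) pairs touched by the method.
    Flag the method if some foreign class is used more than its own class."""
    counts = {}
    for cls, _member in accesses:
        counts[cls] = counts.get(cls, 0) + 1
    own = counts.get(own_class, 0)
    foreign = {c: n for c, n in counts.items() if c != own_class}
    # Envious if any single foreign class attracts more accesses than the home class.
    return bool(foreign) and max(foreign.values()) > own
```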
2 Motivation

Since examining code manually to identify code smells is time consuming, error prone, and expensive, researchers have built a number of automatic code smell detection tools. These tools have, directly or indirectly, classified source code elements for code smells in large projects. The tools differ markedly in how they work, and they all have limitations. Amorim et al. used PMD, CheckStyle, JDeodorant, and inFusion to detect four kinds of smells in Gantt Project: Blob, Long Parameter List (LPL), Long Method (LM), and Feature Envy (FE). Notably, these tools agree with one another remarkably little. Because of this disagreement, we cannot guarantee the reliability of the labels these tools produce. To address this issue, we propose a new approach that uses application-creation data to improve the reliability of the labels in a code smell dataset. We answer these questions by investigating the output of four representative code smell detectors applied to six different versions of Gantt Project, an open-source system written in Java. The results of these tests illuminate what current code smell detection tools can do and what the relevant areas for further improvement are.
3 Proposed System

We propose a prioritization approach for code smells based on the context of developers. Using automated impact analysis, we define a context-relevance index (CRI) to be used as a priority measure for code smells. We present empirical analyses of the characteristics of this method and of the factors that may influence the quality of its findings, along with a controlled experiment showing that our technique can rank first the code smells that practitioners themselves choose to prioritize. Advantage: we carried out a more extensive study of the influence of the accuracy of impact analysis on our methodology; that is, this project considers more forms of impact analysis. We also launched a study combining severity-based and history-based methods. Design is a multi-step process focusing on the software architecture: data structures, procedural detail, algorithms, and module interfaces. The design process converts requirements into a representation of the software that can be assessed before coding begins. Software design has changed continually as new methods, better analysis, and a broader understanding of its frontiers have developed; it is still relatively early in its evolution. Therefore, software design methodology lacks the depth, versatility, and quantitative nature normally associated with more traditional engineering disciplines. Nevertheless, techniques for software design exist, criteria for design quality are available, and design notation can be applied (Fig. 1).
Fig. 1 Outlook of proposed system
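The text does not give a formula for the context-relevance index, so the following is purely a hypothetical sketch of the idea: combine a smell's static severity with how often its file appears in the developer's recent change context, then rank smells by the combined score. All names, weights, and fields here are our own illustrative assumptions, not the paper's definition.

```python
def context_relevance_index(smell, recent_changes, w_severity=0.5, w_context=0.5):
    """smell: dict with 'file' and 'severity' in [0, 1].
    recent_changes: list of file paths the developer touched recently.
    Returns a score in [0, 1]; higher means prioritize sooner."""
    touches = recent_changes.count(smell["file"])
    context = touches / len(recent_changes) if recent_changes else 0.0
    return w_severity * smell["severity"] + w_context * context

def prioritize(smells, recent_changes):
    # Highest combined score first: smells in files the developer is working on win.
    return sorted(smells, key=lambda s: context_relevance_index(s, recent_changes),
                  reverse=True)
```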
4 Conclusion

The key insight of this approach is that modern deep learning techniques are able to select helpful features for code smell detection and map these features to a binary (smelly or non-smelly) prediction. In this project, we propose a deep learning approach for detecting code smells. To realize the full potential of deep learning for smell detection, we propose an automated method for generating labeled training data. The created dataset is much larger than traditional manually built datasets, making it practical to detect code smells with deep learning techniques.

Acknowledgements We gratefully acknowledge the Board of Management of SATHYABAMA for their kind encouragement toward the successful completion of this project. We thank Dr. T. Sasikala, M.E., Ph.D., Dean, School of Computing, and Dr. S. Vigneshwari, M.E., Ph.D., and Dr. L. Lakshmanan, M.E., Ph.D., Heads of the Department, Department of Computer Science and Engineering, for providing vital assistance and timely information during the continuous assessments. We convey our heartfelt thanks to our project mentor, A. C. Santha Sheela, M.E., Assistant Professor, Department of Computer Science and Engineering, for her precious advice, ideas, and continuous support toward the successful completion of our project work. We also thank all the teaching and non-teaching staff members of the Department of Computer Science and Engineering who supported this project in many ways.
References

1. M.M. Lehman, Programs, life cycles, and laws of software evolution. Proc. IEEE 68(9), 1060–1076 (1980)
2. N. Brown, Y. Cai, Y. Guo, R. Kazman, M. Kim, P. Kruchten, E. Lim, A. MacCormack, R.L. Nord, I. Ozkaya, R.S. Sangwan, C.B. Seaman, K.J. Sullivan, N. Zazworka, Managing technical debt in software-reliant systems, in Proceedings of the Workshop on Future of Software Engineering Research, 18th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE) (ACM, 2010), pp. 47–52
3. P. Kruchten, R.L. Nord, I. Ozkaya, Technical debt: from metaphor to theory and practice. IEEE Softw. 29(6), 18–21 (2012)
4. F. Shull, D. Falessi, C. Seaman, M. Diep, L. Layman, Technical debt: showing the way for better transfer of empirical results, in Perspectives on the Future of Software Engineering (Springer, Berlin, 2013), pp. 179–190
5. W. Cunningham, The WyCash portfolio management system. OOPS Messenger 4(2), 29–30 (1993)
6. M. Fowler, Refactoring: Improving the Design of Existing Code (Addison-Wesley, Berkeley, CA, USA, 1999)
7. M. Tufano, F. Palomba, G. Bavota, R. Oliveto, M. Di Penta, A. De Lucia, D. Poshyvanyk, When and why your code starts to smell bad (and whether the smells go away). IEEE Trans. Softw. Eng. (2017)
8. M. Tufano, F. Palomba, G. Bavota, M. Di Penta, R. Oliveto, A. De Lucia, D. Poshyvanyk, An empirical investigation into the nature of test smells, in Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ACM, 2016), pp. 4–15
9. R. Arcoverde, A. Garcia, E. Figueiredo, Understanding the longevity of code smells: preliminary results of an explanatory survey, in Proceedings of the International Workshop on Refactoring Tools (ACM, 2011), pp. 33–36
10. S. Thamizhselvi, P.S. Mary, A survey about data prediction in wireless sensor networks with improved energy efficiency. Res. J. Pharm. Biol. Chem. Sci. 7(2), 2118–2120 (2016)
11. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2017)
12. A. Chatzigeorgiou, A. Manakos, Investigating the evolution of bad smells in object-oriented code, in Proceedings of the 2010 Seventh International Conference on the Quality of Information and Communications Technology, QUATIC'10 (IEEE Computer Society, 2010), pp. 106–115
13. R. Peters, A. Zaidman, Evaluating the lifespan of code smells using software repository mining, in IEEE European Conference on Software Maintenance and ReEngineering (2012), pp. 411–416
14. S. Olbrich, D.S. Cruzes, V. Basili, N. Zazworka, The evolution and impact of code smells: a case study of two open source systems, in Proceedings of the 3rd International Symposium on Empirical Software Engineering and Measurement, ESEM'09 (2009), pp. 390–400
15. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in IEEE International Conference on Communication and Signal Processing (ICCSP) (2019), pp. 0033–0036
16. M. Abbes, F. Khomh, Y.-G. Guéhéneuc, G. Antoniol, An empirical study of the impact of two antipatterns, blob and spaghetti code, on program comprehension, in Proceedings of the 15th European Conference on Software Maintenance and Reengineering, CSMR'11 (IEEE Computer Society, 2011), pp. 181–190
17. A. Yamashita, L. Moonen, Exploring the impact of inter-smell relations on software maintainability: an empirical study, in IEEE International Conference on Software Engineering (ICSE) (2013), pp. 682–691
18. F. Khomh, M. Di Penta, Y.-G. Guéhéneuc, G. Antoniol, An exploratory study of the impact of antipatterns on class change- and fault-proneness. Empirical Softw. Eng. 17(3), 243–275 (2012)
19. F. Palomba, G. Bavota, M. Di Penta, F. Fasano, R. Oliveto, A. De Lucia, On the diffuseness and the impact on maintainability of code smells: a large scale empirical study. Empirical Softw. Eng. (2017)
20. F. Palomba, G. Bavota, M. Di Penta, R. Oliveto, A. De Lucia, Do they really smell bad? a study on developers' perception of bad code smells, in IEEE International Conference on Software Maintenance and Evolution (ICSME) (2014), pp. 101–110
21. G. Nagarajan, R.I. Minu, A. Jayanthiladevi, Brain computer interface for smart hardware device. Int. J. RF Technol. 10(3–4), 131–139 (2019)
22. R. Nimesh, P. Veera Raghava, S. Prince Mary, B. Bharathi, A survey on opinion mining and sentiment analysis. IOP Conf. Ser.: Mater. Sci. Eng. 590(1), 012003 (2019)
23. J. Refonaa, M. Lakshmi, Cognitive computing techniques based rainfall prediction—a study, in IEEE International Conference on Computation of Power, Energy Information Communication (ICCPEIC) (2018), pp. 1–6
24. S.L. Jany Shabu, C. Jayakumar, Detection of brain tumor by image fusion using genetic algorithm. Res. J. Pharm. Biol. Chem. Sci. 7(5), 505–511 (2016)
25. M. Selvi, P.M. Joe Prathap, Performance analysis of QoS oriented dynamic routing for data aggregation in wireless sensor network. Int. J. Pharm. Technol. 9(2), 29999–30008 (2017)
Smart Bus Management and Tracking System
M. Hari Narasimhan, A. L. Reinhard Kenson, and S. Vigneshwari
Abstract In this advancing world, boarding buses has been one of the prominent problems faced by many commuters. We therefore propose a new smart system to allocate, schedule, and provide real-time information about the transport system: a mobile application-based live tracking system that tracks buses through scheduling and live tracking using the Google Maps application program interface (API), managed through an admin dashboard. The dashboard takes care of the management of the buses and drivers through allocation and scheduling. The required data is fed and supervised by the administrator through the admin dashboard. The driver app provides the real-time location of the buses. The location data is fed to a server hosted on Firebase and is then forwarded to the dashboard and on to the user application.

Keywords Google map · Firebase server · Dashboard · Administrator · Maps API
1 Introduction

Smartphones have become one of the most important devices in this progressive world: for every need of the user, there is an app. The smartphone has become one of the most relied-upon devices in human history [1]; hardly anyone gets by in this digital world without one. From ordering merchandise to ordering food, smartphones have paved the way for digital marketing and online amenities [2]. Every necessity of the user is satisfied by the handheld device, from transport to travel services

M. Hari Narasimhan (B) · A. L. Reinhard Kenson · S. Vigneshwari
Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India
e-mail: [email protected]
A. L. Reinhard Kenson
e-mail: [email protected]
S. Vigneshwari
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_42
made available all around the world at any given point of time [3]. Many private companies create mobile applications for commercial use by the public, through which many users benefit [4, 5]. Public transport is the means of transport for the majority of people [6]; other users include students and corporate professionals who share common transportation [7]. One of the major problems faced by these users is the inability to track their means of transport, which makes it challenging to follow bus schedules and manage their timings accordingly [8]. Scheduled timings tend to slip due to real-life problems like traffic, accidents, and congestion [9]. A user who wants to use common transportation is unable to predict arrival and departure timings due to various factors [10]. This system provides the user with the timing schedules and the live location of the transport system [11]. It also accommodates the bus drivers by displaying the buses individually allocated to them [12], which avoids confusion and provides a systematic, administered method for allocating buses [13]. It further includes facilities such as an emergency notification which notifies the administrator during emergencies. The applications are native and rely on Firebase as the server [14, 15]; Firebase is the server through which the applications and dashboard are interconnected.
2 Related Work

Researchers have investigated bus tracking continuously, and the use of dedicated IoT devices is becoming a relic of the past [16]. In the modern age, smartphones have become an essential and convenient platform for both commercial and research purposes [17]. Most major companies and start-ups use mobile applications for their business; some even use them as their main product and keep developing them further. Such companies and start-ups include Ola, Uber, and Swiggy [18, 19]. Previous research has largely depended on hardware for tracking. Kumbhar [20] proposed a methodology using three modules: a bus module, a central control unit, and a client-side module. Each module performs a separate task [21]. The bus module sends live tracking data to the central control unit, and when the user requests the location, the central control unit provides the live data to the user. From this process, we understand that the central control unit provides the requested location to the user, but there is no management system for managing buses and users at the same time. Kumbhar [20] implemented this methodology using a GPS module and GPRS for receiving the real-time location. Godge [22] proposed a management system which maintains the users as well as the locations of the buses [23]. Though they use an IoT methodology for live tracking, the management system has proven to be user-friendly; it helps users gain better access to the real-time location data. Hoq [4] proposed using mobile phones as the means of real-time tracking, via the GPS module built into them. This avoids the hardware maintenance required by IoT devices. The system consists of a Web application as the user interface, in which the location gathered by the mobile app is displayed [24, 25].
3 Proposed Approach

Considering the previous approaches taken by different researchers, we can conclude that tracking buses through IoT devices might work, but it requires maintenance, and managing the devices in case of failure can lead to mismanagement. This is one of the downfalls of the diverse approaches taken so far. We propose a system that overcomes this demerit while creating an efficient means of management and administration of the users, buses, routes, and drivers (Fig. 1). Three different modules are developed for the drivers, users, and administrator, respectively. We use Firebase as a backend server to interconnect these modules. The driver app transmits the real-time location of the bus through the smartphone and pushes it to the server. The administrator controls the drivers, supervises the routes, and assigns them to the buses accordingly with the help of the admin dashboard. The location is accessed and displayed with the help of the Google Maps API.
Fig. 1 Architectural design of proposed model
Fig. 2 Architectural design of admin dashboard
3.1 Dashboard

The dashboard is a significant part of the proposed system. It plays a crucial role in the allotment and management of the buses and routes. Only the admin is given the privilege to modify data from the dashboard; users other than the admin are not permitted to access it. The location data present in the server is retrieved and showcased there. The admin can access and modify the available bus details, their routes, and the drivers assigned to each specific bus and route. The server contains all the databases necessary for management by the administrator, including the bus, driver, and route databases. These databases are created and fed into the server at development time, then supervised and modified by the administrator. The admin credentials are given only to the administrator for accessing the admin privileges; the admin acts as the superuser of this system. Efficient user experience and functionality were the key aims while developing this dashboard (Fig. 2), so that the administrator understands the UI and confusion and mismanagement are avoided wherever possible.
3.2 Driver App

The driver app is the component used to get the real-time location from the driver's mobile device. Its main objective is to transmit the real-time location seamlessly to the Firebase server. The drivers are given individual credentials by the admin and can only access the application with those credentials. The main page of the driver app consists of a toggle switch which, when toggled, starts sending the live location of the mobile device to the server. It also consists of a separate section for sending emergency alerts and receiving circulars from the admin (Fig. 3).
Fig. 3 Architectural design of driver application
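The driver-to-server flow can be simulated with an in-memory stand-in for the Firebase real-time database. The `buses`-keyed layout and field names below are our own illustrative choices for the sketch, not the Firebase API.

```python
import time

class LocationStore:
    """In-memory stand-in for a real-time database keyed by bus id."""
    def __init__(self):
        self.db = {}

    def push_location(self, bus_id, lat, lng):
        # Driver app: overwrite the bus's latest position with a timestamp,
        # as the toggle switch would do while it is on.
        self.db[bus_id] = {"lat": lat, "lng": lng, "ts": time.time()}

    def get_location(self, bus_id):
        # User app / dashboard: read the latest known position, or None.
        return self.db.get(bus_id)
```

In the actual system the same shape of update would be written to Firebase by the driver app and observed by the dashboard and user app through the server rather than by direct reads.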
3.3 User App

The user application is linked with the Firebase server and has access to the buses and routes present in its database. The app implements a search engine for easier access by the user. When the user enters the required bus, the search engine displays the relevant results: the query is checked against the server's database and the location of that bus is displayed. The app uses the Google Maps API to display the real-time location of the bus (Fig. 4).
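The user app's search step amounts to filtering the bus database by the query and then looking up the matched bus's location. The record fields below are assumptions for illustration, not the actual database schema.

```python
def search_buses(buses, query):
    """buses: list of dicts with 'id' and 'route'.
    Case-insensitive substring match on either field."""
    q = query.lower()
    return [b for b in buses if q in b["id"].lower() or q in b["route"].lower()]
```

A matched record's id would then be used as the key for the location lookup against the server before handing the coordinates to the map view.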
4 Techniques and Methodologies Used

We use Firebase as the backend for this project; it acts as the backbone of the entire system. The dashboard is developed in React and connected with Firebase as its backend. The applications are built natively on both Android and iOS: we use Android Studio for the Android application, whereas the iOS application is developed with Swift and Xcode. The Google Maps API is used for transmitting and receiving the location data from the driver app to the Firebase server; the location is then received and displayed by the user app and the dashboard, respectively. The dashboard uses the Maps API, obtained as a React component with JavaScript as its framework, to access the live location of the buses.
Fig. 4 Architectural design of User Application
5 Results and Discussion

We have managed to create a user-friendly architecture with a full-fledged user interface. The dashboard consists of the necessary navigation pane, toolbars, content view, and maps (Fig. 5). The home page consists of icons that redirect to the respective overview controller page according to the user's necessity (Fig. 6). The overview controller page contains the details of the requested transportation medium, with the labels and characteristics specific to that medium. The details include the unique id of the real-time location, route maps, driver details, and the count of passengers (Fig. 7). The figures mentioned here are prototypes of our proposed system at the stage of real-time deployment. These pages include the registration and login pages for both the driver app and the user app. The registered buses send their GPS location to the Firebase server, from which it is retrieved by the admin dashboard and the user app. Figure 8 shows the database in which the details of each user and driver are saved during registration; it is subsequently used for login and for recording the live location of the drivers, which the user reads through this database. Figure 9 shows the log file, which saves the details of the users and drivers at login time. Usage of the real-time database is monitored in order to prevent confusion and mismanagement; the log records each user's login time, matched to their usage through the user UID. Once registered in the driver and user apps, the user's credentials are saved in the Firebase server, which forms a new database there. The location transmitted from the driver app is also saved within the real-time database of the server. The credentials of the user are checked by the server, which grants access only after the authentication process.
Fig. 5 Home page
Fig. 6 Overview controller page
Fig. 7 Application interface
Fig. 8 Real-time database hosted in Firebase
Fig. 9 Real-time log database hosted in Firebase

6 Conclusion

Our proposed system demonstrates that developing a management system increases the efficiency of administration. It also helps the user obtain the real-time location seamlessly. This architecture enables us to provide an efficient protocol and control over the transportation system, and gives a clear-cut structure for its operation and utilization through the graphical interface. The overview layout of the proposed modules gives easier navigation and management.
References

1. D. Joshua Nithin, J. James VetriKodi, S. Vigneshwari, Livetimes: an android application for local news updates. Int. J. Appl. Eng. Res. 10(4), 10471–10481 (2015), ISSN: 0973-4562
2. A. Mary Posonia, S. Vigneshwari, J. Albert Mayan, D. Jamunarani, Service direct: platform that incorporates service providers and consumers directly. Int. J. Eng. Adv. Technol. 8(6), 3301–3304 (2019)
3. S. Vigneshwari, B. Bharathi, T. Sasikala, S. Mukkamala, A study on the application of machine learning algorithms using R. J. Comput. Theor. Nanosci. 16(8), 3466–3472 (2019)
4. M.M.K. Hoq, M.J. Alam, M.N. Mustafa, Mobile tracking system using web application and android apps. Int. J. Eng. Res. Technol. (IJERT) 6(2) (2017), ISSN: 2278-0181
5. M. Chandwani, B. Batheja, L. Jeswani, Real time bus tracking system. IOSR J. Eng. (IOSRJEN) 14, 24–28 (2018), p-ISSN: 2278-8719
6. A. Velmurugan, T. Ravi, Enabling secure data transaction in bio medical engineering using CCART approach. J. Theor. Appl. Inf. Technol. 92(1), 37 (2016)
7. R. Ramesh, Y. Ezhilarasu, P. Ravichandran, S. Prathibha, Regulating bus management system using cloud platform. Int. J. e-Edu. e-Bus. e-Manag. e-Learn. 2(6) (2012)
8. R. Bandhan, S. Garg, B.K. Rai, G. Agarwal, Bus management system. Int. J. Adv. Res. Comput. Sci. 9(3) (2018), ISSN: 0976-5697
9. A. Sivasangari, S. Bhowal, R. Subhashini, Secure encryption in wireless body sensor networks, in Emerging Technologies in Data Mining and Information Security (Springer, Singapore, 2019), pp. 679–686
10. T.G. Pipalia, H.D. Nagalkar, Bus transport management system. Int. Res. J. Eng. Technol. (IRJET) 6(2) (2019), p-ISSN: 2395-0072
11. K. Sridevi, A. Jeevitha, K. Kavitha, K. Sathya, K. Narmadha, Smart bus tracking and management system using IoT. Asian J. Appl. Sci. Technol. (AJAST) 1(2) (2017)
12. P. Asha, S. Srinivasan, Hash algorithm for finding associations between genes. J. Biosci. Biotechnol. Res. Asia 12(1), 401–410 (2015), ISSN: 0973-1245
13. S.N. Divekar, S.R. Patil, S.A. Shelke, Smart bus system. IJSRSET 4(4) (2018), p-ISSN: 2395-1990
14. N. Lakshmi Praba, V. Nancy, S. Vigneshwari, Mobile based privacy protected location based services with three layer security. Int. J. Appl. Eng. Res. 10(4), 10101–10108 (2015), ISSN: 0973-4562
15. B. Ranadeep Reddy, Ch Sri Krishna Karthik, C.L. StefiSterlin, S. Vigneshwari, Developing an application for rural development to provide medical services. Int. J. Appl. Eng. Res. 10(3), 7743–7749 (2015), ISSN: 0973-4562
16. A. Uthirakumari, P. Asha, Hybrid scheduler to overcome the negative impact of job preemption for heterogeneous hadoop systems, in IEEE International Conference on Circuit, Power and Computing Technologies (ICCPCT) (2016), pp. 1–5
17. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019)
18. R. Sethuraman, E. Sathish, Intelligent transport planning system using GIS. Int. J. Appl. Eng. Res. 10(3), 5887–5892 (2015)
19. A. Jesudoss, N.P. Subramaniam, Enhanced certificate-based authentication for distributed environment, in Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (Springer, New Delhi, 2015), pp. 675–685
20. M. Kumbhar, M. Survase, P. Mastud, A. Salunke, Real time web based bus tracking system. Int. Res. J. Eng. Technol. (IRJET) 3(2) (2016). p-ISSN: 2395-0072
21. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in IEEE International Conference on Communication and Signal Processing (ICCSP) (2019), pp. 0033–0036
22. P. Godge, K. Gore, A. Gore, A. Jadhav, A. Nawathe, Smart bus management and tracking system. Int. J. Latest Eng. Sci. (IJLES) 2(2) (2019). e-ISSN: 2581-6659
23. G. Nagarajan, R.I. Minu, A. Jayanthiladevi, Brain computer interface for smart hardware device. Int. J. RF Technol. 10(3–4), 131–139 (2019)
24. S. Thamizhselvi, P.S. Mary, A survey about data prediction in wireless sensor networks with improved energy efficiency. Res. J. Pharm. Biol. Chem. Sci. 7(2), 2118–2120 (2016)
25. S.P. Mary, E. Baburaj, A novel framework for an efficient online recommendation system using constraint based web usage mining techniques (2016)
Telemetry-Based Autonomous Drone Surveillance System R. Sriram, A. Vamsi, and S. Vigneshwari
Abstract The existing surveillance approach relies on monitoring and manual inspection of the visual feed, which becomes difficult when the area of coverage is large and the objects are in motion. This problem can be solved by implementing an automated surveillance system on mobile drones, which eliminates the disadvantages of stationary surveillance. By further implementing intruder and obstacle detection to identify object motion and analyze the image data, we integrate a full-fledged, non-stationary mobile surveillance system using camera-equipped drones. Keywords Automation · Drone · Surveillance · Human activity · Motion detection
1 Introduction
A surveillance system that requires manual inspection makes it hard to monitor activities. The major disadvantages of such a system include management through human intervention, tedious manual pattern recognition, visual human detection, inability to follow objects in motion, and stationary analysis [1]. Accuracy and time complexity are also affected negatively. An automated, drone-based surveillance system can instead detect and inspect suspicion-inducing elements in the frames for tracking and seeking purposes [2].
R. Sriram (B) · A. Vamsi · S. Vigneshwari Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. Vamsi e-mail: [email protected] S. Vigneshwari e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_43
The main purpose of the proposed system is to work autonomously for hours without human intervention, automatically monitoring the situation and raising security alerts. The system works very efficiently to avoid threats and prevent crime; its foundation is to prevent crime before it is committed [3]. Depending on the level of threat, the system can be deployed in various environments such as industries, construction sites, and confidential sectors [4, 5]. Present architectures are only capable of statically positioned monitoring [6]. To fix these problems, our system runs on an automated drone, monitoring both targets of interest and the wider environment [7]. The detection of mobile objects plays an important role in the success of a monitoring system. The approach is to combine the feeds of multiple drones into a data model for crime-rate prediction and analysis [8]. Following the introduction, a literature survey gives a clear picture of the research of the past few years [9]; the system we then propose aims to resolve the issues identified in those works [10, 11].
2 Literature Review
Park et al. [12] presented a high-level UAV design based on an IoT platform and implemented an example of multi-UAV services utilizing a global-standards-based IoT platform. While numerous vendors and developers build their own platforms, a standardized platform is essential for controlling and monitoring multiple heterogeneous UAVs, for linking with other legacy platforms, and for growing the UAV services ecosystem. In an experiment by Takai [13] in 2010, it was shown possible to detect suspicious activity from a watched person's behavior and to measure the degree of risk of the activity by locating the detection point. The proposed surveillance camera system can identify unsafe oversight areas and suspicious people and bring them to the observer's attention. This relieves the observer of the mental and physical burden of watching the enormous quantity of image data shot by multiple web cameras under constant remote monitoring. The system still presents a flaw: an observer may miss an important predictor of crime in an area under surveillance [14–18]. Dhulekar et al. [19] developed an algorithm based on the optical-flow method, which showed desirable results with increased detection accuracy and reduced detection time. Testing was carried out on several standard, well-known offline datasets. The obtained results perform significantly well not only for a single subject but also for groups of people, and subjects in grass or under other occlusions with partial visibility are also accurately identified by the proposed algorithm [20–24].
Long et al. [16] provided a distinct overview of drones and their mechanical aspects. In their study, an aerial drone prototype was created to detect and identify alarming and suspicious activities. Gayathri et al. [25] provided a probable solution to the disadvantages of immovable CCTV cameras by equipping quadcopters with automated image-processing capabilities to compensate for the immobility of CCTV cameras in urban areas. Muhammed et al. [26] equipped special drones with features such as X-ray and infrared cameras to help in activities such as metal detection. The drone was also equipped with a global positioning system to ensure it could be tracked at all times. These were targeted toward military personnel.
3 Materials and Methods
3.1 Problem Description
In order to detect and act on suspicious activity on a constant basis, we require a full-fledged feedback-loop architecture between the drone and the ground control system (GCS). We therefore build a semi-autonomous framework in which the drone navigates its way through the given geographical range using the flight controller and telemetry system. The camera mounted on the drone captures visual feed of the area it surveys and sends it to the ground control system through a file-transfer networking system. The ground control system then processes it and delivers a signal back to the drone, which in turn takes the necessary actions.
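The feedback loop just described can be sketched as a toy Python loop. The `classify` function and the command names below are hypothetical stand-ins for illustration only, not the actual GCS protocol:

```python
def classify(frame):
    """Hypothetical stand-in for the GCS-side activity classifier."""
    return "suspicious" if frame.get("motion", 0) > 0.8 else "normal"

def gcs_feedback_loop(frames):
    """Toy version of the drone <-> GCS loop: the drone streams frames,
    the GCS classifies each one, and a command is sent back to the drone."""
    commands = []
    for frame in frames:          # visual feed arriving from the drone
        label = classify(frame)   # processing on the ground control system
        # signal delivered back to the drone
        commands.append("track_target" if label == "suspicious" else "continue_patrol")
    return commands

feed = [{"motion": 0.2}, {"motion": 0.9}]
print(gcs_feedback_loop(feed))  # ['continue_patrol', 'track_target']
```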
3.2 Proposed Framework
The proposed framework consists of a two-phase architecture comprising data collection and training on the collected data using deep learning. We integrate a ResNet model pre-trained on ImageNet, which can be imported and extended to leverage 3D kernels; this gives the network the spatiotemporal component needed for activity recognition (Fig. 1). Advances in deep neural networks and improvements on ImageNet image classification have led to various breakthroughs in deep-learning activity recognition [1, 6].
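One common way to extend an ImageNet-pre-trained 2D model with 3D kernels is to "inflate" each 2D filter along a new time axis. The source does not spell out its exact procedure, so the sketch below shows the standard inflation recipe under that assumption:

```python
import numpy as np

def inflate_2d_kernel(w2d, t=3):
    """Inflate an ImageNet-style 2D conv kernel (out, in, k, k) into a
    spatio-temporal 3D kernel (out, in, t, k, k) by replicating it along
    the new time axis and rescaling, so the 3D filter initially responds
    to a static clip exactly as the 2D filter did to a single frame."""
    return np.repeat(w2d[:, :, None, :, :], t, axis=2) / t

w2d = np.ones((64, 3, 7, 7))   # shape of ResNet's first conv layer
w3d = inflate_2d_kernel(w2d, t=3)
print(w3d.shape)               # (64, 3, 3, 7, 7)
```

Summing the inflated kernel over its time axis recovers the original 2D weights, which is what preserves the pre-trained behavior on static inputs.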
3.3 Training Method
We train the networks using stochastic gradient descent with momentum, and we continue to generate the required samples randomly from pre-existing
Fig. 1 Algorithm
Fig. 2 Training methodology
clips in the given training dataset in order to perform data augmentation (Fig. 2). Initially, we choose a temporal position in the given video by sampling uniformly, from which a training sample is extracted. A 16-frame video clip is then extracted around the chosen temporal position. If the clip is shorter than sixteen frames, it is looped as many times as required to reach the necessary length. Thereafter, we randomly choose a spatial position from the four corners or the center. Along with the spatial position, we choose a spatial scale for the sample so that, finally, cropping can be performed at that scale.
The cropped clips are then fed to the neural networks, which procure the necessary labels and train an extensive model for our use.
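The clip-sampling procedure above (uniform temporal position, looping of short clips, corner/center cropping) can be sketched as follows. The multi-scale sampling step is omitted for brevity, and the `(frames, H, W, channels)` array layout is an assumption:

```python
import numpy as np

def sample_clip(video, clip_len=16, crop=112, rng=np.random):
    """Sample one training clip: pick a temporal position uniformly,
    loop the clip if it is shorter than `clip_len` frames, then crop
    at one of the four corners or the centre.
    `video` has shape (frames, H, W, channels)."""
    n = video.shape[0]
    start = rng.randint(0, max(n - clip_len, 0) + 1)   # uniform temporal position
    clip = video[start:start + clip_len]
    while clip.shape[0] < clip_len:                    # loop short videos
        clip = np.concatenate([clip, clip])[:clip_len]
    h, w = clip.shape[1:3]
    corners = [(0, 0), (0, w - crop), (h - crop, 0), (h - crop, w - crop),
               ((h - crop) // 2, (w - crop) // 2)]     # four corners or centre
    y, x = corners[rng.randint(len(corners))]
    return clip[:, y:y + crop, x:x + crop]

video = np.zeros((40, 128, 171, 3))
print(sample_clip(video).shape)   # (16, 112, 112, 3)
```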
3.4 Hardware Components
The drone consists of a carbon-fiber frame, chosen for its light weight and strength compared with standard plastic or aluminum frames; 4 × BLHeli 20A ESCs driving 4 × 2200-series 1800 KV brushless motors, which can lift up to 0.5 kg; and an APM flight controller with an RF receiver and a GPS module.
3.5 Drone Architecture
Fig. 3 Drone architecture
The components of the drone architecture given in Fig. 3 are as follows:
Electronic Speed Controller. An electronic speed control, or ESC, is an electronic circuit employed to regulate the speed and direction of a brushless motor. ESCs use the analog input to generate a frequency cycle for the brushless motor, producing three-phase electrical power; by controlling the frequency cycle, the ESC controls the speed of the motor. In this project, a BLHeli 20A ESC is employed to ensure properties like high performance with great linearity and high throttle response, along with multiple protection features such as low-voltage cut-off, overheat protection, and throttle-signal-loss protection.
Brushless Motors. A brushless DC motor is powered by DC electricity via a switching power supply in a closed-loop controller. The speed of the motor is controlled by the ESC by alternating the polarity of the windings attached to the rotor. A 2300 KV brushless motor is used in this project.
Raspberry Pi. The Raspberry Pi 4 is used for the image-processing technique and the data-collection process with the help of the Pi Camera. The Raspberry Pi 4 is chosen for specifications such as its 1.5 GHz clock frequency [2, 4, 5].
Other Components. The APM power module monitors the battery and provides a clean power supply to the flight controller. The GPS module, a u-blox NEO-8M, is used for autonomous navigation. A telemetry module is also used to monitor the status of the drone.
Fig. 4 APM flight controller architecture
3.6 APM Flight Controller
APM is a multifunctional flight control system, shown in Fig. 4, which supports quadcopters as well as fixed-wing aircraft.
3.7 Capabilities of APM Flight Controller Board
• It supports a GPS module.
• It supports transmission of real-time data.
• It provides anti-vibration aerial photography.
• It supports a galvanometer to ensure stable flight.
Fig. 5 Detection of activity: reading book
In this project, we use the APM flight controller board because it gives stability to the drone. It helps obtain clear image data for the subsequent machine-learning process, and it also has an in-built GPS module that helps locate the real-time position of the drone.
4 Results and Discussion
From Fig. 5, we can see that the algorithm has detected the actions of the persons. The actions performed are simultaneously checked against the pre-trained dataset, and the prediction of the performed action is displayed. Here, the algorithm has detected a user who is reading a book. When implemented on the drone, people's actions are monitored to predict and determine malicious activities (Fig. 6). Actions such as dancing and loitering are recorded and identified by the algorithm, which helps in detecting suspicious people in the given area. The drone covers a large area, allowing the algorithm to run simultaneously on all the people present in it.
Fig. 6 Detection of anomaly
5 Conclusion
By implementing these extensive security features, we not only monitor, detect, and report suspicious activity but also create a secure environment for the people of our society. Integrating cameras on drones lets us perform freeform, extensive surveillance within the given range efficiently and effectively. Drones will play a predominant role in surveillance in the coming years: with the constant improvement of technology, better power sources, improved detection algorithms, and better hardware will eventually render stationary surveillance cameras, which are already considered outdated, obsolete.
References
1. A. Mary Posonia, S. Vigneshwari, J. Albert Mayan, D. Jamunarani, Service direct: platform that incorporates service providers and consumers directly. Int. J. Eng. Adv. Technol. 8(6), 3301–3304 (2019)
2. A. Sankari, S. Vigneshwari, Automatic tumour segmentation using CNN, in Third IEEE International Conference on Science Technology, Engineering Management (ICONSTEM 2017), Chennai, 23–24 March 2017
3. A. Sivasangari, S. Bhowal, R. Subhashini, Secure encryption in wireless body sensor networks, in Emerging Technologies in Data Mining and Information Security (Springer, Singapore, 2019), pp. 679–686
4. R. Nithya, K. Muthu Priya, S. Vigneshwari, Stitching large images by enhancing SURF and RANSAC algorithm, in 2017 Second IEEE International Conference on Electrical, Computer and Communication Technologies, SVS Engineering College, Coimbatore, 22–24 Feb 2017. https://doi.org/10.1109/ICECCT.2017.8117868
5. V. Chandra, R. Prasad, M. Yashwanth Sai, P.R. Niveditha, T. Sasipraba, S. Vigneswari, S. Gowri, Low cost automated facial recognition system, in 2017 Second IEEE International Conference on Electrical, Computer and Communication Technologies, SVS Engineering College, Coimbatore, 22–24 Feb 2017. https://doi.org/10.1109/ICECCT.2017.8117829
6. S. Prince Mary, B. Bharathi, S. Vigneshwari, R. Sathyabama, Neural computation based general disease prediction model. Int. J. Recent Technol. Eng. (IJRTE) 8(2), 5646–5449 (2019). ISSN: 2277-3878
7. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019)
8. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyper-parameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0033–0036
9. P. Kumari, S. Jancy, Privacy preserving similarity based text retrieval through blind storage. Am.-Eurasian J. Sci. Res. 11(5), 398–404 (2016)
10. G. Nagarajan, K.K. Thyagharajan, A machine learning technique for semantic search engine. Procedia Eng. 38, 2164–2171 (2012)
11. S.P. Mary, E. Baburaj, A novel framework for an efficient online recommendation system using constraint based web usage mining techniques (2016)
12. J.-H. Park, S.-C. Choi, I.-Y. Ahn, J. Kim, Multiple UAVs-based surveillance and reconnaissance system utilizing IoT platform (2017)
13. M. Takai, Detection of suspicious activity and estimate of risk from human behavior shot by surveillance camera (2010)
14. N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1 (IEEE, 2005), pp. 886–893
15. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: a large-scale hierarchical image database, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009) (IEEE, 2009), pp. 248–255
16. J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 3431–3440
17. D.G. Lowe, Object recognition from local scale-invariant features, in Proceedings of the Seventh IEEE International Conference on Computer Vision (1999), vol. 2, pp. 1150–1157
18. J. Redmon, A. Farhadi, YOLO9000: better, faster, stronger. arXiv preprint arXiv:1612.08242 (2016)
19. P.A. Dhulekar, S.T. Gandhe, N. Sawale, V. Shinde, S. Chute, The surveillance system for detection of suspicious human activities at war field (2018)
20. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: towards real-time object detection with region proposal networks, in Advances in Neural Information Processing Systems (2015), pp. 91–99
21. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
22. K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
23. J.R. Uijlings, K.E. Van De Sande, T. Gevers, A.W. Smeulders, Selective search for object recognition. Int. J. Comput. Vis. 104(2), 154–171 (2013)
24. M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in European Conference on Computer Vision (Springer, 2014), pp. 818–833
25. G.D. Ramaraj, S. Venkatakrishnan, G. Balasubramanian, S. Sridhar, Aerial surveillance of public areas with autonomous track and follow using image processing, in International Conference on Computer and Drone Applications (IConDA) (2017)
26. M. Hamza, A. Jehangir, T. Ahmad, A. Sohail, Design of surveillance drone with X-ray camera, IR camera and metal detector, in Proceedings of 2017 Ninth International Conference on Ubiquitous and Future Networks (ICUFN)
A Perceptive Fake User Detection and Visualization on Social Network Sai Arjun, K. V. Jai Surya, and S. Jancy
Abstract Online reviews provide a significant resource for potential customers to make purchase decisions. However, the sheer volume of available reviews, as well as the large variations in review quality, presents a major hindrance to the effective use of reviews, as the most helpful reviews may be buried in a huge mass of low-quality ones. The goal of this framework is to develop models and algorithms for predicting the helpfulness of reviews, which provides the basis for finding the most helpful reviews for given products. We first show that the helpfulness of a review depends on three significant factors: the reviewer's expertise, the writing style of the review, and the timeliness of the review. Compared with other well-studied opinion-analysis and opinion-summarization problems, less effort has been made to analyze the quality of online reviews. The objective of this paper is to fill this gap by automatically assessing the "helpfulness" of reviews and thereby developing novel models to identify the most helpful reviews for a particular product. Keywords Component · Formatting · Style · Styling · Insert
1 Introduction
Individuals who are deeply engaged in the study of language are linguists, while the term "computational linguist" applies to the study of processing language by computation [1]. Basically, a computational linguist is a computer scientist who has enough understanding of languages to apply computational skills to model various aspects of language [2].
S. Arjun (B) · K. V. Jai Surya · S. Jancy Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] S. Jancy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_44
While computational linguists address the theoretical side of language,
NLP is simply the application of computational linguistics [3, 7]. NLP is more about applying computers to the various subtleties of language and building real-world applications using NLP techniques [4]. In a practical setting, NLP is analogous to teaching a language to a child [17]. Some of the most common tasks, such as understanding words and sentences and forming grammatically and structurally correct sentences, come naturally to humans [5]. In NLP, a portion of these tasks translates to tokenization, chunking, part-of-speech tagging, parsing, machine translation, and speech recognition, and most of them are still among the hardest challenges for computers [15, 16]. We will talk more about the practical side of NLP, assuming that we all have some background in it [6]. The reader is expected to have a minimal understanding of some programming language and an interest in NLP and language [9]. Once we have parsed the text from a variety of data sources, the challenge is to make sense of this raw data [7]. Text cleaning loosely covers most of the cleanup to be done on text, depending on the data source, parsing performance, external noise, and so on [8]. In summary, any process carried out with the intent of making the text cleaner and removing all the noise surrounding it can be termed text cleansing [18]. There are no clear boundaries between the terms data munging, text cleansing, and data wrangling; they can be used interchangeably in a similar context. Tokenization is the process of splitting the raw string into meaningful tokens [10]. The complexity of tokenization varies according to the needs of the NLP application and the complexity of the language itself. For example, in English it can be as simple as picking only words and numbers through a regular expression [13]. There are two most commonly used tokenizers.
The first is word_tokenize, which is the default one and will work in most cases [14]. The other is regexp_tokenize, which is more of a customized tokenizer for the specific needs of the user. Most of the other tokenizers can be derived from regex tokenizers [19]. You can also build a very specific tokenizer using a different pattern. In line 8 of the preceding code, we split the same string with the regex tokenizer [20]. We use \w+ as a regular expression, which means we want all the words and digits from the string; other symbols can be used as a splitter, as in line 10 where we specify \d+ as the regex [11, 12]. The result will produce only the digits from the string.
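A minimal stand-in for the regex tokenization just described, using Python's `re` module rather than NLTK's `regexp_tokenize` (for these simple patterns the behavior is the same):

```python
import re

def regex_tokenize(text, pattern):
    """Minimal stand-in for NLTK's regexp_tokenize: return every
    match of `pattern` in `text`, in order."""
    return re.findall(pattern, text)

s = "Bus 42 leaves at 9:30, seats 40 people."
print(regex_tokenize(s, r"\w+"))  # all words and digits
print(regex_tokenize(s, r"\d+"))  # ['42', '9', '30', '40']
```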
2 Parts of Speech Tagging
2.1 Parts of Speech
Languages like English have many tagged corpora available in the news and other domains, which has resulted in many state-of-the-art algorithms. Some
of these taggers are generic enough to be used across various domains and varieties of text. However, in specific use cases the POS tagger might not perform as expected; for those use cases we may need to build a POS tagger from scratch. To understand the internals of a POS tagger, we need a basic understanding of some machine-learning procedures. One relevant term-weighting quantity is the inverse document frequency:

idf(term) = ln(n_documents / n_documents_containing_term)
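The idf formula above translates directly into code; the whitespace tokenization here is a simplifying assumption:

```python
import math

def idf(term, documents):
    """idf(term) = ln(N / number of documents containing the term)."""
    n_containing = sum(term in doc.split() for doc in documents)
    return math.log(len(documents) / n_containing)

docs = ["the bus is late", "the tagger works", "a rare word"]
print(round(idf("the", docs), 3))   # ln(3/2) ≈ 0.405
print(round(idf("rare", docs), 3))  # ln(3/1) ≈ 1.099
```

Rarer terms receive higher idf, which is why the quantity is useful for weighting informative words.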
A. Stanford tagger
One option is to use NLTK's or another library's pre-trained tagger and apply it to the test data. Both of the preceding taggers should be sufficient for any POS-tagging task dealing with plain English text whose corpus is not too domain-specific. The other option is building or training a tagger to be used on the test data; this addresses a very specific use case and develops a customized tagger. There is a Linguistic Data Consortium (LDC) where people have dedicated a great amount of time to tagging for different languages, different kinds of text, and different kinds of tagging such as POS, dependency parsing, and discourse. Typically, tagging problems like POS tagging are viewed as sequence-labeling or classification problems, where people have tried generative and discriminative models to predict the right tag for a given token.
B. Sequential tagger
The default tagger is part of a base class, the sequential backoff tagger, that serves tags based on the sequence. The tagger tries to model the tags based on the context, and if it cannot predict the tag correctly, it consults a backoff tagger. Typically, the default tagger parameter can be used as a backoff tagger. A unigram tagger only considers the conditional frequency of tags and predicts the most frequent tag for each given token. The bigram tagger considers the tags of the given word and the previous word as a tuple to get the tag for the test word. The trigram tagger looks at the previous two words with a similar process. Naturally, the coverage of the trigram tagger will be lower and its precision in those cases will be high, while the unigram tagger will have better coverage. To manage this precision/recall tradeoff, we combine the three taggers in a chain.
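NLTK provides `UnigramTagger`, `BigramTagger`, and `TrigramTagger` with a `backoff` parameter for exactly this chaining. As a from-scratch illustration of the last two links of the chain (unigram lookup backing off to a default NN tag), with toy training data:

```python
from collections import Counter, defaultdict

class UnigramBackoffTagger:
    """Tiny illustration of the backoff idea: tag each word with its
    most frequent tag in the training data, falling back to a default
    tag ('NN') for unseen words."""
    def __init__(self, tagged_sents, default_tag="NN"):
        counts = defaultdict(Counter)
        for sent in tagged_sents:
            for word, tag in sent:
                counts[word][tag] += 1
        # most frequent tag per word (the unigram table)
        self.table = {w: c.most_common(1)[0][0] for w, c in counts.items()}
        self.default_tag = default_tag

    def tag(self, words):
        return [(w, self.table.get(w, self.default_tag)) for w in words]

train = [[("the", "DT"), ("bus", "NN"), ("runs", "VBZ")],
         [("the", "DT"), ("tagger", "NN"), ("runs", "VBZ")]]
tagger = UnigramBackoffTagger(train)
print(tagger.tag(["the", "tagger", "flies"]))
# [('the', 'DT'), ('tagger', 'NN'), ('flies', 'NN')]
```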
First, it searches the trigram of the given word sequence to predict the tag; if not found, it backs off to the bigram tagger, then to a unigram tagger, and in the end to an NN default tag.
C. Regex tagger
There is one more class of sequential tagger: regular-expression-based taggers. Here, instead of looking for an exact word, we can define a regular expression and, at the same time, define the corresponding tag for the given expressions. For example, in the following code, we give some of the most common regex patterns to capture the various parts of speech. We know some of the patterns associated with each POS class; for instance, we know the articles in English, and we know that anything ending in -ness
forms a noun. We write a set of regexes in pure Python, and NLTK's RegexpTagger provides a rich way of building a pattern-based POS tagger. This can also be used to induce domain-specific POS patterns.
D. Brill tagger
The Brill tagger is a transformation-based tagger: the idea is to start with a guess for the given tag and, in the next iteration, go back and fix the errors based on the set of rules the tagger has learned. It is also a supervised way of tagging, but unlike N-gram tagging, where we count N-gram patterns in the training data, we look for transformation rules. If the tagger starts from a unigram/bigram tagger with acceptable accuracy, the Brill tagger, rather than looking for a trigram tuple, looks for rules based on tags, positions, and the words themselves. An example rule could be: replace NN with VB when the previous word is TO. Once we already have some tags from the unigram tagger, we can refine them with just one simple rule. This is an iterative procedure: with a few iterations and some more optimized rules, the Brill tagger, while guarding against over-fitting to the training set, can beat some of the N-gram taggers. The overall pipeline of the project is shown in Fig. 1.
Writing Style: Due to the wide variation in reviewers' experience and language abilities, online reviews are of drastically different qualities. Some reviews are highly readable and therefore tend to be more helpful, while others are either long but with scarcely any sentences containing the author's opinions, or insightful yet filled with offensive remarks. A suitable representation of such differences must be identified and incorporated into the prediction model.
Timeliness: In general, the helpfulness of a review may significantly depend on when it is published.
For example, research shows that a quarter of a film's total revenue comes from the first two weeks, which means a timely review may be especially significant for customers seeking opinions about the film (Fig. 2).
Fig. 1 Overview of the project (components: sent item, text data, tidy text, corpus object, document-term matrix, summarized text, visualization)
Fig. 2 Results for visualization data on social networks
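The three factors could be encoded as simple numeric features for a prediction model. The particular formulas below (log review count, average word length, two-week exponential decay) are illustrative assumptions, not the paper's actual features:

```python
import math

def review_features(review_text, days_since_release, reviewer_review_count):
    """Hypothetical numeric encodings of the three helpfulness factors:
    reviewer expertise, writing style, and timeliness."""
    words = review_text.split()
    return {
        "expertise": math.log1p(reviewer_review_count),     # more past reviews -> more expertise
        "avg_word_len": sum(map(len, words)) / len(words),  # crude writing-style proxy
        "timeliness": math.exp(-days_since_release / 14.0), # decays after ~2 weeks
    }

f = review_features("great film worth watching", 7, 20)
print(sorted(f))  # ['avg_word_len', 'expertise', 'timeliness']
```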
3 Conclusion
Helpful reviews can assist users in obtaining experience information and aid decision-making. Thus, based on our observations, it is useful to classify reviews without class labels through prediction models. Based on our data-analysis results, we chose the Naive Bayes algorithm to build predictive models. We then proposed the use of sentiment analysis and Laplace smoothing on the Naive Bayes model to improve performance. Performance is measured by accuracy, error, precision, and recall metrics.
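The Laplace smoothing mentioned above amounts to adding a pseudo-count to every word when estimating the Naive Bayes class-conditional probabilities, so unseen words never get zero probability. A minimal sketch:

```python
from collections import Counter

def laplace_word_prob(word, class_docs, vocab_size, alpha=1.0):
    """Laplace-smoothed P(word | class): add alpha to every count so
    that unseen words still receive a small nonzero probability."""
    counts = Counter(w for doc in class_docs for w in doc.split())
    total = sum(counts.values())
    return (counts[word] + alpha) / (total + alpha * vocab_size)

helpful = ["great detailed review", "detailed and honest"]
p_seen = laplace_word_prob("detailed", helpful, vocab_size=10)
p_unseen = laplace_word_prob("spam", helpful, vocab_size=10)
print(p_seen > p_unseen > 0)  # True
```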
References
1. S. Jancy, C. Jayakumar, Pivot variables location based clustering algorithm for reducing dead nodes in wireless sensor network. Neural Comput. Appl. 31, 1467–1480 (2019)
2. S. Jancy, C. Jayakumar, Sequence statistical code based data compression algorithm for wireless sensor network. Wirel. Pers. Commun. (2019)
3. J. Jose, S.C. Mana, B. Keerthi Samhitha, An efficient system to predict and analyse stock data using Hadoop techniques. Int. J. Recent Technol. Eng. (IJRTE) 8(2) (2019). ISSN: 2277-3878
4. D.U. Nandini, E.S. Leni, Efficient shadow detection by using PSO segmentation and region based boundary detection technique. J. Supercomput. 75(7), 3522–3533 (2019)
5. M. Hussain, M. Ahmed, H.A. Khattak, M. Imran, A. Khan, S. Din, A. Ahmad, G. Jeon, A.G. Reddy, Towards ontology-based multilingual URL filtering: a big data problem. J. Supercomput. 74(10), 5003–5021 (2018)
6. F. Benevenuto, G. Magno, T. Rodrigues, V. Almeida, Detecting spammers on Twitter, in Proceedings of the Collaboration, Electronic Messaging, Anti-Abuse and Spam Conference (CEAS), vol. 6 (July 2010), p. 12
S. Arjun et al.
7. S. Gharge, M. Chavan, An integrated approach for malicious tweets detection using NLP, in Proceedings of the International Conference on Inventive Communication and Computational Technologies (ICICCT), March 2017, p. 435–438. 8. T. Wu, S. Wen, Y. Xiang, W. Zhou, Twitter spam detection: survey of new approaches and comparative study. Comput. Secur. 76, 265–284 (2018) 9. N. Manikandan, A. Pravin, LGSA: hybrid task scheduling in multi objective functionality in cloud computing environment. 3D Res. 10(2), 12 (2019) 10. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP), p. 0033–0036. (IEEE, 2019). 11. G. Nagarajan, R.I. Minu, V. Vedanarayanan, S.S. Jebaseelan, K. Vasanth, CIMTEL-mining algorithm for big data in telecommunication. Int. J. Eng. Technol. (IJET) 7(5), 1709–1715 (2015) 12. S.P. Mary, E. Baburaj, A novel framework for an efficient online recommendation system using constraint based web usage mining techniques (2016) 13. A. Uthirakumari, P. Asha, Hybrid scheduler to overcome the negative impact of job preemption for heterogeneous Hadoop systems, in 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT). IEEE, p. 1–5 (2016). 14. S. Vaithyasubramanian, A. Christy, D. Lalitha, Two factor authentication for secured login using array password engender by Petri net. Procedia Comput. Sci. 48, 313–318 (2015) 15. A. Sangari, L. Manickam, J. Martin, R. Gomathi, RC6 based security in wireless body area network. J. Theor. Appl. Inf. Technol. 74(1) (2015). 16. A. Sivasangari, S. Bhowal, R. Subhashini, Secure encryption in wireless body sensor networks, in Emerging Technologies in Data Mining and Information Security (Springer, Singapore, 2019), p. 679–686. 17. R. Sethuraman, T. Sasiprabha, A. 
Sandhya, An effective QoS based web service composition algorithm for integration of travel & tourism resources. Procedia Comput. Sci. 48, 541–547 (2015) 18. A. Jesudoss, N.P. Subramaniam, Enhanced certificate-based authentication for distributed environment. in Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (Springer, New Delhi, 2015), p. 675–685. 19. A. Velmurugan, T. Ravi, Alleviate the parental stress in neonatal intensive care unit using ontology. Indian J. Sci. Technol. 9, 28 (2016) 20. V.V. Kaveri, S. Gopal, Notifying and filtering undesirable messages from online social network (OSN), in International Conference on Innovation Information in Computing Technologies. IEEE, p. 1–8 (2015)
Route Search on Road Networks Using CRS K. Nitish, K. Phani Harsha, and S. Jancy
Abstract This proposed system executed by capability security shielding reliant on the spot based help i.e. (LBS). The customer separates unequivocal area nearest customer need to show up at that area inside time, careful course, etc. With the advances in geo-arranging developments and position based for the most part benefits, it is nowadays ordinary for road frameworks to have printed substance on the vertices. In any case, we propose a voracious estimation and a one of a kind programming computation source for development. We involve branch and bound algorithm to this source. This algorithm is further divided into AB and PB tree. For developing the discovery of up and corner, we analyze AB tree. For the demolishing of the size and the structure, we use PB tree. Therefore, for the development of viability this kind of BAB algorithm is created. Keywords Spatial watch word questions · Sign · Point-of-interest · Travel course search · Inquiry preparing
1 Introduction Location-based service (LBS) is emerging as a powerful application in mobile information services, driven by the rapid advancement of wireless communication and positioning technologies [10, 24]. Users with location-aware mobile phones can query their surroundings (e.g. finding all points of interest within five miles, or the nearest two service stations from the present location) anywhere and at any time [15]. However, although this computing paradigm brings great convenience for information access, the disclosure of K. Nitish (B) · K. Phani Harsha · S. Jancy Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] S. Jancy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_45
Table 1 Comparison of various algorithms

Paper title                                                            | Author        | Year | Description
Shortest path determination                                            | T. Akiba      | 2018 | Decomposable searching problems: static-to-dynamic transformation
More accuracy in nearest neighbours                                    | C. S. Jensen  | 2017 | Fast exact shortest path distance queries on large networks
Efficient algorithms for nearest neighbour                             | Y. Kanza      | 2016 | Heuristic algorithms for route search queries over geographical data
The cost can be of various types such as travel, distance, time or budget | J. L. Bentley | 2015 | Nearest neighbour queries in road networks
client locations to service providers raises concerns about location privacy that have hampered the broad adoption of LBS [1–4]. Thus, ways to enjoy LBS while preserving location privacy have been gaining increasing research attention of late [5, 6]. In the literature, there are essentially two classes of approaches to preserving location privacy in LBS. The first is through data access control [11, 12]: user locations are sent to the service providers as usual [13, 14], and it is up to the service providers to restrict access to stored location data through rule-based policies [20, 21]. The second is to employ a trusted middleware running between the clients and the service providers [19]: a client can specify, for each location-based query, a privacy requirement with a minimum spatial region within which to conceal the location [22, 23]. This information would certainly become more useful as the phone's physical location data becomes more reliable [16]; it may allow a device's location to be identified once its GPS positioning is performed accurately [17, 18].
2 Related Works See Table 1.
3 Existing System Existing frameworks have been used to realize privacy-preserving POI queries; however, some of them are not designed for POI queries. Solutions for public LBS are provided by three techniques: (1) Privacy-Preserving LBS Based on Anonymous Communication: In this kind of solution, one or more third parties relay messages between users and the LBS provider. This approach hides the linkage between user identities and messages from the LBS provider [7]. The query area would be exposed to
the LBS provider; however, the user sending the query is hidden among a set of users. (2) Privacy-Preserving LBS Based on Location Obfuscation: In this kind of solution, to keep the LBS provider from knowing users' exact locations, users submit low-precision locations or fake locations alongside genuine locations [8, 9]. These solutions offer a weak degree of privacy. (3) Privacy-Preserving LBS Based on Spatial Cloaking: This kind of solution combines anonymous communication and location obfuscation techniques together. To the LBS provider, a user cannot be distinguished from a set of users in a cloaking region, and the cloaking region rather than the users' exact locations is sent to the LBS provider. These techniques do not allow the cloud to search encrypted data. Consequently, they cannot be used for outsourced LBS where the LBS data in the cloud is encrypted.
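As an illustration of the spatial-cloaking idea described above, the sketch below (an assumption for illustration, not from the paper; the function name, coordinate tuples, and k are all hypothetical) computes a k-anonymous bounding box that would be sent to the LBS provider instead of the exact coordinate:

```python
def cloaking_region(user_xy, others_xy, k):
    """Toy k-anonymous spatial cloaking: return the axis-aligned bounding box
    covering the querying user and their k-1 nearest neighbours. The box, not
    the exact point, is what the LBS provider would see."""
    ux, uy = user_xy
    # rank the other users by squared Euclidean distance to the querier
    others = sorted(others_xy, key=lambda p: (p[0] - ux) ** 2 + (p[1] - uy) ** 2)
    pts = [user_xy] + others[: k - 1]
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

A larger k gives a larger, more private region at the cost of a less precise query answer.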
4 Proposed System The proposed system consists of three types of entities: the LBS provider, LBS users, and the cloud. LBS provider: the LBS provider holds abundant LBS data in the form of POI records. The LBS provider permits authorized users (i.e. LBS users) to use its data through location-based queries. Because of the financial and operational advantages of data outsourcing, the LBS provider offers the query services by means of the cloud. Accordingly, the LBS provider encrypts the LBS data and outsources the encrypted data to the cloud.
5 Module Descriptions 5.1 User Location Register The mobile user identifies a specific location and reaches the specified location using location-based services. The mobile user's first step is registration for location privacy between certain points called points of interest (POI) (Fig. 1).
5.2 User Query Process The user sends location details and privacy information to the service provider. The service provider accesses user details on the basis of a query process over encrypted data (Fig. 2).
Fig. 1 User location register (flow: location of user and query of users → combine user information → make query analysis → search result)
Fig. 2 User query process (flow: query analysis → give accurate location → hide user privacy)
Fig. 3 Location-based services (flow: service provider → access user details → form query based on user location → send to cloud storage)
5.3 Location-Based Services See Fig. 3. The query-based encryption procedure includes the generation of a code during the location process. The mobile user accesses the current location to travel from source to destination accurately. The service provider enables the location-based service while keeping the user's privacy hidden.
5.4 Verify the Accurate Location See Fig. 4. Finally, the mobile user verifies the correct procedure and also accesses the accurate result of the query-based encryption within the surroundings of the destination location. The service provider securely outsources the user's private data.
Fig. 4 Verify the accurate location (flow: cloud sends the query result → get service provider → send to user with code → user gets accurate result)
[Algorithm 1: Strategy findNextMin()]
Input: source vertices v, u and piece of information μ(w, d)
Output: min{dm(μ, σ)} and matching vertex v
From u, do a network traversal;
if a match vertex v is found then
  dG ← the network distance between u and v;
  while true do
    find the next v' containing w, thereby obtaining d'G;
    if dG < d and d'G > d then break;
    else v ← v' and dG ← d'G;
return min{dm(μ, σ)} and v;
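The pseudocode above is terse, so the following is a minimal Python sketch of one plausible reading of findNextMin (the adjacency-list `graph`, the `labels` keyword map, and the return convention are all illustrative assumptions): visit vertices in order of increasing network distance from u, and among vertices whose text contains keyword w, keep the one whose distance best fits the clue distance d.

```python
import heapq

def find_next_min(graph, labels, u, w, d):
    """Dijkstra-style traversal from u; among match vertices (those whose
    label set contains keyword w), track the one whose network distance is
    closest to the clue distance d. Returns (min matching gap, match vertex).
    graph: {vertex: [(neighbour, edge_weight), ...]};
    labels: {vertex: set of keywords} -- hypothetical structures."""
    dist = {u: 0.0}
    heap = [(0.0, u)]
    best_v, best_gap = None, float("inf")
    while heap:
        dg, v = heapq.heappop(heap)
        if dg > dist.get(v, float("inf")):
            continue  # stale heap entry
        if w in labels.get(v, ()):
            gap = abs(dg - d)  # how well this match fits the clue distance
            if gap < best_gap:
                best_gap, best_v = gap, v
            if dg > d:
                break  # later matches lie even farther past d; stop early
        for nb, wt in graph.get(v, []):
            nd = dg + wt
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return best_gap, best_v
```

Because vertices are popped in nondecreasing distance order, the loop can stop at the first match whose distance already exceeds d, mirroring the break condition in Algorithm 1.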
6 Conclusions The assessment of the cumulative impacts of road networks and their use is seldom adequate. Although many laws, regulations, and policies require some consideration of the ecological effects of transportation activities such as road construction, the legal structure leaves substantial gaps in the requirements. Impacts on certain resources are typically authorized through permits. Permitting programmes usually consider only the direct impacts of road construction and use on a protected resource, even though indirect or cumulative effects can be substantial. Therefore, by using the greedy algorithm, we can sort out these issues.
7 Results and Discussions See Figs. 5, 6, 7 and 8.
Fig. 5 Sign in page
Fig. 6 Location and searching query
Fig. 7 Searching map
Fig. 8 Nearby places
References 1. I. Abraham, D. Delling, A.V. Goldberg, R.F. Werneck, Hierarchical hub labelings for shortest paths, in ESA (Springer, 2012), p. 24–35 2. T. Akiba, Y. Iwata, K. Kawarabayashi, Y. Kawata, Fast shortest-path distance queries on road networks by pruned highway labeling, in ALENEX (SIAM, 2014), p. 147–154 3. T. Akiba, Y. Iwata, Y. Yoshida, Fast exact shortest-path distance queries on large networks by pruned landmark labeling, in SIGMOD (ACM, 2013), p. 349–360 4. X. Cao, L. Chen, G. Cong, X. Xiao, Keyword-aware optimal route search. PVLDB 5(11), 1136–1147 (2012). labeling, in WWW (ACM, 2014), p. 237–248 5. H. Chen, W.-S. Ku, M.-T. Sun, R. Zimmermann, The multi-rule partial sequenced route query, in SIGSPATIAL (ACM, 2008), p. 10 6. X. Cao, L. Chen, G. Cong, X. Xiao, Keyword-aware optimal route search. PVLDB 5(11), 1136–1147 (2012) 7. X. Cao, G. Cong, C.S. Jensen, B.C. Ooi, Collective spatial keyword querying, in SIGMOD (ACM, 2011), p. 373–384 8. I. De Felipe, V. Hristidis, N. Rishe, Keyword search on spatial databases, in ICDE, 2008 9. E.W. Dijkstra, A note on two problems in connexion with graphs. Numerische Mathematik 1(1), 269–271 (1959) 10. S. Jancy, C. Jayakumar, Pivot variables location based clustering algorithm for reducing dead nodes in wireless sensor network. Neural Comput. Appl. 31, 1467–1480 (2019) 11. S. Jancy, C. Jayakumar, Sequence statistical code based data compression algorithm for wireless sensor network. Wirel. Pers. Commun. 106, 971–985 (2019)
12. J. Jose, S.C. Mana, B. Keerthi Samhitha, An efficient system to predict and analyze stock data using Hadoop techniques. Int. J. Recent Technol. Eng. (IJRTE) 8(2) (2019). ISSN: 2277-3878 13. D.U. Nandini, E.S. Leni, Efficient shadow detection by using PSO segmentation and region-based boundary detection technique. J. Supercomput. 75(7), 3522–3533 (2019) 14. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of Dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 15. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0033–0036 16. P.K. Rajendran, B. Muthukumar, G. Nagarajan, Hybrid intrusion detection system for private cloud: a systematic approach. Procedia Comput. Sci. 48(C), 325–329 17. S.P. Mary, E. Baburaj, A novel framework for an efficient online recommendation system using constraint based web usage mining techniques (2016) 18. P. Asha, S. Srinivasan, Distributed association rule mining with load balancing in grid environment. J. Comput. Theor. Nanosci. 13(1), 33–42 (2016) 19. V. Vineetha, P. Asha, A symptom based cancer diagnosis to assist patients using naive bayesian classification. Res. J. Pharm. Biol. Chem. Sci. 7(3), 444–451 (2016) 20. A. Sangari, L. Manickam, J. Martin, R. Gomathi, RC6 based security in wireless body area network. J. Theo. Appl. Inf. Technol. 74(1) (2015) 21. R. Sethuraman, T. Sasiprabha, A. Sandhya, An effective QoS based web service composition algorithm for integration of travel & tourism resources. Procedia Comput. Sci. 48, 541–547 (2015) 22. A. Jesudoss, N.P. Subramaniam, Enhanced Kerberos authentication for distributed environment. J. Theo. Appl. Inf. Technol. 69(2), 368–374 (2014) 23. A. Velmurugan, T.
Ravi, Alleviate the parental stress in neonatal intensive care unit using ontology. Indian J. Sci. Technol. 9, 28 (2016) 24. U. Mohan Kumar, P. Siva SaiManikanta, M.D. AntoPraveena, Intelligent security system for banking using Internet of Things. J. Comput. Theor. Nanosci. 16(8), 3296–3299 (2019)
Smart Fish Farming S. Guruprasad, R. Jawahar, and S. Princemary
Abstract This project allows a user to control all the equipment that manages water quality, temperature, and water cleanliness. Using this project, the user can view graphically visualized water data such as water temperature, turbidity, pH level, and oxygen level. The project also handles all the automation work, such as filling the tank automatically and maintaining the water temperature, and it notifies the user of any change in the data values. Many scientific factors are involved in the growth of fish, so profit can be achieved more easily by using smart techniques: to achieve a good growth rate, the correct pH level, DO level, temperature, and turbidity must be maintained, and well-grown fish can be sold for a good price, as the market rate is based on the weight and size of the fish reared. Keywords Graphically · Visualized data · Automation · Notification · IoT
1 Introduction The project can be established in any circumstances, indoors or outdoors [1, 2]. Tamil Nadu and Kerala are delta states situated near the coastline, so to start a fish farming industry it is important to monitor the water and determine the correct species of fish to rear; failing in that can result in great losses, such as the fish not attaining the correct growth or even dying [3–5]. Using the correct pieces of equipment for fish farming is a must, because aeration and oxygenation S. Guruprasad (B) · R. Jawahar · S. Princemary Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] R. Jawahar e-mail: [email protected] S. Princemary e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_46
Fig. 1 Smart fish farming setup
of water, maintaining the correct pH level of the water, and automating the water pumping system will help the user make a profit [6, 7]. Using this project, the user can control all the equipment, and the quality of the water can be monitored [8, 9]. Quality here means the temperature, pH level, oxygen level, and TDS content of the water [10]. The project has features like a CCTV monitoring system: the user can monitor the fish farm on the UI screen provided by the system on a mobile phone or laptop without the Internet [11, 12]. The outline of the project is given in Fig. 1.
2 Existing System There are many projects that involve analyzing water quality, and some analyze water quality for fish farming in open water resources like ponds, lakes, and the ocean. This project, by contrast, involves automation and quality management of an aquaculture system established in a closed indoor space, so there is no risk of exposing the reared fish to microbiological organisms [13]. The projects in [14, 15] analyze the water quality in open-water fish farming and transfer messages through GSM, whereas our project uses a local server and transfers messages through the MQTT protocol [16, 17]. Some projects like [18–21] analyze the water quality and automate the feeder system for shrimp and other fishes, with data logging using HTML [22, 23]. Our project performs data logging on a local server which can be accessed, and automates oxygen level maintenance, pH level maintenance, and data logging using Node.js and the Johnny-Five library, which is recent technology [24]. This project also helps the user analyze the water dataset in visualized form, a feature that distinguishes it from the others [25, 26].
3 Proposed System • Generally, a fish farm establishment can be expensive because of the sensors and the electrical system, but this prototype project gives the same result with low-cost equipment. • Recently, Chennai and other northern parts of Tamil Nadu and Andhra Pradesh have had severe drought and low groundwater levels, so this project supports the biofloc system: water usage can be minimized, thus saving water, which makes it well suited for areas like Chennai, Madurai, and the northern part of Tamil Nadu. • Through the dashboard, the user can view the oxygen level, pH level, and water temperature graphically, so there is no need for separate water testing. • With these sensors, all the work is automated, and it is also easy to monitor the fish farm remotely from other locations. If a condition fails, a notification informs us of the correct conditions required for the fish species. • Components required:
1. Raspberry Pi 3B+
2. Arduino UNO
3. DO sensor
4. pH sensor
5. Water level sensor
6. Turbidity sensor
7. Breadboard
8. Jumper wires
9. Four-channel relay
Raspberry Pi 3B+ The Raspberry Pi 3B+ is a single-board computer with a 1.4 GHz 64-bit quad-core ARM Cortex-A53 processor. It also supports Wi-Fi and Bluetooth connectivity, an important feature for this project. It has four USB ports; we use one of them to connect with the UNO board for serial communication. It also has GPIO pins, to which we connect a four-channel relay for controlling heavy electrical appliances like the water pump, heater, oxygen unit, and other electrical equipment. We use a Raspberry Pi as the central device (Fig. 2). Setup: We have coded the UNO board so that it collects all the sensor values, sorts them, and sends them as six fields separated by commas. These values are sent to the Raspberry Pi through a serial read, where they are visualized and used by the Raspberry Pi for automation. Operation: There are four important operations in this project:
Fig. 2 Raspberry pi 3B+
1. Data collection
2. Data splitting
3. Data visualization
4. Automation (Fig. 3).
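The data-collection and data-splitting steps can be sketched as below; the field order in `FIELDS`, the function name, and the sample line are illustrative assumptions, since the paper only states that the UNO sends six comma-separated values (which the Pi would typically read with pyserial's `readline()`):

```python
# hypothetical field order for the six comma-separated values from the UNO
FIELDS = ["temperature_c", "ph", "do_mg_l", "turbidity_ntu", "water_level", "ec"]

def split_sensor_line(line, fields=FIELDS):
    """Split one serial line such as '25.4,7.1,6.8,3.2,1,0' into named
    float values, ready for visualization and automation on the Pi."""
    values = [float(v) for v in line.strip().split(",")]
    if len(values) != len(fields):
        raise ValueError(f"expected {len(fields)} values, got {len(values)}")
    return dict(zip(fields, values))
```

Naming the fields at the parsing boundary keeps the downstream dashboard and automation code independent of the order in which the UNO happens to emit the values.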
Working: In this project, we collect data about the water in which the fish are to be reared from many sensors, such as the DO sensor, pH sensor, turbidity sensor, and EC sensor, and send the data
Fig. 3 Block diagram
to the Raspberry Pi, where Node-RED visualizes this data and automates the aquaculture system: filling the water and maintaining the pH level and oxygen level of the water. We have used Node-RED and Node.js to create two dashboards, one for the control panel and a second for the data graphs. We do this because it is helpful for people who plan to start their own fish farming industry. Fish farming is the method of rearing a single species of fish, such as mud crab, carp, shrimp, or other fresh- or saltwater fish, on a large scale in an artificial environment that is favourable for growth and breeding (Fig. 4). Role of Node-RED: Node-RED is a flow-based development tool built on Node.js. It helps in creating IoT applications through different built-in functions such as delay, trigger, (MQTT, HTTP) communication between IoT devices and sensors, and even a Node.js dashboard. It provides a small local server into which the user can log in and make use of the system. In our
Fig. 4 Hardware architecture
project, the Raspberry Pi acts as a local server running Node-RED and collects the sensor data from the UNO microcontroller; the data is then visualized and used for automation. A four-channel relay is directly connected to the Raspberry Pi for automation.
4 Results and Discussion In this project, we collect data about the water in which the fish are to be reared from many sensors, such as the DO sensor, pH sensor, turbidity sensor, and EC sensor, and send the data to the Raspberry Pi. Node-RED visualizes this data and automates the aquaculture system: filling the water and maintaining the pH level and oxygen level of the water. We have used Node-RED together with Node.js to make two dashboards, one serving as the control panel and the other for visualization (Fig. 5). The water quality can be maintained according to the values in Table 1 to attain good fish growth, and the fish species listed there can be reared with this project. Table 1 gives the parameters which must be met to gain a profit by achieving fish growth. If any value moves outside the ranges in the table, the user is alerted through a notification or mail.
Fig. 5 Flow chart
Table 1 Parameters for rearing

Fish type | pH level  | Dissolved oxygen level                 | Temperature (°C) | Turbidity (NTU)
Mud crab  | 7.77–7.96 | Female (2.0–20.7)–Male (5.0–27.3) µl   | 25–30            | 9.69 ± 3.41
—         | —         | 5 ppm                                  | 25–30            | —
—         | —         | 4 mg/L                                 | 25–32            | —
5 Conclusion The water quality can be maintained according to the values in Table 1 to attain good fish growth, and the fish species listed there can be reared with this project. Table 1 gives the parameters which must be met to gain a profit by achieving fish growth. If any value moves outside the ranges in the table, the user is alerted through a notification or mail.
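The alerting rule described above (notify when a reading leaves its allowed band) can be sketched as follows; the `LIMITS` bands are taken from the mud crab row of Table 1, while the function and key names are illustrative assumptions:

```python
# allowed bands mirroring the mud crab row of Table 1 (pH 7.77-7.96, 25-30 degC)
LIMITS = {"ph": (7.77, 7.96), "temperature_c": (25.0, 30.0)}

def out_of_range(reading, limits=LIMITS):
    """Return the (parameter, value) pairs outside their allowed band; a
    non-empty result is what would trigger the notification or mail."""
    alerts = []
    for key, (lo, hi) in limits.items():
        value = reading.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts.append((key, value))
    return alerts
```

In the deployed system this check would sit in the Node-RED flow between the sensor input and the notification node; the Python version is just the decision logic in isolation.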
References 1. S.P. Mary, E. Baburaj, A novel framework for an efficient online recommendation system using constraint based web usage mining techniques (2016) 2. M.P. Selvan, A. Chandrasekar, K. Kousalya, An approach towards secure multikeyword retrieval. J. Theo. Appl. Inf. Technol. 85(1) (2016) 3. C. Deng, Y. Gao, J. Gu, X. Miao, Research on the growth model of aquaculture organisms based on neural network expert system, in Sixth International Conference on Natural Computation (ICNC 2010), p. 1812–1815 4. W.-T. Sung, J.-H. Chen, D.-C. Huang, Y.H. Ju, Multisensor real-time data fusion optimization for IoT systems, in 2014 IEEE International Conference on Systems, Man, and Cybernetics, Oct 5–8, 2014, San Diego, CA, USA 5. W. Cheunta, N. Chirdchoo, K. Saelim, Efficiency improvement of an integrated giant freshwater white prawn farming in Thailand using a wireless sensor network. ResearchGate (2014) 6. J.-H. Chen, W.-T. Sung, G.-Y. Lin, Automated monitoring system for the fish farm aquaculture environment, in 2015 IEEE International Conference on Systems, Man, and Cybernetics 7. K. Patil, S. Patil, S. Patil, V. Patil, Monitoring of turbidity, pH & temperature of water based on GSM. Int. J. Res. Emerg. Sci. Technol. 2(3) (2015) 8. M. Pradeep Kumar, J. Monisha, R. Pravenisha, V. Praiselin, K. Suganya Devi, The real-time monitoring of water quality in IoT environment. Int. J. Innov. Res. Sci. Eng. Technol. 5(6) (2016) 9. M. Shirode, M. Adaling, J. Biradar, T. Mate, IoT Based Water Quality Monitoring System (Department of Electronics & Telecommunication, Keystone School of Engineering, Pune, Maharashtra, India) 10. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of Dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 11. A. Velmurugan, T. Ravi, Alleviate the parental stress in neonatal intensive care unit using ontology. Indian J. Sci. Technol. 9, 28 (2016)
12. A. Sangari, L. Manickam, J. Martin, R. Gomathi, RC6 based security in wireless body area network. J. Theo. Appl. Inf. Technol. 74(1) (2015) 13. G. Nagarajan, R.I. Minu, Wireless soil monitoring sensor for sprinkler irrigation automation system. Wireless Pers. Commun. 98(2), 1835–1851 (2018) 14. S. Vengaten, M. Wilferd Roshan, S. Prince Mary, D. Usha Nandini, Track Me & unlock: secured mutual authentication system of integrating phone unlock & women's safety application using MEMS, in IOP Conference Series: Materials Science and Engineering, vol. 590(1) (IOP Publishing, 2019), p. 012004 15. A.C. Santha Sheela, C. Kumar, Duplicate web pages detection with the support of 2D table approach. J. Theo. Appl. Inf. Technol. 67(1) (2014) 16. P. Kumari, S. Jancy, Privacy preserving similarity based text retrieval through blind storage. Am.-Eurasian J. Sci. Res. 11(5), 398–404 (2016) 17. V. Vineetha, P. Asha, A symptom based cancer diagnosis to assist patients using naive bayesian classification. Res. J. Pharm. Biol. Chem. Sci. 7(3), 444–451 (2016) 18. N.P. Singh, Joint Director for NEH Region, Tripura Centre, Lembucherra-799210 Tripura West 19. S. Kayalvizhi, G. Koushik Reddy, P. Vivek Kumar, N. VenkataPrasanth, Cyber aqua culture monitoring system using Arduino and Raspberry Pi. Int. J. Adv. Res. Electr. Electron. Instr. Eng. (2015) 20. S.B. Chandanapalli, E. Sreenivasa Reddy, D. Rajya Lakshmi, Design and deployment of aqua monitoring system using wireless sensor networks and IAR-kick. J. Aquacult. Res. Dev. (2014) 21. D.S. Simbeye, S.F. Yang, Water quality monitoring and control for aquaculture based on wireless sensor networks. J. Netw. 9(4) (2014) 22. S.L. Jany Shabu, C. Jayakumar, Multimodal image fusion using an evolutionary based algorithm for brain tumor detection. Biomed. Res. 29(14), 2932–2937 (2018) 23. M. Selvi, P.M. Joe Prathap, Performance analysis of QoS oriented dynamic routing for data aggregation in wireless sensor network. Int. J. Pharm. Technol. 9(2), 29999–30008 (2017) 24. V. Kanimozhi, T.P. Jacob, Artificial intelligence based network intrusion detection with hyperparameter optimization tuning on the realistic cyber dataset CSE-CIC-IDS2018 using cloud computing, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), p. 0033–0036 25. J. Refonaa, M. Lakshmi, Cognitive computing techniques based rainfall prediction—a study, in International Conference on Computation of Power, Energy Information Communication (ICCPEIC) (IEEE, 2018), p. 142–144 26. R. Sethuraman, T. Sasiprabha, A. Sandhya, An effective QoS based web service composition algorithm for integration of travel & tourism resources. Procedia Comput. Sci. 48, 541–547 (2015)
Perceptual Image Hashing Using Surf for Tampered Image Detection Chavva Sri Lakshmi Rama Swetha, Chakravaram Divya Sri, and B. Bharathi
Abstract Photographs have become a part of everyone's life. They connect us to our past and are used as evidence in various fields like advertising, intelligence, journalism, and science. Today, photographs have been replaced by digital images. Digital images are collections of small bits of data, i.e., pixels, which are stored in computers. The problem with digital images is that they can be tampered with easily. Tampering is nothing but replacing the original content with some new content. Nowadays, digital images can be tampered with effortlessly using software like Photoshop, PhotoPlus, and Corel Paint Shop. In this digital world, it is difficult to find out whether an image is real or tampered. As a result, tampered image detection has become more important for verifying digital image authenticity. In this paper, the techniques that are used to detect tampered images are explained in detail, and a new method known as perceptual image hashing using the speeded-up robust features (SURF) technique is briefed. Keywords SURF · Cloning · Tampering · Perceptual image hash · Classification · Physically unclonable function (PUF)
1 Introduction Image processing [1, 2] is a fast-growing technology in today's world. Image processing is a process for performing operations on images. This process is used to extract important features from images. C. S. L. R. Swetha (B) · C. Divya Sri · B. Bharathi Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] C. Divya Sri e-mail: [email protected] B. Bharathi e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_47
Image processing is a kind of signal processing in which the input is an image and the output is an image or a set of features derived from it. In an image processing system [3, 4], an image is treated as a two-dimensional signal to which a set of image processing methods is applied [5, 6]. Image processing is used in business, engineering, and computing disciplines. Image processing [7, 8] and classification involve three steps: first, import the image using image acquisition tools [9]; next, analyze and manipulate the image; finally, alter the image based on the analysis. Image processing comes in two types, analog and digital. Analog processing is used for printing hard copies and photographs, while digital processing [10, 11] supports image manipulation. Image tampering is a digital art that needs a detailed understanding of image properties and visual creativity [12]. There are many motives behind image tampering, from amusement to creating false evidence, and there are several common forms. Cloning: copying a portion of an image from one area and pasting it into another. Cloning [5, 13] is a straightforward process: set a source point and paste it elsewhere on the image, so that unwanted parts are removed and replaced. It is commonly used to hide elements of a picture [14]. Resizing: adding or removing pixels, also called resampling; when an image is resized, its pixel dimensions increase or decrease in width and height. Cropping is another way to resize an image. Slicing: partitioning an image into several subimages that can be saved in different formats.
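Cloning as described above is simple to express in code. The sketch below (pure NumPy; the function name `clone_region` is ours, not from the paper) copies a square patch from one location of an image array and pastes it at another, producing a tampered copy:

```python
import numpy as np

def clone_region(img, src, dst, size):
    """Copy a size x size patch from src (row, col) and paste it at dst."""
    out = img.copy()
    r0, c0 = src
    r1, c1 = dst
    out[r1:r1 + size, c1:c1 + size] = img[r0:r0 + size, c0:c0 + size]
    return out

# toy 6x6 grayscale image
img = np.arange(36, dtype=np.uint8).reshape(6, 6)
tampered = clone_region(img, src=(0, 0), dst=(3, 3), size=2)
```

The original array is left untouched; only the returned copy carries the pasted region, which is exactly the duplication that copy-move detectors look for.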
Data mining [15, 16] is a process used to extract information from a large set of raw data; it involves analyzing data patterns in large batches of data using one or more software tools. Machine learning [8, 17] enables a system to learn and improve from experience. MATLAB [11] is a programming language and environment with high performance in numerical computation and strong visualization. It provides an interactive environment and hundreds of built-in functions for technical computation, animation, and graphics, and is widely used by engineers and scientists. The MATLAB language is matrix-based and expresses computational mathematics naturally [18]. It is used to analyze data, develop algorithms, and create applications and models. MATLAB's language, apps, and math functions let users quickly explore multiple approaches to a solution, and it serves a range of applications such as image processing, deep learning, video processing, and signal processing.
2 Existing System One existing scheme embeds a watermark image into a color host image using the discrete wavelet transform (DWT) and QR decomposition [19]. The host image is transformed by a one-level DWT and divided into pixel blocks, and every pixel block is then decomposed by QR decomposition. In this scheme, the watermark can be extracted from the watermarked image without the help of the original image. Digital image watermarking [20] is a technique in which a message is hidden inside a digital signal itself. Its main drawbacks are that it is time-consuming and can be easily removed, so the quality of the watermark is compromised. To overcome this, multiple digital image watermarking came into existence; only singular values differ between single and multiple image watermarking, and the quality of the watermarked image improves. A real scene is portrayed as a digital image by storing it as numbers in a computer; the translation into numbers is done by dividing the image into small areas called pixels, which can be easily manipulated with powerful software. The availability of powerful editing and image processing software makes digital images easy to edit; a major problem is that important features of an image can be added or removed without leaving visible tampering traces. The copy-move attack [21] is a type of digital forgery in which a small portion of an image is copied from one place and pasted in another place in the same image, usually with the intent of covering an important image feature. Digital image forgeries are becoming a serious social issue because of these powerful tools. Copy-move forgery can be detected by extracting scale-invariant feature transform (SIFT) descriptors of an image.
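The principle behind copy-move detection — finding two regions of the same image with matching content — can be illustrated with a deliberately simplified stand-in for descriptor matching: exact duplicate-block search. This is not the SIFT-based pipeline of [21, 22] (which is robust to rotation and noise); it only conveys the matching idea, and all names below are ours:

```python
import numpy as np
from collections import defaultdict

def find_duplicate_blocks(gray, block=4):
    """Return groups of (row, col) positions whose block x block patches are
    byte-identical -- a simplified stand-in for descriptor matching."""
    seen = defaultdict(list)
    h, w = gray.shape
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            seen[gray[r:r + block, c:c + block].tobytes()].append((r, c))
    # keep only patches that occur in more than one place
    return [locs for locs in seen.values() if len(locs) > 1]
```

Real detectors replace the byte-exact key with invariant descriptors so that recompressed or rotated copies still match.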
These descriptors are invariant to changes in scaling, rotation, illumination, etc. Because the copied region and the region into which it is pasted are similar, the descriptors are matched against each other to find any possible forgery. Experiments have examined the efficiency of this copy-move detection on various forgeries and tested its sensitivity and robustness to post-processing such as lossy JPEG compression and additive noise. The main purpose of image forensics is to establish digital image authenticity; among the many tampering attacks, copy-move is one of the most common. Using local visual features such as SIFT [21, 22], copy-move forgeries can be detected: matching of SIFT features is followed by a clustering procedure in which geographically close keypoints are grouped. This process is unsatisfactory when the keypoints in a group are geographically far from each other or when the source region is close to the pasted area. In such cases, the copied area must be estimated more precisely to localize where the forgery has taken place. Using the J-Linkage algorithm, clustering can be performed that is robust to geometric transformation. Experiments on different datasets showed that this
method outperforms many other techniques in terms of reliability of copy-move detection. Many algorithms have been proposed for copy-move forgery detection, but only a few databases [23] are available for evaluating them. A new database was developed consisting of 260 forged image sets; each set consists of an original image, a forged image, and two masks. Images are divided into five categories according to the manipulation applied: rotation, scaling, translation, distortion, and combinations of these. Post-processing methods such as JPEG compression, noise addition, blurring, and color reduction are applied to all original and forged images. Each PUF [24] is different and generates a response that is unique to that PUF. Each parameter of the cell is measured across many cells arranged in a matrix, and an addressable PUF generator is used to protect the network. This requires no storage in a database, and the data is secured. Digital image watermarking [25] is a process of embedding digital information into digital signals. Watermarking embeds data into digital content such as text, images, video, and audio without disturbing the quality of the media, and it is an efficient way to prevent illegal copying of information across networks. Nowadays, the safety and security of digital images have become a top-priority issue, and watermarking is used to check authentication and copyright protection. Tampering is the process of changing the shape and size of an image or adding unwanted information to it, and many algorithms are used to detect tampered images. One of the major problems in image tampering arises when the transmitted image is received under different attacks, such as content-preserving manipulations.
To overcome this problem, a hashing method is used to detect tampered locations in images [10]. In this process, a hash key is generated and attached to the image before it is sent to the destination. At the destination, the image along with the hash key is analyzed under the geometric transformations, and the values are compared with those of the original image. The comparison is done using the restored image, and authentication is performed with color, global, and similar hash components; these methods check whether the received image has the same content or has been tampered with. If any tampering has been done to the image, the tampered locations are displayed using a multi-scale hash component. Watermarking [26] hides information about a digital signal within the signal itself; it embeds information into digital signals such as images and audio. Reversible watermarking restores the original image without distortion once the hidden data is extracted; a novel reversible watermarking technique uses interpolation to embed data into images with imperceptible modification. Radio-frequency identification (RFID) tags, used for object tracking and image authentication, raise security and privacy problems for users. To address this, hardware-based approaches to security on RFID tags rely on a physically unclonable function [27]. These functions exploit the inherent variability of the delay of wires and the delay of
the parasitic gates in manufactured circuits, which achieve an order-of-magnitude reduction in gate count compared with traditional cryptographic functions. Protocols exist for privacy-preserving tag identification and for securing message authentication codes; to evaluate them, PUFs are compared with digital cryptographic functions. Systems that generate identifying patterns can use memory as a physically unclonable function (PUF). Such a PUF produces responses that depend on physical, random characteristics of the memory; these random characteristics have vulnerabilities, such as freezing attacks, which lead to aging. To overcome this problem, a memory-overwriting device is introduced: it scrubs the memory locations to remove the uncertainty in the patterns and eliminate vulnerabilities from the responses. An anti-degradation device [28] also writes back the inverse of the responses read from the memory, which reduces the effects of aging. Digital images are generated by different sources such as cameras, scanners, and computer graphics, and they can be very important evidence in criminal and forensic investigations; yet they are very easy to tamper with. To address this, SVM classification [29] is used to determine, after feature extraction and noise removal, whether an image was captured by a digital camera, generated by a computer, or captured by a scanner.
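The hash-attach-and-compare workflow described in this section can be sketched with a simple perceptual average hash. This is an illustrative stand-in, not the multi-scale hashing of [10]; the function names are ours, and the crude box downscaling assumes the image dimensions divide evenly by the hash size:

```python
import numpy as np

def average_hash(gray, size=8):
    """Perceptual average hash: downscale, threshold at the mean, emit bits."""
    h, w = gray.shape
    # crude box downscaling to size x size (assumes dimensions divide evenly)
    small = gray.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; small distances mean similar content."""
    return int(np.count_nonzero(h1 != h2))
```

An untouched image reproduces its stored hash exactly (distance 0), while tampering flips bits, so a distance above a chosen threshold flags the image as modified.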
3 Proposed System A source image can be identified mainly by machine learning-based methods, which involve three steps: image feature extraction, classifier training, and image source class prediction. The selection of appropriate features that represent the unique characteristics of the underlying devices is important. Digital images consist of many picture elements known as pixels, each with its own numeric value. Images may be vector or raster type: vector images consist of elements with both magnitude and direction, while raster images consist of a fixed number of rows and columns of pixels. By default, digital images are raster (bit-mapped) images. Digital image processing is the process of creating, communicating, and displaying digital images using algorithms. Features of a digital image can be extracted algorithmically on the basis of knowledge of sensor imperfections, color filter array interpolation, and statistical image properties. Speeded-up robust features (SURF) [6] is a detector of local features used for recognizing objects, registering images, and classifying them. It was inspired by the scale-invariant feature transform (SIFT) descriptor. SIFT applications include video tracking, image stitching, navigation, and robotic
mapping. The keypoints of SIFT objects are extracted and stored in databases. Feature matching is done by comparing the Euclidean distance between the features of the reference image stored in the database and those of the new image. Because of the high dimensionality of the SIFT descriptor, the subsequent feature matching is slow, which is the main drawback of SIFT. SURF is more robust and faster than SIFT; the overall algorithm is the same as SIFT's, differing only in the details of each step. SURF offers many desirable properties: robustness, repeatability, speed, distinctiveness, and accuracy. It can be computed and compared much faster and is highly invariant to lighting, translation, scale, contrast, and rotation. Rotation-invariant SURF features: to achieve efficient transmission, only a small constant number of the strongest SURF features are kept for images that have more feature points than a predetermined threshold.
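The Euclidean-distance feature matching described above can be sketched as a nearest-neighbour search with Lowe's ratio test, a common heuristic for rejecting ambiguous matches. The sketch assumes descriptors are given as NumPy rows and is not tied to any particular SURF implementation:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only when it passes Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if the best match is clearly closer than the runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The ratio test is what keeps high-dimensional matching usable: a match is trusted only when no second candidate is nearly as close.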
4 System Architecture As shown in Fig. 1, an input image in JPEG format is taken and converted into a grayscale image using the built-in MATLAB function rgb2gray. The image then undergoes different filtering techniques. Median filtering [16] sorts the pixel values in a neighborhood in ascending order and takes the central value of the sorted order; the median is able to eliminate salt-and-pepper noise. Morphological filtering [18] is a collection of non-linear operations related to features of the image such as boundaries. Histogram equalization [13] plays a major role in image processing as a technique for increasing image contrast. Together, these filtering techniques [3] remove noise, increase contrast, reduce blurriness, and thus enhance the visual clarity of the image. Feature extraction then takes place using speeded up robust features (SURF), and the extracted features undergo further processing. Segmentation [18] divides an image into many sub-parts, each comprising many pixels; its purpose is to obtain a simplified image that is easier and more meaningful to analyze. The pixels in a region possess similar characteristics in terms of intensity, color, and texture: pixel values within a given range are grouped, and a fusion of pixels takes place. Perceptual image hashing is a technique used to generate a hash key, and the hash values of the image are stored in the database. To determine whether a new image has been tampered with, the new image undergoes the same process and generates a hash key, which is also stored in the database. Cloning is a direct copy of an original image: it copies all the content of the original image to form a duplicate, and much software is available online for cloning. Feature extraction extracts the features of an image.
It calculates the direction and magnitude of each feature and removes all the points that do not have
Fig. 1 Steps to detect image tampering
radial symmetry. The feature descriptor is used to identify the center of each circular object; since the object is circular, it has no single gradient angle (all angles are equal), so the descriptor is angle invariant. A physically unclonable function is used as a unique identifier and maps challenges to responses in an unpredictable way. The hash values of the old image and the new image are compared using the PUF [8]: if the hash values are equal, the image has not been tampered with; otherwise, it has been (Fig. 1).
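As a concrete illustration of the preprocessing stage of Fig. 1, the median filter mentioned above — sort the neighbourhood, keep the central value — can be written directly. This is a naive NumPy sketch (edge pixels are left unchanged), not the MATLAB built-in:

```python
import numpy as np

def median_filter3(gray):
    """3x3 median filter over interior pixels -- removes salt-and-pepper
    noise before feature extraction; border pixels are left as-is."""
    out = gray.copy()
    h, w = gray.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.median(gray[r - 1:r + 2, c - 1:c + 2])
    return out
```

A single "salt" pixel is outvoted by its eight neighbours, so the median restores it, which is exactly why this filter precedes feature extraction in the pipeline.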
5 Conclusion Image processing performs operations on images in order to extract important and useful features from them. It is a kind of signal processing in which the input is an image and the output is an image or features associated with it. Cloning is a direct copy of an original image: it copies all the content of the original image to form a duplicate. SURF is a fast, robust feature extraction technique that can detect a tampered image even when manipulations such as JPEG compression, rotation, filtering,
scaling, or noise addition have been applied to the digital image. SURF is a scale-invariant feature descriptor in the spirit of SIFT: it transforms and describes the features of images. The extracted features undergo a special kind of hashing known as perceptual image hashing, which is used to generate the hash key. A physically unclonable function is used to uniquely identify a secret key and map it against the responses; using the secret key, a tampered image can be detected.
References
1. https://ieeexplore.ieee.org/abstract/document/4517944
2. https://www.encyclopedia.com/computing/news-wires-white-papers-and-books/digitalimages
3. https://www.britannica.com/science/cloning
4. https://en.wikipedia.org/wiki/Physical_unclonable_function
5. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015)
6. https://kids.mongabay.com/lesson_plans/lisa_algee/mining.html
7. https://sisu.ut.ee/imageprocessing/book/1
8. https://www.geeksforgeeks.org/digital-image-processing-basics/
9. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html
10. C.-P. Yan, C.-M. Pun, X.-C. Yuan, Multi-scale image hashing using adaptive local feature extraction for robust tampering detection. Signal Process. 121, 1–16 (2016). https://www.sciencedirect.com/science/article/pii/S0165168415003709
11. https://scikit-image.org/docs/0.12.x/auto_examples/xx_applications/plot_morphology.html
12. https://www.mathworks.com/discovery/what-is-matlab.html
13. https://en.wikipedia.org/wiki/Image_segmentation
14. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of Dengue virus outbreak using fog computing. Health Technol. 1–9 (2019)
15. https://www.tutorialspoint.com/dip/image_processing_introduction.htm
16. https://www.geeksforgeeks.org/machine-learning/
17. https://ieeexplore.ieee.org/abstract/document/4429280
18. https://www.simplilearn.com/classification-machine-learning-tutorial
19. S. Jia, Q. Zhou, H. Zhou, A novel color image watermarking scheme based on DWT and QR decomposition. J. Appl. Sci. Eng. 20(2), 193–200 (2017). https://pdfs.semanticscholar.org/f59d/6ce27dd8dbeade093e51f62c09e27392f387.pdf
20. T. Rathi, P. Rudra Maheshwari, M. Tripathy, R. Saraswat, X. Felix Joseph, A comparative analysis of watermarked and watermark images using DCT and SVD based multiple image watermarking, in International Conference on Advances of Science and Technology, vol. 274 (LNICST), pp. 574–581 (2019). https://link.springer.com/chapter/10.1007/978-3-030-15357-1_46
21. A. Jessica Fridrich, B. David Soukal, A. Jan Lukas, Detection of copy-move forgery in digital images (2003). https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.121.1962
22. I. Amerini, L. Ballan, R. Caldelli, A. Del Bimbo, L. Del Tongo, G. Serra, Copy-move forgery detection and localization by means of robust clustering with J-Linkage. Signal Process. Image Commun. 28(6), 659–669 (2013). https://doi.org/10.1016/j.image.2013.03.006
23. D. Tralic, I. Zupancic, S. Grgic, M. Grgic, CoMoFoD—new database for copy-move forgery detection, IEEE (2013). https://ieeexplore.ieee.org/abstract/document/6658316
24. B.F. Cambou, PUF-based password generation scheme. https://patents.google.com/patent/US10320573B2/en
25. A.S. Kapse, S. Belokar, Y. Gorde, R. Rane, S. Yewtkar, Digital image security using digital watermarking. Int. Res. J. Eng. Technol. 05(03), 163–166 (2018). https://www.irjet.net/archives/V5/i3/IRJET-V5I336.pdf
26. L. Luo, Z. Chen, M. Chen, X. Zeng, Z. Xiong, Reversible image watermarking using interpolation technique. IEEE Trans. Inf. Forensics Secur. 5(1), 187–193 (2010). https://ieeexplore.ieee.org/abstract/document/5313862
27. L. Bolotnyy, G. Robins, Physically unclonable function-based security and privacy in RFID systems, in Fifth Annual IEEE International Conference on Pervasive Computing and Communications (2007). https://ieeexplore.ieee.org/abstract/document/4144766
28. P.T. Tuyls, G.J. Schrijen, Physically unclonable function with tamper prevention and anti-aging system (2010). https://patents.google.com/patent/US8694856B2/en
29. N. Khanna, G.T.-C. Chiu, J.P. Allebach, E.J. Delp, Forensic techniques for classifying scanner, computer generated and digital camera images, in IEEE International Conference on Acoustics, Speech and Signal Processing (2008)
Diabetic Retinopathy Detection Athota Manoj Kumar, Atchukola Sai Gopi Kanna, and Ramya G. Franklin
Abstract Diabetic retinopathy is a complication of diabetes that affects the eyes. It is caused by damage to the blood vessels of the light-sensitive tissue at the back of the eye (retina). Diabetic retinopathy may cause no symptoms, or only mild vision problems, at the onset, but it can cause blindness in the long run. The number of doctors compared to the number of patients in India is very low, which leads to delayed diagnosis of various diseases. Delayed diagnosis of diabetic retinopathy leads to irreversible damage to the eyes, eventually resulting in total permanent blindness. The condition can be treated, but the damage is not completely reversible. To avoid this situation, we decided to automate the diagnosis procedure using AI. The increase in diabetes cases is straining current manual testing capacity, and new algorithms for assisted diagnosis are proving valuable. Early diabetes screening benefits patients and reduces severe health outcomes such as visual impairment, so we use a support vector machine (SVM) to classify the extracted histogram. A histogram binning scheme for feature representation is proposed. Keywords Diabetic retinopathy · Support vector machine · Retinal fundus · Local energy-based shape histogram
A. M. Kumar (B) · A. S. G. Kanna · R. G. Franklin Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. S. G. Kanna e-mail: [email protected] R. G. Franklin e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_48
1 Introduction Diabetes is a condition that arises when the pancreas does not release enough insulin or the body cannot properly use it. When blood glucose gets too high, it can damage the blood vessels in the eyes, and this damage may lead to diabetic retinopathy. In fact, the longer someone has diabetes, the more likely he or she is to have retinopathy. Retinopathy is the clinical term for damage to the small blood vessels (capillaries) that feed the retina, the tissue at the back of the eye that captures light and relays information to the brain. These vessels are often affected by the high glucose levels associated with diabetes, and these fragile vessels leak easily. The eye is one of the human body's most complex and sensitive parts. The interior portion of the eye contains the retina, optic disc, macula, and fovea, which can generally be seen through the pupil during an eye examination [1]. The retina is sensitive to light, and it is the retina that is damaged by diabetes [2]. Commonly, DR is asymptomatic until it becomes a genuine threat to vision; therefore, diagnosing DR at early stages is vital to increase the chances of early treatment [3]. Via automatic screening, DR can be recognized at early stages while limiting the subjectivity and human error of the manual approach [1, 2, 4]. Diabetic retinopathy is characterized by the presence of hemorrhages on the retina, and the level of severity is decided by how much damage has been caused to the blood vessels in the retina. Most often, fundoscopic photographs of the retina are screened by ophthalmologists to detect the disease [5, 6]. Many researchers have devoted their resources to the development of new computer-aided detection systems for DR.
Non-uniform lighting, low contrast, small lesions, and the presence of structures in a typical retina that have similar attributes to hemorrhages (HEM) [3, 7, 8], for instance the optic disc and blood vessels, make it hard to accurately detect HEM. The idea is to use machine learning to detect and classify DR in a subject, automating the procedure so as to manage the low ratio of specialists to diabetic patients in India [4]. TensorFlow is an open-source software library for large-scale dataflow and differentiable programming [9]; it is a math library that is also widely used for machine learning. With the help of TensorFlow, we can implement the basic operations required to build a K-means algorithm for extraction of HEM features, and a support vector machine and K-NN for image classification [10]. Automated detection of diabetic retinopathy using a support vector machine achieves an accuracy of 97.608% [8, 10, 11].
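The K-means step mentioned above for grouping HEM feature vectors can be sketched with plain Lloyd iterations in NumPy. This is an illustrative implementation, not the TensorFlow version used in the project, and the function name is ours:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: cluster feature vectors into k groups."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest center
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2),
            axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

On pixel-level feature vectors, the resulting clusters separate candidate lesion pixels from background, which is the role K-means plays in the extraction stage.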
2 Related Works In order to identify HEM structures, DR detection using texture features has been proposed. In [7, 12, 13], the authors use the Local Binary Pattern (LBP) for
hemorrhage and exudate detection. The tests were done on an 88-image database, with 85.15% average accuracy and 0.86 AUC. The authors of [14–16] used fluorescein angiography (FA) images to detect microaneurysms (MA). They used the Radon Transform (RT) and multi-scale windows. Tests were performed on three databases of 121, 52, and 21 images, respectively. The best results were obtained for the first two databases, with a sensitivity of 93% and specificity of 74% for the first database and 100% and 70% for the second. MAs can be detected efficiently using this imaging technique; however, it requires injecting a specific dye into the individual, making the process less attractive as it can have unpleasant side effects [17]. The most practical approach remains processing conventional retinal fundus images. In [18], the discrete wavelet transform was used: energy features were extracted in three directions (horizontal, vertical, and diagonal) from the three-level coefficients, giving several significant features. Classification with various kernels was performed using a Support Vector Machine (SVM). The authors used 240 retinal fundus images (120 normal and the rest with different degrees of DR) [19]. They reported 98% accuracy, sensitivity, and specificity using an SVM with a third-order polynomial kernel [20, 21]. The results of these research works are encouraging; however, the number of images used in the experiments is small. Deep learning models for DR detection have been suggested recently [22–24]. On 71,896 retinal images, [25, 26] trained a large convolutional network and obtained an AUC of 0.936. Using ten additional datasets, they obtained AUCs ranging from 0.887 to 0.982.
Notably, deep learning strategies are data hungry and need enormous numbers of images for training. In this work, we use an openly accessible dataset with a limited number of images [5, 6]. In general, DR classification using texture features based on the Local Ternary Pattern (LTP) and the Local Energy-based Shape Histogram (LESH) has achieved an accuracy of 0.904 using SVM.
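The LBP texture features referred to throughout this section can be computed with a basic 8-neighbour operator and binned into a histogram, matching the histogram-based feature representation used later. A minimal sketch (uniform-pattern and multi-radius LBP variants are omitted; names are ours):

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour LBP: threshold each neighbour against the center
    pixel, pack the 8 bits into a code, and histogram the codes."""
    h, w = gray.shape
    center = gray[1:h - 1, 1:w - 1]
    # offsets in clockwise order starting at the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        neigh = gray[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        codes |= (neigh >= center).astype(np.uint8) << np.uint8(bit)
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()  # normalised 256-bin feature vector
```

The resulting 256-bin normalised histogram is the fixed-length texture descriptor that a classifier such as an SVM consumes.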
3 Proposed System We proposed right now, robotized approach for order of the infection diabetic retinopathy utilizing fundus pictures is introduced. Right now, a programmed appraisal arrangement of diabetic retinopathy utilizing support vector machine has been examined with different preprocessing systems including post filtration followed by the extraction of a few highlights, for example, shading, shape, force, entropy, vitality, surface, and so on. Arrangement of anomalies from ordinary fundus retinal pictures can be performed with different classifiers. From perception, support vector machine is the best classifier for removing and characterizing the irregularities in retina like microaneurysms, hard exudates, delicate exudates, neovascularization, and macular edema in a compelling way (Fig. 1). We prove they’ve been at features removed from LBP. Support Vector Machines (SVM) are used to collect the histogram that has been removed. A histogram binning
Fig. 1 Overview of the proposed system
method is proposed for feature representation. Automated detection of diabetic retinopathy using the SVM algorithm has an accuracy of 97.608%. All the methods used for classification perform acceptably; however, from the results obtained, SVM is more advantageous [3] than CNN and DNN. Consequently, this work provides a strong diabetic retinopathy diagnosis technique that helps to diagnose the condition at an early stage and reduces manual work. The support vector machine classifier achieves greater precision in detecting diabetic retinopathy, which simplifies the screening of retinal images for ophthalmologists. A. Pre-Processing In detecting abnormalities in a fundus image, the image must be pre-processed to correct uneven illumination, insufficient contrast between exudates and background pixels, and noise in the input fundus image. B. Segmentation The primary objective of segmentation is to group the image into regions with the same property or characteristics. It plays a significant role in image analysis by facilitating the delineation of anatomical structures and other regions of interest. C. Edge Enhancement Edge enhancement is an image processing filter that increases the edge contrast of an image to improve its apparent sharpness. Most digital cameras also perform some edge enhancement.
Diabetic Retinopathy Detection
D. Color Space Conversion Color space conversion is the translation of the representation of a color from one basis to another. This typically occurs when converting an image represented in one color space to another color space, the goal being to make the converted image look as similar as possible to the original.
E. Binarization Binarization is the process of converting the input features of an entity into vectors of binary numbers to make classifier algorithms more efficient.
F. Morphological Hole Filling Morphology is an image processing procedure that is based on shapes. A morphological operation applies a structuring element to an input image, producing an output image of the same size.
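Steps E and F above can be sketched in a few lines. This is a minimal sketch assuming a grayscale image scaled to [0, 1]; SciPy's `binary_fill_holes` (with its default structuring element) stands in for the morphological hole-filling step and is not necessarily the operator used in the paper:

```python
import numpy as np
from scipy import ndimage

def binarize_and_fill(image, threshold=0.5):
    """E. Binarization: threshold the grayscale image to a boolean mask.
    F. Morphological hole filling: fill regions enclosed by foreground."""
    binary = image > threshold
    return ndimage.binary_fill_holes(binary)

# Toy 5x5 "ring" with a one-pixel hole in the middle
ring = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]], dtype=float)
filled = binarize_and_fill(ring)
```

After filling, the enclosed center pixel becomes foreground while the outer background is left untouched.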
4 Results and Discussion We have used standard public datasets for this project. Several levels of software testing were applied; the purpose of testing is to discover the errors that remain in the software, using test cases with a high probability of exposing common faults. Such testing helps to uncover still unknown errors. The source code is written in Python, and we use Anaconda Navigator and Jupyter Notebook to run and view it. The SVM and K-NN algorithms are used in this project to obtain the accuracy results.
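The SVM versus K-NN comparison mentioned above can be sketched as follows. Since the paper's fundus-image feature vectors are not reproduced here, scikit-learn's built-in digits dataset stands in for the extracted features, and the hyperparameters are illustrative assumptions rather than the authors' settings:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder features/labels; in the paper these would be the
# LBP-based features extracted from fundus images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

svm_acc = accuracy_score(
    y_test, SVC(kernel="rbf").fit(X_train, y_train).predict(X_test))
knn_acc = accuracy_score(
    y_test, KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train).predict(X_test))
```

The same train/test split is used for both classifiers so that the accuracies are directly comparable.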
Fig. 2 Input retina image
Fig. 3 Appearance of retina
Fig. 4 Result after processing the image
After compiling the source code, a new window opens as shown in Fig. 2, where we have to supply the retina image. The input retina image appears in the window as shown in Fig. 3. There are several options to view the blood vessels and exudates of the retina. The image is processed, and the system shows whether the retina is affected or not, as shown in Fig. 4.
5 Conclusion Diabetic retinopathy (DR) is a condition in which the eye is damaged by bleeding from the retinal blood vessels. The DR stage is determined based on the blood vessels, exudates, hemorrhages, microaneurysms, and surface texture. Traditionally, an ophthalmoscope is used by an ophthalmologist to image the blood vessels and judge the stage. More recently, digital imaging has become available as a DR screening tool. It provides high-quality permanent records of the retinal appearance, which can be used to monitor progression or response to treatment and which an ophthalmologist can examine; digital images can also be processed using automated analysis systems. Finally, the SVM algorithm achieves the best accuracy level of 99%. BDR and PDR patients can be recognized from the color image or gray-level image. The different kinds of diabetic retinopathy are associated with red spots and bleeding in both the BDR and PDR stages of the illness. Cases classified as SDR are referred to the ophthalmologist. Future work includes the improvement of the AI algorithms used by ophthalmologists to interpret retinal images, and the development of a DR grading and database system.
References 1. A.N. Repaka, S.D. Ravikanti, R.G. Franklin, Design and implementing heart disease prediction using naives Bayesian, in Proceedings of the International Conference on Trends in Electronics and Informatics, ICOEI 2019, pp. 292–297 (2019) 2. S. Mohammadan, A. Karsaz, Y.M. Roshan, A comparative analysis of classification algorithms in diabetic retinopathy screening, in 2017 7th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, pp. 84–89 (2017) 3. G. Nagarajan, R.I. Minu, B. Muthukumar, V. Vedanarayanan, S.D. Sundarsingh, Hybrid genetic algorithm for medical image feature extraction and selection. Procedia Comput. Sci. 85, 455–462 (2016) 4. R.G. Franklin, B. Muthukumar, A prediction system for diagnosis of Diabetes Mellitus. J. Comput. Theor. Nanosci. 17(1), 6–9 (2017). ISSN: 1546-1955 5. X. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 19(6), 1635–1650 (2010) 6. S.K. Wajid, A. Hussain, Local energy-based shape histogram feature extraction technique for breast cancer diagnosis. Expert Syst. Appl. 42(20), 6990–6999 (2015) 7. S. Bhutada, Ch. Mukundha, G. Shreya, Ch. Lahari, Convolutional neural networks for automatic classification of diabetic retinopathy. Int. Res. J. Eng. Technol. (IRJET) 05(04) (2018). e-ISSN: 2395-0056 8. K.H. Englmeier, K. Schmid, C. Hildebrand, S. Bachler, M. Porta, M. Maurino, T. Bek, Early detection of diabetes retinopathy by new algorithms for automatic recognition of vascular changes. Eur. J. Med. Res. 9(10), 473–488 (2004) 9. M. RatnaKaavya, V. Ramya, R.G. Franklin, Alert system for driver's drowsiness using image processing, in Proceedings—International Conference on Vision Towards Emerging Trends in Communication and Networking, ViTECoN 2019, pp. 284–288 (2019) 10. B.M. Ege, O.K. Hejlesen, O.V. Larsen, K. Møller, B. Jennings, D. Kerr, D.A. Cavan, Screening for diabetic retinopathy using computer-based image analysis and statistical classification. Comput. Methods Programs Biomed. 62(3), 165–175 (2000)
11. K. Estabridis, R.J.P. de Figueiredo, Automatic detection and diagnosis of diabetic retinopathy, in IEEE International Conference on Image Processing, ICIP, 2007 12. Cigna healthcare coverage position—A Report (2007). Retrieved from: https://www.cigna.com/customer_care/healthcare_professional/coverage_positions/medical/mm_0080_coveragepositioncriteria_imaging_systems_optical.pdf. Last accessed 5 Dec 2007 13. J.M. Cree, J.J.G. Leandro, J.V.B. Soares, R.M. Cesar Jr., H.F. Jelinek, D. Cornforth, Comparison of various methods to delineate blood vessels in retinal images, in Proceedings of the 16th Australian Institute of Physics Congress, Canberra, 2005 14. M.N. Ashraf, Z. Habib, M. Hussain, Texture feature analysis of digital fundus images for early detection of diabetic retinopathy, in 2014 11th International Conference on Computer Graphics, Imaging and Visualization, Singapore, pp. 57–62 (2014) 15. Diabetic Retinopathy. Retrieved from: https://www.hoptechno.com/book45.htm. Last accessed 17 Jan 2009 16. Early Treatment Diabetic Retinopathy Study Research Group, Grading diabetic retinopathy from stereoscopic color fundus photographs: an extension of the modified Airlie House classification, ETDRS report number 10. Ophthalmology 98, 786–806 (1991) 17. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 18. M. Tawakoni, R. Shahri, H. Pourreza, A. Mehdi Zadeh, T. Banaee, M. BahreiniToosi, A complementary method for automated detection of microaneurysms in fluorescein angiography fundus images to assess diabetic retinopathy. Pattern Recognit. 46(10), 2740–2753 (2013) 19. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN's, in 2019 International Conference on Communication and Signal Processing (ICCSP). IEEE, pp. 0592–0595 (2019) 20. A. Ben-Hur, C.S. Ong, S. Sonnenberg, B.
Schölkopf, G. Rätsch, Support vector machines and kernels for computational biology. PLoS Comput. Biol. 4(10), e1000173 (2008) 21. Microaneurysms in diabetic retinopathy. Br. Med. J. 3(5774), 548–549 (1971). https://www.jstor.org/pss/25415740 22. K. Noronha, U.R. Acharya, K.P. Nayak, S. Kamath, S.V. Bhandary, Decision support system for diabetic retinopathy using discrete wavelet transform. Proc. Inst. Mech. Eng. H 227(3), 251–261 (2013) 23. B.M. Brenner, M.E. Cooper, D. de Zeeuw, W.F. Keane, W.E. Mitch, H.H. Parving, G. Remuzzi, S.M. Snapinn, Z. Zhang, S. Shahinfar, Effects of Losartan on renal and cardiovascular outcomes in patients with type 2 diabetes and nephropathy. NEJM 345(12), 861–869 (2001) 24. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldblum, Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 8(3), 263–269 (1989) 25. V. Gulshan, L. Peng, M. Coram, M. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, R. Kim, R. Raman, P. Nelson, J. Mega, D. Webster, Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22), 2402–2410 (2016) 26. T. Ojala, M. Pietikäinen, D. Harwood, A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29(1), 51–59 (1996)
Traffic Status Update System With Trust Level Management Using Blockchain Bhanu Prakash Yagitala and S. Prince Mary
Abstract This project aims to provide traffic updates for vehicles, which helps to minimize traffic jams. The project uses the blockchain concept to store the traffic status. We use nodes and status transfer units to update the traffic status on the blockchain server. A system using vehicular networks can generate broadcast messages which are transmitted over transfer units and received at the node end. This project uses a decentralized trust level process to check the credibility of each message, thus making it more reliable than other traffic update systems. Keywords Road side unit (RSU) · Trust level (TL) · Blockchain · Vehicular networks (VANET)
1 Introduction In this project, the traffic status is updated by taking information from the vehicles that are in traffic and sending that information to a nearby RSU [1, 2]. This information is shared with other units and is used to give traffic updates to vehicles that request the traffic status [3–5]. Modern vehicles are built with more sensors and computational and communication devices [6]. This makes them more autonomous. Vehicular networks have become an important part of smart vehicles, as in fifth-generation networks [7–11]. Vehicular networks provide features to share messages with neighbors [12]. Trust level management allows other nodes to validate the credibility of the received messages [13]. Normally, the rating of a certain vehicle starts at a random value and increases based on its behavior of giving the right status, which is generated when the traffic status is requested [14, 15]. Trust level management can be either centralized B. P. Yagitala (B) · S. Prince Mary Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] S. Prince Mary e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_49
B. P. Yagitala and S. Prince Mary
or decentralized. A centralized approach uses a central server for all processing and stores data on that one server only, e.g., a cloud server [16]. A centralized server may cause long delays in generating responses for the requesting node, and thus does not satisfy the requirement for accurate, timely service in VANETs. Decentralized systems maintain a trust level for each node, which is incremented based on the tasks performed [17–19].
2 Existing System The existing system can transmit traffic updates between vehicles but cannot provide a trust level for the broadcast message [19]. Vehicles trust messages based on the nodes from which they are collected [12]. That information may or may not be correct, because there is no credibility attached to the message [20, 21]. Thus, the system can be improved by providing decentralized trust level management [22]. Drawbacks of the existing system:
• Not very effective
• Does not produce trusted results
• Weak credibility
• Not decentralized.
3 Proposed System The proposed system maintains a trust level for each node. A user who requests a traffic update can assess the message based on the trust level, which makes the system more user-friendly. This is done by analyzing all the available updates from users and then performing a calculation on the collected information. Based on the majority of matching suggestions, the conclusion is sent to the requesting user, along with a percentage calculated out of one hundred, which increases the trust level of the messages and gives the user a clear picture [22, 23]. Advantages of the proposed system:
• Traffic updates received are dynamic
• Helps to reduce traffic jams
• Time management between vehicles
• A blockchain server is used to store data for faster transmissions.
Algorithm: Secure Hash Algorithm—256 Bit (SHA-256).
The blockchain system uses the SHA-256 algorithm to secure the data stored in it. The data is stored in the form of a hash. SHA is a function that transforms any text into a 256-bit string. SHA-256 belongs to the SHA-2 family; its predecessors are SHA-0 and SHA-1. Every block in the blockchain consists of two parts, namely data and hash. The algorithm used to generate the hash is called a hash function. Techniques: The system uses the following techniques for developing the traffic update system with a decentralized blockchain server:
• Ad hoc networks
• VANET
• AODV routing protocol
• Hashing in blockchain.
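The block structure described above (data plus hash, with each block chained to its predecessor's hash) can be sketched with Python's standard hashlib; the field values and RSU names are illustrative assumptions, not the paper's implementation:

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """SHA-256 digest over a block's data and its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode("utf-8")).hexdigest()

# A tiny chain of traffic-status updates (illustrative values)
genesis = block_hash("genesis", "0" * 64)
block1 = block_hash("RSU-7: heavy traffic reported", genesis)
block2 = block_hash("RSU-7: traffic cleared", block1)

# Every SHA-256 digest is a 64-character hex string
assert len(block2) == 64
```

Because each hash covers the previous hash, altering any stored update changes every later block's hash, which is what makes tampering detectable.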
Phases:
• Node creation and network formation
• RSU creation and communication
• Update traffic to RSU
• Requesting traffic status.
In Fig. 1, the flow diagram of the project is shown. A total of four modules are used in the process. The user has to register and log in as a node; RSUs are needed to establish data communication and to update traffic data in the database, which is finally used by other users. Based on the traffic updates the nodes give to the RSU, they receive a rating in the form of a trust level, which is stored in the blockchain database. Phases:
• Node Creation and Network Formation: In Fig. 2, the user registers and acts as a node. Each node covers a range of distance and has a unique name and port number to communicate with other nodes. Every node connects to the nearby roadside units for communication [24, 25]. Nearby nodes are determined based on the coverage of each node; when a node comes near another node, they are considered neighbor nodes. All these nodes ping the nearby roadside units.
• RSU Creation and Building Communication: Every vehicle is equipped with devices and sensors which are used to update and request the traffic status. Nodes can automatically update the status and warn other vehicles that have requested the traffic status. Due to the rapid change in the traffic environment, the data cannot be stored for very long. In Fig. 3, because of this, a vehicle needs to receive a periodic rating of its trust level. Nodes and RSUs are less secure, and hence they can be hacked easily, which impacts
Fig. 1 Flow diagram
the trust management system in vehicles. So, only the RSU updates the ratings for vehicle nodes. The system depends upon the capacity of transmission of broadcast messages between neighboring RSUs.
• Update Traffic to RSU: In Fig. 4, ratings are assigned based on whether the status a node gives about traffic updates is true or false. Because the rapidly changing update status cannot be stored and managed for long, the vehicle needs to receive a rating based on the status it reports about traffic. We therefore assume that the RSU can calculate the trust level for a vehicle based on its rating. These updates are stored in a blockchain server for processing and are modified regularly as other vehicles update the traffic status.
• Requesting Traffic Status: In Fig. 5, a vehicle that requests a traffic update receives information about the traffic status from the vehicles that have updated the status at that location; the RSU aggregates the status based on the majority of vehicles that give the same update, and the trust level of each vehicle that gave that update is automatically increased. This reduces traffic on the roads and hence makes the traffic management system effective and efficient.
Fig. 2 Node creation and network formation
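The majority calculation, the percentage out of one hundred, and the trust-level increment described above can be sketched as follows; the data structures and status strings are illustrative assumptions, not the authors' Java implementation:

```python
from collections import Counter

def aggregate_updates(updates, trust_levels):
    """Pick the majority traffic status, report its share out of 100,
    and increment the trust level of every agreeing vehicle."""
    counts = Counter(status for _, status in updates)
    majority, votes = counts.most_common(1)[0]
    confidence = round(100 * votes / len(updates))
    for vehicle, status in updates:
        if status == majority:
            trust_levels[vehicle] = trust_levels.get(vehicle, 0) + 1
    return majority, confidence

# Illustrative updates from four vehicles seen by one RSU
updates = [("v1", "jam"), ("v2", "jam"), ("v3", "clear"), ("v4", "jam")]
trust = {}
status, confidence = aggregate_updates(updates, trust)
# status == "jam", confidence == 75; v1, v2, v4 gain one trust point
```

Only the vehicles that agreed with the majority are rewarded, so repeatedly reporting false status keeps a node's trust level low.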
4 Conclusion The proposed system uses a hashing technique in the blockchain server to store data online and maintains a database of traffic updates. The road environment, nodes, and units are developed in Java and connected using the threading concept; running the vehicles on threads makes communication easier. This project contributes to minimizing traffic jams on roads and hence provides a better traffic management system.
Fig. 3 RSU creation and communication
Fig. 4 Update traffic to RSU
Fig. 5 Blockchain server storage
References 1. P.K. Rajendran, B. Muthukumar, G. Nagarajan, Hybrid intrusion detection system for private cloud: a systematic approach. Procedia Comput. Sci. 48(C), 325–329 (2015) 2. M.P. Selvan, A. Chandrasekar, K. Kousalya, An approach towards secure multikeyword retrieval. J. Theor. Appl. Inf. Technol. 85(1) (2016) 3. H. Zhou et al., Chain cluster: engineering a cooperative content distribution framework for highway vehicular communications. IEEE Trans. Intell. Transp. Syst. 15(6), 2644–2657 (2014) 4. S. He, D.-H. Shin, J. Zhang, J. Chen, Y. Sun, Full-view area coverage in camera sensor networks: dimension reduction and near-optimal solutions. IEEE Trans. Veh. Technol. 65(9), 7448–7461 (2016) 5. K. Zheng, Q. Zheng, P. Chatzimisios, W. Xiang, Y. Zhou, Heterogeneous vehicular networking: a survey on architecture, challenges, and solutions. IEEE Commun. Surveys Tuts. 17(4), 2377– 2396 (2015), 4th Quart 6. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN’S, in IEEE International Conference on Communication and Signal Processing (ICCSP) (2019), pp. 0592–0595 7. A. Wasef, R. Lu, X. Lin, X. Shen, Complementing public key infrastructure to secure vehicular ad hoc networks. IEEE Wireless Commun. 17(5), 22–28 (2010)
8. K. Zheng, F. Liu, L. Lei, C. Lin, Y. Jiang, Stochastic performance analysis of a wireless finite-state Markov channel. IEEE Trans. Wireless Commun. 12(2), 782–793 (2013) 9. K. Zhang et al., Security and privacy in smart city applications: challenges and solutions. IEEE Commun. Mag. 55(1), 122–129 (2017) 10. Q. Li, A. Malip, K.M. Martin, S.-L. Ng, J. Zhang, A reputation based announcement scheme for VANETs. IEEE Trans. Veh. Technol. 61(9), 4095–4108 (2012) 11. T. Roosta, M. Meingast, S. Sastry, Distributed reputation system for tracking applications in sensor networks, in Proceedings of 3rd Annual International Conference on Mobile and Ubiquitous Systems (San Jose, CA, USA, Jul. 2006), pp. 1–8 12. J. Refonaa, M. Lakshmi, Cognitive computing techniques based rainfall prediction – a study, in IEEE International Conference on Computation of Power, Energy Information Communication (ICCPEIC) (2018), pp. 1–6 13. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 14. V. Vineetha, P. Asha, A symptom based cancer diagnosis to assist patients using naive bayesian classification. Res. J. Pharm. Biol. Chem. Sci. 7(3), 444–451 (2016) 15. A. Sivasangari, J. Martin Leo Manickam, Energy efficient and security based data communication in wireless body sensor networks. J. Pure Appl. Microbiol. 9, 701–711 (2015) 16. J.G. Vivekananda, J. Naveen Nethra Reddy, S.P. Mary, B. Bharathi, A survey on smart hotel management system using IoT. Int. J. Pure Appl. Math. (2018) 17. S. Li, X. Wang, Quickest attack detection in multi-agent reputation systems. IEEE J. Sel. Topics Signal Process. 8(4), 653–666 (2014) 18. M.E. Mahmoud, X. Shen, An integrated simulation and punishment mechanism for thwarting packet dropping attack in multihop wireless networks. IEEE Trans. Veh. Technol.
60(8), 3947– 3962 (2011) 19. A.C.S. Sheela, C. Kumar, Duplicate web pages detection with the support of 2D table approach. J. Theor. Appl. Inf. Technol. 67(1) (2014) 20. V.J. Brinda, J. Shabu, A trustworthy eWOM in social networks, in IEEE International Conference on Information Communication and Embedded Systems (2016), (Scopus) 21. M. Selvi, P.M. Joe Prathap, Secure data aggregation protocol with efficient energy in sensor networks, in Int. J. Recent Technol. Eng. (IJRTE) 8(4) (2019), ISSN: 2277-3878, (Scopus) 22. A. Jesudoss, N.P. Subramaniam, Securing cloud-based healthcare information systems using enhanced password-based authentication scheme. Asian J. Inf. Technol. 15(14), 2457–2463 (2016) 23. K.M. Prasad, R. Sabitha, K. Muthukumar, Providing cluster categorization of heuristics technique for increasing accuracy in severe categorization of road accidents, in IEEE International Conference on Communication and Signal Processing (ICCSP) (2017), pp. 1152–1159 24. A. Velmurugan, T. Ravi, Alleviate the parental stress in neonatal intensive care unit using ontology. Indian J. Sci. Technol. 9, 28 (2016) 25. U. Mohan Kumar, P. Siva SaiManikanta, M.D. AntoPraveena, Intelligent security system for banking using internet of things. J. Comput. Theor. Nanosci. 16(8), 3296–3299 (2019)
Unique and Dynamic Approach to Predict Schizophrenia Disease Using Machine Learning Nelisetti Ashok, Tatikonda Venkata Sai Manoj, and D. Usha Nandini
Abstract The social environment and its wide array of influencing factors are closely connected with every phase of human functioning and development. Schizophrenia is a chronic and severe mental disease that alters human memory and mind. Mental fitness and mental illness are determined by multiple interacting social, psychological, and biological factors, just as health and disease are in general. The clearest evidence relates to the risk of mental illness, which in both the developed and developing world is associated with indicators of poverty, including low levels of education. The principal goal of this proposed project is to automatically distinguish patients with schizophrenia from those without. Detection of schizophrenia in its initial periods is crucial to prevent comprehensive mental illness. Currently, identifying schizophrenia discourse, which is widely distributed in our culture, and forecasting the forthcoming phases of the mental illness has become extremely important in modern-day society. Patients often express their health condition on one of the most popular social media networks, Twitter. We introduce a unique and dynamic machine learning (ML) approach
N. Ashok (B) · T. V. S. Manoj · D. Usha Nandini Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] T. V. S. Manoj e-mail: [email protected] D. Usha Nandini e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_50
N. Ashok et al.
to predict schizophrenia discourse from Twitter tweets. This unique approach is extremely adaptable, as practically any supervised machine learning algorithm can be utilized for prediction. The efficiency and accuracy of a machine learning algorithm are directly proportional to the quality of feature engineering, which requires a high level of domain knowledge. To overcome this problem, this paper proposes a Deep Neural Network (DNN) which is capable of automatically identifying the pattern of schizophrenia discourse and classifying it. Keywords Machine learning · Deep learning · Convolutional neural network CNN · Long short-term memory LSTM · Natural language processing · Discourse comprehension · Schizophrenia
1 Introduction Schizophrenia is a serious neurological illness, which appears in roughly 1% of the adult population around the globe. It affects memory, awareness, language, and the executive functions of the brain [1]. The World Health Organization (WHO) states that the worldwide incidence of schizophrenia is about 20 million people; among them, 69% of schizophrenia disorder patients are not receiving proper medication. There are various causes of schizophrenia disorders; research has not identified one particular factor [2]. Hereditary and environmental influences play a major role in its development. It normally develops in early adulthood, before the age of 26. Patients affected by this disease have cognitive deficiencies, including reading disorders [3]. They have difficulty selecting suitable phrases while talking and writing. The disease modifies patients' perception, points of view, and behavior, as evidenced by hallucinations, delusions, disorganized speech or behavior, social withdrawal, and varied cognitive deficiencies (Fig. 1: schizophrenia word cloud). Schizophrenia Discourse (SD) is a mental ailment and the most prevalent neurodegenerative condition in the world [4]. It is a progressive and dense mental illness affecting susceptible people, whose prevalence is projected to double by 2030. Diagnosis and medication of neurological discourse are focal points of brain research. Although medications have been developed to treat schizophrenia, they do not work for all patients, and there is evidence that some types of schizophrenia appear to be treatment resistant. This only underlines the importance of conducting successful schizophrenia clinical trials [5].
According to artificial intelligence studies, social media text analysis has been a leading area of study over the past two to three decades and has been commonly applied to a wide array of research areas [6], such as healthcare, smart homes, surveillance systems, human-computer interaction, gaming, and so on. With the exceptional growth of web-based social networking, a huge number of individuals deliberately share a lot of information by communicating their states of mind, sentiments, feelings, and daily struggles with mental well-being issues via social networking platforms like Twitter [7]. This offers opportunities for a new understanding of
Fig. 1 System architecture
these networks. With the growth of Internet usage, people have begun to share their experiences of and difficulties with mental health disorders through online discussions, micro-blogs, or tweets [8, 9]. Their online activities have motivated many researchers to introduce new forms of potential healthcare solutions and techniques for early depression detection systems [10, 11].
2 Literature Survey 2.1 Neuroimaging Biomarkers for Schizophrenia In 2015, schizophrenia was described as a mental disorder that typically appears in late adolescence or early adulthood. Characterized by delusions, hallucinations, and other cognitive difficulties, schizophrenia can often be a lifelong struggle [12, 13]. It is a mental disorder in which functional and structural brain networks are disrupted. Classical network analysis has been used by many researchers to measure brain networks and to examine the network changes in schizophrenia; unfortunately, the metrics used in this classical technique depend heavily on the networks' density and weight, so comparisons made by this method are biased.
2.2 Mining Twitter Data to Improve Detection of Schizophrenia In 2016, it was noted that traditional techniques either lack sufficient historical data or require continuous monitoring of a patient's activities to identify a patient with a mental illness [14]. Method: To address this issue, the authors proposed a methodology to classify patients associated with chronic mental illnesses (for example anxiety, depression, bipolar disorder, and attention deficit hyperactivity disorder) based on data extracted from Reddit, a well-known social network platform. The proposed technique is applied through co-training, combining the discriminative power of commonly used classifiers [15].
2.3 Recommender Systems Based on Social Networks Traditional recommender systems, especially collaborative filtering recommender systems, have been studied by many researchers over the past decade. However, they neglect the social connections among users [16, 17]. In fact, these connections can improve the accuracy of the recommendation. Lately, the study of social-based recommender systems has become an active research topic. Here, a social regularization approach is proposed that incorporates social network information to benefit recommender systems [18, 19]. Both users' friendships and rating records (labels) are used to predict the missing values (labels) in the user-item matrix. In particular, a bi-clustering algorithm is used to identify the most suitable group of friends for generating different final recommendations [20].
3 System Architecture 3.1 Modules
• Loading live data
• Data preprocessing
• Building the model
• Training the model.
3.1.1 Modules Description
Loading Live Data
Text mining is the application of natural language processing techniques and analytical methods to text data in order to derive relevant information. Twitter data constitutes a rich source that can be used for capturing information about any topic imaginable. This data can be used in different use cases such as finding trends related to a specific keyword, measuring brand sentiment, and gathering feedback about new products and services.
Data Preprocessing
Data preprocessing is an important step to prepare the data to form a schizophrenic discourse model. There are many important steps in data preprocessing, such as data cleaning, data transformation, and feature selection. Data cleaning and transformation are methods used to remove outliers and standardize the data so that they take a form that can be easily used to create a model. A schizophrenic discourse data set may contain hundreds of variables (descriptors); however, many of these variables will contain redundant data. In order to reduce the dimensionality of the model, it is important to select only variables that contain unique and important information. Data mining procedures can be used to remove variables that do not contribute to the schizophrenic discourse model.
Building the Model
An LSTM network is a kind of recurrent neural network. A recurrent neural network is a neural network that attempts to model time- or sequence-dependent behavior, such as language, stock prices, electricity demand, and so on. This is performed by feeding back the output of a neural network layer at time t to the input of the same network layer at time t + 1.
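The t to t + 1 feedback described above can be illustrated with a single LSTM cell forward step in numpy. This is a didactic sketch with randomly initialized weights, not the Keras LSTM layer used in the program code:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: input, forget, and output gates plus a
    candidate cell state, computed from the current input x and the
    previous hidden state h_prev (the t -> t+1 feedback loop)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gate pre-activations
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:])                  # candidate cell state
    c = f * c_prev + i * g                # new cell (memory) state
    h = o * np.tanh(c)                    # new hidden state, fed back at t+1
    return h, c

rng = np.random.default_rng(0)
d, n = 4, 3                               # input and hidden sizes (illustrative)
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
for x in rng.normal(size=(5, d)):         # a sequence of 5 inputs
    h, c = lstm_step(x, h, c, W, U, b)
```

The cell state c carries information across time steps, which is what lets the network model long-range dependencies in a tweet.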
3.2 Training The Long Short-Term Memory network, or LSTM network, is a recurrent neural network that is trained using backpropagation through time and overcomes the vanishing gradient problem. As such, it can be used to create large recurrent networks that in turn can be used to address difficult sequence problems in machine learning and achieve state-of-the-art results. Instead of neurons, LSTM networks have memory blocks that are connected through layers. A block has components that make it smarter than a classical neuron, and a memory for recent sequences. A block contains gates that manage the block's state and output. A block operates upon an input sequence, and each gate within a block uses sigmoid activation units to control whether it is triggered or not, making the change of state and the addition of information flowing through the block conditional.
Fig. 2 Classification of the algorithm
3.3 Classification See Fig. 2.
3.4 Data Flow Diagram See Fig. 3.
Fig. 3 Data flow diagram
N. Ashok et al.
Unique and Dynamic Approach …
3.5 Program Code
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
import ftfy
import re
from math import exp
from numpy import sign
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from gensim.models import KeyedVectors
from nltk.corpus import stopwords
from nltk import PorterStemmer
from keras.models import Model, Sequential
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Conv1D, Dense, Input, LSTM, Embedding, Dropout, Activation, MaxPooling1D
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

np.random.seed(1234)

EPOCHS = 10
LEARNING_RATE = 0.1
TEST_SPLIT = 0.2
DEPRES_NROWS = 3200
RANDOM_NROWS = 12000
MAX_SEQUENCE_LENGTH = 140
MAX_NB_WORDS = 20000
EMBEDDING_DIM = 300
TRAIN_SPLIT = 0.6

DEPRESSIVE_TWEETS_CSV = 'SchizophrenicDiscourseDatasets.csv'
RANDOM_TWEETS_CSV = 'NonSchizophreniaDataset.csv'
EMBEDDING_FILE = 'GoogleNews-vectors-negative300.bin.gz'

depressive_tweets_df = pd.read_csv(DEPRESSIVE_TWEETS_CSV, sep='|', header=None,
                                   usecols=range(0, 9), nrows=DEPRES_NROWS)
random_tweets_df = pd.read_csv(RANDOM_TWEETS_CSV, encoding="ISO-8859-1",
                               usecols=range(0, 4), nrows=RANDOM_NROWS)
depressive_tweets_df.head()
random_tweets_df.head()

word2vec = KeyedVectors.load_word2vec_format(EMBEDDING_FILE, binary=True)

# The contraction dictionary's entries are truncated in the source;
# c_re is reconstructed as the usual compiled alternation over its keys.
cList = {}
c_re = re.compile('(%s)' % '|'.join(cList.keys()))

def expandContractions(text, c_re=c_re):
    def replace(match):
        return cList[match.group(0)]
    return c_re.sub(replace, text)

def clean_tweets(tweets):
    cleaned_tweets = []
    for tweet in tweets:
        tweet = str(tweet)
        if re.match("(\w+:\/\/\S+)", tweet) is None and len(tweet) > 10:
            tweet = ' '.join(re.sub("(@[A-Za-z0-9]+)|(\#[A-Za-z0-9]+)|(pic\.twitter\.com\/.*)",
                                    " ", tweet).split())
            tweet = ftfy.fix_text(tweet)
            tweet = expandContractions(tweet)
            tweet = ' '.join(re.sub("([^0-9A-Za-z \t])", " ", tweet).split())
            stop_words = set(stopwords.words('english'))
            word_tokens = nltk.word_tokenize(tweet)
            filtered_sentence = [w for w in word_tokens if w not in stop_words]
            tweet = ' '.join(filtered_sentence)
            tweet = PorterStemmer().stem(tweet)
            cleaned_tweets.append(tweet)
    return cleaned_tweets

depressive_tweets_arr = [x for x in depressive_tweets_df[5]]
random_tweets_arr = [x for x in random_tweets_df['SentimentText']]
X_d = clean_tweets(depressive_tweets_arr)
X_r = clean_tweets(random_tweets_arr)

tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(X_d + X_r)
sequences_d = tokenizer.texts_to_sequences(X_d)
sequences_r = tokenizer.texts_to_sequences(X_r)
word_index = tokenizer.word_index
print('Found %s unique tokens' % len(word_index))

data_d = pad_sequences(sequences_d, maxlen=MAX_SEQUENCE_LENGTH)
data_r = pad_sequences(sequences_r, maxlen=MAX_SEQUENCE_LENGTH)
print('Shape of data_d tensor:', data_d.shape)
print('Shape of data_r tensor:', data_r.shape)

nb_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.zeros((nb_words, EMBEDDING_DIM))
for (word, idx) in word_index.items():
    if word in word2vec.vocab and idx < nb_words:
        embedding_matrix[idx] = word2vec.word_vec(word)

labels_d = np.array([1] * DEPRES_NROWS)
labels_r = np.array([0] * RANDOM_NROWS)

perm_d = np.random.permutation(len(data_d))
idx_train_d = perm_d[:int(len(data_d) * TRAIN_SPLIT)]
idx_test_d = perm_d[int(len(data_d) * TRAIN_SPLIT):int(len(data_d) * (TRAIN_SPLIT + TEST_SPLIT))]
idx_val_d = perm_d[int(len(data_d) * (TRAIN_SPLIT + TEST_SPLIT)):]

perm_r = np.random.permutation(len(data_r))
idx_train_r = perm_r[:int(len(data_r) * TRAIN_SPLIT)]
idx_test_r = perm_r[int(len(data_r) * TRAIN_SPLIT):int(len(data_r) * (TRAIN_SPLIT + TEST_SPLIT))]
idx_val_r = perm_r[int(len(data_r) * (TRAIN_SPLIT + TEST_SPLIT)):]

data_train = np.concatenate((data_d[idx_train_d], data_r[idx_train_r]))
labels_train = np.concatenate((labels_d[idx_train_d], labels_r[idx_train_r]))
data_test = np.concatenate((data_d[idx_test_d], data_r[idx_test_r]))
Fig. 4 Data collection

labels_test = np.concatenate((labels_d[idx_test_d], labels_r[idx_test_r]))
data_val = np.concatenate((data_d[idx_val_d], data_r[idx_val_r]))
labels_val = np.concatenate((labels_d[idx_val_d], labels_r[idx_val_r]))

perm_train = np.random.permutation(len(data_train))
data_train = data_train[perm_train]
labels_train = labels_train[perm_train]
4 Results and Discussion See Figs. 4, 5, 6 and 7.
5 Conclusion With the exception of Alzheimer's dementia, schizophrenia is the first mental disorder to which modern medicine's program of prediction and prevention has so far been systematically applied.
6 Future Work In this novel system, we implemented an artificial intelligence application that separates normal subjects from subjects with schizophrenia using posts from social media networks and a machine learning methodology. This work proposes an LSTM + CNN based framework for automated schizophrenia discourse detection that gathers connecting phrases from tweet text content. The semantic consistency between the machine and human association systems is experimentally confirmed.
Fig. 5 Preprocessing
Fig. 6 Training
Fig. 7 Classification and prediction
References 1. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 2. S.L. Jany Shabu, C. Jaya Kumar, Multimodal image fusion and bee colony optimization for brain tumor detection. ARPN J. Eng. Appl. Sci. 13, 1819–6608 (2018) 3. J. Refonaa, G.G. Sebastian, D. Ramanan, M. Lakshmi, effective identification of black money and fake currency using NFC, IoT and android, in IEEE 2018 International Conference on Communication, Computing and Internet of Things (IC3IoT) (2018, February), pp. 275–278 4. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN’S, in 2019 IEEE International Conference on Communication and Signal Processing (ICCSP) (2019, April), pp. 0592–0595 5. G. Nagarajan, R.I. Minu, B. Muthukumar, V. Vedanarayanan, S.D. Sundarsingh, Hybrid genetic algorithm for medical image feature extraction and selection. Procedia Comput. Sci. 85, 455– 462 (2016) 6. A. Velmurugan, T. Ravi, Allergy information ontology for enlightening people, in IEEE 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE’16) (2016, January), pp. 1–7 7. M.P. Selvan, A.C. Sekar, ASE: automatic search engine for dynamic information retrieval. J. Comput. Theor. Nanosci. 13(11), 8486–8494 (2016) 8. P. Asha, S. Srinivasan, Hash algorithm for finding associations between genes. J. Biosci. Biotechnol. Res. Asia ‘BBRA’ 12(1), 401–410 (2015), ISSN: 0973-1245 9. A. Sivasangari, J. Martin Leo Manickam, Energy efficient and security based data communication in wireless body sensor networks. J. Pure Appl. Microbiol. 9, 701–711 (2015) 10. K.M. Prasad, R. Sabitha, K. 
Muthukumar, Providing cluster categorization of heuristics technique for increasing accuracy in severe categorization of road accidents, in IEEE 2017 International Conference on Communication and Signal Processing (ICCSP) (2017, April), pp. 1152–1159 11. A. Jesudoss, N.P. Subramaniam, Securing cloud-based healthcare information systems using enhanced password-based authentication scheme. Asian J. Inf. Technol. 15(14), 2457–2463 (2016)
12. E. Allen, E. Damaraju, S.M. Plis, E. Erhardt, T. Eichele, V.D. Calhoun, Tracking wholebrain connectivity dynamics in the resting state. Cereb. Cortex 24(3), 663–676 (2013) 13. E. Agirre, K. Bengoetxea, J. Nivre, Y. Zhang, K. Gojenola, On wordnet semantic classes and dependency parsing, in Proceedings of the 52th Annual Meeting of the Association of Computational Linguistics (Baltimore (Maryland), USA, June, 2014), pp. 649–655, Association for Computational Linguistics 14. M. Bansala, K. Gimpel, K. Livescu, Tailoring continuous word representations for dependency parsing, in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 (Vol. 2: Short Papers, Baltimore, MD, USA, June 22–27, 2014), pp. 809– 815 15. K. Bengoetxea, E. Agirre, J. Nivre, Y. Zhang, K. Gojenola, On wordnet semantic classes and dependency parsing (2014), pp. 649– 655. ACL 16. I. Goenaga, K. Gojenola, N. Ezeiza, Combining clustering approaches for semisupervised parsing: the basque team system in the sprml2014 shared task, in First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages: Shared Task on Statistical Parsing of Morphologically Rich Languages, (Dublin, Ireland, August). Dublin City University 17. P. Patel, P. Aggarwal, A. Gupta, Classification of schizophrenia versus normal subjects using deep learning, in Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, ser. ICVGIP’16, (ACM, New York, NY, USA, 2016), pp. 28:1–28:6. [Online]. Available: http://doi.acm.org/10.1145/3009977.30 18. R.S.B. Krishna, M. Aramudhan, Prognostic classification of tumor cells using an unsupervised model (2016) 19. U. Mohan Kumar, P. Siva SaiManikanta, M.D. AntoPraveena, Intelligent security system for banking using Internet of Things. J. Comput. Theor. Nanosci. 16(8), 3296–3299 (2019) 20. N. Srinivasan, C. 
Lakshmi, Stock price prediction using fuzzy time-series population based gravity search algorithm. Int. J. Softw. Innov. (IJSI) 7(2), 50–64 (2019)
Live Bus Tracking System Akash Singh, Shivam Choudhary, and A. Mary Posonia
Abstract In today's generation of fast Internet and improved connectivity, people are in a great hurry to reach their destinations as fast as possible. In this scenario, people who rely on public transport for their daily commute really need to know when their buses will arrive at their stop and the exact location of the bus. This paper tries to solve this problem by building an Android app that tracks the live location of the bus. The system uses a GPS-enabled mobile phone with the Android application installed on the driver's phone, which shares the bus's live location. Users can simply log in to the Android application and see the buses available in their vicinity. Keywords GPS · Android · SDK · ADT · IDE
1 Introduction Users of the local public transport system do not have full access to data related to various aspects of the system: the bus numbers running on a specific route, where exactly the bus stops are, how to travel from place A to place B, which routes the buses follow, the estimated time to reach a particular destination from a particular source, where exactly the buses are and how long they would take to reach the user, and updates such as whether the bus ran into any trouble that could hinder it from reaching its destination [1, 2]. All this important data related to the public transport system and the buses is not available to general users
A. Singh · S. Choudhary (B) · A. Mary Posonia Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. Singh e-mail: [email protected] A. Mary Posonia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_51
A. Singh et al.
[3, 4]. This system focuses on these problems and tries to solve them by making this information available to users with just a click [5, 6]. It overcomes the issues faced in previously engineered applications such as "Live Chennai MTC", "MTC Bus Route" and "Chennai MTC Info" [7]. The platform chosen for the development of this system is Android, the reason being that Android is the most widely used Linux-based mobile operating system, with advantages such as low cost, user friendliness, ease of use and a wide presence in the market [1]. The IDE used for the development of this application is Android Studio, along with the Android Software Development Kit (SDK) and Android Development Tools (ADT) [8].
2 Literature Survey The past few years have seen tremendous growth in the area of digital transformation, and many people are showing interest in it [9]. Nowadays, systems everywhere are being digitized, and many developers are interested in building new technologies and products that transform how society used to work [10]. Comparing how we did things a few decades back with how we do them now, one can notice a huge change [11]. The local public transportation system has various issues and zero accountability: users do not know which bus goes to which destination, which bus to take from a particular source to a destination, or where exactly the bus is and how long it would take to reach them [1]. Many people have tried before to develop a system that solves the above-mentioned problems for the betterment of society. An application named "Live Chennai MTC" was built in Chennai; it is indeed a great app but is unable to solve the above-mentioned problems efficiently [12]. It cannot show the live location of the bus, which is a very important feature, but it shows bus routes very effectively and has a huge database of route maps and bus stops [13]. Another application built in Chennai is "MTC Bus Route"; this app shows bus timings effectively but sometimes gives wrong route maps and cannot give information about bus breakdowns or other failures [14].
Similarly, many other apps have been built, such as "Chennai MTC Info" [15], "Delhi Bus Navigator" [16] and "Bangalore BMTC Info" [17], but none of them is effective enough to solve all the problems of the existing public transport system; each solves one or more problems, yet none provides the live location of the bus [18]. Therefore, the proposed system is built to solve all the problems of the public transport system [19, 20].
3 Proposed System The idea behind the development of this system is to ease the life of people who rely on the public transport system for their day-to-day commute. The app adds the necessary features: looking up the bus route, viewing the live location of the bus, and getting notifications about problems faced by the bus during the commute. It can also determine the estimated time required for the bus to reach the user [19]. The user can log in to the app, see the available buses around him and view the details of a bus, such as its live location, estimated arrival time and route map. The app stores the latitude and longitude coordinates of the bus online on a cloud-hosted NoSQL database, which is then synced across all devices to plot the bus on the map. The proposed system is divided into the following modules: Admin Module, Driver Module, User Module.
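The estimated arrival time mentioned above is not derived in the paper; one simple way to approximate it from the stored latitude/longitude pairs is the haversine distance divided by an assumed average bus speed. The function names, coordinates and the 20 km/h figure below are all illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))   # 6371 km: mean Earth radius

def eta_minutes(bus, user, avg_speed_kmph=20.0):
    # avg_speed_kmph is an assumed city-traffic average, not a value from the paper.
    return 60.0 * haversine_km(*bus, *user) / avg_speed_kmph

# Bus and user coordinates (illustrative points in Chennai).
bus = (13.0827, 80.2707)
user = (13.0500, 80.2121)
print(round(eta_minutes(bus, user), 1))   # minutes until arrival at the assumed speed
```

A production system would refine this with the bus's route geometry and live traffic rather than the straight-line distance.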
3.1 Admin Module The admin module has various static functions, such as adding bus details like the route map, bus numbers, stops and database entries [10]. The admin is also responsible for adding the details of the bus driver, such as years of experience, age and date of birth. These credentials are then provided to the driver, who can use them to log into the system and provide the service (Figs. 1 and 2).
Fig. 1 Admin module
Fig. 2 Driver registration page
3.2 Driver Module The driver does not have the functionality to sign up to the application. Only the admin can sign a new driver up, since driver details have to be validated before registration; not just any random person can become a driver. The driver has various functionalities: they can choose which bus they are driving and start the service. Starting the service simply means going live and sharing the live location across the system; this location is then accessed in the user module to check the live location of the bus, so users can plan their commute accordingly. The driver can also stop the service, and another main function is the ability to send push notifications to users when the bus is in trouble, so that users know the bus may be delayed due to a technical failure or any other reason (Figs. 3 and 4).
3.3 User Module The user-side module is the view layer of the system and provides an interactive application with various functionalities. Users can register themselves in the app with an email id and password and then start using its services. After logging in, the user can see the available buses around them and the route each bus is going to travel. The main functionality is the ability to track
Fig. 3 Driver home page
the real-time coordinates of the buses and see the estimated time for the bus to reach the user. The real-time location of the buses is obtained by storing their coordinates, latitude and longitude, on a cloud-hosted NoSQL database, which is then accessed on the user side to plot them on the map [8] (Figs. 5, 6 and 7). To implement the proposed system, the following configurations are required:
3.4 Hardware Part This project requires an Android mobile phone with GPS capability as the hardware device, with the following specifications: minimum RAM of 1 gigabyte, minimum ROM of 512 megabytes, and Android version 4.1 or above; GPRS support is also required.
Fig. 4 Complaints registration page
3.5 Software Part The modules are being developed for Android OS and are implemented using Java for Android and SQL database for storing the necessary details related to the system.
4 Result and Discussion The proposed system is developed with a good UI for a user-friendly, easy-to-use interface and improved performance compared to the existing systems available (Fig. 8).
Fig. 5 User login page
5 Conclusion The system has been designed, evaluated and developed successfully with a great UI that is user friendly, and the Android app is freely available to all users. The developed app can effectively show the buses moving around the user, so the user can track the live location of a bus and plan the commute accordingly based on the data gained from the app. The app gives a visual representation of the location of the bus by plotting its coordinates on the map; it plots the location of the user as well, to give better insight, and it also shows the routes of the bus. There is great scope for improvement and added usability in this project. The Android app can be extended further using the cloud. Adding a video camera to the device would take the system to the next security level, helping to keep a close eye on the crimes that ordinary people see happening on buses every day. This would be a significant achievement in decreasing the number of criminal
Fig. 6 Bus details page
activities that take place regularly on the public transport system. Alternatively, the bus speed can be measured to ensure that the bus runs at government-approved speeds, which can then help reduce the number of accidents.
Fig. 7 Bus location tracking on map
Fig. 8 Performance analysis
References 1. M.P. Selvan, A.C. Sekar, ASE: Automatic search engine for dynamic information retrieval. J. Comput. Theor. Nanosci. 13(11), 8486–8494 (2016) 2. N. Srinivasan, C. Lakshmi, Stock price prediction using fuzzy time-series population based gravity search algorithm. Int. J. Softw. Innov. (IJSI) 7(2), 50–64 (2019) 3. P. Asha, S. Srinivasan, Hash algorithm for finding associations between Genes. J. Biosci. Biotechnol. Res. Asia ‘BBRA’ 12(1), 401–410 (2015). ISSN: 0973-1245 4. A. Sivasangari, J. Martin Leo Manickam, Energy efficient and security based data communication in wireless body sensor networks. J. Pure Appl. Microbiol. 9, 701–711 (2015) 5. A. Jesudoss, N.P. Subramaniam, EAM: Architecting efficient authentication model for internet security using image-based one time password technique. Indian J. Sci. Technol. 9(7), 1–6 (2016) 6. A. Velmurugan, T. Ravi, Allergy information ontology for enlightening people, in 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE’16) (IEEE, 2016), pp. 1–7 7. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of dengue virus outbreak using fog computing. Health Technol. 1–9 (2019) 8. A.V.A. Mary, M.P. Selvan, Christy, Public auditing for secure cloud storage using MD5 algorithm. Int. J. Recent Technol. Eng. (IJRTE) 8(3) (2019) ISSN: 2277-3878 9. K.M. Prasad, R. Sabitha, K. Muthukumar, Providing cluster categorization of heuristics technique for increasing accuracy in severe categorization of road accidents, in 2017 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2017), pp. 1152–1159 10. M. Maheswari, S. Geetha, S. Selva kumar, Adaptable and proficient hellinger coefficient based collaborative filtering for recommendation system. Cluster Comput. 22, S12325–S12338 (2019). https://doi.org/10.1007/s10586-017-1616-7 11. K.S. Varun, I. Puneeth, T.P. 
Jacob, Hand gesture recognition and implementation for disables using CNN’S, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0592–0595 12. Live Chennai MTC. https://play.google.com/store/apps/details?id=jbsoft.livechennai.mtc
13. M.D.V. Ajay, N. Adithya, A.M. Posonia, TECHNICAL ERA: an online web application, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, 2019), p. 012002 14. MTC Bus Route. https://play.google.com/store/apps/details?id=hari.haran 15. Chennai MTC Info. https://play.google.com/store/apps/details?id=com.mtc 16. Delhi Bus Navigator. https://play.google.com/store/apps/details?id=com.hashtag.delhibusnavi gator 17. Bangalore BMTC Info. https://play.google.com/store/apps/details?id=com.bmtc 18. G. Nagarajan, K.K. Thyagharajan, A machine learning technique for semantic search engine. Procedia Eng. 38, 2164–2171 (2012) 19. K. Srilatha, V. Ulagamuthalvi, A comparative study on tumour classification. Res. J. Pharm. Tech. 12(1), 407–411 (2019) 20. P. Paul, R.G. Franklin, Fragmenting the data in cloud for enhancing security and performance. Res. J. Pharm. Biol. Chem. Sci. 7(3), 349–355 (2016)
A System for Informed Prediction of Health Rakshith Guptha Thodupunoori, Praneeth Sai Ummadisetty, and Duraisamy Usha Nandini
Abstract As we all know, with a growing population, patients wait in line at the hospital's front desk for care. Smart health prediction is a framework in which this problem can be resolved by using a machine learning algorithm; it is an informed health prediction system. When the quality of medical information is incomplete, the accuracy of the analysis is reduced. Moreover, different regions have distinctive patterns of certain regional diseases, which can weaken the prediction of disease outbreaks. In this model, the patient provides symptoms, after which a decision tree algorithm is used to determine the possible illness. The framework is implemented using Flask for the interface and a decision tree algorithm. Keywords Health care · Smart health · Disease prediction · Prediction model · Symptom checker
1 Introduction Health care is a field in which decisions are generally associated with very high risks and high costs. Health-related decisions are critical because they can cost a person his/her life. A doctor analyzes the patient's symptoms while diagnosing the disease; the final disease is predicted based on the symptoms. In today's computerized environment, given the automated and complex requirements of the healthcare system, predicting the disease and delivering effective medicines through user-friendly mobile applications should be more efficient. This research is primarily
R. G. Thodupunoori (B) · P. S. Ummadisetty · D. Usha Nandini Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] P. S. Ummadisetty e-mail: [email protected] D. Usha Nandini e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_52
R. G. Thodupunoori et al.
aimed at health concerns and at those who want to be their own doctor. It is an interactive tool for people who want to learn about the health problems behind the effects they are experiencing. Machine learning means training computers to optimize performance using example data or past data; it studies computer systems that learn from data and experience. A machine learning model has two phases: training and testing. Disease prediction using patients' symptoms and history is a problem machine learning technology has wrestled with over the past decades. Machine learning technology gives an excellent medical framework for solving health concerns efficiently. A well-known approach to storing healthcare information is the use of a database. With standard database systems, it is sometimes not possible to meet customers' requirements, due to the existence of huge data, and to provide them with the exact information they need to make a decision. Moreover, when the consistency of medical data is insufficient, the precision of the analysis is reduced; the healthcare system is far from perfect. For safer and automatic treatment, these huge record sets can be analyzed and studied: assessing the patient, diagnosing them, and treating them with medication can be accomplished with machine learning systems.
2 Related Work Sivaranjani et al. [1] explained that the healthcare system needs to be modernized, meaning that healthcare data should be properly analyzed and categorized into groups of diseases, symptoms, and treatments. In their paper, big data is used to predict disease and avoid preventable deaths; big data provides security and privacy, but the approach is somewhat expensive. Ankita et al. [2] discussed thyroid disease. Their paper covers the prediction of thyroid disease only, using KNN, SVM, and decision tree algorithms; the framework estimates a patient's risk of getting thyroid disease and requires only a small number of attributes to diagnose it. Khatavkar et al. [3] say that decisions related to health are crucial, as they may cost a person his/her life. The doctor's decision-making power can be given to the computer, so that the machine recognizes and interprets the symptoms and then maps them to possible diseases just as the doctor does. Their system also uses the patient's history to improve results with the help of a data mining algorithm, and it suggests homemade remedies for instant relief; however, it is restricted to viral and common diseases only. Pooja Reddy et al. [4] discuss techniques and applications of data mining in the healthcare industry. Their aim is to predict disease by using already existing information, analyzing it and extracting new patterns; they used machine learning and database management to predict the disease. Pawar et al. [5] do much the same as this paper but used the apriori algorithm for identifying the probability of diseases. They created a prototype in which the user gives his/her symptoms. Their framework also provides some health tips that help users
to stay healthy, and it works on real-time data as well. Alam et al. [6] used data mining techniques for the prediction of diabetes, selecting attributes via principal component analysis and using k-means, KNN, and ANN for early-stage prediction. Their paper is concerned only with diabetes, and its accuracy is 75.7%. Gomathi et al. [7] discussed various data mining methods used for predicting probable diseases. They used the Naïve Bayes and decision tree algorithms, focusing mainly on diabetes, heart disease, and breast cancer, and compared the two classifiers' predictive accuracy on the various diseases. Kumar et al. [8] applied many machine learning algorithms to predict heart disease; they compared classifiers and selected four of them for predictive analysis, but the prediction accuracy still needs improvement and the approach is not suitable for predicting multiple diseases. Begum et al. [9] discussed predicting thyroid disease at early stages, where data mining techniques play a prominent role in decision making and disease prediction; they used KNN, SVM, Naïve Bayes, and ID3 classifiers and also examined the correlation with hyperthyroidism and hypothyroidism. Manish Kumar noted that the healthcare system produces huge amounts of data that need to undergo mining to discover hidden information for prediction and diagnosis; that work used six machine learning algorithms for predicting kidney disease and compared the performance of the six classifiers.
3 Existing System Existing systems can predict a disease, but they are unable to predict the subtypes of that disease or the conditions caused by the occurrence of one disease, and they cannot predict all possible conditions of a person. In the recent past, countless disease prediction systems have been developed and put into use. These systems combine machine learning algorithms so that diseases are predicted reasonably accurately. However, the limitations of the existing systems are clear. First, existing systems are expensive, so only wealthy people have access to such prediction systems. Second, the prediction mechanisms have so far been non-specific: a system can predict a particular disease but cannot anticipate subtypes of that disease or diseases caused by the presence of a single underlying condition. For instance, if a group of people is predicted to have diabetes, some of them might also carry elevated risk for heart disease because of the diabetes. Existing systems fail to foresee all possible conditions of the patient; at most, they predict only specific diseases.
Fig. 1 Proposed architecture
4 Proposed System In the proposed system, a decision tree machine learning algorithm predicts the disease. It is implemented to increase operational efficiency and to reduce the time required to serve a request. The machine learning algorithm improves accuracy, and the proposed system builds on ideas that earlier works did not implement. It analyzes diseases easily and is built on a simple, lightweight Python framework. This work contains two modules, and the system can predict 41 diseases (Fig. 1).
4.1 User Module 4.2 Decision Tree Classifier The decision tree is one of the most popular and best-known methods for classification and prediction. A decision tree is a flowchart-like graph in which each internal node represents a test on an attribute, each branch represents a test outcome, and each leaf (terminal) node carries a class label. The decision tree is a supervised learning algorithm (with a pre-defined target variable) mostly used for classification problems, and it handles both categorical and continuous input and output variables. In pre-processing, the database contains NaN values; these cannot be processed directly by the algorithms, so they must be converted to numerical values. In this approach, the mean of each column is calculated, and the NaN entries are replaced by that column mean. The entire database is then split into training and testing sets: 80% of the data is used for training, while the remaining 20% is used for testing.
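The pre-processing and split described above can be sketched with pandas and scikit-learn; the column names and values below are illustrative stand-ins, not the paper's actual dataset:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical symptom records with missing (NaN) entries, standing in
# for the paper's CSV of symptoms and disease labels.
df = pd.DataFrame({
    "symptom_a": [1.0, 0.0, np.nan, 1.0, 0.0],
    "symptom_b": [0.0, np.nan, 1.0, 1.0, 0.0],
    "disease":   [1, 0, 1, 1, 0],
})

# Replace each NaN with the mean of its column, as the paper describes.
df = df.fillna(df.mean())

# Split the data 80% for training and 20% for testing.
X = df.drop(columns="disease")
y = df["disease"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
```

With five rows, the 80/20 split leaves four records for training and one for testing.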
A System for Informed Prediction of Health
4.3 Decision Tree Algorithm Place the best attribute of the dataset at the root of the tree. Divide the training set into subsets, made in such a way that each subset contains data with the same value for an attribute. Repeat the first two steps on each subset until leaf nodes are reached in all branches of the tree. While building our decision tree classifier, we can increase its accuracy by tuning its parameters, but this tuning must be done carefully: otherwise the algorithm may overfit the training data and ultimately build a worse model.
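A minimal sketch of such tuning with scikit-learn's DecisionTreeClassifier, using the bundled Iris data as a stand-in for the paper's symptom dataset (the max_depth and min_samples_leaf values are illustrative choices, not the paper's settings):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Limiting tree depth and minimum leaf size guards against overfitting
# the training data while tuning the classifier for accuracy.
clf = DecisionTreeClassifier(max_depth=4, min_samples_leaf=2, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Held-out accuracy (`score` on the 20% test split) is the signal to watch while tuning: a deeper tree that raises training accuracy but lowers this number is overfitting.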
4.4 Python Packages NumPy: NumPy is a numerical Python module that provides fast mathematical operations and robust data structures for efficient computation on multidimensional arrays and matrices. We used NumPy to read the data files into NumPy arrays and to manipulate the data. Pandas: Pandas provides reading and writing of data between different file formats; its DataFrames can hold a variety of multidimensional array data types. Scikit-learn: Scikit-learn is a machine learning library; we use its train_test_split for the decision tree classifier. Using these packages and libraries, the system predicts the probable disease with the dosomething() function, which takes the symptoms entered by the user as input, analyzes the disease associated with those symptoms, and predicts it (Fig. 2).
4.4.1 Introduction to Flask
This project uses Flask for the Web interface. Flask is a lightweight Python Web application framework based on the WSGI toolkit and the Jinja2 template engine. Flask combines the versatile Python programming language with a simple Web development framework: it saves time when building Web applications while keeping the core simple and extensible. It has no
Fig. 2 Symptom checker
def dosomething(symptom):
    user_input_symptoms = symptom
    user_input_label = [0 for i in range(132)]
    for i in user_input_symptoms:
        idx = dictionary[i]
        user_input_label[idx] = 1
    user_input_label = np.array(user_input_label)
    user_input_label = user_input_label.reshape((-1, 1)).transpose()
    return dt.predict(user_input_label)
Fig. 3 Sample program in Flask
from flask import Flask

app = Flask(__name__)

@app.route('/')
def demo():
    return 'Hello World'  # example response
abstraction layer for the database, no form validation, and no other such components by default. Instead, Flask allows extensions: extensions exist for object-relational mappers, form validation, upload management, and different techniques for flexible authentication. A simple program implemented in Flask follows (Fig. 3). Flask is the simple, lightweight Python Web framework used in this paper to design the project's interface; it is very fast and simple. The framework uses a CSV file that contains all the disease names and the symptoms associated with them, which helps the decision tree algorithm predict the probable disease. From Flask, this project uses render_template, request, redirect, and url_for. render_template is a Flask function used to render HTML pages; for this, a templates directory must be created in the project folder, and the function returns the HTML page the user requests. request exposes the data sent from the client's Web page to the server as a global request object; its form attribute is a dictionary of key/value pairs holding the request parameters and their values. The redirect() function sends the user to a specified URL, and the url_for() function builds URLs so that hard-coding them can be avoided.
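The request/redirect/url_for flow described above can be sketched as a minimal Flask app. Here render_template is replaced by a plain string so the sketch needs no templates directory, and the /predict route and the symptoms field are illustrative names, not the paper's actual ones:

```python
from flask import Flask, redirect, request, url_for

app = Flask(__name__)

@app.route("/")
def home():
    # In the paper, render_template() would return an HTML page from the
    # templates/ directory; a plain string keeps this sketch self-contained.
    return "home page"

@app.route("/predict", methods=["POST"])
def predict():
    # request.form holds the key/value pairs posted from the client's page.
    symptoms = request.form.get("symptoms", "")
    # url_for() builds the URL for a view function instead of hard-coding it.
    return redirect(url_for("home"))
```

Posting to /predict sends the browser back to the home page with an HTTP redirect, which is the pattern the paper describes for returning the user to the result view.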
4.5 Find Doctor Module This module has two fields: speciality and location. The user enters the speciality needed for the specific disease and the location where he or she is present, which helps the user find a nearby specialist. After these details are entered, the module shows the doctor's details: name, speciality, and location. This is achieved with the Python xlrd library, which reads Excel data. We collected real-time data for Chennai city covering most of its areas: nearly 29 specialists across Chennai for different health issues, stored in an Excel sheet. Python accesses this data through the xlrd module, compares the entered data with the stored data, and returns the matching details as output (Fig. 4).
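The lookup this module performs (read the sheet, then filter by speciality and area) can be sketched with pandas; the DataFrame below stands in for the real Excel data, and all names and areas are made up:

```python
import pandas as pd

# Stand-in for the Excel sheet of doctors that the paper reads with xlrd.
doctors = pd.DataFrame({
    "Name":       ["Dr. A", "Dr. B", "Dr. C"],
    "Speciality": ["Cardiologist", "Dermatologist", "Cardiologist"],
    "Area":       ["Adyar", "Velachery", "Tambaram"],
})

def find_doctor(df, speciality, location):
    # Compare the entered terms with the stored data, as the module does:
    # first narrow by speciality, then by area.
    df = df[df["Speciality"].str.contains(speciality)]
    df = df[df["Area"].str.contains(location)]
    return df

result = find_doctor(doctors, "Cardio", "Adyar")
```

Only rows matching both substring filters survive, so the search above returns the single cardiologist in Adyar.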
5 Results and Discussion After choosing the symptoms from the drop-down list, the name of the predicted disease is displayed on the right side of the home page. After getting the name of the disease, the user clicks on the doctor tab (Fig. 5).
Fig. 4 Find doctor tab
def get_location():
    specialitySearchTerm = request.form['speciality']
    print("The search term in doctor field is: ", specialitySearchTerm)
    df = pd.read_excel('Doctor Locations.xlsx', sheet_name='Sheet1')
    df = df.iloc[:, 0:3]
    print(df.shape)
    df = df.dropna()
    print(df.shape)
    df = df[df['Speciality'].str.contains(specialitySearchTerm)]
    locationSearchTerm = request.form['location']
    print("The search term in location field is: ", locationSearchTerm)
    df = df[df['Area'].str.contains(locationSearchTerm)]
    print(df)
Fig. 5 Home page screenshot
In the find doctor tab, the user enters the speciality and the place where he wants to be treated. Upon pressing the search button, the doctor's name, speciality, and location are displayed (Fig. 6).
Fig. 6 Find doctor tab screenshot
6 Conclusion and Future Enhancement In this paper, we presented a machine learning model that can be used to predict the probable disease. The framework is simple and compact: the user enters his or her symptoms, and the decision tree classifier analyzes them and predicts the probable disease. The framework also contains a find doctor module that helps the user or patient locate nearby doctors, and it is implemented using real-time data. In short, this model predicts the probable disease and suggests nearby specialists, with the Flask framework serving the pages the user requests. In the future, the model will also show addresses and natural remedies for the predicted disease, and it should include hospital ratings to help users in a better way.
References
1. A. Sivaranjani, S. Priyadharshini, A. Porkodi, A. Vijayalakshmi, S. Suseela, Smart health prediction using hadoop. Int. J. Eng. Res. Technol. (IJERT). ISSN: 2278-0181
2. A. Tyagi, R. Mehra, A. Saxena, Interactive thyroid disease prediction system using machine learning technique, in 5th IEEE International Conference on Parallel, Distributed and Grid Computing (PDGC-2018)
3. A. Khatavkar, P. Potpose, P. Pandey, Smart health prediction system. IJSRD—Int. J. Sci. Res. Dev. 5(02) (2017). ISSN: 2321-0613
4. G. Pooja Reddy, M. Trinath Basu, K. Vasanthi, K. Bala Sita Ramireddy, R.K. Tenali, Smart E-health prediction system using data mining. Int. J. Innov. Technol. Exploring Eng. (IJITEE) 8(6) (2019). ISSN: 2278-3075
5. P.V. Pawar, M.S. Walunj, P. Chitte, Estimation based on data mining approach for health analysis. Int. J. Recent Innov. Trends Comput. Commun. 4(4), 743–746. ISSN: 2321-8169
6. T.M. Alam, M.A. Iqbal, Y. Ali, Z. Abbas, A model for early prediction of diabetes. Inf. Med. Unlocked 16, 100204 (2019)
7. K. Gomathi, D. Shanmuga Priya, Multi disease prediction using data mining techniques. Int. J. Syst. Softw. Eng.
8. M. Nikhil Kumar, K.V.S. Koushik, K. Deepak, Prediction of heart diseases using data mining and machine learning algorithms and tools. Int. J. Sci. Res. Comput. Sci. Inf. Technol. (IJSRCSEIT) 3(3). ISSN: 2456-3307
9. B.A. Begum, A. Parkavi, Prediction of thyroid disease using data mining techniques, in 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS)
10. M. Kumar, Prediction of chronic kidney disease using random forest machine learning algorithm. Int. J. Comput. Sci. Mob. Comput. (IJCSMC) 5(2), 24–33 (2016)
11. J. Jose, S.C. Mana, B. Keerthi Samhitha, An efficient system to predict and analyze stock data using hadoop techniques. Int. J. Recent Tech. Eng. (IJRTE) 8(2) (2019). ISSN: 2277-3878
12. S. Jancy, C. JayaKumar, Sequence statistical code based data compression algorithm for wireless sensor network. Wirel. Pers. Commun. 106, 971–985 (2019)
13. D. Usha Nandini, E.S. Leni, Efficient shadow detection by using PSO segmentation and region-based boundary detection technique. J. Supercomput. 75(7), 3522–3533 (2019)
A Safety Stick for Elders Korrapati Bhuvana, Bodavula Krishna Bhargavi, and S. Vigneshwari
Abstract Technology in recent decades is developing day by day in such a way that even a small device is equipped with many features. In this paper, we provide a detailed prototype of a safety stick that helps elders lead their lives independently. Elders most commonly use a walking stick for support, and we turn that ordinary walking stick into a smart safety stick using IoT and embedded technology. In an emergency, such as when medical help is needed, long-pressing a button fitted to the walking stick sends an alert message with location information to the caregiver. When threatened by thieves, the same stick can be used for protection: a simple press gives the thief a shock while a message is sent to the caregiver and a siren sound alerts even the neighbors. This upgraded smart stick makes aged people feel safe, secure, and independent. Keywords Bluetooth · Relay · Arduino · Location · Alert message
1 Introduction Population aging is a natural phenomenon, and it is occurring more quickly in developing countries, which have less time to adapt to the outcomes of the demographic transition. According to statistics, the number of elders above 60 years of age is expected to double by 2050, so we must concentrate on solutions to the challenges faced by aged people. It is nearly impossible to keep track of elderly people all
K. Bhuvana · B. K. Bhargavi (B) · S. Vigneshwari Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] K. Bhuvana e-mail: [email protected] S. Vigneshwari e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_53
K. Bhuvana et al.
of the time, even when a caretaker is provided, so implementing certain features in a device is helpful to the elders. The walking stick is the most helpful and commonly used device among the elderly; hence, we turn a normal walking stick into a smart safety stick that helps aged people feel safe and independent. The included features should help elders overcome the fear of intruders or thieves [1]. Embedded systems are used for this design, along with an Android app for sending and receiving the alert message [2, 3]. Embedded systems are a combination of computer hardware and software that together form part of an electrical device used in daily life. Ever since the invention of the mobile phone, its usage has been massive, and now every individual has one [4]; hence, the alert message can easily be sent through this device. With embedded systems and mobile phones together, we can easily design our prototype [5, 6].
2 Literature Survey The overall performance of the prototype depends on how well communication can be established by means of the wireless communication protocols, such as Bluetooth, Wi-Fi, and GPRS or GSM, incorporated in the design [7, 8]. Governing appliances from the stick requires Bluetooth, and this poses no problem as long as the range it is designed for is maintained. In designing this smart stick, the microcontroller and other modules are selected for compatibility in size, price, and power while ensuring reliability and accuracy. The NodeMCU is built around the ESP8266, a low-cost Wi-Fi chip with a TCP/IP stack and a microcontroller, which makes it easy to upload programs and control GPIOs [9, 10]. One work proposed a blind navigation system that sends information from a controller mounted on a white cane via a Bluetooth connection to a smartphone, alerting the user to obstacles through speech warnings and vibration in haptic mode [11, 12]. An application was also developed to convert text to speech. In addition, an Arduino Nano was used instead of the common Arduino UNO in an effort to significantly reduce the size of the system, which requires four ultrasonic sensors for best performance [13, 14]. The system was embedded in a small casing at a strategic position on a white cane to detect below-waist obstacles [15]. The purpose of this survey is to become acquainted with the work accomplished in making the normal walking stick smarter and even more helpful [16]. The literature related to this topic was reviewed and analyzed; as technology improves, these safety sticks need to be modified [17, 18]. Simulation results are expected for the ultrasonic sensors, water sensor, and ESP8266 on a single microcontroller. Thus, this paper performs a wide survey of the work related to this track [19, 20].
3 Implementation and Prototype Design The safety stick is implemented with various features for independent elders. Embedded systems, the combination of software and hardware, are used for the safety stick, as shown in Fig. 1. In the block diagram of the safety stick, the Bluetooth module and a relay are connected to the Arduino board. In an emergency, such as when an intruder approaches the elderly person, pressing a button connected to the stick turns on the Bluetooth module, which is paired with the Bluetooth of the elderly person's mobile; a siren sound is then produced from the phone through an Android application to alert nearby people. The sounds and volume of the siren can even be changed. Through the relay connection, a shock is given to the intruder, and from the Android app, the latitude and longitude data are sent to the caregiver with an alert message.
3.1 Arduino UNO The Arduino UNO is a microcontroller board whose input and output pins are available in both digital and analog forms. It is programmable with the Arduino IDE, and applications are developed using C, C++, and Java. Here, the button, the Bluetooth module, and a relay are connected, along with the battery power supply, to the Arduino UNO board, and the programming is done in the Arduino IDE (Fig. 2).
Fig. 1 Architecture diagram
Fig. 2 Arduino board
Fig. 3 Relay
3.2 Relay Generally, a relay is used as a switch; here it takes the signal from the button as input and requires a power supply. The relay is used to give the shock to the thieves. Its input pin is connected as per the Arduino code, and the GND and voltage pins are fixed to the respective pins of the Arduino (Fig. 3).
3.3 Bluetooth The HC-05 is a Bluetooth module used as a hardware connection to the Arduino. When the power supply is on, the Bluetooth module is paired with the person's phone; by pressing the button, the location is sent to the caretaker from the Android application that was designed. The module's receiver pin is connected to the transmitter pin of the Arduino, and VCC is connected to the respective 5 V pin of the Arduino (Fig. 4).
Fig. 4 Bluetooth
Fig. 5 User interface
3.4 Software Android Studio is open-source software that can be easily installed and in which Android applications are developed; Java is used for the back end and HTML for the front end. The Android package kit (APK) is the file format required to install the application software on mobiles running the Android operating system. In this application, the user interface is connected with Bluetooth, and even the sounds and volume of the siren can be altered in the code. Most importantly, the location of the elderly person is sent immediately to the caretaker with the exact latitude and longitude [1] through this application, which includes the latitude and longitude directions sent in the alert message. The application is easily available and can be easily downloaded. The user interface is connected with Bluetooth, and if the received value is true, the latitude and longitude directions are sent along with the alert message to the given mobile number. The message can be designed with index words such as "need help" together with the exact location, as designed in the Android application (Figs. 5 and 6).
4 Result and Discussions The safety stick proposed in this paper is helpful to the independent elders. It is a prototype designed to protect aged people and make them feel safe. When the button
Fig. 6 Alert message
is pressed, the Bluetooth module and relay are turned on. The Bluetooth module connects to the person's mobile, a siren sound is produced, and simultaneously the latitude and longitude are sent to the caretaker in the alert message delivered by the Android application. The user interface is developed so that the caretaker's number can be changed if the caretaker is unavailable. Through the relay, a shock can be given if an intruder appears. The stick can also help persons in the mild stage of Alzheimer's who forget their address, since the location is shared with the caretaker, and the design is likewise useful for women's safety, as it provides an alert message with the location. The designed safety stick is a prototype that can be made compact enough to fit inside the stick (Fig. 7).
5 Conclusion The complete design and implementation of the project are presented in this paper, and this safety stick serves as an efficient aid to elders. The safety sticks available in the market are used only for support and are not smart enough. The module can be made compact enough to fit into the stick by using a PCB, and the size can be reduced further with an Arduino Nano. Basic features such as an LED for use in the dark can also be included, and the location can be given more accurately and within a shorter time span. This smart stick helps independent elders through features such as an alert message to the caregiver, an alarm sound from the elderly person's mobile, and the latitude and longitude sent to the caregiver. With this idea, we can provide compact safety equipment to aged people.
Fig. 7 Designed safety stick
References
1. G. Nagarajan, R.I. Minu, A. Jayanthiladevi, Brain computer interface for smart hardware device. Int. J. RF Technol. 10(3–4), 131–139 (2019)
2. P. Paul, R.G. Franklin, Fragmenting the data in cloud for enhancing security and performance. Res. J. Pharmaceut. Biol. Chem. Sci. 7(3), 349–355 (2016)
3. A. Velmurugan, T. Ravi, Allergy information ontology for enlightening people, in 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE'16) (IEEE, 2016), pp. 1–7
4. K. Sangeetha, P. Vishnuraja, D. Deepa, Stable clustered topology and secured routing using mobile agents in mobile ad hoc networks. Asian J. Inf. Technol. 15(23), 4806–4811 (2016)
5. L.M. Gladence, H.H. Sivakumar, G. Venkatesan, S.S. Priya, Home and office automation system using human activity recognition, in 2017 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2017), pp. 0758–0762
6. K.M. Prasad, R. Sabitha, K. Muthukumar, Providing cluster categorization of heuristics technique for increasing accuracy in severe categorization of road accidents, in 2017 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2017), pp. 1152–1159
7. L. Boppana, V. Jain, R. Kishore, Smart stick for elderly, in 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications and IEEE Cyber, Physical and Social Computing (CPSCom) (2019)
8. S.C. Mana, A feature based comparison study of big data scheduling algorithms, in 2018 International Conference on Computer, Communication, and Signal Processing (ICCCSP), Chennai, pp. 1–3 (2018)
9. S.A. Bouhamed, J.F. Eleuch, I.K. Kallel, D.S. Masmoudi, New electronic cane for visually impaired people for obstacle detection and recognition, in Vehicular Electronics and Safety (ICVES) IEEE International Conference, pp. 416–420 (2012)
10. K. Osman, Smart phone assisted blind stick. Turkish Online J. Des. Art Commun.—TOADAC (2018)
11. A. Pravin, T.P. Jacob, G. Nagarajan, An intelligent and secure healthcare framework for the prediction and prevention of Dengue virus outbreak using fog computing. Health Technol., 1–9 (2019)
12. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN'S, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0592–0595
13. S. Divya, R. Vignesh, R. Revathy, A distinctive model to classify tumor using random forest classifier, in 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, pp. 44–47 (2019)
14. G. Kalaiarasi, K.K. Thyagharajan, Clustering of near duplicate images using bundled features. Cluster Comput. 22(5), 11997–12007 (2019)
15. A. Tekade, M. Sonekar, M. Ninave, P. Dongre, Ultrasonic blind stick with GPS tracking system. Int. J. Eng. Sci. Comput. (2018)
16. G. Nagarajan, K.K. Thyagharajan, A machine learning technique for semantic search engine. Procedia Eng. 38, 2164–2171 (2012)
17. A. Ponraj, Optimistic virtual machine placement in cloud data centers using queuing approach. Future Gener. Comput. Syst. 93, 338–344 (2019)
18. R. Aishwarya, R. Yogitha, V. Kiruthiga, Smart road surface monitoring with privacy preserved scheme for vehicle crowd sensing. J. Comput. Theor. Nanosci. 16(8), 3204–3210 (2019)
19. M.P. Selvan, A. Gupta, A. Mukherjee, Give attention to overlapping network detection in networks for multimedia. J. Comput. Theor. Nanosci. 16(8), 3173–3177 (2019)
20. R. Jayashree, A. Christy, Improving the enhanced recommended system using Bayesian approximation method and normalized discounted cumulative gain. Procedia Comput. Sci. 50, 216–222 (2015)
Intelligent Analysis for Wind Speed Forecasting Using Neural Networks Bethi Gangadhar Reddy, Bhuma Dinesh Kumar, and S. Vigneshwari
Abstract An artificial neural network (ANN) is a data processing methodology inspired by biological nervous systems. An artificial neural network is an association of many artificial neurons coupled together according to a specific network architecture, and its objective is to map inputs to meaningful outputs. The design of models for time series forecasting has a strong foundation in statistics. In this work, we present a hybrid approach to produce a complete neural network design for modeling and forecasting time series. This paper investigates the possibility of building a wind speed forecasting prototype using artificial neural networks and ARIMA, which could be used to forecast for the meteorological station at Sathyabama Institute of Science and Technology, Tamil Nadu, India, using SPSS software and Python. According to the results, it can be inferred that the hybrid ANN model can deliver an acceptable forecast of wind speed. The forecast of 2015 wind speed is computed and validated based on the chosen model. Keywords Artificial neural network · SPSS · Multilayer perceptron · ARIMA · Radial basis function
B. G. Reddy (B) · B. D. Kumar · S. Vigneshwari Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] B. D. Kumar e-mail: [email protected] S. Vigneshwari e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_54
B. G. Reddy et al.
1 Introduction A neural network is an interconnected arrangement of fundamental processing elements, units, or nodes. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adapting to, or learning from, a set of training patterns. An artificial neural network is a learning model inspired by biological nervous systems. Physical and statistical techniques are the standard techniques usually used for wind speed forecasting, and combinations of both are also used in some models to integrate their advantages [1, 2]. The physical method considers parameters associated with the physical description of wind motion in and across the wind farm. It is based on weather forecasting data such as atmospheric variables and also on the characteristics of the wind farm environment, such as farm layout, obstacles, and roughness [3, 4]. This information is used for predicting wind power from wind speed. The physical technique does not require any input from past data [5], which balances the difficulty involved in obtaining physical information [6, 7]. Statistical methods rely on training over historical wind speed data and generate an output without considering physical phenomena; they include artificial neural networks, fuzzy logic, regression trees, support vector machines, etc., and they produce good results in wind speed forecasting [8]. The hybrid approach is an association of various physical and statistical processes, such as a combination of numerical weather prediction (NWP) and neural networks [9, 10]. Forecasting wind speed is an important means of quantifying uncertainty and can also be used for functions such as unit commitment decisions, power increase or decrease decisions, maintenance scheduling, and energy storage optimization.
The forecasting system predicts wind speed for wind energy generation [11]. Based on the time horizon, wind speed forecasting is classified into three types. The first time horizon is very short term, which is very useful for trading in intraday markets; it spans a few minutes to one hour [12]. The second time horizon is short term, which is appropriate for maintenance scheduling [13]; it spans one hour to 12 h [14, 15]. The third time horizon is medium or long term, which is useful for the scheduling of non-renewable electricity generation; it spans several hours to three days [16]. Advanced technology has enabled capturing large volumes of data on a continuous or periodic basis in numerous disciplines [17, 18].
2 Literature Review Sheela and Deepa [19] propose a neural network-based hybrid processing model for wind speed forecasting in renewable energy systems. Wind energy is one of the renewable energy sources that lowers the cost of electricity production.
Because of the fluctuation and nonlinearity of wind, accurate wind speed forecasting plays a significant role in renewable energy systems. Navas [20] investigates the possibility of building a wind speed prediction prototype using diverse artificial neural networks, which may be used to forecast the wind speed in Coimbatore using SPSS software. The suggested neural network models are tested on real-time wind data and evaluated over statistical measures. The goal is to predict wind speed accurately and to perform well in terms of error minimization using a multilayer perceptron neural network, categorical regression, and a radial basis function neural network. Results from that paper have shown good agreement between the estimated and measured values of wind speed, and it can be concluded that the artificial neural network model with a multilayer perceptron could deliver a worthy prediction of wind speed for a given wind direction. Shekhawat [21] observes that artificial neural networks can be used effectively for wind speed prediction even though wind speed may show complex time series behavior. Various neural network architectures have been examined with their salient features, and finally the evaluation parameters used to assess any forecasting model have been explained with their physical significance and need.
3 Importance of the Feature Selection Process A feature selection approach lowers the computational complexity of the learning algorithm, improves prediction performance, supports better data understanding, and reduces storage requirements. When the selected subsets of features are small, the information content may be low. Dataset partitioning. This section describes how a dataset is divided into training, testing, and holdout samples. The training sample contains the data records used to train the neural network; a certain proportion of the instances in the dataset should be allocated to the training sample in order to obtain a model. The test sample is an independent set of records used to track errors during training so as to prevent overfitting. It is strongly suggested that a training sample be created, and network training will generally work best if the test sample is smaller than the training sample. For example, the ratio 7, 3, 0 assigns 70%, 30%, and 0% to the training, testing, and holdout samples, respectively; 2, 1, 1 corresponds to 50%, 25%, and 25%; and 1, 1, 1 divides the data into thirds among training, testing, and holdout.
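The ratio-to-percentage arithmetic above can be expressed as a small helper (the function name is ours, not SPSS's):

```python
def partition_sizes(n, ratios):
    """Convert relative ratios like (7, 3, 0) into sample counts for the
    training, testing, and holdout partitions of an n-record dataset."""
    total = sum(ratios)
    sizes = [n * r // total for r in ratios]
    # Assign any rounding remainder to the training partition so the
    # three counts always sum to n.
    sizes[0] += n - sum(sizes)
    return sizes

train, test, hold = partition_sizes(1000, (7, 3, 0))
```

For 1000 records, the 7:3:0 ratio yields 700 training, 300 testing, and 0 holdout records, matching the 70%/30%/0% split described above.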
4 Wind Speed Forecasting Using Neural Network The basic concept behind the ANN is to develop a tool that performs computations mimicking the functioning of the brain. This tool must carry out various
computations at a rate faster than that of a conventional framework. The ANN can be used for various purposes such as clustering, classification, prediction, etc. During the learning procedure, known patterns of a particular problem are presented to the network to improve its performance and its generalization ability. The generalization capability is the ability to respond correctly to patterns that were not used during training. An optimization method based on gradient descent is applied to minimize the error, or equivalently to maximize the accuracy, of the neural network. There are two major categories of learning, called supervised and unsupervised learning. In supervised learning, the class label of the pattern presented to the network is known and is used in the training process; if the class label is unknown or unused, the learning is unsupervised. An ANN has the capability of learning from and generalizing the given input by adjusting its weights and biases to make useful decisions.
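Supervised learning by gradient descent, as described above, can be illustrated with a single linear neuron trained on a toy pattern set; this is a pure-NumPy sketch with made-up data and an illustrative learning rate, not the paper's SPSS setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: the labelled patterns follow y = 2*x1 - x2,
# and the network must learn these weights from examples.
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - X[:, 1]

w = np.zeros(2)   # the neuron's weights, adjusted during learning
lr = 0.1          # learning rate
for _ in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                     # gradient-descent weight update

mse = np.mean((X @ w - y) ** 2)
```

After training, the weights converge close to (2, -1) and the training error is near zero, illustrating how gradient descent minimizes the network's error by repeated weight adjustment.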
4.1 Proposed Neural Network Model In this model (refer Fig. 1), we collected wind speed data from the meteorological tower station at Sathyabama Institute of Science and Technology, Tamil Nadu, India. Using SPSS software, we predict the wind speed for 2015 as the output, taking the 2012, 2013, and 2014 wind speeds at a height of 50 m, together with the date, as inputs. In this project, the prediction is based on the time-series pattern of the respective seasons. The SPSS toolset used in this model provides the multi-layer perceptron, the radial basis function network, and the Auto-Regressive Integrated Moving Average (ARIMA) model. Judged by the R square value, the ARIMA model is more accurate than the other models, so we prefer it on the basis of these performance measures.
5 Results 5.1 Descriptive Analysis Table 1 gives the mean, standard deviation, and variance of the data for the years 2012, 2013, 2014, and 2015. Figure 2 shows the descriptive statistics of the 2015 wind speed data at a height of 50 m, plotted against the date.
Intelligent Analysis for Wind Speed Forecasting …
[Fig. 1 depicts the wind forecasting system: wind speed data for 2012 (at a height of 32 m) and for 2013, 2014, and 2015 (at a height of 50 m), collected from the meteorological tower at Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, passes through SOM and real-data normalization into the neural network framework; the forecasting tools are a multi-layer perceptron, a radial basis function network, and ARIMA, followed by model validation.]
Fig. 1 Architecture diagram
Table 1 Descriptive statistics of considered data

Names           2012-WS32  2013-WS50  2014-WS50  2015-WS50
Total           40972      44114      41945      37848
Missing         5680       2538       4707       8804
Mean            3.318      3.693      4.062      4.017
Std. deviation  1.7775     1.7792     2.1992     4.001
Variance        3.159      3.166      4.837      16.008
5.2 Forecasting System In the forecasting system, we used two types of neural network architecture. The first is automatic architecture selection, in which the architecture is chosen according to the size of the input data and the needs of the problem. The second is custom architecture, in which we select the number of hidden layers and the activation functions for both the hidden and output layers, such as hyperbolic tangent, sigmoid, identity, and softmax. In this project, we used the multi-layer perceptron, the radial basis function network, and the ARIMA model for forecasting.
Fig. 2 Descriptive analysis of 2015 data with date
5.2.1 Multilayer Perceptron
A multi-layer perceptron (MLP) is a type of feed-forward neural network. The MLP comprises three layers: an input layer, a hidden layer, and an output layer. With the exception of the input nodes, each node is a neuron that applies a nonlinear activation function.
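A hedged sketch of one forward pass through such a three-layer network (the weights and the tanh/identity activation choices here are arbitrary toy values, not the paper's trained model):

```python
import math

def mlp_forward(x, hidden_w, hidden_b, out_w, out_b):
    # input layer -> hidden layer (tanh, the nonlinear function) -> output
    hidden = [math.tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(hidden_w, hidden_b)]
    return sum(w * h for w, h in zip(out_w, hidden)) + out_b

# toy network: 2 inputs, 2 hidden neurons, 1 identity-output neuron
y = mlp_forward([0.5, -1.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [1.0, 1.0], 0.0)
```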
5.2.2 Radial Basis Function
A radial basis function is a real-valued function whose value for each input depends only on the distance of that input from a fixed center, i.e., it is a distance-based method. The radial basis function acts as the activation function of the network.
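A common choice of radial basis activation is the Gaussian; a minimal sketch (the function name and width parameter are illustrative, not from the paper):

```python
import math

def gaussian_rbf(x, center, width=1.0):
    # The activation depends only on the distance between the
    # input vector and the centre (Gaussian kernel here).
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2.0 * width ** 2))
```

The value is 1 at the centre and decays symmetrically with distance, which is what makes the function radial.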
5.2.3 Auto-Regressive Integrated Moving Average (ARIMA)
The auto-regressive integrated moving average models are an extension of ARMA models. These models are fitted to time-series data either to understand the data points or to forecast subsequent points in the sequence. The tables below show the output of the multi-layer perceptron, the radial basis function network, and the ARIMA model.
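A real ARIMA fit (as in SPSS) estimates its coefficients by maximum likelihood; the mechanics of "difference, apply an AR term, integrate back" can still be sketched with a toy ARIMA(1,1,0)-style one-step forecaster in which the AR coefficient is simply given (function name and values are illustrative):

```python
def arima_110_forecast(series, phi):
    # ARIMA(1,1,0) mechanics, simplified: first-difference the series (d = 1),
    # apply an AR(1) term with coefficient phi, then undo the differencing.
    diffs = [b - a for a, b in zip(series, series[1:])]
    next_diff = phi * diffs[-1]
    return series[-1] + next_diff

forecast = arima_110_forecast([3.1, 3.4, 3.9, 4.1], phi=0.5)
```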
Table 2 Model summary of multi-layer perceptron

Training  Sum of squares error  13268.576
          Relative error        0.995
          Stopping rule used    Relative change in training error criterion (0.0001) achieved
          Training time         00:00:27.130
Testing   Sum of squares error  5790.655
          Relative error        0.998
Multilayer Perceptron. Table 2 gives the output of the multilayer perceptron: the relative error is 0.995 in training and 0.998 in testing. Since the relative error in training is very close to that in testing, we conclude that the predictions are consistent.
Radial Basis Function. Table 3 gives the output of the radial basis function network: the relative error is 0.997 in both training and testing. Since the two relative errors are equal, we again conclude that the predictions are consistent.
ARIMA Model. Table 4 gives the ARIMA model summary, which shows an R square value of 0.923.

Table 3 Model summary of radial basis function

Training  Sum of squares error  13157.46
          Relative error        0.997
          Training time         01:07.0
Testing   Sum of squares error  5610.419
          Relative error        0.997
Table 4 Model statistics of ARIMA model

Model: 2015-WS50-Model_1
Number of predictors: 3
Model fit statistics — Stationary R-squared: 0.923; R-squared: 0.923
Ljung-Box Q(18) — Statistics: 1343.624; DF: 17; Sig.: 0.000
Number of outliers: 0
Fig. 3 Graph representation of multi-layer perceptron
5.3 Model Validation
5.3.1 Multilayer Perceptron
Figure 3 shows the multilayer perceptron results, for which we obtained an R square value of 0.522.
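The R square values reported here come from SPSS, but the quantity itself is simple; a minimal sketch of how such a value is computed from observed and predicted series:

```python
def r_squared(observed, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give 1.0 and always predicting the mean gives 0.0, which is why the models are ranked by how close their R square is to 1.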
5.3.2 Radial Basis Function
Figure 4 shows the radial basis function results, for which we obtained an R square value of 0.331.
5.3.3 ARIMA Model
Figure 5 shows the ARIMA model with the observed and forecast values.
Fig. 4 Graph representation of radial basis function
Fig. 5 Graph representation of ARIMA model

6 Conclusion From the above results, we obtained R square values of 0.522 for the multi-layer perceptron, 0.331 for the radial basis function, and 0.923 for the ARIMA model. So, finally, we conclude that the ARIMA model, on the basis of its R square value, is the best for seasonal prediction of wind speed.
References
1. A. Velmurugan, T. Ravi, Allergy information ontology for enlightening people, in 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE'16) (IEEE, 2016), pp. 1–7
2. B.K. Samhitha, S.C. Mana, J. Jose, M. Mohith, L. Siva Chandhrahasa Reddy, An efficient implementation of a method to detect sybil attacks in vehicular ad hoc networks using received signal strength indicator. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 9(1). ISSN: 2278-3075
3. R. Jayashree, A. Christy, Improving the enhanced recommended system using Bayesian approximation method and normalized discounted cumulative gain. Procedia Comput. Sci. 50, 216–222 (2015)
4. L.M. Gladence, H.H. Sivakumar, G. Venkatesan, S.S. Priya, Home and office automation system using human activity recognition, in 2017 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2017), pp. 0758–0762
5. M.P. Selvan, A. Gupta, A. Mukherjee, Give attention to overlapping network detection in networks for multimedia. J. Comput. Theor. Nanosci. 16(8), 3173–3177 (2019)
6. H. Madsen, H. Nielsen, T. Nielsen, P. Pinson, G. Kariniotakis, Standardizing the performance evaluation of short-term wind power prediction models. Wind Eng. 29, 475–489 (2005)
7. N. Siebert, Development of methods for regional wind power forecasting, Ph.D. dissertation, CEP—Centre Energétique et Procédés, ENSMP (2008)
8. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48
9. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN's, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0592–0595
10. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015)
11. S. Prince Mary, B. Bharathi, S. Vigneshwari, R. Sathyabama, Neural computation based general disease prediction model. Int. J. Recent Technol. Eng. (IJRTE) 8(2), 5646–5449. ISSN: 2277-3878
12. K.M. Prasad, R. Sabitha, K. Muthukumar, Providing cluster categorization of heuristics technique for increasing accuracy in severe categorization of road accidents, in 2017 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2017), pp. 1152–1159
13. S. Divya, R. Vignesh, R. Revathy, A distinctive model to classify tumor using random forest classifier, in 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 2019, pp. 44–47
14. S. Monisha, S. Vigneshwari, A framework for ontology based link analysis for web mining. J. Theoret. Appl. Inf. Technol. 73(2), 307–312 (2015)
15. S. Vigneshwari, B. Bharathi, T. Sasikala, S. Mukkamala, A study on the application of machine learning algorithms using R. J. Comput. Theor. Nanosci. 16(8), 3466–3472 (2019)
16. P. Paul, R.G. Franklin, Fragmenting the data in cloud for enhancing security and performance. Res. J. Pharmaceut. Biol. Chem. Sci. 7(3), 349–355 (2016)
17. M.A. Roy, S. Vigneshwari, An effective framework to improve the efficiency of semantic based search. J. Theoret. Appl. Inf. Technol. 73(2), 220–225 (2015)
18. P. Pinson, Estimation of the uncertainty in wind power forecasting, Ph.D. dissertation, Ecole des Mines de Paris, Paris, France (2006)
19. K.G. Sheela, S.N. Deepa, Selection of number of hidden neurons in neural networks in renewable energy systems. J. Sci. Ind. Res. 73(10), 686–688 (2014)
20. K.B. Navas, Artificial neural network based computing model for wind speed prediction: a case study of Coimbatore, Tamil Nadu, India. Physica A (2019)
21. A.S. Shekhawat, Wind power forecasting using artificial neural networks, Paper ID: IJERTV3IS041239, School of Electrical Engineering, Vellore Institute of Technology, Chennai, India
A Proficient Model for Action Detection Using Deep Belief Networks
K. S. S. N. Krishna and A. Jesudoss
Abstract Image processing and computer vision have made massive advances through machine-learning techniques. Pattern recognition and action detection are among the primary research areas within machine learning. Action recognition is a recent development of pattern-recognition methods in which the activities performed by a person or living being are tracked and monitored. Action recognition still faces many challenges that must be addressed, such as recognizing the activities within a very short time. Networks such as neural networks and SVMs are used to train the system in such a manner that it can identify a pattern of activity when a new frame is supplied. In this paper, we propose a model that detects patterns of actions from an image or a video clip. Bounding boxes are used to identify the actions and localize them. A deep belief network is used to train the model, with numerous images containing actions supplied as the training set. A performance evaluation was carried out on the model, and it was found to detect the activities correctly when a new image is supplied to the network. Keywords Action detection · Features · Extraction · SVM · Deep belief networks · Classification · Recognition · Prediction
K. S. S. N. Krishna · A. Jesudoss (B) Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] K. S. S. N. Krishna e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_55
1 Introduction A great deal of research is being carried out around the world in a variety of areas. One such area is machine-learning methods, which are catching on at a very fast pace, with a large body of work devoted in particular to action detection and pattern recognition [1]. Action detection plays an important role in detecting and identifying behavior within a scene or an image; for example, it is employed in counting the vehicles on a highway over a particular period [2] or the number of passengers at a bus stop during a given time period [3], and a great deal more. Pattern recognition plays an equally immersive role in classifying flood-affected areas [4] and in land-cover classification [5] within a particular region; it is also common in forestry, to monitor the movements of the animals in the forest, and in animation. Action detection has been performed with various networks such as SVM networks [6] and KNN networks [7]; nowadays, neural networks play an excellent role in detecting actions [8]. Action detection on an image proceeds as follows: the features are first learned from a training set of photographs, and the network then reaches a state in which, when a new image is supplied, it can immediately identify all the trained actions. Previous studies have used SVM networks to detect behavior and bound it with action boxes [9]. Many researchers have employed deep belief networks to extract the features of trained actions and used these features to classify the actions in new images [10].
In this paper, we propose a unit that can classify actions from a photograph. Most photographs contain some activity within them, and our task is to detect the activity in a given image and bound a box around it so that it is clear to any onlooker [27]. The images are trained with deep belief networks, which extract the features of the photograph at each level, run the computation, and pass the result to the next level [28]. The effectiveness of the network was observed, and it was found to successfully classify the activities of an image, or of a video processed frame by frame, with less computational time. The remainder of the paper is organized as follows: Sect. 2 contains the literature survey, Sect. 3 describes the methodology employed in the paper, and the following section presents the various results obtained. The paper concludes by mentioning future work that could be pursued to extend the proposed approach.
A Proficient Model for Action Detection …
2 Literature Survey Many researchers are working on action detection and scene recognition for a variety of purposes. In [11], the YOLO approach was developed for detection: it bounds each activity with a box and labels it with the name of the activity identified, and it is reported to be far more accurate than conventional solutions when compared in depth. Action recognition is thus acquiring an innovative pathway through machine-learning methods. Gómez et al. proposed a selective search algorithm for selecting word regions in a document; the algorithm creates a hierarchy of word hypotheses and achieves a high recall rate in comparison with other algorithms [12]. Assessments on ICDAR benchmarks confirmed that the novel technique proposed in [13] extracted the target regions from natural scenes in a more effective way; compared with common solutions, the proposed algorithm showed stronger adaptability to difficult scenarios. Convolutional neural networks used in biomedical image segmentation enable the precise localization of neuronal structures in electron microscopy stacks. A selective search algorithm for detecting candidate regions was used to produce possible activity areas; it created a small set of data-driven, class-independent, high-quality locations, yielding 99% recall with a mean average best overlap of 0.879 at 10,097 locations [14]. Synthetic data that is fast and scalable to generate was produced by Gupta et al. [15]. These synthetic images were used to train a fully convolutional regression network that performed bounding-box detection regression across multiple scales and all locations of an image. The resulting model was able to determine the target regions in an image considerably well and was observed to outperform the existing detection techniques on natural images; it attained an F-measure of 84.2% on the standard ICDAR 2013 benchmark and processed fifteen images per second on a GPU. TextBoxes++ is a single-shot oriented scene detector that detects arbitrarily oriented regions with both high precision and efficiency in a single forward pass; there is hardly any post-processing apart from an efficient non-maximum suppression step [16]. The proposed design was evaluated on four public datasets; in comprehensive assessments, TextBoxes++ was reported to outperform competing methods in terms of runtime, accuracy, and localization, achieving an F-measure of 0.817 at 11.6 frames/s for 1024 × 1024 ICDAR 2015 incidental images and an F-measure of 0.5591 at 19.8 frames/s for 768 × 768 COCO images. A novel approach suggested in [17], known as the cascaded localization network (CLN), joined two customized convolutional nets and
applied them to learn the guide panels and their regions of interest in a coarse-to-fine fashion. The network performed character-wise saliency detection and combined it with string-wise region detection, avoiding a number of bottom-up processing steps such as character clustering and segmentation. Rather than using unsuited symbol-based traffic sign datasets, a challenging traffic guide panel dataset was collected to train and evaluate the proposed framework; experimental results confirmed that it outperformed several recently published recognition frameworks in genuine highway scenarios [18]. Guo et al. proposed a cost-optimized style-detection method that worked well for documents scanned with sheet-fed and flat-bed scanners, mobile phone cameras, and many other common imaging sources [19]. The most recent advances make use of deep belief networks within deep machine-learning techniques [20]. Mohamed et al. discuss phone recognition using deep belief networks [21]. Deep belief networks are common in the majority of applications, since they build a connection between every pair of adjacent levels and can be used for feature extraction with much less computational time. Christy et al. proposed a technology to control a vehicle by automatically sensing accident-prone zones, in order to prevent accidents, using sensors, RFID tags, and machine-learning techniques [21]. Gandhi and Srivatsa propose an architecture comprising attack-tree construction, attack detection, and clustering of alerts. By calculating the predicted entropy for a router, alerts are raised for flows in which the predicted entropy exceeds a threshold value. The alerts are then grouped into different clusters according to their source, target, time, and attack type, which helps to avoid redundant alert groups and to associate alerts of the same nature [22]. Prayla et al. extract semantic implications and use them to build a vector space, thereby detecting spam mail on the server side [23]. Jesudoss et al. proposed an enhanced authentication scheme using the Kerberos protocol for distributed environments, in which an additional session key is introduced that serves as pre-authentication to Kerberos [24]. Joseph Manoj et al. proposed a feature selection process for text categorization using ant colony optimization (ACO) and artificial neural network (ANN) algorithms [25]. Roobini et al. proposed a system to analyze Alzheimer's disease by discovering the features important to the disease, together with various application-based PCA results [26].
3 Proposed Work The unit proposed in this work makes use of deep belief networks to automatically recognize the activities performed in a video clip or an image.
3.1 Video to Image Conversion If a video clip is provided for training the network, it is first segmented into many frames, each depicting the features of one picture, and these frames are then used to train the network. The same frame-by-frame processing is later carried out to determine the activities performed. A dataset containing various photographs and video clips in which different actions are performed is used to train the network. Some of the actions used to train the network are jumping, walking, standing, and sitting. Once these actions have been taught to the network, whenever a new image is supplied it can classify whether the person is standing, sitting, or walking.
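The frame segmentation step can be sketched as choosing which frame indices to extract from a clip (the function name and rates are illustrative assumptions; a real pipeline would read the frames with a video library):

```python
def frame_indices(duration_s, video_fps, sample_fps):
    # Indices of the frames to keep when a clip of duration_s seconds
    # recorded at video_fps is segmented into sample_fps frames per second.
    step = video_fps / sample_fps
    total_frames = int(duration_s * video_fps)
    return [int(i * step) for i in range(int(total_frames / step))]

indices = frame_indices(2, 30, 5)   # 2 s clip at 30 fps, keep 5 frames/s
```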
3.2 Segmentation The image then undergoes further processing, namely corner and edge detection and position-sensitive segmentation. This segmentation is used to detect the particular patterns within the photograph. After the network has been trained on a number of pictures, it retains the distinctive patterns captured when an activity is performed in an image or video clip. After training, the testing phase follows, in which a new image is supplied to the network to determine the activities present in the photograph. The training set comprises 70% of the photographs, while the testing set comprises 30%.
3.3 Deep Belief Network A deep belief network, usually termed a DBN, is a machine-learning architecture that links several layers of neural networks. The neural networks operate much like the neurons in the human brain. The DBN performs feature extraction at each layer and passes the result on to the next: the output of the previous layer is used as the input of the corresponding following layer. Figure 1 shows the typical structure of how deep belief networks work. When an image is supplied, candidate regions are extracted and handed down to the recognition network, in which the features are extracted and the labeling of the actions is performed. The number of layers employed in the network is driven entirely by the required performance of the application. The number of layers and the activation functions may be varied and observed in order to identify the best-performing architecture. Action recognition is quite difficult, and hence SIFT feature extraction is first carried out on the photograph to learn its characteristics. A visual-word histogram is then created using Bag of Visual Words (BOVW), which displays the patterns observed for the actions. The software design of the proposed model is depicted in Fig. 2. The figure shows that the deep belief network is made up of many layers, the number being based entirely on the needs of the application. The image is supplied as the input and is passed through the neural network. The overall block diagram of the model is given in Fig. 3. The image is first scanned and loaded from the dataset. The picture is then pre-processed in such a way that its quality is increased, which may add more spatial and specification information. The photograph is then converted into features, which are extracted by the various layers of the network. After training is carried out, a new image is supplied to the network, which reads the photograph. Once the photograph has been read, the outline of the picture is recognized first. The region segmentation step then takes place, in which the activities within the photo are segmented separately. Finally, a bounding box is drawn around the activity patterns determined by the network.

Fig. 1 Architectural design of deep belief network
Fig. 2 Software design
Fig. 3 Block diagram of the proposed model
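The layer-to-layer flow described above — the output of each layer becoming the input of the next — can be sketched as follows. This shows only the forward pass with toy sigmoid layers; an actual DBN also pre-trains each layer (e.g., as a restricted Boltzmann machine), which is beyond this sketch, and the weights here are arbitrary illustrative values:

```python
import math

def dbn_forward(x, layers):
    # Each layer extracts features from the previous layer's output;
    # the result is passed on as the next layer's input.
    activations = x
    for weights, biases in layers:
        activations = [
            1.0 / (1.0 + math.exp(-(sum(w * a for w, a in zip(ws, activations)) + b)))
            for ws, b in zip(weights, biases)
        ]
    return activations

# toy stack: 2 inputs -> 2 hidden units -> 1 output unit
layers = [([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]), ([[1.0, 1.0]], [0.0])]
out = dbn_forward([1.0, -1.0], layers)
```

Adding or removing entries in `layers` changes the depth, mirroring how the number of layers is varied to find the best-performing architecture.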
4 Results and Discussion The experimental results were obtained on a collection of datasets. The dataset consisted of various photographs that contained actions. The experiment was carried out in MATLAB R2017b, in which a complete deep belief network was developed; the computer vision toolbox in MATLAB was used to build the network. After the network was designed with its layers, the dataset was loaded to carry out the training. During training, the features of the images were extracted by the network. Then, a new photograph was supplied to the network to determine the activities in the photographs offered. Figure 4 presents the experimental results obtained when a new photograph was supplied. The photograph contained both activities and elements with no activity. The performance of the network was satisfactory, as the results in the figure show: the network managed to determine the activities within the photo and bounded each activity with a box to make it apparent to the onlooker.

Fig. 4 Efficiency of the model
5 Conclusion Rapid improvement in scene recognition and action detection has been achieved in the current technical era. Most applications rely on action detection and scene recognition, such as tracking the movements of animals in forest areas, and actions are also widely used in animation. In this work, we have proposed a unit that can recognize the activities in given pictures and distinguish them with bounding boxes. The classification was performed using deep belief networks, and the proposed design showed excellent precision. The DBNs extracted the features of the photographs in the training set and were then able to correctly classify the activities performed in new photographs supplied to the network. Other measures, such as effectiveness and the computational time of the task, were also considered and found to be better than those of conventional procedures. Future work can make use of additional deep-learning techniques and focus on the security issues of neural networks.
References
1. M. Pandey, S. Lazebnik, Scene recognition and weakly-supervised object localization with deformable part-based models (2011)
2. J.A. Pawlicki, M.A. Mcmahon, S.G. Chinn, J.S. Gibson, U.S. Patent No. 7,038,577. U.S. Patent and Trademark Office, Washington, DC
3. T.J. Castello, P.J. Tkacik, U.S. Patent No. 7,490,841. U.S. Patent and Trademark Office, Washington, DC (2009)
4. J. Akshya, P.L.K. Priyadarsini, A hybrid machine learning approach for classifying aerial images of flood-hit areas, in 2019 International Conference on Computational Intelligence in Data Science (ICCIDS) (IEEE, 2019), pp. 1–5
5. H. Nemmour, Y. Chibani, Multiple support vector machines for land cover change detection: an application for mapping urban extensions. ISPRS J. Photogramm. Remote Sens. 61(2), 125–133 (2006)
6. E. Osuna, R. Freund, F. Girosit, Training support vector machines: an application to face detection, in 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 130–136
7. C.L. Liu, C.H. Lee, P.M. Lin, A fall detection system using k-nearest neighbor classifier. Expert Syst. Appl. 37(10), 7174–7181 (2010)
8. C. Szegedy, A. Toshev, D. Erhan, Deep neural networks for object detection, in Advances in Neural Information Processing Systems, pp. 2553–2561
9. C.S. Shin, K.I. Kim, M.H. Park, H.J. Kim, Support vector machine-based text detection in digital video, in Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop, vol. 2 (IEEE, 2000), pp. 634–641
10. S. Gidaris, N. Komodakis, Object detection via a multi-region and semantic segmentation-aware CNN model, in Proceedings of the IEEE International Conference on Computer Vision, pp. 1134–1142 (2015)
11. J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: unified, real-time object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
12. L. Gómez, D. Karatzas, TextProposals: a text-specific selective search algorithm for word spotting in the wild. Pattern Recogn. 70, 60–74 (2017)
13. Z. Zhang, W. Shen, C. Yao, X. Bai, Symmetry-based text line detection in natural scenes, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2558–2567 (2015)
14. O. Ronneberger, P. Fischer, T. Brox, U-net: convolutional networks for biomedical image segmentation, in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, Cham, 2015), pp. 234–241
15. J.R. Uijlings, K.E. Van De Sande, T. Gevers, A.W. Smeulders, Selective search for object recognition. Int. J. Comput. Vision 104(2), 154–171 (2013)
16. A. Gupta, A. Vedaldi, A. Zisserman, Synthetic data for text localisation in natural images, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2315–2324 (2016)
17. G. Nagarajan, R.I. Minu, A.J. Devi, Optimal nonparametric Bayesian model-based multimodal BoVW creation using multilayer pLSA. Circuits Syst. Signal Process. 39(2), 1123–1132 (2020)
18. M. Liao, B. Shi, X. Bai, TextBoxes++: a single-shot oriented scene text detector. IEEE Trans. Image Process. 27(8), 3676–3690 (2018)
19. X. Rong, C. Yi, Y. Tian, Recognizing text-based traffic guide panels with cascaded localization network, in European Conference on Computer Vision (Springer, Cham, 2016), pp. 109–121
20. Y. Guo, Y. Sun, P. Bauer, J.P. Allebach, C.A. Bouman, Text line detection based on cost optimized local text line direction estimation, in Color Imaging XX: Displaying, Processing, Hardcopy, and Applications, vol. 9395 (International Society for Optics and Photonics, 2015), p. 939507
21. H. Lee, R. Grosse, R. Ranganath, A.Y. Ng, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, in Proceedings of the 26th Annual International Conference on Machine Learning (ACM, 2009), pp. 609–616
22. A. Christy, S. Vaithyasubramanian, V.A. Mary, J. Naveen Renold, Artificial intelligence based automatic decelerating vehicle control system to avoid misfortunes. Int. J. Adv. Trends Comput. Sci. Eng. 8(6), 3129–3134 (2019)
23. G.M. Gandhi, S.K. Srivatsa, An entropy algorithm to improve the performance and protection from denial-of-service attacks in NIDS, in 2009 Second International Conference on Computer and Electrical Engineering, Dubai, pp. 603–606 (2009)
24. S. Prayla Shyry, V.S.K. Charan, V.S. Kumar, Spam mail detection and prevention at server side, in 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), pp. 1–6
25. A. Jesudoss, N.P. Subramaniam, Enhanced Kerberos authentication for distributed environment. J. Theoret. Appl. Inf. Technol. 69(2), 368–374 (2014)
26. R. Joseph Manoj, M.D. Anto Praveena, K. Vijayakumar, An ACO–ANN based feature selection algorithm for big data. Cluster Comput. 22, 3953–3960 (2019)
27. M.S. Roobini, M. Lakshmi, Advancement of principal component judgement for classification and prediction of Alzheimer disease. Int. J. Recent Technol. Eng. 8(23). ISSN: 2277-3878
28. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theoret. Appl. Inf. Technol. 73(1)
Semantic-Based Duplicate Web Page Detection A. C. Santha Sheela and C. Jayakumar
Abstract Internet search has become a regular activity for billions of web users seeking information. Users typically rely on search engines for information retrieval, so it has become increasingly important for users to find the best results for their queries. There are nevertheless challenges in providing the most efficient, appropriate, and trustworthy results to the user in every web search environment. Numerous unwanted repetitive web pages escalate time complexity and indexing-space issues; hence, identifying and eliminating such pages becomes a necessity for the communities responsible for information retrieval and web mining. The main aim of web content mining is to identify duplicate web page content in web search engines. The literature shows the lack of a complete and time-efficient duplicate detection model for web search optimization. The present study has identified the need for an enhanced duplicate web page detection technique to improve web search. Increasing Web site usability and user satisfaction are among the most crucial factors in the web page detection scenario. Keywords Duplicate detection · Semantic web · Search engine · Information retrieval
1 Introduction
A. C. Santha Sheela (B), Research Scholar, Sathyabama Institute of Science and Technology, Chennai, India. e-mail: [email protected]
C. Jayakumar, Department of Computer Science and Engineering, Sri Venkateswara College of Engineering, Kancheepuram, India. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_56
Web mining transforms the World Wide Web into a more usable environment where users can access the information they need quickly and easily. The data, documents, and multimedia from the World Wide Web are found and analyzed. Web mining uses
541
542
A. C. Santha Sheela and C. Jayakumar
document content, the layout of hyperlinks, and statistics to help users fulfill their knowledge requirements. The sites and the search engines themselves provide document-relationship information. Web mining explores these relationships and is carried out in three fields that often overlap. The first is content mining: search engines describe content by keywords, find the keywords of a page's content, and connect the page content to a user's query. Hyperlinks contain information about which web documents are considered relevant to another document; such links extend a document's reach and capture the Web's multidimensionality. The second field of web mining is mining this link structure. Finally, there are links to other web documents found by previous searches; these relationships are recorded in search and access logs, and the third field of web mining is mining these logs. Understanding the user's role in web mining is also significant: the web pages returned in response to a query can be influenced by a review of the user's previous sessions, preferred view of information, and expressed preferences. The interdisciplinary nature of web mining covers fields including information retrieval, natural language processing, knowledge extraction, machine learning, databases, data mining, and data storage. Web mining techniques have practical uses in m-commerce, e-commerce, e-government, e-learning, and distance learning.
1.1 Types of Web Mining There are three types of web mining which are discussed below. • Web Structure Mining • Web Usage Mining • Web Content Mining.
1.2 Web Content Mining
Web content mining scans and extracts content from web documents: text, photographs, graphs, and images. It is often referred to as text mining. Web content mining uses two forms of approach: the database approach and the agent approach. The database approach helps in recovering semi-structured data from web records. The agent-based approach searches for information and assists in organizing the information obtained. Web content mining analyzes the content of web resources. Content data represent the information gathered on the web page for the purpose of transmitting it to users. Unstructured data make up the bulk of the information available on the Internet. The information retrieval view and the database view are two separate perspectives on web content mining. The main objective of content
mining is to enhance the filtering and discovery of information for users. The main objective of the database view is the management of site knowledge.
1.3 Web Content Mining Approaches
Web content mining includes the following approaches: unstructured mining, structured mining, semi-structured mining, and multimedia mining.
1.4 Web Content Mining Tools
Web content mining tools are applications used to access the critical details that users need; they gather information that best matches the user's requirements. Several of them are Web Info Extractor, Mozenda, Screen-Scraper, and Web Content Extractor.
1.5 Semantic-Based Web Page Duplication Detection
The Semantic Web is an extension of the current Web that offers well-defined knowledge, enhancing the interoperability of computers with human beings. With the Semantic Web concept, most tasks and decisions are left to machines. This can be achieved by adding expertise to the content of web pages in machine-understandable languages and by creating intelligent software agents that can process this information. In addition, since the Semantic Web is made up of structured data and explicit metadata, it allows easy access to semantic information and search power [1, 2]. Semantic search tools cover several variables such as search context, location, purpose, word variations and synonyms, and general versus specific queries. To produce appropriate search results, both definition matching and natural language queries are used. Big web search engines such as Google and Bing contain some semantic search components. Semantic analysis looks not only for the data but also for the logical meaning of the keyword data presented [3]. In comparison with other search algorithms, semantic search relies on the meaning, purpose, and definition of the searched keyword phrase: location, synonyms of a term, current trends, word variations, and other elements of natural language in the search [4]. The principle of semantic search is derived from various techniques, including concept mapping, graphs, and fuzzy logic.
2 Related Work
A method was developed to calculate the semantic similarity between words using a web search engine [5–7]. To find semantic similarities, snippets and page counts from the web search engine were used; the text snippets contain important word knowledge [8]. The proposed new similarity measure computes similarity by applying lexical patterns extracted from the snippets [9]. The authors were able to find a suitable example for a definitive relation between terms [10, 11]. The system incorporated a support vector machine with a page-count-based similarity score and lexico-syntactic patterns. Using the Miller-Charles data set, they evaluated the efficiency of the method as better than existing semantic similarity tests [12]. They prepared a WordNet training data set, and the system demonstrated improved mining accuracy [13]. A hybrid semantic-grammar approach has been suggested as a major tool for plagiarism detection in natural language [6]. The results of plagiarism detection are enhanced by detecting copied Web site content even when it has been rewritten or altered by interchanging a few words with words of similar meaning, which grammar-based detection alone does not recognize [14]. In this way, plagiarized parts can be found anywhere in the web document, including places that a purely semantic approach cannot detect [15, 16], and the resemblance between the documents is calculated. A semantic plagiarism detection technique was implemented based on a fuzzy membership function to quantify the degree of similarity [7, 17]. The system evolves in four key stages. First, it includes preprocessing and tokenization. The Jaccard coefficient and a shingling algorithm are then used to obtain the candidate list of documents for any suspect document. Next, the suspect document and the related candidate documents are compared in detail. A fuzzy similarity, which varies from 0 to 1, is calculated; 0 is for completely different phrases and 1 is for reproduced phrases.
The decision is made by comparing the measured fuzzy similarity to a threshold. At the end of the process, there is a post-processing step in which consecutive paragraphs are merged [18, 19]. The elimination of near duplicates ensures that bandwidth can be conserved, storage costs reduced, and search index efficiency improved [20, 21]. In addition, it also reduces the load on the remote hosts serving the web pages. Most of this is made possible by near-duplicate web page detection systems. One issue concerns the scale of the web: there are billions of web pages in the search engine index, contributing to a multi-terabyte database. The second problem is the crawler's ability to sweep thousands of Web sites. To demonstrate its consistency on the Semantic Web, [13, 22] have examined the applicability of machine learning to the Semantic Web. They discussed similarity and distance methods, kernel machines, multivariate prediction models, relational graphical models, and probabilistic first-order approaches to statistical mining of the Semantic Web.
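The shingling and Jaccard-coefficient stage described above can be sketched in a few lines (a minimal illustration with our own function names and a word-level shingle size of k = 3; it is not the cited authors' implementation):

```python
def shingles(text, k=3):
    # k-word shingles: the set of all contiguous k-word sequences in a document
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

def jaccard(a, b):
    # Jaccard coefficient |A intersect B| / |A union B| between two shingle sets
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc = "the quick brown fox jumps over the lazy dog"
copy = "the quick brown fox jumps over the lazy dog"
other = "completely unrelated words appear in this second sentence here"
```

Identical documents score 1.0 and disjoint documents score 0.0, so candidate duplicates can be shortlisted by thresholding this score before the more detailed comparison.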
3 Proposed Work
Web page preprocessing involves (a) tagging, (b) lemmatization, and (c) similarity testing. Tagging is the method of labeling each term in the text of the web page. Tokenization splits sentences into tokens and throws away punctuation and other unwanted characters. WordNet is used to search for connections between two tokens. Lemmatization is a basic language-processing technique that completes the morphological analysis and identifies the base or dictionary form of a word, called the lemma. The similarity check compares the content of two web pages, and each pair of terms is then assigned a similarity score. Most semantic similarity tests use WordNet to evaluate the similarity between words. Knowledge can be obtained over the web only through search, and as this knowledge is plentiful, the reliability of the search engine becomes a major problem. The user expends tremendous effort and time filtering the results of the search engines, especially when they involve near-duplicate data. To address this problem, a near-duplicate detection system was implemented on the following hardware: Windows 7, an Intel Pentium(R) G2020 CPU, and a processor speed of 2.90 GHz. Experiments were conducted with these settings. The required software configuration is as follows: operating system Windows 7, front end ML, back end MySQL. Datasets were gathered from web content in online sources and built in the ML framework. The entire data was kept in the database.
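A minimal sketch of this preprocessing pipeline (the stop-word list, the crude suffix-stripping rules standing in for a full lemmatizer, and the function name are illustrative assumptions, not the chapter's actual configuration):

```python
import re

STOP_WORDS = {"the", "is", "a", "an", "of", "and", "to", "in", "are"}  # illustrative subset

def preprocess(text):
    # tokenize on letters only, dropping punctuation and other unwanted characters
    tokens = re.findall(r"[a-z]+", text.lower())
    # stop-word removal: discard terms that do not affect the final outcome
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # crude suffix stripping as a stand-in for lemmatization
    stemmed = []
    for t in tokens:
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed
```

For example, `preprocess("The pages are matching!")` reduces to `["page", "match"]`, the normalized terms that would feed the similarity check.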
4 Result and Discussion
The input data is stored in MySQL, and the output results are also retrieved from MySQL (Fig. 1).
A. Preprocessing
Preprocessing of the web page material means that the crawled web document is preprocessed. The preprocessing involves actions such as stemming, stop-word removal, and tokenization. Stemming compares the root forms of the words in the documents against its database. Stop-word removal is the act of discarding terms that do not affect the final outcome. Tokenization is defined as dividing text into small, well-defined units.
B. Semantic similarity
A semantic similarity measure evaluates the similarity between words, sentences, texts, concepts, or instances. The goal of the semantic similarity measure between two documents is to find the degree to which conceptually similar words are related. In WordNet, a synset represents the exact meaning of a word; under one part of speech (POS), the precise meaning of a single word is called a sense. The words in each sentence are aligned, and the most suitable sense is decided for each.
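The WordNet-style similarity computation can be illustrated with a toy is-a taxonomy and the classical path measure, 1/(1 + shortest-path length); the hierarchy below is a made-up stand-in for WordNet, not actual WordNet data:

```python
# hypothetical child -> parent (hypernym) links standing in for WordNet
HYPERNYMS = {"car": "vehicle", "truck": "vehicle", "vehicle": "entity",
             "dog": "animal", "cat": "animal", "animal": "entity"}

def ancestors(word):
    # the is-a chain word -> ... -> root, including the word itself
    chain = [word]
    while chain[-1] in HYPERNYMS:
        chain.append(HYPERNYMS[chain[-1]])
    return chain

def path_similarity(w1, w2):
    # 1 / (1 + length of the shortest is-a path between the two senses)
    a1, a2 = ancestors(w1), ancestors(w2)
    dists = [i + a2.index(n) for i, n in enumerate(a1) if n in a2]
    return 1.0 / (1 + min(dists)) if dists else 0.0
```

An identical pair scores 1.0, siblings such as "car" and "truck" score higher than more distant pairs such as "car" and "dog", matching the intuition behind WordNet-based measures.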
Fig. 1 Input dataset selection (web page content)
Fig. 2 Test data selection (relevant web page content)
Figure 2 shows the test data of web page content (relevant web page content): the test dataset folders are selected and the web page content is displayed (Fig. 3).
Fig. 3 Output results
5 Conclusion
The efficiency of a web search is affected by several factors. Several steps have been taken to automate the site search, and work is still under way. However, the performance of these web search approaches depends on the amount of data available through the Internet. Almost every Web site has a duplicate-content issue: duplicate content is two or more similar pieces of material whose only difference is the URL. The existence of duplicate content degrades web search efficiency when heterogeneous information is combined, and these web pages either increase the storage space of the index or increase the service costs. The output shows that the proposed semantic-based search performs better than non-semantic search engines and generates efficient output.
References 1. C. Preethi, K.M. Prasad, Analysis of vehicle activities and live streaming using IOT, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0754–0757 2. P. Paul, R.G. Franklin, Fragmenting the data in cloud for enhancing security and performance. Res. J. Pharmaceut. Biol. Chem. Sci. 7(3), 349–355 (2016)
3. G. Nagarajan, K.K. Thyagharajan, A machine learning technique for semantic search engine. Procedia Eng. 38, 2164–2171 (2012) 4. M.P. Selvan, A. Gupta, A. Mukherjee, Give attention to overlapping network detection in networks for multimedia. J. Comput. Theor. Nanosci. 16(8), 3173–3177 (2019) 5. B. Danushka, M. Yutaka, I. Mitsur, A web search engine-based approach to measure semantic similarity between words. IEEE Trans. Knowl. Data Eng. 23(7) (2011) 6. H. Ahangarbahan, G.A. Montazer, A mixed fuzzy similarity approach to detect plagiarism in Persian texts, in International Work- Conference on Artificial Neural Networks (Springer, pp. 525–534) (2015) 7. R. Erhard, A. Philip, Bernstein. A survey of approaches to automatic schema matching. VLDB J. 10(4), 334–350 (2001) 8. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore), pp. 37–48 9. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNN’S, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0592–0595 10. A. Velmurugan, T. Ravi, Optimal symptom diagnosis for efficient disease identification using somars approach. J. Comput. Theor. Nanosci. 14(2), 1157–1162 (2017) 11. A. Praveena, M.K. Eriki, D.T. Enjam, Implementation of smart attendance monitoring using Open-CV and Python. J. Comput. Theor. Nanosci. 16(8), 3290–3295 (2019) 12. B.K. Samhitha, S.C. Mana, J. Jose, M. Mohith, L. Siva Chandhrahasa Reddy, An efficient implementation of a method to detect sybil attacks in vehicular ad hoc networks using received signal strength indicator. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 9(1) (2019). ISSN: 2278-3075 13. B.S. Alsulami, M.F. Abulkhair, F.E. Eassa, Near duplicate document detection survey. Int. J. Comput. Sci. Commun. Netw. 2(2), 147–151 (2010) 14. L.M. 
Gladence, T. Ravi, Heart disease prediction and treatment suggestion. Res. J. Pharmaceut. Biol. Chem. Sci. 7(2), 1274–1279 (2016) 15. T.S. Kala, A. Christy, A pattern matching algorithm for reducing false positive in signature based intrusion detection system. Int. J. Eng. Technol. 8(2), 580–586 (2016) 16. R. Jayashree, A. Christy, Improving the enhanced recommended system using Bayesian approximation method and normalized discounted cumulative gain. Procedia Comput. Sci. 50, 216–222 (2015) 17. A.Z. Broder, Identifying and filtering near-duplicate documents, in CPM 2000. LNCS, vol. 1848, ed. by R. Giancarlo, D. Sankoff 18. N. Srinivasan, C. Lakshmi, Stock price prediction using fuzzy time-series population based gravity search algorithm. Int. J. Soft. Innov. (IJSI) 7(2), 50–64 (2019) 19. V.V. Kaveri, V. Maheswari, A model based resource recommender system on social tagging data. Indian J. Sci. Technol. 9, 25 (2016) 20. J. Prasanna Kumar, P. Govindarajulu, Duplicate and near duplicate documents detection: a review. Eur. J. Sci. Res. 32(4), 514–527 (2009) 21. V.A. Narayana, P. Premchand, A. Govardhan, Effective detection of near-duplicate web documents in web crawling. Int. J. Comput. Intell. Res. 5(1), 83–96 (2009) 22. A. Rettinger et al., Mining the semantic web statistical learning for next generation knowledge bases. Data Min. Knowl. Disc 24, 613–662 (2012)
An Integrated and Dynamic Commuter Flow Forecasting System for Railways Y. Bevish Jinila, D. Goutham Reddy, and G. Yaswanth Reddy
Abstract Uncertain and unstable passenger flow in urban metro transit is a growing concern in the modern rail transport system. It is vital to forecast the passenger flow in order to provide reliable daily operation and management, and short-term forecasting has become the most important component of an efficient rail management system. Existing literature on passenger flow forecasting is based on extreme kernel approaches that learn and forecast signals with different frequencies. These approaches are not able to train and remember over a long time due to backpropagated error. To address this problem, the Holt-Winters forecasting algorithm is used. The experimental discussion shows that the Holt-Winters algorithm provides better efficiency based on metrics including accuracy and F-measure. Keywords Rail transit · Forecasting · Short term · Passenger flow
1 Introduction
In developing cities, metro electric rail transport is one of the most important parts of the urban infrastructure. It is the most commonly used mode of transportation in metro cities [1]. It helps in reducing air pollution, as most of these transport systems use environment-friendly fuel sources, and it also reduces traffic congestion. It is important to predict the passenger flow correctly to avoid wasting resources. In addition, this data can be used by commuters to schedule their travel precisely and on time. Further, improving the operations of the existing rail system can reduce energy consumption and CO2 emissions; energy consumption can be reduced by efficiently planning the number of trips scheduled, lighting, and ventilation. The travel forecast in a metropolitan area has been modeled and studied [2]; the inference of this work presented the total travel demand of a particular region.
Y. B. Jinila (B) · D. G. Reddy · G. Y. Reddy, Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India. e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_57
The metro transportation systems of several cities in India are doing well and are placing more and more resources into passenger safety and comfort. However, these systems often allocate resources uniformly, deploying more resources in places that require fewer; this results in an oversupply of resources and manpower where they are not necessary. The timing of the trains in the metro transport services does not follow the passenger crowd, so trains on each route often run with few passengers, which results in operating losses or barely covers the operational costs. Another key issue is that service frequency is not matched to demand: some routes are served more frequently during less busy periods, which leads to exactly the situation addressed above. Low investment in research and development means sticking with older equipment for an extended period of time, and older equipment is not compatible with the modern needs of the passengers [3]. The emission of pollution from older vehicles is higher than that of modern vehicles, resulting in gases with higher carbon levels; this is quite a common problem in developing nations [4]. This problem can be addressed through the use of modern technology: there are a number of useful, low-cost ways to arrange this system to gain more profit through the use of technology in the service sector. A variety of methods are used for forecasting the passenger flow in a rail transit system, namely neural network models such as the Support Vector Machine (SVM), Convolutional Neural Network (CNN), graph CNN, and Recurrent Neural Network (RNN), along with hybrid models, time series models, frequency-based models, and elasticity-based models [5]. The most recent existing method is a hybrid model that combines the wavelet transform and machine learning.
The idea behind this model is to decompose the passenger flow datasets into lower- and higher-frequency sequences and then learn and forecast the different frequencies using a machine learning method. Finally, the different prediction sequences are reconstructed using the wavelet transform [6, 7]. Due to backpropagation error, such models are unable to train on and memorize data over a long time. To address this issue, the Holt-Winters forecasting model is proposed [8]. The inputs of the model are abnormal features, which consist of the recent time series data. Section 2 describes the related work. Section 3 elaborates on the proposed methodology for forecasting the passenger flow. Section 4 presents the performance analysis and discussion. Section 5 concludes the work and briefs on future enhancements [9].
2 Related Work
This section briefly reviews the related literature on the prediction of passenger flow in rail transport systems [10, 11]. In metro train systems, resources such as fuel are managed along the train routes, and fuel is replenished at refilling stations according to the distance traveled [12]; related work has noted how to conserve this resource and how to locate nearby stations and refilling stations [13]. These ideas are used to support traffic flow prediction [14]. Improvements are made to traffic networks based on the volume of traffic, and routes are compared against destinations and passenger datasets [15]. Traffic flow prediction can be achieved with a seasonal ARIMA process, a model that requires only limited input data [16]. Traffic flow on the rails can also be predicted by combining Kohonen maps with ARIMA [17, 18]; this approach uses time series models to forecast the traffic flow and reach destinations with minimum time delays [19]. Resources should also be utilized across different processes, and the requirements can be modeled through the impact of resource sharing on supply chain efficiency [20]. A hybrid optimization of computational intelligence techniques for highway passenger flow or concentration prediction has been conducted and recorded [21]. A neural network based on radial basis functions is used for freeway traffic volume forecasting [22]. Traffic flow is also studied in [23, 14, 25].
3 Proposed Methodology
The methodology used for the prediction is based on the Holt-Winters time series forecasting model and SARIMA. Machine learning is used to study the relevant datasets, which are then used to make predictions about future data. The statistical representation is given through several graph structures, as shown in Fig. 1. The immediate requirement is to write code that analyzes the data and implements the required prediction. The representation relies on both short- and long-term memory of the series: the basic idea is to analyze the weekly data, and the data for special days, over a long history. That means not only regular-day schedules are used but also the data recorded during previous events.
Fig. 1 Proposed system architecture
3.1 Holt-Winters Forecasting Model
The Holt-Winters forecasting model is a time series approach that helps predict behavior over time. The model consists of three components, namely the average (level), the slope (trend), and the seasonality. It works by computing a central value and then adding the slope and seasonality components to it.
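The level/trend/seasonality computation can be sketched with the standard additive Holt-Winters recurrences (a pure-Python illustration under our own initialization and naming; the smoothing constants alpha, beta, and gamma would be tuned in practice, not fixed as shown):

```python
def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    # initialize level, trend, and seasonal indices from the first two seasons
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    seasonal = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        prev_level = level
        # level: smoothed, deseasonalized observation vs. previous level + trend
        level = alpha * (y[t] - seasonal[t % m]) + (1 - alpha) * (level + trend)
        # trend: smoothed change in level
        trend = beta * (level - prev_level) + (1 - beta) * trend
        # seasonality: smoothed deviation of the observation from the level
        seasonal[t % m] = gamma * (y[t] - level) + (1 - gamma) * seasonal[t % m]
    # forecast = central value + h * slope + the matching seasonal index
    return [level + (h + 1) * trend + seasonal[(len(y) + h) % m]
            for h in range(horizon)]
```

On a flat series the forecast reproduces the constant, while on real passenger counts the seasonal indices pick up the repeating weekly pattern.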
3.2 Seasonal Autoregressive Integrated Moving Average (SARIMA)
The SARIMA model adds three seasonal hyperparameters, namely seasonal autoregression, seasonal differencing, and seasonal moving average, plus an additional parameter called the period of seasonality. SARIMA is used in Python in three steps:
1. Define the model.
2. Fit the defined model.
3. Make the prediction with the fitted model.
The following topics are covered while executing this process:
1. Stationarity (differencing and the augmented Dickey-Fuller test)
2. ACF and PACF plots
3. Grid search and the Akaike information criterion (AIC)
4. Walk-forward validation
5. Mean absolute percentage error (MAPE)
6. Exogenous variables
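Of these, the seasonal differencing step (the seasonal differencing parameter) is easy to sketch directly; in practice a library such as statsmodels wraps the define/fit/predict cycle, but the differencing itself is just (illustrative function and variable names, not the chapter's code):

```python
def seasonal_difference(y, m):
    # seasonal differencing of period m: y'_t = y_t - y_{t-m}
    return [y[t] - y[t - m] for t in range(m, len(y))]

# a series with a +1 trend per step plus a repeating period-4 pattern
season = [3, 0, -2, 1]
y = [t + season[t % 4] for t in range(12)]
# after seasonal differencing, only the constant trend contribution remains,
# i.e. the series becomes stationary
```

Here `seasonal_difference(y, 4)` returns a constant series, confirming stationarity before the autoregressive and moving-average terms are fitted.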
In the process outlined above, SARIMA is executed step by step, and each of the steps mentioned is responsible for part of the end result. The effectiveness of SARIMA is measured using the mean absolute percentage error. The results are plotted on a graph to check whether the output of the process is accurate enough to proceed to the next procedures in the prediction pipeline.
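The MAPE used to measure effectiveness is straightforward to compute (a sketch; it assumes no actual value is zero):

```python
def mape(actual, predicted):
    # mean absolute percentage error, reported in percent
    assert len(actual) == len(predicted)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

Two forecasts that are each off by 10% of the actual value give a MAPE of 10%, so lower values indicate a better fit.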
4 Experimental Discussion
This section shows the experimental results obtained from the implementation of the forecasting algorithm. To implement the approach, a traffic flow dataset is taken from Kaggle. Figure 2 shows the statistical figure that represents
Fig. 2 Month versus passenger count
the number of passengers getting in and out of the train at the destinations over the entire year. It clearly shows the usage by railway passengers over a period of time. The data is analyzed with the Holt time series algorithm. Figure 3 shows the graphical representation of the data distribution. Figure 4 shows the results obtained after prediction using the rolling mean and standard deviation. The autocorrelation and partial autocorrelation are shown in Fig. 5.
Fig. 3 Graphical representation of the datasets with a linear baseline using the Holt algorithm
Fig. 4 Prediction results of rolling mean and standard deviation
5 Conclusion
The urban rail transport system is important in the construction of urban transport infrastructure, as rail is the most preferred method of transport over short distances. It is important to predict the passenger flow correctly because on some days passenger flow is very low, such as during festivals or public holidays. In this paper, a time series-based forecasting algorithm is suggested. The experimental results show that a better forecast can be made prior to the travel.
Fig. 5 Representation of ACF (a) and PACF (b)
References 1. W. Jianjun, S. Huijun, G. Ziyou, H. Linghui, S. Bingfeng, Risk-based stochastic equilibrium assignment model in augmented urban railway network. J. Adv. Transp. 48(4), 332–347 (2014)
2. J. Mwakalonge, D. Badoe, Trip generation modeling using data collected in single and repeated cross-sectional surveys. J. Adv. Transp. 48(4), 318–331 (2014) 3. E. Brumancia, S.J. Samuel, L.M. Gladence, K. Rathan, Hybrid data fusion model for restricted information using Dempster-Shafer and adaptive neuro-fuzzy inference (DSANFI) system. Soft. Comput. 23(8), 2637–2644 (2019) 4. T.S. Kala, A. Christy, A pattern matching algorithm for reducing false positive in signature based intrusion detection system. Int. J. Eng. Technol. 8(2), 580–586 (2016) 5. A. Andreoni, M. Postorino, A multivariate ARIMA model to forecast air transport demand, in Proceedings of the Association for European Transport and Contributors, pp. 1–14 (2006) 6. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015) 7. M.P. Selvan, A. Gupta, A. Mukherjee, Give attention to overlapping network detection in networks for multimedia. J. Comput. Theoret. Nanosci. 16(8), 3173–3177 (2019) 8. S. Prayla Shyry, V.S.K. Charan, V.S. Kumar, Spam mail detection and prevention at server side, in 2019 Innovations in Power and Advanced Computing Technologies (i-PACT), pp. 1–6 9. R.V. Kulkarni, S.H. Patil, R. Subhashini, An overview of learning in data streams with label scarcity, in Proceedings of the International Conference on Inventive Computation Technologies, ICICT 2016 (2017) 10. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48 11. K.S. Varun, I. Puneeth, T.P. Jacob, Hand gesture recognition and implementation for disables using CNNs, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0592–0595 12. D. Gong, M. Tang, S. Liu, G. Xue, L.
Wang, Achieving sustainable transport through resource scheduling: a case study for electric vehicle charging stations. Adv. Prod. Eng. Manage. 14(1), 65–79 (2019) 13. M. Lippi, M. Bertini, P. Frasconi, Short-term traffic flow forecasting: an experimental comparison of time-series analysis and supervised learning. IEEE Trans. Intell. Transp. Syst. 14(2), 871–882 (2013) 14. E.I. Vlahogianni, J.C. Golias, M.G. Karlaftis, Short-term traffic forecasting: overview of objectives and methods. Transp. Rev. 24(5), 533–557 (2004) 15. M.M. Hamed, H.R. Al-Masaeid, Z.M.B. Said, Short-term prediction of traffic volume in urban arterials. J. Transp. Eng. 121(3), 249–254 (1995) 16. S.V. Kumar, L. Vanajakshi, Short-term traffic flow prediction using seasonal Arima model with limited input data. Eur. Transp. Res. Rev. 7(3), 1–9 (2015) 17. C. Preethi, K.M. Prasad, Analysis of vehicle activities and live streaming using IoT, in 2019 International Conference on Communication and Signal Processing (ICCSP) (IEEE, 2019), pp. 0754–0757 18. G.K. Krishnan, R.G. Franklin, Privacy and auditing bigdata stored in cloud with verify update. Res. J. Pharmaceut. Biol. Chem. Sci. 7(3), 742–750 (2016) 19. M. Van Der Voort, M. Dougherty, S. Watson, Combining Kohonen maps with Arima time series models to forecast traffic flow. Transp. Res. C, Emerg. Technol. 4(5), 307–318 (1996) 20. D. Gong, S. Liu, X. Lu, Modelling the impacts of resource sharing on supply chain efficiency. Int. J. Simul. Model.14(4), 744–755 (2015) 21. S. Lee, D.B. Fambro, Application of subset autoregressive integrated moving average model for short-term freeway traffic volume forecasting. Transp. Res. Rec. 1678(1), 179–188 (1999) 22. B. Williams, Multivariate Vehicular traffic flow prediction: evaluation of Arimax modeling. Transp Res. Rec. J. Transp. Res. Board 1776(2), 194–200 (2001) 23. S.H. Sutar, Y.B. Jinila, Congestion control for better performance of Wsn based IoT ecosystem using Kha mechanism. Int. J. Recent Technol. Eng. 
8(2s3) (2019) 25. Y. Bevish Jinila, K. Komathy, Rough set based fuzzy scheme for clustering and cluster head selection in VANET. Elektronika Ir Elektrotechnika 21(1), 54–59 (2015). ISSN 1392–1215
Attribute-Based Data Management in Crypt Cloud Eswar Sai Yarlagadda and N. Srinivasan
Abstract Data owners store their data in the public cloud in encrypted form, together with a specific set of attributes that governs access control over the data. While uploading the data to the public cloud, they assign an attribute set to it. If an authorized cloud user wants to download the data, he must present that particular attribute set to perform further operations on the owner's data. A cloud user must register his details with the cloud organization to access the owner's data; users submit their details as attributes together with their designation. Based on these user details, a semi-trusted authority generates decryption keys that grant access to the owner's data. A user can perform many operations on the cloud data: to read the data he must present read-related attributes, and to write the data he must present write-related attributes. For every operation, each user in the organization is checked against his unique attribute set. These attributes are shared by the administrators with the authorized users in the cloud organization and are stored in policy files in the cloud. If any user leaks his unique decryption key to a malicious user, the data owner can trace the leak by sending an audit request to the auditor, who processes the request and determines who is guilty. Keywords Confidentiality · Integrity · Forward-backward access control · Semi-trusted authority · Crypt cloud
E. S. Yarlagadda (B) · N. Srinivasan Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] N. Srinivasan e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_58
1 Introduction The cloud is a collection of hardware, networks, storage, services, and interfaces that enable the delivery of computing as a service. Cloud services cover the delivery of software, infrastructure, and storage over the Internet based on user demand. Because of this, data security becomes a significant concern for users when they store sensitive data on cloud servers. These concerns stem from the fact that cloud servers are often operated by commercial providers that are likely to be outside the trusted domain of the users. Data confidentiality against cloud servers is therefore frequently desired when users outsource data for storage in the cloud. There are also cases in which cloud users themselves are content providers: they publish data on cloud servers for sharing and need fine-grained data access control over which user has the access privilege to which types of data [1]. To keep sensitive user data confidential against untrusted servers, existing solutions typically apply cryptographic methods, disclosing data decryption keys only to authorized users [2, 3]. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well [4]. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is an effective answer for guaranteeing the confidentiality of data and providing fine-grained access control here. In a CP-ABE-based cloud storage system, organizations and individuals can first specify an access policy over attributes of a potential cloud user. Authorized cloud users are then granted access credentials corresponding to their attribute sets, which can be used to gain access to the outsourced data.
CP-ABE offers a reliable method to protect data stored in the cloud, and it also enables fine-grained access control over that data. Any data stored in the cloud, if leaked, could result in a range of consequences for the organization and individuals [5]. The existing CP-ABE-based [6–9] scheme enables us to prevent a security breach by an outside attacker, and also by an insider of the organization who commits the "crimes" of redistributing decryption rights and circulating user data in plain form for illicit financial gain. At the same time, it can also guarantee that the semi-trusted authority will not distribute the generated access credentials to others, by proposing CryptCloud+, an accountable-authority and revocable CP-ABE-based cloud storage system. However, one difficult issue in handling user revocation in cloud storage is that a revoked user may still have the ability to decrypt an old ciphertext they were authorized to access before being revoked [3]. To address this issue, the ciphertext stored in the cloud should be updated, ideally by the cloud server. Furthermore, timed data access control is needed to provide a higher degree of security. This paper proposes Cloud++, an extended model of CryptCloud+ that provides timed data access control. In addition, it adds a multiple-auditing and analysis scheme, and removes one of the two revocable systems, reducing them to one explicit
revocable system. We have incorporated Shengmin et al.'s [10] time-encoding mechanism into the revocable system to provide timed data access control. This paper extends the work in [6] as follows. (1) We address a weakness in the auditing approach in [6]. Specifically, the auditing system will fail when the auditor, assumed to be fully trusted, occasionally fails to be honest. As a countermeasure, we change the auditing technique and add multiple auditors to review and compare results. (2) We improve the utility of the ATER-CP-ABE construction in [6]. This ATER-CP-ABE improvement enables us to effectively revoke malicious users explicitly [11, 12].
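The timed data access control described above can be illustrated with a minimal sketch (an illustrative assumption, not the paper's actual time-encoding construction): each issued credential carries a validity window, and access is refused outside it.

```python
import time

# Hypothetical credential record: an attribute set plus a validity window
# (start/end as UNIX timestamps). Illustrates timed access control only;
# it is not the CP-ABE time-encoding mechanism itself.
def make_credential(user_id, attributes, valid_from, valid_until):
    return {"user": user_id, "attributes": set(attributes),
            "valid_from": valid_from, "valid_until": valid_until}

def may_decrypt(credential, required_attributes, now=None):
    """Grant access only if the attribute set satisfies the policy
    AND the current time lies inside the credential's validity window."""
    now = time.time() if now is None else now
    in_window = credential["valid_from"] <= now <= credential["valid_until"]
    satisfies = set(required_attributes) <= credential["attributes"]
    return in_window and satisfies

cred = make_credential("alice", {"dept:finance", "role:senior"}, 1000, 2000)
print(may_decrypt(cred, {"dept:finance"}, now=1500))  # inside window -> True
print(may_decrypt(cred, {"dept:finance"}, now=3000))  # expired -> False
```

A revoked or expired user thus fails the time check even if the attribute set still matches, which is the behavior the timed access control aims for.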
2 Related Work Zheng et al. [6] proposed a system which improves an existing MA-ABE scheme to handle efficient and on-demand user revocation, and proved its security. Sun et al. [10] proposed the first attribute-based keyword search technique with user revocation (ABKS-UR) that enables fine-grained (e.g., file-level) search authorization. Ren et al. [13] proposed a novel framework for access control to PHRs within the cloud computing environment; to enable fine-grained and scalable access control for PHRs, they utilized attribute-based encryption (ABE) techniques to encrypt every patient's PHR data. Goyal et al. [14] developed a new cryptosystem for fine-grained sharing of encrypted data that they call Key-Policy Attribute-Based Encryption (KP-ABE). Okamoto et al. [15] proposed a new methodology on bilinear pairings using the notion of dual pairing vector spaces, and also presented a fully secure hierarchical PE scheme under an assumption whose size does not depend on the number of queries. Cao et al. [16] proposed two novel solutions for APKS based on a hierarchical HPE, one with enhanced efficiency and the other with improved query privacy. Lou et al. [17] developed another cryptosystem for fine-grained sharing of encrypted data. Ning et al. [18] proposed a multi-authority ciphertext-policy ABE scheme with accountability, which permits tracing the identity of a misbehaving user who leaked the decryption key to others, and thereby reduces the trust assumptions on the authorities as well as the users. Shengmin et al. [4] propose a notion called auditable σ-time outsourced CP-ABE, which is believed to be appropriate for cloud computing. Here, the heavy pairing operation incurred by decryption is offloaded to the cloud, and the correctness of the task can then be audited effectively. Mazhar Ali et al.
[19] gave an expressive, efficient, and revocable data access control scheme for multi-authority cloud storage systems, where multiple authorities coexist and each authority can issue attributes independently. Specifically, they proposed a revocable multi-authority CP-ABE scheme and applied it as the underlying technique to design the data access control scheme [20–24].
3 Existing System In the existing framework, CP-ABE may help us prevent security breaches by outside attackers. However, when an insider of the organization is suspected of committing the "crimes" of redistributing decryption rights and circulating user data in plain form for illicit financial gain, how can we conclusively establish that the insider is guilty? Is it also feasible for us to revoke the compromised access privileges? Beyond these questions, we have one more, related to the key generation authority [25, 26]. A cloud user's access credential is typically issued by a semi-trusted authority based on the attributes the user possesses. How can we guarantee that this particular authority will not (re-)distribute the generated access credentials to others?
4 Proposed System Here, we have addressed the challenge of credential leakage in CP-ABE-based cloud storage systems by designing an accountable-authority and revocable crypt cloud which supports white-box traceability and auditing. This is the first CP-ABE-based cloud storage system that simultaneously supports white-box traceability, accountable authority, auditing, and effective revocation. In particular, CryptCloud+ allows us to trace and revoke malicious cloud users. Our approach can also be used in the case where the users' credentials are redistributed by the semi-trusted authority (Fig. 1).
5 Module Description 5.1 Organization Profile Creation and Key Generation The user completes an initial registration process at the web end, providing personal information, which the server stores in its database. The accountable STA (semi-trusted authority) then generates decryption keys for users based on their attribute set (e.g., name, mail-id, contact number, etc.). After receiving the decryption keys from the accountable STA, the user obtains the privilege to access the organization's data.
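The key-generation step can be sketched as follows (illustrative only; the key-derivation scheme and names are assumptions, not the actual STA construction): the semi-trusted authority derives a user's decryption key deterministically from its master secret and the user's canonicalized attribute set.

```python
import hashlib, hmac

MASTER_SECRET = b"sta-master-secret"  # held only by the semi-trusted authority

def issue_decryption_key(attributes):
    """Derive a per-user key from the canonicalized attribute set.
    Sorting makes the derivation independent of attribute order."""
    canonical = "|".join(sorted(attributes)).encode()
    return hmac.new(MASTER_SECRET, canonical, hashlib.sha256).hexdigest()

alice = issue_decryption_key({"name:alice", "mail:a@x.com", "phone:123"})
same  = issue_decryption_key({"phone:123", "name:alice", "mail:a@x.com"})
print(alice == same)  # attribute order does not matter -> True
```

Deterministic derivation means the STA need not store per-user keys, at the cost that any change to the attribute set produces a different key.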
Fig. 1 Overview of the proposed system (the STA generates attribute-based decryption keys; data owners retain full rights to edit and delete their data, encrypt files, upload them to the public cloud, and create policy files and file permission keys; cloud users register, log in, and enter decryption keys; key leakage is reported to the data owners and the offending account is blocked, while true users may read, write, download, and delete files)
5.2 Data Owners File Upload Here, data owners create their records in the public cloud and upload their data into it. While uploading the files, data owners encrypt their data using the RSA encryption algorithm and generate a public key and a secret key. They also generate one unique file access permission key for the users in the organization to access their data.
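The encrypt-before-upload step can be illustrated with textbook RSA (tiny primes, no padding; for exposition only, never for real use, and not the paper's actual parameters):

```python
# Toy textbook RSA with tiny primes -- illustration only, insecure.
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p=61, q=53, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    d = egcd(e, phi)[1] % phi          # modular inverse of e
    return (e, n), (d, n)              # public key, secret key

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, sec):
    d, n = sec
    return pow(c, d, n)

pub, sec = make_keys()
blocks = [ord(ch) for ch in "data"]            # encode file bytes as integers < n
cipher = [encrypt(b, pub) for b in blocks]     # owner encrypts before upload
plain  = "".join(chr(decrypt(c, sec)) for c in cipher)
print(plain)  # -> data
```

Only the ciphertext blocks would be uploaded to the public cloud; the secret key stays with the data owner.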
5.3 File Permission and Policy File Creation Different data owners generate different file permission keys and issue those keys to users to access their files. They also generate policy files over their data for those who want to access it. The policy file splits the key into separate keys to read the file, write the file, download the file, and delete the file.
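The idea of splitting one file key into per-operation keys can be sketched as follows (the derivation scheme is an assumption for illustration, not the paper's construction):

```python
import hashlib, hmac, secrets

OPERATIONS = ("read", "write", "download", "delete")

def create_policy_file(file_id):
    """Owner generates a master file key, then derives one sub-key per
    operation; each user receives only the sub-keys for the operations
    the owner grants."""
    master = secrets.token_bytes(32)
    return {op: hmac.new(master, f"{file_id}:{op}".encode(),
                         hashlib.sha256).hexdigest()
            for op in OPERATIONS}

def check_permission(policy, op, presented_key):
    # Constant-time comparison of the presented sub-key.
    return hmac.compare_digest(policy.get(op, ""), presented_key)

policy = create_policy_file("report.pdf")
reader_keys = {"read": policy["read"]}               # a read-only user
print(check_permission(policy, "read", reader_keys["read"]))    # True
print(check_permission(policy, "delete", reader_keys["read"]))  # False
```

Holding the read sub-key reveals nothing about the write, download, or delete sub-keys, which is exactly the separation the policy file is meant to provide.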
5.4 Tracing Who Is Blameworthy Authorized DUs can access the outsourced data. Here, file permission keys are given to the employees in the organization based on their experience and position. Senior employees have all the permissions to access the files; freshers have only the permission to read them. Some employees have permission to read and write, and some have all permissions except deleting the data. If any senior employee leaks or shares his secret permission keys with junior employees, the juniors will request to download or delete the data owner's data. When the key is entered, the system generates the attribute set for their role in order to verify that the user has the right to access the data. If the attribute set does not match the data owner's policy files, the user is declared guilty, and on inquiry we can find out who leaked the key to the junior employees.
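The tracing check can be sketched as a simple audit (names and record shapes are illustrative assumptions): each issued key is registered together with the attribute set of its legitimate holder, and a mismatch at request time points back to the registered holder as the leaker.

```python
# Illustrative audit table: each issued permission key is registered with
# the attribute set of its legitimate holder.
ISSUED = {
    "key-42": {"role:senior", "dept:sales"},   # issued to a senior employee
}

def audit(presented_key, presented_attributes):
    """Flag a leak when a key is presented with attributes that do not
    match the registration; the registered holder is then the suspect."""
    registered = ISSUED.get(presented_key)
    if registered is None:
        return "unknown key"
    if set(presented_attributes) == registered:
        return "legitimate"
    return f"leak: key was issued to holder of {sorted(registered)}"

print(audit("key-42", {"role:senior", "dept:sales"}))   # legitimate use
print(audit("key-42", {"role:junior", "dept:sales"}))   # leak detected
```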
6 Conclusion We have attempted to solve the problem of credential leakage in CP-ABE-based cloud storage systems by designing an accountable-authority and revocable Cloud++. This is a CP-ABE-based cloud storage system that supports an accountable authority, multiple auditing, and effective revocation. In particular, Cloud++ enables us to trace and revoke malicious cloud users. Our approach can likewise be used in the situation where the users' credentials are redistributed by the semi-trusted authority. It also provides timed data access control, where a user can access data only within a determined period for the given key, preventing access to records by a revoked user. Furthermore, the AU is assumed to be fully trusted in CryptCloud+; in practice, however, this may not be the case, so we provided an approach to reduce the trust placed in the AU by using multiple AUs.
References 1. A. Lewko, B. Waters, New proof methods for attribute-based encryption: achieving full security through selective techniques, in Advances in Cryptology–CRYPTO 2012 (Springer, 2012), pp. 180–198 2. J. Li, X. Lin, Y. Zhang, J. Han, KSF-OABE: outsourced attribute-based encryption with keyword search function for cloud storage. IEEE Trans. Serv. Comput. 10(5), 715–725 (2017) 3. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48 4. S. Xu, G. Yang, Y. Mu, R.H. Deng, Secure fine-grained access control and data sharing for dynamic groups in cloud 5. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theoret. Appl. Inf. Technol. 73(1) (2015) 6. M. Li, S. Yu, Y. Zheng, K. Ren, W. Lou, Scalable and secure sharing of personal health records in cloud computing using attribute-based encryption. IEEE Trans. Parallel Distrib. Syst. 24(1) (2013) 7. B. Waters, Ciphertext-policy attribute-based encryption: an expressive, efficient, and provably secure realization, in Public Key Cryptography–PKC 2011 (Springer, 2011), pp. 53–70 8. K. Yang, Z. Liu, X. Jia, X.S. Shen, Time-domain attribute-based access control for cloud-based video content sharing: a cryptographic approach. IEEE Trans. Multimedia 18(5), 940–950 (2016) 9. Z. Liu, Z. Cao, D.S. Wong, Blackbox traceable CP-ABE: how to catch people leaking their keys by selling decryption devices on eBay, in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security (ACM, 2013), pp. 475–486 10. W. Sun, H. Li, S. Yu, T. Hou, W. Lou, Protecting your right: attribute-based keyword search with fine-grained owner-enforced search authorization in the cloud. Proc. IEEE 978-1-4799-3360-0 (2014) 11. A. Christy, P. Thambidurai, CTSS: a tool for efficient information extraction with soft matching rules for text mining (2008) 12. J.K. Karthika, V.M. Anu, A. Veeramuthu, An efficient attribute based cryptographic algorithm for securing trustworthy storage and auditing for healthcare big data in cloud (2006) 13. M. Li, S. Yu, K. Ren, W. Lou, Securing personal health records in cloud computing: patient-centric and fine-grained data access control in multi-owner settings, in Proceedings of the Sixth International ICST Conference on Security and Privacy in Communication Networks (SecureComm 10), pp. 89–106 (2010) 14. V. Goyal, O. Pandey, A. Sahai, B. Waters, Attribute-based encryption for fine-grained access control of encrypted data, in Proceedings of the ACM Conference on Computer and Communications Security, pp. 89–98 (2006) 15. A. Lewko, T. Okamoto, A. Sahai, K. Takashima, B. Waters, Fully secure functional encryption: attribute-based encryption and (hierarchical) inner product encryption, in EUROCRYPT, pp. 62–91 (2010) 16. M. Li, S. Yu, N. Cao, W. Lou, Authorized private keyword search over encrypted personal health records in cloud computing, in Proceedings of the 31st International Conference on Distributed Computing Systems (ICDCS 11) (2011) 17. W. Sun, B. Wang, N. Cao, M. Li, W. Lou, Y.T. Hou, H. Li, Privacy preserving multi-keyword text search in the cloud supporting similarity based ranking, in Proceedings of ASIACCS (ACM, 2013), pp. 71–82 18. J. Ning, Z. Cao, X. Dong, K. Liang, L. Wei, K.-K. Raymond Choo, CryptCloud+: secure and expressive data access control for cloud storage 19. M. Ali, R. Dhamotharan, E. Khan, S.U. Khan, A.V. Vasilakos, K. Li, A.Y. Zomaya, SeDaSC: secure data sharing in clouds. IEEE Syst. J. 11(2), 395–404 (2017)
20. V. Goyal, O. Pandey, A. Sahai, B. Waters, Attribute-based encryption for fine-grained access control of encrypted data, in Proceedings of the 13th ACM Conference on Computer and Communications Security (ACM, 2006), pp. 89–98 21. J. Li, Q. Huang, X. Chen, S.S.M. Chow, D.S. Wong, D. Xie, Multi-authority ciphertext-policy attribute based encryption with accountability, in Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, ASIACCS 2011 (ACM, 2011), pp. 386–390 22. J. Ning, Z. Cao, X. Dong, K. Liang, H. Ma, L. Wei, Auditable-time outsourced attribute-based encryption for access control in cloud computing. IEEE Trans. Inf. Forensics Secur. 13(1), 94–105 (2018) 23. K. Yang, X. Jia, Expressive, efficient, and revocable data access control for multi-authority cloud storage 24. A. Sahai, B. Waters, Fuzzy identity-based encryption, in Advances in Cryptology–EUROCRYPT 2005 (Springer, 2005), pp. 457–473 25. P.K. Rajendran, B. Muthukumar, G. Nagarajan, Hybrid intrusion detection system for private cloud: a systematic approach. Procedia Comput. Sci. 48(C), 325–329 (2015) 26. M.P. Selvan, A. Gupta, A. Mukherjee, Give attention to overlapping network detection in networks for multimedia. J. Comput. Theor. Nanosci. 16(8), 3173–3177 (2019)
Analysis on Sales Using Customer Relationship Management Silla Vrushadesh, S. N. Ravi Chandra, and R. Yogitha
Abstract Customer relationship management (CRM) is an established concept used to manage the customer and employee lifecycle through various techniques and process-oriented tools. The main purpose of CRM is to improve the relationship between customers and the company through modules such as analysis of sales, client services, and many others. The objective of this project is to increase the sales efficiency of the organization, which returns more profit for the organization. In this paper, CRM includes a few modules. In customer relationship management (CRM), it is difficult not to see automated chatbots changing these platforms and enabling sales and marketing teams to respond better to their customers. Some may question the humanity of a CRM chatbot; however, there is no denying that opening up CRM data with innovative tools helps speed up the sales cycle. Keywords Customer relationship management · CRM · Pareto's principle · Delphi method · CRM chatbot
1 Introduction 1.1 CRM Technology is everywhere nowadays. The evolution of the IT sector has had a great impact on the business sector, in behavior as well as in the ways of communicating and S. Vrushadesh (B) · S. N. Ravi Chandra · R. Yogitha Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] S. N. Ravi Chandra e-mail: [email protected] R. Yogitha e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_59
interacting. Web-based applications also play a vital role in integrating technologies with business. Since information plays a decisive role in the business sector, there is an alarming need to record the valuable information which helps create wealth. CRM is one such information management system, playing a fundamental role in fulfilling this urgent need. CRM stands for customer relationship management, which creates an opportunity to create a balance sheet for inventory, maintain a track of sales, and keep track of customer needs [1].
1.2 Literature The characteristics of a great CRM system are [2] simple integration, ease of use, adaptability, and growth. One paper [3] presents the results of research based on the Delphi method, aimed at finding a real CRM definition and client attributes for the future. To [4] improve the market within firms, we have to analyze the information and train the employees [5, 3]. CRM helps firms serve customers better and provides a more reliable service than self-service [6]; with it, the customer needs less interaction with the company for problems. CRM is hypothetically [7] associated with three sorts of value — relationship, value, and brand — and its customer value drivers bring benefits [8]: upgraded capacity to target profitable clients, integrated help across channels, enhanced sales force efficiency and effectiveness, improved pricing, customized products and services, improved customer service efficiency and effectiveness, individualized marketing campaigns, and [2, 9] connecting clients and all channels on a single platform.
1.3 Why CRM? Competition for customers is intense. From a business point of view, retaining old customers costs far less than finding new ones. According to [10] Pareto's principle, 80% of a company's profits are generated by 20% of its customers. In product sales, it takes an average of 8–10 physical face-to-face calls to sell to a new customer, but only 2–3 calls to sell to an existing customer [11, 12]. It is five to ten times more expensive to acquire a new customer than to obtain repeat business from an old one [4, 13].
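Pareto's claim is easy to check against sales records: sort customers by profit and compute the share contributed by the top 20% (the data below is purely hypothetical):

```python
def top_share(profits, fraction=0.2):
    """Share of total profit contributed by the top `fraction` of customers."""
    ordered = sorted(profits, reverse=True)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical profit per customer: a few large accounts, many small ones.
profits = [400, 380, 30, 25, 20, 18, 15, 12, 10, 8]
print(round(top_share(profits), 2))  # top 2 of 10 customers -> 0.85
```

A CRM can run exactly this kind of computation to identify which customers to prioritize for retention.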
Fig. 1 Components of CRM
2 Methodology 2.1 Components of CRM The main components of CRM are (Fig. 1):
• Builds customer relationship through marketing
• Manages customer relationship at every stage
• Observes relationship
• Distributes the value of the relationship.
2.2 Existing System The modules included are customer filtering, profiling, and promotional tools. Customer filtering permits the user to filter through a customer list from the database [14, 15]. Customer profiling makes every customer have a profile; the user may view the customer's profile, which includes an analysis of the customer [16, 17]. Promotional tools permit the user to create a new promotional plan on a per-product basis, and to filter a list of customers to target with the promotion. After that, the user can view the analysis [18]. CRM expects managers to start by defining strategy, then evaluate whether those strategic targets can be addressed with CRM data and assess what kind of CRM data can address them [19, 20]. It also computes the value that this would
bring to the organization [21]. It prepares the employees to utilize CRM, and the benefit ought to exceed the cost to the organization [22]. It can design the incentive program and measure/monitor CRM impact and progress (creating a system for it). A company uses CRM to:
• Gather market research on customers
• Generate more reliable sales forecasts
• Coordinate information quickly between customer support representatives and sales staff
• Enable salespeople to see the financial impact of different products before they set a price for the customer
• Accurately identify the people for a promotional program
• Improve customer retention.
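One of the simplest ways a CRM can generate a "more reliable sales forecast" is a moving average over recent periods (a sketch under that assumption, not the system's actual forecasting method):

```python
def moving_average_forecast(sales, window=3):
    """Forecast next-period sales as the mean of the last `window` periods."""
    recent = sales[-window:]
    return sum(recent) / len(recent)

monthly_sales = [120, 135, 150, 160, 170, 180]   # hypothetical figures
print(moving_average_forecast(monthly_sales))    # (160+170+180)/3 -> 170.0
```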
2.3 Proposed System Contact management gives a 360° view of your customers; it helps to keep track of all your customers, leads, partners, vendors, and suppliers, and typically contains:
• Contact data
• Email
• Communication history
• Purchase history
• Case history
• Documents and contracts
• Quotes
• Invoices, and so on.
All information is stored in a centralized database and can be accessed by multiple users in real time.
CRM's objective is to make prospective and current client data accessible to appropriate employees at all levels of your organization, and to guarantee that the access requires an absolute minimum of IT or other human intervention. Another piece of the front-end UI of a CRM is a chatbot. It enables users to access client data and reports through interaction, for example by asking natural-language questions. For instance, questions like "Who are the chief information officers of our biggest clients?" save users time.
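At its simplest, a CRM chatbot of the kind described can be approximated by keyword matching over the contact database (entirely illustrative; records and matching rules are assumptions):

```python
CONTACTS = [  # hypothetical CRM records
    {"name": "Acme Corp", "cio": "J. Smith", "revenue": 9_000_000},
    {"name": "Globex",    "cio": "A. Jones", "revenue": 2_500_000},
    {"name": "Initech",   "cio": "B. Brown", "revenue": 7_000_000},
]

def answer(question, top_n=2):
    """Answer "who are the CIOs of our biggest customers?" style queries
    by ranking contacts on revenue; anything else is declined."""
    q = question.lower()
    if "biggest" in q and "cio" in q:
        biggest = sorted(CONTACTS, key=lambda c: c["revenue"], reverse=True)
        return [c["cio"] for c in biggest[:top_n]]
    return "Sorry, I did not understand the question."

print(answer("Who are the CIOs of our biggest customers?"))
# -> ['J. Smith', 'B. Brown']
```

A production chatbot would replace the keyword test with a natural-language parser, but the flow — parse intent, query the central database, return a ranked answer — is the same.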
3 Architecture Diagram See Fig. 2.
Fig. 2 CRM architecture
4 Result and Discussion 4.1 Analysis Figures 3 and 4 show all the products sold by salesmen after the discount analysis done by CRM. The provided sales data helps the sales executive check his/her sales in a given day, month, or year.
Fig. 3 View product
Fig. 4 Graph on sales (x-axis: time; y-axis: sales)
5 Conclusion The system can effectively show the analysis of an employee's work efficiency in a graph representation and analyze the stock data inventory for production. The inventory provides up-to-date stock-checking ability. This enables managing the system and the company in a better way and helps the organization improve at an exponential rate.
References 1. K.M. Prasad, R.S. Kavya, S.B. Devi, Virtual fitting space for dress trials. IOP Conf. Ser. Mater. Sci. Eng. 590(1), 012013 (2019) 2. H. Gebert, M. Geib, L. Kolbe, W. Brenner, Knowledge-enabled customer relationship management: integrating customer relationship management and knowledge management concepts. J. Knowl. Manag. 7(5), 107–123 (2003). https://doi.org/10.1108/13673270310505421 3. F. Khodakarami, Y.E. Chan, Exploring the role of customer relationship management (CRM) systems in customer knowledge creation. Inf. Manage. 51(1), 27–42 (Jan.). https://doi.org/10. 1016/j.im.2013.09.001 4. M. Triznova, H. Mat’ova, J. Dvoracek, S. Sadek, Customer relationship management based on employees and corporate culture, in Proceedings of the 4th World Conf. on Business, Economics and Management (WCBEM2015), Procedia Economics and Finances, vol. 26, pp. 953–959 (2015). https://doi.org/10.1016/S2212-5671(15)00914-4 5. B.M. Krishna, Customer Relationship Management, Coursebook of Lovely Professional University (Excel Books Private Limited, New Delhi (India), 2013) 6. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48 7. A. Payne, P. Frow, A strategic framework for customer relationship management. J. Market. 69(4), 167–176 (2005). 8. A. Souri, N.J. Navimipour, Behavioral modeling and formal verification of a resource discovery approach in grid computing. Expert Syst. Appl. 41(8), 3831–3849 (2014). https://doi.org/10. 1016/j.eswa.2013.11.042 9. I. Mahdavi, N. Cho, B. Shirazi, N. Sahebjamnia, Designing evolving user profile in e-crm with dynamic clustering of web documents. Data Knowl. Eng. 65(2), 355–372 (2008). https://doi. org/10.1016/j.datak.2007.12.003
10. R.S. Hassan, A. Nawaz, M.N. Lashari, F. Zafar, Effect of customer relationship management on customer satisfaction, in Proceedings of the 2nd Global Conference on Business, Economics, Management and Tourism, Procedia Economics and Finance, vol. 23, pp. 563–567 (2015). https://doi.org/10.1016/S2212-5671(15)00513-4 11. A. Christy, P. Thambidurai, Efficient information extraction using machine learning and classification using genetic and C4. 8 algorithms. Inform. Technol. J 5, 1023–1027 (2006) 12. J.K. Karthika, V.M. Anu, A. Veeramuthu,. An efficient attribute based cryptographic algorithm for securing trustworthy storage and auditing for healthcare big data in cloud (2006) 13. N.J. Navimipour, B. Zareie, A model for assessing the impact of e-learning systems on employees’ satisfaction. Comput. Hum. Behav. 53, 475–485 (2015). https://doi.org/10.1016/j. chb.2015.07.026 14. J.S. Paul, E. Brumancia, S.J. Samuel, A survey on effective bug prioritizing using instance selection and feature selection techniques. Indian J. Sci. Technol. 9, 31 (2016) 15. A. Praveena, M.K. Eriki, D.T. Enjam, Implementation of smart attendance monitoring using open-CV and Python. J. Comput. Theor. Nanosci. 16(8), 3290–3295 (2019) 16. G.K. Krishnan, R.G. Franklin, Privacy and auditing bigdata stored in cloud with verify update. Res. J. Pharmaceut. Biol. Chem. Sci. 7(3), 742–750 (2016) 17. A. Velmurugan, T. Ravi, Optimal symptom diagnosis for efficient disease identification using Somars approach. J. Comput. Theor. Nanosci. 14(2), 1157–1162 (2017) 18. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theoret. Appl. Inf. Technol. 73(1) (2015) 19. K.K. Thyagharajan, G. Kalaiarasi, Pulse coupled neural network based near-duplicate detection of images (PCNN–NDD). Adv. Electr. Comput. Eng. 18(3), 87–97 (2018) 20. R. Subhashini, B. KeerthiSamhitha, S.C. Mana, J. Jose, Data analytics to study the impact of firework emission on air quality: a case study. 
AIP Conference Proceedings (2019) 21. G. Nagarajan, R.I. Minu, Fuzzy ontology based multi-modal semantic information retrieval. Procedia Comput. Sci. 48, 101–106 (2015) 22. M.P. Selvan, R. Selvaraj, Monitoring Fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5
A Unified and End-to-End Methodology for Predicting IP Address for Cloud and Edge Computing Vutukuri Manasa, A. C. Charishma Jayasree, and M. Selvi
Abstract The extensive use of the Internet is a major cause of the growing number of cyber-attacks, and these attacks frequently rely on IP spoofing. IoT devices must therefore be secured against IP spoofing, which requires verifying the source address of IP packets received at the gateway. This is important to stop an unauthorized user from using a forged IP address to flood the gateway with packets and consume the bandwidth allotted to legitimate users. Tenants register all unique file names and file attributes daily on the virtual machine, and from these file lists and IP events specific data are imported into the CSP. The TWCP then assesses the remaining daily log details and mails security-risk information to all tenants. We designed and implemented the DTOS program (DNS Traffic Query Program) to analyze DNS traffic; it records previous IP addresses and computes the next IP address. We analyze and predict IP addresses allocated by two major cloud computing providers and find that the real entropy of the allocated IP addresses is limited. We consider several predictive models, such as a Markov process model that predicts upcoming addresses from previously observed IP addresses. Keywords IP address · IP spoofing · Prediction · Cloud computing
V. Manasa (B) · A. C. Charishma Jayasree · M. Selvi Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. C. Charishma Jayasree e-mail: [email protected] M. Selvi e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_60
573
1 Introduction
IP spoofing involves the flooding of IP packets by unauthorized users who forge the source address of a registered user. In an IP spoofing scenario, the target host has no information about who the real client is, and when it sends packets back to the forged source IP address, the actual client does not receive the expected data. In addition, unauthorized clients can hog the network, denying genuine clients their share of the transmission capacity. Denial-of-service (DoS) attackers exhaust a targeted service infrastructure so that legitimate users are refused access. DoS attacks at the network, application, or transport layer can target a specific IP address of a service and exploit vulnerabilities at the targeted layer. To mitigate denial-of-service attacks, researchers have investigated solutions such as estimating and identifying attack paths [1], detecting anomalies [2, 3], balancing workloads [4], and rate-limiting request traffic [5]. IP address randomization, for example, has emerged as a promising technique in recent moving target defenses [6–10]. IP address randomization enables moving target defense frameworks to mitigate DoS attacks on hosts that are exposed to the Internet. The idea is to confuse the attacker by periodically moving the service to newly allocated IP addresses, forcing the attacker to spend time and effort locating the target service at its new address [11]. In such frameworks, registered clients can be redirected to the new address, while unknown and potentially malicious clients temporarily lose contact with the target. For example, consider a cloud-based system with a target server that uses a public IP address [12].
The server runs a service, for example a VPN, for a large number of clients that are authenticated in advance, and denies unknown clients from using the service [10]. To keep potential denial-of-service attackers away from the target server, another private machine in the same network periodically requests a fresh IP address and updates the target server's IP address with the new one. Moving target defense systems thus frequently refresh the IP address of the target service to waste the adversary's search effort. Maintaining an unpredictable set of IP addresses for target servers is a core security requirement in moving target defense systems, since the adversary's cost depends on the expected number of attempts needed to find the IP address of the target. The aim of this work is to investigate the viability of IP address randomization by analyzing the predictability of IP addresses allocated by prominent cloud computing platforms [13, 14]. The objective is to devise a practical attack that could be carried out with the public tools and APIs of cloud computing providers [15]. We consider an attacker who can build and maintain a dataset of IP addresses and uses that dataset to predict a move of the target IP address. Construction of the dataset should be scalable [16], allowing the attacker to collect a large portion of the allocatable IP addresses [17]. The attacker then refines the prediction based on partial evidence from the target network [18]. Given recent observations from the target network,
the attacker should produce a reasonably small set of candidate IP addresses that is likely to include the IP address allocated to the target service [18]. We focus on IPv4 networks, even though IP version 6 (IPv6) exponentially expands the address search space for attackers [19]. The results show that popular cloud providers may allocate addresses in predictable ways [20]. Consequently, it is important that moving target defenses based on IP address randomization are implemented to mitigate predictable address allocations, and that their security is evaluated considering an adversary's ability to narrow its search for the target IP address using knowledge of IP address allocation [21].
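The Markov process model mentioned above can be sketched with a first-order transition table learned from an allocation log; the log below and the address values are illustrative, not real provider data:

```python
from collections import Counter, defaultdict

def train_markov(addresses):
    """First-order Markov model over successive allocations:
    count transitions between consecutively observed addresses."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(addresses, addresses[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current, k=3):
    """Return the k most likely next addresses after `current`."""
    return [addr for addr, _ in transitions[current].most_common(k)]

# Toy allocation log (illustrative only)
log = ["10.0.0.4", "10.0.0.7", "10.0.0.4", "10.0.0.9",
       "10.0.0.4", "10.0.0.7", "10.0.0.4", "10.0.0.7"]
model = train_markov(log)
print(predict_next(model, "10.0.0.4"))  # -> ['10.0.0.7', '10.0.0.9']
```

The candidate list produced by `predict_next` plays the role of the "reasonably small set of upcoming IP addresses" the attacker searches.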
2 Related Work
A key shortcoming in existing IP networks, noted by Sameer et al. [1], is the IP spoofing problem, in which an unauthorized user may flood the network with IP packets (Kozierok et al. [3]) carrying arbitrary source IP addresses. The underlying issue is DoS attacks (Reynolds et al. [4]), which can cause data from authorized users to be discarded at the gateway or servers because spoofed packets from unauthorized users consume the bandwidth of genuine users. IP spoofing defenses use hop counts: modified hop count (Mukaddam et al. [5]) and hop-count filtering [6]. Modified hop-count filtering is implemented in two phases, a learning phase and a filtering phase. A lightweight approach to IP spoofing (Agrawal et al. [7]) assumes IP filtering rules (firewalls, access control lists) configured at the gateway device; it is difficult to configure filtering rules for every possible source IP address, many of which will not be known in advance. ARP spoofing-based attacks such as DoS attacks, host impersonation attacks, and man-in-the-middle attacks are described in Bhirud et al. [8]. IP spoofing detection using a trace-back technique (Santhosh et al. [9]) expects the switch to generate an ICMP error message to identify the location of the spoofed source device. IP spoofing prevention using reverse-path forwarding, as applied to software-defined networks, is described in Benton et al. [22]: verification is performed on the source address of IP packets to locate a valid interface, and if no valid interface is found, the packet is discarded. In the spoofing prevention method of Zhang et al. [23], packets transmitted between source and destination autonomous systems are marked with a key by the source AS, and the receiving AS validates the key. A moving target defense attempts to thwart attacks by continuously changing the few properties of the target on which the attack depends.
The main idea is to increase the time and resources the attack requires. Moving target defenses do not address the specific technical vulnerabilities that enable an attack; rather, they protect the system by making exploitation harder for all attacks that depend on a particular property. For example, a denial-of-service attack requires knowledge of the victim's network addresses, and moving target defenses attempt to keep an adversary from learning those addresses. Several authors have presented surveys and formal analyses of moving target defenses (Wright et al. [16]); here, the focus is limited to
moving target defenses that attempt to conceal network endpoints. A related work uses network emulation to achieve deception by reproducing virtual network topologies and using the emulation results to delay reconnaissance attempts [24]. The shared overall goal is to combat reconnaissance attacks; however, recent work in the area does not specifically target address randomization, nor does it provide a systematic modeling of reconnaissance attacks on cloud-based virtual networks [25].
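The modified hop-count filtering defense cited above (a learn phase followed by a filter phase) can be sketched as follows; the learned hop table, addresses, and the tolerance parameter are illustrative assumptions, not the cited authors' exact scheme:

```python
# Hop-count filtering sketch: infer hop count from the received TTL and
# compare it against a per-source table built during the learn phase.
COMMON_INITIAL_TTLS = (32, 64, 128, 255)  # typical OS default initial TTLs

def hops_from_ttl(received_ttl):
    """Infer hop count: smallest common initial TTL >= received TTL."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def is_spoofed(src_ip, received_ttl, learned_hops, tolerance=1):
    """Filter phase: flag packets whose inferred hop count deviates
    from the value learned for this source IP."""
    if src_ip not in learned_hops:
        return False  # no baseline yet; cannot decide
    return abs(hops_from_ttl(received_ttl) - learned_hops[src_ip]) > tolerance

learned = {"192.0.2.10": 12}                   # learn-phase result (toy value)
print(is_spoofed("192.0.2.10", 52, learned))   # 64-52 = 12 hops -> False
print(is_spoofed("192.0.2.10", 120, learned))  # 128-120 = 8 hops -> True
```

A spoofer can forge the source address but rarely knows the true hop distance between the claimed source and the victim, which is what this check exploits.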
3 Existing System
Consider an attacker whose objective is to deny service for as long as possible to as many hosts as possible in the target network. The attacker's objective may also be to gain remote access to a host in the target network and escalate remote privileges. Although remote-access attacks and denial-of-service attacks differ, from an IP address randomization perspective both depend on the allocation of the target IP addresses over time; thus, defenses based on reassigning and moving IP addresses tend to be similarly effective against both. A denial-of-service attack is performed at the application layer or at the connection layer, for example by flooding an HTTP service with an excessive number of unsolicited requests. In the target network, attackers identify the Internet gateways and establish preliminary records in the distributed registration system of the target network. The attacker is assumed to know the target network's cloud computing provider and to have identified an initial set of IP addresses that serve as the Internet entry point to the target network. The target network represents a typical deployment by hosting Internet-accessible services, with no particular assumptions about its configuration. Cloud computing platforms offer ways to exclude unwanted IP addresses from reaching the target network, but these do not appear to be practical at scale (Fig. 1).
4 Experimental Results
The experimental outcome was obtained on a variety of datasets. Here, we predict the next IP address for the compromised system to make the system efficient and unpredictable to attackers. This method proposes adjustments in the IP stack to support the exchange of data or messages. The use of protocols such as DHCP for dynamic IP address assignment and the configuration of filters against IP spoofing is avoided. The technique also avoids complex cryptographic schemes for packet authentication. A limitation of such a technique is that the MAC address which is used as
Fig. 1 Outline of proposed system
device identifier can be duplicated by unauthorized clients. The possibility of predicting IP addresses allocated by cloud computing platforms is troubling for moving target defenses that assume IP address allocations given by cloud services are highly unpredictable. As the results show, an attacker can reliably predict cloud-allocated IP addresses. Unless cloud computing platforms apply restrictions on IP address allocation, attackers can conveniently refresh their database of IP addresses and keep attacking other customers. Strategies restricting IP address allocation would impose usability constraints that are hard to enforce when serving large virtual networks. Hence, designing moving target defense systems with a core mechanism that relies on freshly assigned IP addresses requires careful consideration of the entropy in the selected IP addresses (Figs. 2, 3 and 4).
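The "limited entropy" observation above can be quantified with Shannon entropy over an observed allocation log; the two logs below are illustrative toy data, not provider measurements:

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy (bits) of the empirical allocation distribution:
    H = -sum(p * log2 p). Low entropy means predictable allocations."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative allocation logs
skewed = ["10.0.0.4"] * 8 + ["10.0.0.7"] * 2    # provider reuses addresses
uniform = [f"10.0.0.{i}" for i in range(10)]    # every allocation distinct
print(round(shannon_entropy(skewed), 3))   # 0.722 bits: easy to predict
print(round(shannon_entropy(uniform), 3))  # 3.322 bits: hardest case
```

An attacker's expected search effort grows with this entropy, which is why a defense should verify that its provider's allocations approach the uniform case.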
5 Conclusion
Here, we predict the next IP address for the compromised system to make the system efficient and unpredictable to attackers. This method proposes adjustments in the IP stack to support the exchange of data or messages. The use of protocols such as DHCP for dynamic IP address assignment and the configuration of filters against IP spoofing is avoided. The technique also avoids complex cryptographic schemes for packet authentication. A limitation of such a technique is that the MAC address used as the device identifier can be duplicated by unauthorized clients. The possibility of predicting
Fig. 2 Designing the entropy
Fig. 3 Graph
IP addresses allocated by cloud computing platforms is troubling for moving target defenses that assume IP address allocations given by cloud services are highly unpredictable. As the results show, an attacker can reliably predict cloud-allocated IP addresses. Unless cloud computing platforms apply restrictions on IP address allocation, attackers can conveniently refresh their database of IP addresses and keep attacking other customers. Strategies restricting IP address
Fig. 4 IP address for the hacked system
allocation would impose usability constraints that are hard to enforce when serving large virtual networks. Hence, designing moving target defense systems with a core mechanism that relies on freshly assigned IP addresses requires careful consideration of the entropy in the selected IP addresses.
References
1. M.A. Venkatesulu, IP routing, in TCP/IP Architecture, Design and Implementation in Linux (Wiley-IEEE Press, 2008)
2. D.E. Comer, Internetworking with TCP/IP: Principles, Protocols, and Architecture, 6th edn. (PHI Learning, 2014)
3. C.M. Kozierok, The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols Reference (Shroff Publishers, India, 2005)
4. M.S. Reynolds, DDoS: Internet-based attacks of the next decade, in IEEE Systems, Man and Cybernetics Society Information Assurance Workshop (2003)
5. A. Mukaddam et al., IP spoofing detection using modified hop count, in 28th International Conference on Advanced Information Networking and Applications (IEEE, 2014)
6. C. Jin, H. Wang, K.G. Shin, Defense against spoofed IP traffic using hop-count filtering. IEEE/ACM Trans. Netw. 15(1), 40–53 (2007)
7. N. Agrawal, S. Tapaswi, A lightweight approach to detect the low/high rate IP spoofed cloud DDoS attacks, in IEEE 7th International Symposium on Cloud and Service Computing (SC2), Aug 2017
8. V. Katkar, S.G. Bhirud, Light weight approach for IP-ARP spoofing detection and prevention, in 2011 Second Asian Himalayas International Conference on Internet (AH-ICI), 2011
9. C. Fancy, K.R. Santhosh, A dedicated setup for IP spoofing identification via IP-traceback, in International Conference on Intelligent Sustainable Systems (ICISS), 2017
10. K. Benton et al., Filtering source-IP spoofing using feasible-path reverse path forwarding with SDN, in 2015 IEEE Conference on Communications and Network Security (CNS), 2015
11. A. Christy, P. Thambidurai, Efficient information extraction using machine learning and classification using genetic and C4.8 algorithms. Inform. Technol. J. 5, 1023–1027 (2006)
12. M.P. Selvan, R. Selvaraj, Monitoring fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5
13. J. Ullrich, K. Krombholz, H. Hobel, A. Dabrowski, E. Weippl, IPv6 security: attacks and countermeasures in a nutshell, in 8th USENIX Workshop on Offensive Technologies (WOOT 14), San Diego, CA (USENIX Association, 2014)
14. J. Ullrich, P. Kieseberg, K. Krombholz, On reconnaissance with IPv6: a pattern-based scanning approach, in 10th International Conference on Availability, Reliability and Security, Aug 2015, pp. 186–192
15. M. Selvi, P.M. Joe Prathap, WSN data aggregation of dynamic routing by QoS analysis. J. Adv. Res. Dyn. Control Syst. 9(18), 2900–2908 (2017)
16. S. Prince Mary, D. Usha Nandini, B. Ankayarkanni, R. Sathyabama Krishna, Big data deployment for an efficient resource prerequisite job. J. Comput. Theor. Nanosci. 16(8), 3211–3215 (2019)
17. A.C. Santha Sheela, C. Kumar, Duplicate web pages detection with the support of 2D table approach. J. Theor. Appl. Inf. Technol. 67(1) (2014)
18. S.L. JanyShabu, C. Jayakumar, Multimodal image fusion and bee colony optimization for brain tumor detection. ARPN J. Eng. Appl. Sci. 13(8), 2899–2904 (2018)
19. J. Refonaa, M. Lakshmi, Machine learning techniques for rainfall prediction using neural network. J. Comput. Theor. Nanosci. 16(1–5), 1546–1955 (2019)
20. J.K. Karthika, V.M. Anu, A. Veeramuthu, An efficient attribute based cryptographic algorithm for securing trustworthy storage and auditing for healthcare big data in cloud (2006)
21. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition, Dec 2018 (Springer, Singapore), pp. 37–48
22. J. Wu, J. Bi, Zhang, An effective method of anti-spoofing using existing information, in 23rd International Conference on Computer Communication and Networks (ICCCN), 2014
23. A. Bremler-Barr, H. Levy, Spoofing prevention method, in Proceedings of IEEE INFOCOM, 24th Annual Joint Conference of the IEEE Computer and Communications Societies, 2005
24. R.I. Minu, G. Nagarajan, Bridging the IoT gap through edge computing, in Edge Computing and Computational Intelligence Paradigms for the IoT (IGI Global, 2019), pp. 1–9
25. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theor. Appl. Inf. Technol. 73(1) (2015)
Predicting the Farmland for Agriculture from the Soil Features Using Data Mining Kada Harshath, Kamalnath Mareedu, K. Gopinath, and R. Sathya Bama Krishna
Abstract Agriculture is one of the most significant and income-generating sectors in India. Seasonal and organic patterns influence crop production, and changes in these factors may cause extraordinary losses to farmers. These losses can be limited by using an appropriate methodology informed by knowledge of soil type, strength, suitable weather, and type of crop. Our analysis of soil features must meet these fundamental requirements: it must determine whether a particular piece of land is suitable for agriculture under ideal farming conditions, and it must improve the accuracy of the algorithms and compare them to find the best of the three. This helps us classify which land is farmland and which is non-farmland, and thereby supports crop cultivation. The framework is loaded with soil data such as the area, the region in which it is located, soil texture, irrigation scaling, rotation, yield, soil erosion, wind erosion, slope, and drainage. The chi-square feature algorithm is used for feature extraction, selection, and scaling; it reduces the noise features of the dataset and enhances the features the system processes. Accuracy is generated and increased with the help of algorithms such as DNN, Random Forest, and Linear Discriminant Analysis. The results show that the proposed scheme is not only feasible but also helps farmers understand the environmental profile of their farms.
K. Harshath (B) · K. Mareedu · K. Gopinath · R. Sathya Bama Krishna Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] K. Mareedu e-mail: [email protected] K. Gopinath e-mail: [email protected] R. Sathya Bama Krishna e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_61
581
Keywords Agriculture · Data mining · Soil · Feature extraction · DNN · Random forest
1 Introduction
Data mining is a very important research area in the recent research world. Its techniques are useful for eliciting significant and usable information that can be understood by many people. Data mining algorithms comprise diverse techniques that are widely developed and used in business activities and by biomedical analysts, and they are well suited to their respective data domains. A typical statistical investigation is both tedious and costly. Efficient measures can be produced and tailored for analyzing complex soil datasets using data mining, improving the feasibility and accuracy of the classification of large soil data collections [1]. Soil analysis is the study of a soil sample to determine nutrient content, composition, and other attributes. The analysis is generally performed to measure fertility and to reveal deficiencies that must be remedied [2]. Soil testing laboratories are furnished with appropriate technical documentation on various aspects of soil testing, including testing procedures and the formulation of fertilizer recommendations [3, 4]. This helps farmers choose the amount of fertilizer and farmyard manure to apply at different phases of the growth cycle of the crop. Tests are typically performed to measure fertility and indicate deficiencies that should be corrected [5–7]. Soil fertility is a significant quality considered in land assessment; achieving and maintaining adequate levels of fertility is necessary for sustaining crop production. This paper therefore presents steps for building an efficient and accurate predictive model of soil fertility with the help of data mining techniques [8, 9]. The overall purpose of the data mining method is to extract information from a dataset and transform it into an understandable structure for further use [10].
Agriculture is one of the key development areas of IoT in India. Since the beginning of human civilization, agriculture has been a critical part of every human culture, for the fundamental reason that the sustenance of any civilization directly depends on farming. India, in particular, is an agriculture-heavy economy. Unfortunately, agriculture has not been blessed with the latest technological advancements, unlike other areas such as communication, transportation, education, and finance [11]. Advancements in agriculture are necessary to balance demand and supply as the population increases day by day [12]. With the use of modern and cutting-edge technologies, the efficiency of the agricultural industry can be significantly improved [13].
2 Related Work
A climate-smart agriculture framework is a strategy for checking the climate of a region and growing suitable products according to the climate of that zone. It helps farmers grow the right amount of crops on the required land and know the maximum rainfall, maximum temperature, and minimum temperature of that area. Tenzin et al. [14] focus on predicting the most profitable crop that can be grown on agricultural land using AI methods; the paper includes an Android system that provides continuous yield analysis using various weather station reports and soil quality, so that farmers can grow the most productive crop in the most suitable months. Zingade et al. [3] note that the wireless sensor network (WSN) is a development through which crop yield in the Indian agriculture sector can be increased by providing forecasts of plant diseases and pests. This can be done by collecting assessment data from areas where the WSN is deployed and applying suitable AI estimations to this data to obtain the predicted yield. The paper explores harvest analysis using WSN frameworks to guard against pests and to use pesticides that do not harm crop health. Wani et al. [15] use a software tool named 'Crop Advisor', deployed as a web page, for predicting the principal climatic parameters affecting crop yield. The C4.5 algorithm, developed by Ross Quinlan, is used to find the principal climatic attributes affecting the yields of selected crops in selected districts of MP. This tool gives an indication of the various climatic changes that can affect crop development in a region. Veenadhari et al. [16] note that the Internet of Things (IoT), one of the new paradigms of computing, is used to advance the needs of the agriculture sector.
Using IoT features, smart farming can be achieved. In this work, a Bluetooth device and a wide-area network are used to obtain details of the surroundings, such as soil water level and pesticide detection. This gives farmers automation in the field of agriculture, as all the details are connected to a device used by the farmers; every detail of the farm can be updated in the application using the IoT modules. Hulsegge et al. [3] investigate pig diseases in the livestock business by analyzing mean values and similarities in the context of seasonal changes. The study applies big data analysis to examine the health condition of pigs on 44 different farms to provide insight into disease prevention and trends. Huang et al. [6] recommend installing IoT hardware on the farm as well as small weather stations to monitor local weather and environmental changes. Since farm areas often suffer from poor wireless network connections, the transmission of huge image data often results in packet loss; accordingly, the study uses a wide range of image compression techniques to effectively improve data compression volume and limit image distortion. On the subject of data preservation,
in the smart healthcare system described by Sundaravadivel et al. [17], besides monitoring the physical condition of a patient and running other connected applications, the system uses big data to assess relevant data for the prevention of specific diseases, which in turn reduces the medical burden. The study of Li et al. [18] uses big data to help improve power distribution in Volt-VAR Control (VVC) processes in wind power generation. The Volt-VAR Control can be centralized, distributed, or hierarchical; the study focuses on how to combine the primary power source with wind power generation to achieve accurate results. Wan et al. [19] advocate a program structure built on the Internet of Things and big data. An Internet of Things network shares its data across numerous systems; for example, the monitoring platform needs real-time alerts, while each mechanical platform reads data, performs operations, and runs big data analysis. Lin et al. [20] offer a clustering decision system with a feedback mechanism; the study uses the feedback system and a threshold to rank, in order, the ideal investor. Lin et al. [21] give a probabilistic hesitant multiplicative preference relation model in which the decision-maker surveys and scores alternative product choices before using the collective decision-making model to identify the ideal alternative product. The Intelligent Farming platform proposed in our study uses a 4G network for data transmission to guarantee transmission quality. We use XML (Extensible Markup Language) for data transmission, which allows data broadcasting across various platforms or organizations.
Our study adopts clustering technology to run farm data analysis and combines critical attributes using a threshold approach to decide whether a specific crop is appropriate for cultivation at a particular farm. Our proposed mechanism can help farmers get a firm grasp of their farm conditions and further improve the quantity and quality of their farm products.
3 Existing System
India is currently experiencing a transition phase from the agricultural standpoint. In the recent Union budget, the Indian government gave a push to market reforms in farming to improve the financial condition of rural India. These proposed agricultural reforms are expected to originate from three seemingly disconnected domains that often become interconnected in the context of farming, especially in recent years: biotechnology, nanotechnology, and information and communication technology. The combination of these distinct technological viewpoints leads to the ideology of smart farming. In order to make the market progressively accessible to farmers, the idea of e-farming has been introduced [16]. E-farming is a web application that enables farmers to perform agro-marketing, leading to greater success and an improvement in their way of life. In keeping with smart technologies, robotics is also being introduced, so as to make more space for
technological advancements in agriculture [2, 5]. The Internet of Things (IoT), a developing interdisciplinary domain, is also contributing to smart agriculture. Furthermore, a combination of IoT, cloud computing, big data analytics, and mobile technology has the potential to significantly change the domain of agriculture and redefine the way farming is perceived [1, 22].
4 Proposed System The proposed framework predicts whether a piece of land is suitable for farming. This lets us classify land as farmland or non-farmland, and hence decide where crops can be grown. The framework is loaded with soil attributes such as the area, its location, the texture of the soil, irrigation scaling, rotation, yield, soil erosion, wind erosion, slope, removal, and so on. The chi-square algorithm is used for feature extraction, selection, and scaling; it reduces the noisy features of the dataset and refines the features for the system to process. The accuracy is produced and improved with the help of algorithms such as DNN, Random Forest, and Linear Discriminant Analysis (Fig. 1).
Fig. 1 Architecture of work (pipeline: soil dataset → data preprocessing with label encoding and one-hot encoding → feature extraction, engineering, selection, and scaling → training/testing data → agriculture screening model → classification with DNN, Random Forest, and LDA → accuracy prediction → agriculture predicted data)
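The chi-square feature scoring used above can be sketched without dependencies; the soil rows and column meanings below are invented for illustration, and scikit-learn's SelectKBest with the chi2 scorer performs the equivalent selection on real data:

```python
def chi2_scores(X, y):
    """Score each feature by comparing its per-class observed sums with the
    sums expected from class frequency alone (features must be non-negative)."""
    n_feat, n = len(X[0]), len(y)
    classes = sorted(set(y))
    scores = []
    for f in range(n_feat):
        col_total = sum(row[f] for row in X)
        s = 0.0
        for c in classes:
            observed = sum(row[f] for row, lab in zip(X, y) if lab == c)
            expected = col_total * sum(1 for lab in y if lab == c) / n
            if expected:
                s += (observed - expected) ** 2 / expected
        scores.append(s)
    return scores

# toy soil rows: [slope, irrigation scaling, yield, constant noise column]
X = [[3, 1, 9, 5], [4, 0, 8, 5], [0, 7, 1, 5], [1, 6, 0, 5]]
y = [1, 1, 0, 0]          # 1 = farmland, 0 = non-farmland
scores = chi2_scores(X, y)
print(scores)             # the constant column scores 0 and can be dropped
```

Features with the lowest scores carry the least class information and are the "noise features" the proposed system removes.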
586
K. Harshath et al.
Fig. 2 Data flow at the initial level (agriculture data → agriculture model → classify farmland and non-farmland)
5 Module Description 5.1 Keras Data Keras is an open-source neural network library written in Python that runs on top of Theano or TensorFlow. It is designed to be modular, fast, and easy to use, and was developed by François Chollet, a Google engineer. Keras does not handle low-level computations itself; instead it delegates them to another library, called the "backend". Keras is therefore a high-level API wrapper over the low-level API, capable of running on TensorFlow, CNTK, or Theano. The Keras high-level API handles the way we create models, define layers, and set up multiple input/output models. It also lets us compile a model with loss and optimizer functions and run the training process with the fit function. Keras does not handle the low-level API, such as building the computational graph or creating tensors and other variables, since that is handled by the "backend" engine.
5.2 Data Pre-processing The data needs to be cleaned before being loaded into the neural classifier (Fig. 2).
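Fig. 1 lists label encoding and one-hot encoding among the preprocessing steps. A minimal stdlib sketch for a categorical soil-texture column (the category names here are invented; scikit-learn's LabelEncoder and OneHotEncoder do the same job on real data):

```python
def label_encode(values):
    """Map each distinct category to an integer, in sorted order."""
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values], mapping

def one_hot(values):
    """Expand each category into a 0/1 indicator vector."""
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]

texture = ["clay", "sandy", "loam", "clay"]
codes, mapping = label_encode(texture)
print(codes)           # [0, 2, 1, 0]
print(one_hot(texture))
```

One-hot encoding avoids imposing an artificial ordering on categories, at the cost of one column per category.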
5.3 Data Visualization and Descriptive Statistics 5.3.1
Data Splitting
In statistics and machine learning, we typically split our data into two subsets, training data and testing data (and sometimes three: train, validate, and test), fit our model on the training data, and make predictions on the test data. When we do this, one of two things may happen: we overfit our model or we underfit our model. We do not want
either of these things to happen, because they affect the predictive power of our model: we might end up using a model that has lower accuracy or that does not generalize.
5.3.2
Train/Test Split
The training set contains a known output, and the model learns on this data so that it can generalize to other data later. We hold out the test dataset (or subset) in order to test our model's predictions on that subset.
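The split described above can be sketched with a minimal stdlib function that mirrors scikit-learn's train_test_split; the data and split ratio below are illustrative:

```python
import random

def train_test_split(X, y, test_ratio=0.25, seed=0):
    """Shuffle indices reproducibly, then cut off the last test_ratio share."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1 - test_ratio))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [X[i] for i in te],
            [y[i] for i in tr], [y[i] for i in te])

X = [[i] for i in range(8)]
y = [i % 2 for i in range(8)]
X_train, X_test, y_train, y_test = train_test_split(X, y)
print(len(X_train), len(X_test))  # 6 2
```

Shuffling before the cut matters: without it, a sorted dataset would put whole classes entirely in one split.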
5.4 Predicting Admissions This task uses a DNN (Deep Neural Network). Neural networks use randomness by design to ensure they effectively learn the function being approximated for the problem. Randomness is used because this class of machine learning algorithm performs better with it than without. The most common form of randomness used in neural networks is the random initialization of the network weights.
5.5 The Evaluation of Model Performance We assess the similarity between different subnets, and similar subnets are merged for load forecasting. After merging similar subnets, the regularity of the new subnet's demand can be improved, and it becomes practical to handle similar factors by adopting a suitable forecasting model for the subnet. The computation in load forecasting can be reduced substantially after subnet partitioning and subnet merging, improving the effectiveness of the forecasting model in a bulk power system (Fig. 3).
6 Algorithms 6.1 DNN A deep neural network is a neural network with multiple levels of complexity, that is, a neural network with more than two layers. Deep neural networks use sophisticated mathematical modeling to process data in complex ways.
Fig. 3 Load forecasting (pipeline: agricultural data → preprocessing → feature extraction → splitting data → transformation → classifying data → prediction → classified farmland and non-farmland)
A neural network is a technology built to simulate the activity of the human brain, in particular pattern recognition and the passing of input through various layers of simulated neural connections (Fig. 4). Fig. 4 Deep neural network
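The paper trains its DNN with Keras; as a dependency-free illustration of the two ideas above (random weight initialization, and "deep" meaning more than two layers), a stdlib forward-pass sketch with invented layer sizes and inputs:

```python
import random, math

def init_layer(n_in, n_out, rng):
    """Random weight initialization, the 'randomness by design' of Sect. 5.4."""
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]

def forward(x, layers):
    """Feed x through each dense layer with a sigmoid activation."""
    for W in layers:
        x = [1 / (1 + math.exp(-sum(w * v for w, v in zip(row, x)))) for row in W]
    return x

rng = random.Random(42)
# three weight matrices -> more than two layers, i.e. 'deep' per Sect. 6.1
layers = [init_layer(4, 8, rng), init_layer(8, 8, rng), init_layer(8, 1, rng)]
out = forward([0.1, 0.7, 0.3, 0.9], layers)
print(0.0 < out[0] < 1.0)  # sigmoid output is a probability-like score
```

Training (backpropagation, loss, optimizer) is what Keras adds on top of this forward computation.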
6.2 LDA Linear Discriminant Analysis (LDA) is a dimensionality reduction technique. As the name implies, dimensionality reduction methods shrink the number of dimensions (variables) in a dataset while retaining as much information as possible. For instance, imagine that we plotted the relationship between two variables, where each color represents a different class.
6.2.1
Pseudocode
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=1)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
6.3 SVM The main aim of the Support Vector Machine algorithm is to find a hyperplane in the N-dimensional space (N being the number of features) that clearly separates the data. SVM works comparatively well when there is a clear margin of separation between classes. SVM is effective in high-dimensional spaces, and remains capable in cases where the number of dimensions is greater than the number of samples. SVM is also relatively memory efficient (Fig. 5).
6.3.1
Pseudocode
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
svclassifier = SVC(kernel='linear')
svclassifier.fit(X_train, y_train)
y_pred = svclassifier.predict(X_test)
Fig. 5 Support Vector Machine
6.4 Random Forest Random forest is a supervised learning algorithm used for both classification and regression, although it is mainly used for classification problems. The random forest algorithm builds decision trees on data samples, obtains a prediction from each of them, and finally selects the best solution by means of voting. It is an ensemble technique that is better than a single decision tree because it reduces over-fitting by averaging the results. A sample dataset is given in Fig. 6. The algorithm overcomes the problem of overfitting by averaging or merging the results of different decision trees. Random forests work well for a larger variety of data items than a single decision tree, and have less variance than a single decision tree. Random forests are very flexible and achieve very high accuracy. Scaling of data is not required by the random forest algorithm; it maintains good accuracy even when given data without scaling, and even when a large amount of the data is missing (Fig. 7).
Fig. 6 Dataset description
Fig. 7 Random Forest
6.4.1
Pseudocode
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(max_depth=2, random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
7 Conclusion Faced with extreme climate change and an increasing global population, we are compelled to address food issues involving, for example, crops and agriculture. Our study proposes using an intelligent agriculture platform to monitor the environmental variables on a farm and to apply these variables in analyzing the cultivation strategies used by farmers. Our proposed scheme uses moving average and variance in data clean-up, which filters out data with excessive variation. The proposed framework predicts whether a piece of land is suitable for farming, which allows us to classify land as farmland or non-farmland and thereby decide where crops can be grown. The dataset was processed and trained; the output of the trained data was checked against the output set of the training data, and the complete partition was again checked with the test inputs and test outputs. The accuracy of the DNN has also been verified: it works well on the dataset and predicts the outcome accurately.
References 1. J. Ruan, Y. Wang, F.T.S. Chan, X. Hu, M. Zhao, F. Zhu, B. Shi, Y. Shi, F. Lin, A life cycle framework of green IoT-based agriculture and its finance, operation, and management issues. IEEE Commun. Mag. 57(3), 90–96 (2019) 2. Q. Li, Y. Zhang, T. Ji, X. Lin, Z. Cai, Volt/var control for power grids with connections of large-scale wind farms: a review. IEEE Access 6, 26675–26692 (2018) 3. X. Tan, L. Di, M. Deng, A. Chen, F. Huang, C. Peng, M. Gao, Y. Yao, Z. Sha, Cloud- and agent-based geospatial service chain: a case study of submerged crops analysis during flooding of the Yangtze River Basin. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 8(3), 1359–1370 (2015) 4. Y. Huang, Z.-X. Chen, T. Yu, X.-Z. Huang, X.-F. Gu, Agricultural remote sensing big data: management and applications. J. Integr. Agric. 17(9), 1915–1931 (2018) 5. J. Wan, S. Tang, Z. Shu, D. Li, S. Wang, M. Imran, A. Vasilakos, Software-defined industrial Internet of Things in the context of Industry 4.0. IEEE Sens. J. 16(20), 7373–7380 (2016) 6. B. Hulsegge, K.H. de Greef, A time-series approach for clustering farms based on slaughterhouse health aberration data. Prev. Vet. Med. 153, 64–70 (2018) 7. R. Ranjan, L. Wang, A.Y. Zomaya, J. Tao, P.P. Jayaraman, D. Georgakopoulos, Advances in methods and techniques for processing streaming big data in datacentre clouds. IEEE Trans. Emerg. Topics Comput. 4(2), 262–265 (2016) 8. M.P. Selvan, R. Selvaraj, Monitoring fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 9. J.K. Karthika, V.M. Anu, A. Veeramuthu, An efficient attribute based cryptographic algorithm for securing trustworthy storage and auditing for healthcare big data in cloud (2006) 10. R.I. Minu, G. Nagarajan, A. Pravin, BIP: a dimensionality reduction for image indexing. ICT Express 5(3), 187–191 (2019) 11. T.P. Jacob, Implementation of randomized test pattern generation strategy.
J. Theor. Appl. Inf. Technol. 73(1) (2015) 12. G. Kalaiarasi, K.K. Thyagharajan, Clustering of near duplicate images using bundled features. Cluster Comput. 22, 11997–12007 (2019). https://doi.org/10.1007/s10586-017-1539-3
13. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48 14. S. Tenzin, Low cost weather station for climate-smart agriculture (IEEE, 2017) 15. P. Sundaravadivel, E. Kougianos, S.P. Mohanty, M.K. Ganapathiraju, Everything you wanted to know about smart health care: evaluating the different technologies and components of the Internet of Things for better health. IEEE Consum. Electron. Mag. 7(1), 18–28 (2018) 16. J. Lin, R. Chen, A novel group decision making method under uncertain multiplicative linguistic environment for information system selection. IEEE Access 7, 19848–19855 (2019) 17. R. Sathya Bama Krishna, B. Bharathi, M.U. Ahamed, B. Ankayarkanni, Hybrid method for moving object exploration in video surveillance, in 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 2019, pp. 773–778 18. R. Peters, Nine billion and beyond: from farm to fork [agriculture big data]. Eng. Technol. 11(4), 74 (2016) 19. V.R. Sarma Dhulipala, P. Devadas, P.H.S. Tejo Murthy, Mobile phone sensing mechanism for stress relaxation using sensor networks: a survey. Wireless Pers. Commun. 86, 1013–1022 (2016) 20. S. Wolfert, L. Ge, C. Verdouw, M.-J. Bogaardt, Big data in smart farming: a review. Agric. Syst. 153, 69–80 (2017) 21. A. Kamilaris, A. Kartakoullis, F.X. Prenafeta-Boldú, A review on the practice of big data analysis in agriculture. Comput. Electron. Agric. 143, 23–27 (2017) 22. X. Bai, L. Liu, M. Cao, J. Panneerselvam, Q. Sun, H. Wang, Collaborative actuation of wireless sensor and actuator networks for the agriculture industry. IEEE Access 5, 13286–13296 (2017)
e-Commerce Site Pricing and Review Analysis Sourav Nandi and A. Mary Posonia
Abstract Nowadays, people prefer to buy products online as they feel it is easy, but on the other hand they may receive low-quality products. Online purchasers get confused about which e-commerce site to buy a product from. There are many strategies that can be used when buying a product online, such as examining offers and reviews. Many e-commerce sites present fake reviews and unrealistic offers designed to attract customers. In this paper, we suggest a method by which every customer gets an option to compare two sites dynamically, which helps them see through the customer reports of a particular product bought from a particular e-commerce Web site in a transparent way. Keywords API · Web scraping · Trusted reviews · Review records · Data mining
1 Introduction e-Commerce shopping is becoming more and more popular with individuals and organizations, owing to the ease, flexibility, and wide selection of products it offers. Online shopping feedback has also become an important component of future decision making for both sellers and buyers [1, 2]. Sadly, online shopping reviews can be posted without verification, which means that if a retailer wants to increase his or her sales through the site, he may manage to post fake reviews through different users and try to hide the faults of the particular product he provides [3]. In the review section, many people describe their experience with a product as a review, and these reviews can bring the customer to the point of deciding whether the product should be bought from this retailer or not [4]. This is where our paper comes in: it tries to give clarity to the customer, as it S. Nandi (B) · A. Mary Posonia Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] A. Mary Posonia e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_62
595
596
S. Nandi and A. Mary Posonia
recognizes both fake and true reviews and also considers the fake offers provided by the e-commerce site, giving the customer a percentage value showing the trust level and a clear view, so that they can select the best product from the best e-commerce site [5].
2 Literature Survey • Fake review detection Wenqian Liu and his team proposed a solution for the detection of fake reviews using a forest algorithm. They considered both reviews and comments and tried to classify the data [6, 7]. • Spam detector Lately, Web and email spam have been studied extensively. For instance, a survey exists on Web spam detection; email spam detection has also been studied, as have blog spam and network spam. For review spam, Fei et al. studied the behavior of fake reviews and identified possible spam patterns [8].
3 Proposed System The suggested paper comprises of all the requirements and methods that can show the customers the ecommerce site providing more bluff comparatively for the particular product [3]. We need to extract the data by separating item records from two e-commerce sites which move smoothly through the e-commerce site in the html hierarchy, distinguishing, and extracting the item links from the tags of the e-commerce pages and recognizing the item [5]. Class or div value to get qualities or records inside the tags [9]. After extracting the complete data in a dynamic way, we try to distinguish between real and fake reviews and after all the predictions are made for the reviews, then the average number for the review and the paper try to say the fakeness provide for the particular product [10]. • Requirements In my paper, there are two types of data that need to be extracted dynamically from the e-commerce Web sites; one is the reviews and other is the offer rate provided by the ecommerce site so it is very important that we consider [11] all the reviews dynamically that is showed to the customers and the fake offers that are posted and continuously changed according to their sales [6, 12]. In any case, late correlation shopping engines
acquire their information directly from online retailers through a specific item feed defined by the comparison-shopping engine operators. There are many APIs provided by different e-commerce sites, but from our analysis they only provide data other than reviews, and where reviews are provided they can be considered a filtered set of reviews [13, 14]. – All the reviews are extracted using the Selenium package in Python, walking the tree structure of the HTML tags and then collecting the links for the review list with each review's name and date. – For the product the customer wants, the offers are also collected, together with their respective original prices, for comparison across the respective e-commerce Web sites. • Crawling Moving through the HTML hierarchy of the e-commerce site with the Chrome Web driver, we extract the links by the "div" tag and the class name via the "class" attribute to gather the reviews level by level [8]. – Level-1: Open the Web site using the Web driver, reach the search box using its class name, and enter the product name. – Level-2: On the resulting page, read the content and obtain the link of the phone we are searching for, using the tag name "div" and its particular id. – Level-3: The new page contains all the mobile descriptions with their review details; collect these using the "li" tag and its class name, as shown in Fig. 1. • Analysis For the analysis part, we had to create a model that can determine whether a particular review is fake or real [15, 16]. There are many ways to find whether a particular review is real or fake [17].
Fig. 1 Access data by class name and Web driver
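The paper's crawl runs Selenium against the live site; since the actual selectors are not given, the tag-and-class extraction step can be illustrated with Python's stdlib html.parser on an inline page fragment (the class names "rev" and "ad" are made up):

```python
from html.parser import HTMLParser

class ReviewExtractor(HTMLParser):
    """Collect the text of <li> elements whose class matches the review class."""
    def __init__(self, review_class):
        super().__init__()
        self.review_class = review_class
        self.in_review = False
        self.reviews = []
    def handle_starttag(self, tag, attrs):
        if tag == "li" and dict(attrs).get("class") == self.review_class:
            self.in_review = True
            self.reviews.append("")
    def handle_endtag(self, tag):
        if tag == "li" and self.in_review:
            self.in_review = False
    def handle_data(self, data):
        if self.in_review:
            self.reviews[-1] += data.strip()

page = ('<div id="reviews"><li class="rev">Great phone</li>'
        '<li class="ad">Buy now</li><li class="rev">Poor battery</li></div>')
parser = ReviewExtractor("rev")
parser.feed(page)
print(parser.reviews)
```

Selenium's find_elements calls apply the same tag/class matching against the rendered DOM instead of a static string.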
If there are many reviews for a particular product by the same reviewer, the dates should also be considered. – In a survey it was found that fake reviews or content contain a larger number of verbs. – We can also consider a review to be real if it contains a larger number of sentiment words. – The above three parameters can be used for distinguishing reviews, but they are not reliable enough on their own, so we built a model using the naïve Bayes algorithm. We have taken the "Hotel reviews" dataset from Kaggle; it contains a set of both fake and real reviews. • Scientific Approach The methodology that we use to distinguish between true and fake reviews is Bayes' theorem. Formula: P(C|B) = P(C)P(B|C)/P(B)
– P(C|B) is a conditional probability called the posterior probability
– P(B|C) is a conditional probability, the likelihood of B given C
– P(C) is the independent (prior) probability of C
– P(B) is the independent probability of B
• Process There are three stages in building the model. – Stage 1: Each review is preprocessed by removing stop words, stemming, and removing unnecessary symbols [17]. – Stage 2: To apply Bayes' theorem, we compute the prior probabilities from the lists of true and fake reviews. – Stage 3: For each review, in both the true and fake review sets, we collect the words that are not among the frequent words but are present among the sentiment words [17]. prior_positive_probability = Total_positive_word/(Total_positive_word + Total_negative_word) prior_negative_probability = Total_negative_word/(Total_positive_word + Total_negative_word) where: Total_positive_word = total words obtained from the true review set, and Total_negative_word = total words obtained from the fake review set. – Stage 3.1: We also collect the tags for both true and fake reviews and compute their probabilities, which we store in Pos_tag_prob and Neg_tag_prob. The prior probabilities are then: prior_positive = Total_positive_word/(Total_negative_word + Total_positive_word)
prior_negative = Total_negative_word/(Total_positive_word + Total_negative_word) – Stage 4: We also consider the tag for each review in both the true and fake review sets, excluding tags made of special symbols such as "$", ")", "(", etc. [18]. There are three stages for testing each review: – Stage 1: Each review is preprocessed by removing stop words, stemming, and removing unnecessary symbols [18]. – Stage 2.1: For each review, to decide whether it is true or fake, we need to find the posterior probability; to do so, we need the independent probabilities of the particular review under both the true and fake classes [19]. – Stage 2.2: Compare every word in the review against the collections Negative_Word_Dict and Positive_Word_Dict, and generate Negative_probability and Positive_probability: posterior_positive = prior_positive * Positive_probability posterior_negative = prior_negative * Negative_probability – Stage 2.3: For a more accurate score: posterior_positive = posterior_positive + Pos_tag_prob posterior_negative = posterior_negative + Neg_tag_prob – Stage 3: Whichever of posterior_positive and posterior_negative has the greater value determines whether the review is true or fake. – Stage 4: For a more accurate percentage, add the difference between the offer rate and the original rate to the total probability.
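The training and testing stages above can be condensed into a minimal sketch; Laplace smoothing is added (the paper does not specify how zero word counts are handled), the tag-probability term is omitted, and the toy hotel reviews are illustrative:

```python
from collections import Counter

def train(true_reviews, fake_reviews):
    """Count word frequencies per class and derive the priors (Stage 2)."""
    pos = Counter(w for r in true_reviews for w in r.lower().split())
    neg = Counter(w for r in fake_reviews for w in r.lower().split())
    total = sum(pos.values()) + sum(neg.values())
    return pos, neg, sum(pos.values()) / total, sum(neg.values()) / total

def classify(review, pos, neg, prior_pos, prior_neg):
    """Posterior = prior * product of per-word likelihoods, Laplace-smoothed."""
    p_true, p_fake = prior_pos, prior_neg
    vocab = len(set(pos) | set(neg))
    for w in review.lower().split():
        p_true *= (pos[w] + 1) / (sum(pos.values()) + vocab)
        p_fake *= (neg[w] + 1) / (sum(neg.values()) + vocab)
    return "true" if p_true >= p_fake else "fake"

pos, neg, pp, pn = train(["great clean room", "friendly helpful staff"],
                         ["best best best hotel ever amazing"])
print(classify("clean room friendly staff", pos, neg, pp, pn))  # true
```

Comparing the two posteriors (Stage 3) is equivalent to picking the class with the larger product of prior and likelihood; normalizing by P(B) is unnecessary since it is the same for both classes.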
4 Result and Discussion Nowadays there are plenty of e-commerce Web sites that consumers visit to buy many kinds of products, but at times they are confused by the product reviews and the offers provided by different e-commerce Web sites. Here our paper can be made use of, as it provides more clarity on defects and specifications, and also on the percentage of trust that should be given to a particular e-commerce site [20]. This paper extracts the data dynamically from two e-commerce Web sites and compares them for the particular product. In the interface, it basically gives the customer a view of the more trustworthy e-commerce Web site for that product [3, 21]. As we know, there are different retailers for different products, and each retailer wants to increase the sales of their products, which may lead them to take unauthorized measures such as providing fake reviews. Using this interface, the consumer can focus on only the reviews that are true and see the percentage trust for the two e-commerce Web sites for the particular product, as shown in Fig. 2. This interface also provides an additional feature, a question-and-answer section, where certain consumers may have some questions related to a particular
Fig. 2 Comparison between two e-commerce Web site
product, which can be discussed and resolved by interpreting the fault from the correct perspective, as shown in Fig. 3. Consumers can also delete or edit their own questions whenever they need to be changed [22].
Fig. 3 Questions related to products
5 Conclusion This paper presents a perspective by which consumers can recognize the fakeness of offers and identify which product is trustworthy. Consumers are provided with true reviews and comments, which helps them understand which e-commerce site is trustworthy for purchasing that particular product.
References 1. G. Dhanisha, J.M. Seles, E. Brumancia, Android interface based GCM home security system using object motion detection, in 2015 International Conference on Communications and Signal Processing (ICCSP) (IEEE, 2015), pp. 1928–1931 2. K.M. Prasad, R.S. Kavya, S.B. Devi, Virtual fitting space for dress trials, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1 (IOP Publishing, 2019), p. 012013 3. A. Horch, H. Kett, A. Weisbecker, Mining e-commerce data from e-shop websites. Website, 2015. [Online]. Available: https://ieeexplore.ieee.org/document/7345488 4. M.P. Selvan, R. Selvaraj, Monitoring fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES) (IEEE, 2017), pp. 1–5 5. M.P. Selvan, N. Navadurga, N. Lakshmi Prasanna, An efficient model for predicting student dropout using data mining and machine learning techniques. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8(9S2) (2019). ISSN: 2278-3075 6. W. Liu, J. He, S. Han, F. Cai, Z. Yang, N. Zhu, A method for the detection of fake reviews based on temporal features of reviews and comments. Website, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/7345488 7. M. Maheswari, S. Geetha, S. Selva Kumar, Adaptable and proficient Hellinger coefficient based collaborative filtering for recommendation system. Cluster Comput. 22, S12325–S12338 (2019). https://doi.org/10.1007/s10586-017-1616-7 8. K. Srilatha, V. Ulagamuthalvi, Support vector machine and particle swarm optimization based classification of ovarian tumour. Biosci. Biotechnol. Res. Commun. 12(3), 714–719 (2019) 9. S.P. Deore, A. Pravin, On-line Devanagari handwritten character recognition using moments features, in International Conference on Recent Trends in Image Processing and Pattern Recognition (Springer, Singapore, 2018), pp. 37–48 10. G.K. Krishnan, R.G.
Franklin, Privacy and auditing bigdata stored in cloud with verify update. Res. J. Pharm. Biol. Chem. Sci. 7(3), 742–750 (2016) 11. G. Nagarajan, R.I. Minu, V. Vedanarayanan, S.S. Jebaseelan, K. Vasanth, CIMTEL: mining algorithm for big data in telecommunication. Int. J. Eng. Technol. (IJET) 7(5), 1709–1715 (2015) 12. A. Velmurugan, T. Ravi, Optimal symptom diagnosis for efficient disease identification using Somars approach. J. Comput. Theor. Nanosci. 14(2), 1157–1162 (2017) 13. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theor. Appl. Inf. Technol. 73(1) (2015) 14. R.I. Minu, G. Nagarajan, A. Pravin, BIP: a dimensionality reduction for image indexing. ICT Express 5(3), 187–191 (2019) 15. S. Vaithyasubramanian, A. Christy, D. Saravanan, Two factor authentications for secured login in support of effective information preservation and network security. ARPN J. Eng. Appl. Sci. 10(5) (2015) 16. B. Ankayarkanni, A.E.S. Leni, GABC based neuro-fuzzy classifier with multi kernel segmentation for satellite image classification (2016)
17. H. Zhang, D. Li, Naïve Bayes text classifier. Website, 2007. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/4403192 18. B. Nagelvoort, R. van Welie, P. van den Brink, A. Weening, J. Abraham, Europe B2C e-commerce light report 2014. Website, 2014. [Online]. Available: https://www.ecommerce-europe.eu/website/facts-figures/light-version/download 19. PostNord, E-commerce in Europe 2014. Website, 2014. [Online]. Available: https://www.postnord.com/en/media/publications/e-commerce-archive 20. C. Consulting, Consumer market study on the functioning of e-commerce and internet marketing and selling techniques in the retail of goods. Website, 2011. [Online]. Available: https://www.civic-consulting.de/reports/studyecommercegoodsen.pdf 21. M.D.V. Ajay, N. Adithya, A. Mary Posonia, Technical era: an online web application, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1, p. 012002 (IOP Publishing, 2019) 22. A. Viji Amutha Mary, A random projection approach to strengthen the privacy level of medical images. J. Comput. Theor. Nanosci. 16(8), 3219–3221 (2019)
Heart Disease Prediction Using Machine Learning M. Sai Shekhar, Y. Mani Chand, and L. Mary Gladence
Abstract In the present era, death due to heart disease has become a major issue: roughly one person dies every minute due to heart disease. This holds when both genders are considered, and the share varies by region; it is also considered across every age group. This does not mean that people in other age groups will not be affected by heart disease. The problem may begin in the early age groups as well, and predicting the cause and the disease is a great challenge these days. In this paper, we examine different algorithms and tools used for the prediction of heart disease. The datasets are processed in the Java programming language using machine learning algorithms such as naïve Bayes and K-NN, which are used to predict the disease easily and to show the probability that the patient's result is either negative or positive. Keywords Prediction · Classification · Naïve Bayes · K-NN · Coronary illness · Probability · Positive
1 Introduction The main theme of this paper is the different data mining practices that are useful in heart disease prediction, using the various data mining tools that are available [1, 2]. If the heart does not work properly, this will affect other parts of the human body, for example, the brain, kidney, M. Sai Shekhar (B) · Y. Mani Chand · L. Mary Gladence Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] Y. Mani Chand e-mail: [email protected] L. Mary Gladence e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_63
603
604
M. Sai Shekhar et al.
and so on. Coronary illness is a type of disease that affects the working of the heart. In the modern era, coronary illness is the primary cause of death. The WHO (World Health Organization) has estimated that 13 million people die each year because of heart diseases. Some heart conditions are cardiovascular disease, heart failure, coronary heart disease, and stroke. Stroke is a type of heart-related condition that occurs because of the thickening, blocking, or narrowing of the vessels that supply the brain, or it can likewise be caused by hypertension. The major challenge that the healthcare industry faces nowadays is quality of service. Diagnosing the disease accurately and giving effective treatment to patients defines the quality of service. Poor diagnosis causes bad outcomes that go unrecognized. Records of clinical history are enormous, but they come from many disparate institutions. The interpretations made by doctors are an essential component of this data. Data in the real world may be noisy, incomplete, and inconsistent, so data preprocessing is required in order to fill the missing values in the database. Even though cardiovascular diseases have been found to be the major cause of death in the world in recent years, they have also been reported as the most preventable and manageable illnesses. The complete and accurate management of a disease rests on the timely diagnosis of that disease. A proper and systematic tool for identifying high-risk patients and mining data for timely assessment of heart disease thus seems a real need. Different bodies can show different symptoms of heart disease, which may vary accordingly; they often include back pain, jaw pain, neck pain, stomach trouble, shortness of breath, chest pain, and pain in the arms and shoulders.
There are many different heart ailments, including heart failure, stroke, and coronary artery disease. Heart specialists build up large records of patient data and store them; this also offers a great opportunity for mining valuable information from such datasets. There is tremendous ongoing research to determine heart disease risk factors in different patients; various scientists are using different statistical approaches and various data mining approaches. Statistical analysis has identified risk factors for heart disease including smoking, age, blood pressure, diabetes, total cholesterol and hypertension, a family history of heart disease, obesity, and lack of exercise [3]. For the prevention and care of patients who are likely to develop heart disease, it is important to have awareness of heart ailments.
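The paper implements its predictors in Java; as an illustration of the K-NN side in the Python used elsewhere in this volume, with invented feature rows ([age, resting blood pressure, cholesterol]) standing in for the real dataset:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Vote among the k nearest training points by Euclidean distance."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    return Counter(labels[i] for i in nearest[:k]).most_common(1)[0][0]

# toy rows: [age, resting blood pressure, cholesterol]; 1 = disease, 0 = healthy
train = [[63, 145, 233], [41, 130, 204], [67, 160, 286], [37, 120, 250]]
labels = [1, 0, 1, 0]
print(knn_predict(train, labels, [60, 150, 240], k=3))
```

In practice the features would be scaled first, since raw cholesterol values otherwise dominate the distance; the choice of k trades noise sensitivity (small k) against over-smoothing (large k).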
2 Related Work The methodology performed comparably to considering all the information at once, while substantially reducing the number of biomarkers needed to
Heart Disease Prediction Using Machine Learning
achieve a confident diagnosis for each patient [4, 5]. In this way, it may contribute to a personalized and effective detection of AD and may prove helpful in clinical settings [6]. Another study aimed to supply a reasonable basis for preventing and forecasting the prevalence of hand, foot, and mouth disease by investigating the impact of various meteorological conditions on occurrences of the disease in Wuwei City, northwestern China [7, 8]. There, information about the disease and the weather was collected from 2008 to 2010, and correlation analysis using multiple linear regression and exponential curve fitting was carried out [9, 10]. Spectral data have been widely used to estimate the disease severity levels of different plants [11]; however, such data had not been assessed to gauge the infection stages of the plant [12, 13]. One study aimed at establishing a spectral disease index that can recognize the stages of wheat leaf rust disease at different severity levels [14, 15]. In order to analyze heart valve disease precisely and efficiently, a quantized diagnosis technique was proposed to examine four clinical heart valve sounds, namely the cardiac sound characteristic waveform [16, 17]. Another article emphasizes the clinical and prognostic significance of nonlinear measures of heart rate variability applied to a group of patients with coronary disease and an age-matched healthy baseline group [18]. Three specific methods were applied: the Hurst exponent, detrended fluctuation analysis (DFA), and approximate entropy [19]. The Hurst exponent of the R-R series was obtained by the rescaled range analysis method [20, 21], and DFA was utilized to evaluate fractal long-range correlation properties of heart rate variability [22, 23].
3 Proposed System This proposed system works on a dataset that is organized according to whether patients have heart disease or not, based on its features. The system uses this data to make a model that tries to predict (after data collection and data exploration) whether a patient has the disease. We use the logistic regression (classification) algorithm and also execute the naive Bayes algorithm to obtain an accuracy result; finally, the results are investigated with the help of model comparison and a confusion matrix. From the data we have, records are organized into structured form based on the features of the patient's heart, and from the available information we build a model that predicts the patient's illness using the logistic regression algorithm. To begin with, we import and read the datasets; the data contain variables such as age, sex, cp (chest pain), slope, and target. The data are explored so that their quality is checked. A temporary variable is created, and a model for logistic regression is built. Here, we utilize the sigmoid function, which helps in the graphical representation of the classified data. By using logistic regression and naive Bayes together, the accuracy rate increases.
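As a minimal, self-contained illustration of the classification step (not the paper's actual implementation, and using made-up records rather than a real heart dataset), logistic regression with the sigmoid function can be trained by plain gradient descent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Plain stochastic gradient ascent for logistic regression; w[0] is the bias."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p                      # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    return 1 if sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) >= 0.5 else 0

# Toy records: [scaled age, scaled chest-pain type] -> target (1 = disease)
X = [[0.2, 0.0], [0.3, 0.1], [0.8, 0.9], [0.9, 0.8], [0.25, 0.2], [0.85, 1.0]]
y = [0, 0, 1, 1, 0, 1]
w = train_logistic(X, y)
preds = [predict(w, xi) for xi in X]
```

In a full pipeline, the same split data would also be fed to a naive Bayes classifier and the two accuracies compared via a confusion matrix.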
M. Sai Shekhar et al.
4 User Module In this module, the patient acts as the user.
4.1 Login This is the first activity that opens in the application. The user has to provide the correct contact number and password, which the user entered while registering, to log in to the application. If the data furnished by the user match the data in the database table, the user is successfully logged into the application; otherwise, a login-failed message is shown and the user must re-enter the correct data. A link to the registration activity is also provided for enrolling new users.
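A sketch of the credential check described above; the in-memory `users` dictionary stands in for the database table, and salted password hashing is an illustrative good practice rather than a detail given by the paper:

```python
import hashlib
import secrets

# Hypothetical stand-in for the users table; a real deployment queries a database.
users = {}

def register(contact, password):
    """Store a salted hash of the password keyed by contact number."""
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    users[contact] = (salt, digest)

def login(contact, password):
    """Return True only when the supplied credentials match the stored record."""
    record = users.get(contact)
    if record is None:
        return False
    salt, digest = record
    return hashlib.sha256((salt + password).encode()).hexdigest() == digest

register("9876543210", "s3cret")
```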
4.2 Registration A new user who wants to access the site needs to register before logging in. Clicking the register button on the login activity opens the registration activity. A new user registers by entering full name, password, and contact number, and needs to enter the password again in a confirm-password textbox for verification. When the user has entered the data in every textbox and clicks the register button, the information is stored in the database, and the user is directed back to the login activity; the registered user then has to log in so as to access the application. Validations are applied to every textbox for the proper working of the application: no field (name, contact, password, or confirm password) may be empty while registering. If any of these textboxes is empty, the application shows a message indicating the data required in each textbox. Additionally, the data in the password and confirm-password fields must match for successful registration, and the contact number must be a real one, i.e., of 10 digits. If any such validation is violated, registration is unsuccessful and the user needs to register again; the application displays a message when one of the fields is empty. If all the data are correct, the user is directed to the login activity to log in to the application.
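The validations listed above can be collected into a single routine; the function name and error messages below are illustrative, not taken from the application itself:

```python
import re

def validate_registration(name, contact, password, confirm):
    """Apply the registration validations: no empty field, matching passwords,
    and a 10-digit contact number. Returns a list of error messages."""
    errors = []
    for label, value in (("name", name), ("contact", contact),
                         ("password", password), ("confirm password", confirm)):
        if not value.strip():
            errors.append(f"{label} must not be empty")
    if password != confirm:
        errors.append("password and confirm password must match")
    if not re.fullmatch(r"\d{10}", contact):
        errors.append("contact number must be a valid 10-digit number")
    return errors

ok = validate_registration("Asha", "9876543210", "s3cret", "s3cret")
bad = validate_registration("", "12345", "a", "b")
```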
5 Admin In this module, the admin can add and view new doctor details, disease details, and drug details. The admin can also view feedback provided by various users.
6 Disease Analysis In this module, we can analyze the disease, give the patient feedback on whether it is life threatening or not, and also calculate the probability of the disease occurring.
7 Disease Prediction The patient reports the symptoms caused by his illness. The system asks certain questions regarding the sickness, predicts the disease based on the symptoms shown by the patient, and also suggests doctors depending on the condition (Figs. 1 and 2).
Fig. 1 Heart disease prediction system
Fig. 2 Gathering information
References
1. N. Srinivasan, C. Lakshmi, Stock price prediction using rule based genetic algorithm approach. Res. J. Pharm. Technol. 10(1), 87–90 (2017)
2. A. Praveena, M.K. Eriki, D.T. Enjam, Implementation of smart attendance monitoring using OpenCV and Python. J. Comput. Theor. Nanosci. 16(8), 3290–3295 (2019)
3. S. Divya, R. Vignesh, R. Revathy, A distinctive model to classify tumor using random forest classifier, in 2019 Third International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 2019, pp. 44–47
4. S. Khemmarat, L. Gao, Supporting drug prescription via predictive and personalized query system, in PervasiveHealth. IEEE, 2015
5. C. Knox et al., DrugBank 3.0: a comprehensive resource for omics research on drugs. Nucleic Acids Res. 39(suppl 1), D1035–D1041 (2011)
6. P. Kajendran, A. Pravin, Enhancement of security related to ATM installations to detect misbehavior activity of unknown person using video analytics. ARPN J. Eng. Appl. Sci. 12(21) (2017)
7. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theor. Appl. Inf. Technol. 73(1) (2015)
8. R.I. Minu, G. Nagarajan, A. Pravin, BIP: a dimensionality reduction for image indexing. ICT Express 5(3), 187–191 (2019)
9. M. Kanehisa, S. Goto, KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 28(1), 27–30 (2000)
10. T. Fawcett, An introduction to ROC analysis. Pattern Recogn. Lett. 27(8), 861–874 (2006)
11. M.P. Selvan, R. Selvaraj, Monitoring fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES). IEEE (2017), pp. 1–5
12. K.M. Prasad, R.S. Kavya, S.B. Devi, Virtual fitting space for dress trials, in IOP Conference Series: Materials Science and Engineering, vol. 590, no. 1. IOP Publishing (2019), p. 012013
13. V. Ramya, R.G. Franklin, Alert system for driver's drowsiness using image processing, in 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN). IEEE (2019), pp. 1–5
14. A. Velmurugan, T. Ravi, Optimal symptom diagnosis for efficient disease identification using somars approach. J. Comput. Theor. Nanosci. 14(2), 1157–1162 (2017)
15. V.V. Kaveri, V. Maheswari, Cluster based anonymization for privacy preservation in social network data community. J. Theor. Appl. Inf. Technol. 73(2), 269–274 (2015)
16. K. Sangkuhl et al., PharmGKB: understanding the effects of individual genetic variants. Drug Metab. Rev. 40(4), 539–551 (2008)
17. A. Langer et al., A text based drug query system for mobile phones. Int. J. Mob. Commun. 12(4), 411–429 (2014)
18. C. Doulaverakis et al., Panacea, a semantic-enabled drug recommendations discovery framework. J. Biomed. Semant. 5, 13 (2014)
19. G. Dhanisha, J.M. Seles, E. Brumancia, Android interface based GCM home security system using object motion detection, in 2015 International Conference on Communications and Signal Processing (ICCSP). IEEE (2015), pp. 1928–1931
20. G. Nagarajan, K.K. Thyagharajan, A machine learning technique for semantic search engine. Procedia Eng. 38, 2164–2171 (2012)
21. R. Subhashini, J.K. Jeevitha, B.K. Samhitha, Application of data mining techniques to examine quality of water. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 8(5S) (2019). ISSN: 2278-3075
22. T.S. Kala, A. Christy, A pattern matching algorithm for reducing false positive in signature based intrusion detection system. Int. J. Eng. Technol. 8(2), 580–586 (2016)
23. B. Ankayarkanni, A.E.S. Leni, GABC based neuro-fuzzy classifier with multi kernel segmentation for satellite image classification (2016)
Secured Image Retrieval from Cloud Repository Using Image Encryption Scheme Mercy Paul Selvan, Viji Amutha Mary, Putta Abhiram, and Reddem Srinivasa Reddy
Abstract We propose a secure framework for privacy-preserving outsourced storage and retrieval in large shared image repositories. Cloud computing is a form of Internet-based computing that provides computers and other devices with shared processing resources and data on demand; it is the delivery of hosted services over the web, and cloud computing services can be public, private, or hybrid. The growing cloud business offers a service paradigm of storage/computation outsourcing that helps reduce users' burden of IT infrastructure maintenance and lowers costs for both enterprises and individual users. We propose a secure framework structure based on content-based image retrieval: this work enables encrypted storage together with content-based image retrieval queries while ensuring protection from honest-but-curious cloud administrators. The proposed scheme, IESCBIR, is secure and effective compared with existing methods and prepares the way for new practical application scenarios. Keywords JAVA · Cloud computing · HTML · J2EE · JSP
1 Introduction Recent incidents with cloud picture-storage services such as iCloud and leaked celebrity photos underline what is at stake: visual data nowadays accounts for a large share of total Internet traffic, in circumstances of both corporate and individual use. The number of ordinary images, sketches, and portraits M. P. Selvan (B) · V. A. Mary · P. Abhiram · R. S. Reddy Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected] P. Abhiram e-mail: [email protected] R. S. Reddy e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_64
M. P. Selvan et al.
produced and exchanged, especially via mobile phones, is growing at a steadily increasing pace [1]. A guiding factor for data outsourcing services, such as those using cloud storage and computing solutions, has been the limited resources available for such wide-ranging data volumes on resource-constrained mobile phones [2]. These services have been reported to be among the largest network services in existence. Nevertheless, outsourcing the data raises troubles as far as the security and control of data are concerned: the effect of outsourcing data is that it usually implies relinquishing control (and sometimes outright possession) over it [3]. Recent episodes have given clear confirmation that privacy should not be expected to be preserved by cloud providers [4]. Provider staff have complete access to data on the supporting cloud machines, including malicious or simply careless system administrators [5]. Finally, outside attackers can exploit software vulnerabilities to gain unauthorized access to servers. Separately, other measurements motivate assessment of the presence of RF-EMFs on the body and allow a comparison with the guideline for universal radiation [6]. Given the asymmetric calculation, that system was controlled by two inertial sensors to calculate the subject's motion parameters [7]. A tri-axial accelerometer and a tri-axial gyroscope measure the accelerations and angular rates of the measurement nodes [8]. That work shows validation of the new system for two orthogonal 2.4 GHz ISM band polarizations in an anechoic and reverberation chamber, as well as in an indoor office space [9]. The main focus of those experiments is to demonstrate the feasibility and benefits of a distributed PDE with integrated inertial sensing to acquire high-quality data that accounts for the presence and movement of the human body during EMF exposure assessments [10].
We formally describe IESCBIR, a new image encryption scheme with a content-based image retrieval property, and propose a practical implementation that achieves its functionality [11]. We show how to structure an outsourced image storage, search, and retrieval system by utilizing the proposed method to keep the heaviest computations away from the client, thereby circumventing performance traps that exist in current state-of-the-art proposals [12, 13]. We formally demonstrate the security of our scheme, IESCBIR, and we experimentally demonstrate it when alternatives are contrasted [14]. The system provides increased flexibility, reliability, and lower bandwidth usage, allowing client frameworks to be ever lighter and more lightweight, effectively demonstrating that the retrieval precision of the proposed scheme is comparable to the state of the art [15]. The proposed IESCBIR can be applied with practical benefits [16–19]. We also give an overall formal security evaluation of our proposal and a performance analysis of our system's search operation in comparison with specific past works. In addition, we offer practical security checks of IESCBIR, its entropy levels at each step of the encryption, and a complete description of the whole scheme [20].
2 Related Work Nowadays, visual data accounts for perhaps the largest share of total Internet traffic, in situations of both corporate and individual use [21]. The number of traditional photographs, structures, and pictures being taken and shared keeps growing at an ever-increasing pace, particularly through mobile phones. The limited resources available for such extensive data on resource-constrained mobile phones have been a driving component for data outsourcing services, such as those using cloud storage and computing arrangements. Such services have been described as among the largest web services being created. Regardless of the way in which data outsourcing, especially to cloud computing frameworks, appears to be a natural answer to support tremendous picture storage and retrieval systems [22], it also brings new problems when it comes to the control of data security: outsourcing data generally implies relinquishing control over it [23]. In addition, malicious or simply careless system administrators work for the providers. Past proposals for supporting outsourced storage, search, and retrieval of images in the encrypted domain can be roughly divided into two classes [24]. The searchable symmetric encryption (SSE) viewpoint and its public-key-based counterparts were widely used for the collection of queries studied in the past, especially for text data. In the image domain, although they are not known as SSE schemes, various systems use equivalent image search/retrieval methodologies; we refer to these as SSE-based approaches for simplicity [25]. Lu et al. used collection-wide statistics, so each time the storage repositories are updated, this drives the rebuilding and re-encryption of the index, which presumes knowing what clients download often and decrypting the full content store. Searches are encrypted with an order-preserving embedding scheme that relies on a transformation of the feature space [26]. Specific updates to the repository therefore again involve index re-building and data re-encryption.
3 Existing System Recently, data storage requirements have increased, driven by multimedia access and mobile devices. Existing propositions are at present largely impractical, specifically those requiring fully homomorphic encryption. Since mobile clients usually have limited computational and storage resources, they tend to depend on cloud services for storing and processing massive data such as images. At present, clients need to delegate their private image repository storage to a cloud provider, while coping with the limitations of their device's storage capacity, computational power, and battery life.
4 Problem Statement In general, encryption methods in image processing lead to a change in the size of an encrypted image; consequently, retrieval cannot be accomplished properly. Users' privacy is affected because of the carelessness of the cloud service provider, and images are leaked because of the low security level in the cloud.
5 Proposed System The proposed system uses the IESCBIR content-based image retrieval method. With the help of content-based image retrieval queries, searching and storage are performed in encrypted form. Pictures are outsourced to repositories that reside in the cloud; many users can access them, and each of them can add their own pictures as well as search using a query image. Every repository is created by a single user. Upon the creation of a repository, a new repository key is generated by that user and then shared with other trusted users, permitting them to search in the repository and add/update pictures. A bag-of-visual-words representation is used to create a vocabulary tree for each repository and an inverted-list index. We chose this indexing technique because it shows good properties of query performance and scalability (Fig. 1).
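A toy sketch of the inverted-index side of this design (the visual-word ids below are made up; in the real system they would come from the bag-of-visual-words vocabulary tree, and both the index and the images would be stored encrypted):

```python
from collections import defaultdict

def build_index(images):
    """Map each visual-word id to the set of image ids containing it."""
    index = defaultdict(set)
    for img_id, words in images.items():
        for w in words:
            index[w].add(img_id)
    return index

def search(index, query_words):
    """Rank images by how many of the query's visual words they share."""
    scores = defaultdict(int)
    for w in query_words:
        for img_id in index.get(w, ()):
            scores[img_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical repositories: image id -> set of visual-word ids
images = {"img1": {1, 2, 3}, "img2": {3, 4}, "img3": {5, 6}}
index = build_index(images)
ranking = search(index, {2, 3})
```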
Fig. 1 Overview of proposed system
6 Modules Description 6.1 Create Repository and Upload Images A repository is storage space for a collection of data. Every repository is created by a single user, who is the owner of that repository. The owner then generates a key for the repository by utilizing the RSA algorithm and shares it with the users who all have an account to access it. The repository can then be accessed by different users with the permission of the owner. The owner then uploads a huge image dataset as a zip file into the cloud.
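A minimal sketch of repository creation and key sharing. The paper names RSA for key generation; to keep the sketch self-contained, a random symmetric key from the standard library stands in for an RSA keypair, and all names below are hypothetical:

```python
import secrets

def create_repository(owner):
    """Create a repository record with a fresh random key owned by `owner`."""
    return {"owner": owner, "key": secrets.token_bytes(32), "members": {owner}}

def share_key(repo, requester, new_member):
    """Only the owner may share the repository key with a trusted user."""
    if requester != repo["owner"]:
        raise PermissionError("only the owner can grant access")
    repo["members"].add(new_member)
    return repo["key"]   # delivered to new_member over a secure channel

repo = create_repository("alice")
key_for_bob = share_key(repo, "alice", "bob")
```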
6.2 Codebook and Index Generation The cloud administrator has the responsibility to create records based on the pictures, which are helpful for users' image searches. Thus, he extracts the zip file and applies the CBIR encryption technique: it encrypts pictures based on color values and texture features and also shuffles the pixels column-wise as well as row-wise. He then creates the codebook, index, and image key for those encrypted pictures. These records are used to improve the searching efficiency of the cloud and also to manage retrieval time properly.
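One ingredient of the CBIR encryption described above, the keyed row-wise and column-wise pixel permutation, can be sketched as follows (the color-value and texture-feature encryption steps are omitted, and the key handling is illustrative):

```python
import random

def shuffle_pixels(img, key):
    """Keyed row- and column-wise permutation of a 2D pixel grid."""
    h, w = len(img), len(img[0])
    rng = random.Random(key)          # the permutation is derived from the key
    rows = list(range(h)); rng.shuffle(rows)
    cols = list(range(w)); rng.shuffle(cols)
    return [[img[r][c] for c in cols] for r in rows], (rows, cols)

def unshuffle_pixels(enc, perm):
    """Invert the permutation: put each shuffled pixel back in place."""
    rows, cols = perm
    h, w = len(enc), len(enc[0])
    out = [[0] * w for _ in range(h)]
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[r][c] = enc[i][j]
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
enc, perm = shuffle_pixels(img, key="repository-key")
dec = unshuffle_pixels(enc, perm)
```

Note that a pure permutation preserves the image's global color statistics, which is what lets content-based matching still work over the protected data.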
6.3 Add Image/Query to Cloud Users can now access the cloud to add their own pictures into the repository. Thus, if the cloud has 'n' users, the repository has the opportunity to grow quickly, and it holds a collection of pictures from various domains. All the pictures are stored in encrypted form for security. A user then issues a query to the cloud; the query takes the form of an image encrypted using the CBIR encryption technique.
6.4 Content-Based Searching and Retrieval After receiving the encrypted image query, the cloud extracts the features of the original image and then applies content-based searching over the codebook and image index using those extracted features. Naturally, the search results are encrypted images; this resulting answer is sent to the corresponding user. The user can then apply the CBIR decryption technique
to decrypt the retrieved pictures. The returned answer is thus very accurate thanks to the huge dataset.
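As a rough stand-in for the feature matching performed by the cloud (real CBIR schemes compare richer color and texture descriptors, and everything here runs on plaintext toy grids for clarity), retrieval by histogram distance looks like this:

```python
def color_histogram(img, bins=4):
    """Coarse normalized intensity histogram (pixel values assumed in 0-255)."""
    hist = [0] * bins
    for row in img:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Two candidate images and a query; the best match minimizes histogram distance.
dark = [[10, 20], [30, 40]]
bright = [[200, 210], [220, 230]]
query = [[15, 25], [35, 45]]
best = min([dark, bright],
           key=lambda im: l1_distance(color_histogram(im), color_histogram(query)))
```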
7 Conclusion The proposed work is used to retrieve from broad shared image repositories and to secure their storage. A new trusted framework has been proposed for the privacy-preserving outsourced storage, search, and retrieval of large-scale, efficiently updated image archives, where a central point is the reduction of overheads for clients. At the heart of our architecture is a novel cryptographic scheme called IESCBIR, specifically designed for images. Central to its structure is the observation that color information can be separated from texture information in images, allowing the use of different encryption strategies with different properties for each, and enabling privacy-preserving content-based image retrieval. We formally tested the security of our proposal, further implemented experimental assessment models, and revealed that our strategy brings about an interesting trade-off in CBIR between precision and recall, with greater flexibility.
References
1. B. Ferreira, J. Rodrigues, J. Leitão, H. Domingos, Practical privacy-preserving content-based retrieval in cloud image repositories. IEEE
2. Global Web Index, Instagram tops the list of social network growth, https://tinyurl.com/hnwwlzm, 2013
3. C.D. Manning, P. Raghavan, H. Schütze, An Introduction to Information Retrieval, vol. 1 (Cambridge University Press, 2009)
4. R. Chow, P. Golle, M. Jacobson, E. Shi, J. Staddon, R. Masuoka, J. Molina, Controlling data in the cloud: outsourcing computation without outsourcing control, in CCSW'09 (2009)
5. D. Rushe, Google: don't expect privacy when sending to Gmail, https://tinyurl.com/kjga34x, 2013
6. G. Greenwald, E. MacAskill, NSA Prism program taps in to user data of Apple, Google and others, https://tinyurl.com/oea3g8t, 2013
7. A. Chen, GCreep: Google engineer stalked teens, spied on chats, https://gawker.com/5637234, 2010
8. J. Halderman, S. Schoen, Lest we remember: cold-boot attacks on encryption keys. Commun. ACM 52(5) (2009)
9. National Vulnerability Database, CVE Statistics, https://web.nvd.nist.gov/view/vuln/statistics, 2014
10. G. Dhanisha, J.M. Seles, E. Brumancia, Android interface based GCM home security system using object motion detection, in 2015 International Conference on Communications and Signal Processing (ICCSP). IEEE (2015), pp. 1928–1931
11. D. Lewis, iCloud data breach: hacking and celebrity photos, https://tinyurl.com/nohznmr, 2014
12. P. Mahajan, S. Setty, S. Lee, A. Clement, L. Alvisi, M. Dahlin, M. Walfish, Depot: cloud storage with minimal trust. ACM Trans. Comput. Syst. 29(4), 1–38 (2011)
13. C. Gentry, S. Halevi, N.P. Smart, Homomorphic evaluation of the AES circuit, in CRYPTO'12 (Springer, 2012), pp. 850–867
14. P. Paillier, Public-key cryptosystems based on composite degree residuosity classes, in EUROCRYPT'99, 1999, pp. 223–238
15. T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithms. Adv. Cryptol. (1985)
16. C.-Y. Hsu, C.-S. Lu, S.-C. Pei, Image feature extraction in encrypted domain with privacy-preserving SIFT. IEEE Trans. Image Process. 21(11), 4593–4607 (2012)
17. P. Zheng, J. Huang, An efficient image homomorphic encryption scheme with small ciphertext expansion. IJARIIE 3(6) (2017). ISSN(O) 2395-4396
18. W. Lu, A. Swaminathan, A.L. Varna, M. Wu, Enabling search over encrypted multimedia databases, in IS&T/SPIE Electronic Imaging (2009)
19. X. Yuan, X. Wang, C. Wang, A. Squicciarini, K. Ren, Enabling privacy-preserving image-centric social discovery, in ICDCS'14. IEEE (2014)
20. M.P. Selvan, R. Selvaraj, Monitoring fishy activity of the user in social networking, in 2017 International Conference on Information Communication and Embedded Systems (ICICES). IEEE (2017), pp. 1–5
21. P. Kajendran, A. Pravin, Enhancement of security related to ATM installations to detect misbehavior activity of unknown person using video analytics. ARPN J. Eng. Appl. Sci. 12(21) (2017)
22. L. Weng, L. Amsaleg, A. Morton, S. Marchand-Maillet, A privacy-preserving framework for large-scale content-based information retrieval. TIFS 10(1), 152–167 (2015)
23. S. Vaithyasubramanian, A. Christy, D. Saravanan, Two factor authentications for secured login in support of effective information preservation and network security. ARPN J. Eng. Appl. Sci. 10(5) (2015)
24. J.Z. Wang, J. Li, G. Wiederhold, SIMPLIcity: semantics-sensitive integrated matching for picture libraries. IEEE Trans. Pattern Anal. Mach. Intell. 23(9), 947–963 (2001)
25. T.P. Jacob, Implementation of randomized test pattern generation strategy. J. Theor. Appl. Inf. Technol. 73(1) (2015)
26. R.I. Minu, G. Nagarajan, A. Pravin, BIP: a dimensionality reduction for image indexing. ICT Express 5(3), 187–191 (2019)
Hybrid Edge-Based Gaussian Mixture Model for Foreground Detection in Video Sequences Subhaluxmi Sahoo, Sunita Samant, and Sony Snigdha Sahoo
Abstract Identification of an object and its analysis is a significant and essential area of pattern recognition. The object and the features extracted play a very important role in scene understanding and video surveillance. In this paper, we combine the Gaussian mixture model and the Canny edge detection method to effectively and accurately extract the edges of the foreground object. Our objective is to identify the foreground object through its edges; the extracted edges simplify the method of object analysis in computer vision and pattern recognition. Keywords Object detection · Gaussian mixture model · Canny edge
1 Introduction Video object analysis is a significant and trending research area in computer vision. It has many applications in video surveillance, object tracking, and multimedia. A moving object can be analyzed in a video sequence only after it is extracted, and modelling of the background plays an important role in object detection and extraction. The easiest way of background modelling is obtaining a background image without any moving object: once the background is available, the object in motion can be derived by simple background subtraction. The problem here is that such a background is not always available in many critical situations, like scenes having illumination changes and moving background entities. Background subtraction fails
S. Sahoo · S. Samant (B) Department of ECE, ITER, SOA (Deemed to be) University, Bhubaneswar, India e-mail: [email protected] S. Sahoo e-mail: [email protected] S. S. Sahoo DDCE, Department of Computer Applications, Utkal University, Bhubaneswar, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_65
S. Sahoo et al.
in these scenarios; hence, background modelling has to be robust and adaptive to tackle these subtle changes. In the literature, various approaches for video object detection can be seen, and based on these, many kinds of features are considered for background modelling. Background modelling can be classified into statistical and fuzzy-based background modelling and background estimation. The background can be extracted from an image with the help of the different modelling approaches stated above. All of these face the same issues: initializing the background, then estimating and maintaining it, detecting the foreground, and choosing the correct feature size and feature type. All these steps dictate the development of a proper background subtraction method. All the early work in video analysis was mainly focussed on traffic monitoring scenarios. Friedman and Russell's [1] modelling was based on the use of a mixture of three Gaussians, corresponding to road, vehicle, and shadows, to represent each pixel. They initialized the model using an expectation-maximization clustering algorithm [2], and the Gaussian distributions were labelled based on the pixel variance. One of the landmark proposals was given by Stauffer and Grimson [3], who generalized this idea by modelling the pixel color feature history {X_1, …, X_t} using a mixture of K Gaussian distributions. Gaussian mixture modelling was able to handle complexity in video scenes to a certain extent, but the crude Gaussian mixture model had many limitations. The probability distributions for background and foreground pixels are assumed to be Gaussian, which is not always true [4]. The mixture of Gaussians requires several Gaussians to be present in the model, which may not suit every pixel process [5]. Parameter initialization and maintenance are arbitrary and do not follow any fixed scheme [6].
Feature size and feature type are also fixed, which may not give the best result in every scenario [7–10]. Many improvements have been proposed in the literature to overcome these limitations; they are more meticulous in the statistical sense [11, 12]. An extensive survey of extended Gaussian mixture model variants for different applications, with their pros and cons, can be found in [13]. In this portion of work, we extract the foreground object based upon a generalized Gaussian mixture model combined with a Canny edge detector. This helps us in overcoming the minor background movements detected by the Gaussian mixture model alone. The remainder of the paper is organized as follows: Sect. 2 is a reminder of the original pixel-based MOG model, Sect. 3 describes the Canny edge detector algorithm, and Sect. 4 presents our method along with qualitative comparisons. Finally, Sect. 5 gives the conclusion with future work.
2 Proposed Edge-Based Mixture of Gaussians (EMOG) MOG works in the RGB color space where every pixel is characterized by its intensity. The probability of observing the current pixel value in the multidimensional case is given by:
Hybrid Edge-Based Gaussian Mixture Model for Foreground …
P(X_t) = Σ_{i=1}^{K} w_{i,t} η(X_t, μ_{i,t}, Σ_{i,t})    (1)
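For a single grayscale pixel, Eq. (1) can be evaluated directly. The sketch below is illustrative only: it assumes scalar (one-dimensional) Gaussians, so the covariance Σ_{i,t} reduces to a variance, and the function names are ours rather than the paper's.

```python
import math

def gaussian_pdf(x, mu, var):
    """eta(x; mu, var): the 1-D Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mixture_prob(x, weights, means, variances):
    """Eq. (1) for a scalar pixel value: P(X_t) = sum_i w_i * eta(X_t; mu_i, var_i)."""
    return sum(w * gaussian_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))
```

With a single component of weight 1, this reduces to the plain Gaussian density.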
In the above equation, K indicates the number of Gaussian distributions, and μ_{i,t} and Σ_{i,t} indicate the mean and covariance of the ith Gaussian in frame t. η indicates the Gaussian probability density function. Every pixel is modelled by the combination of K Gaussian distributions. Parameter initialization is done only after defining the background model. The basic parameters of the mixture of Gaussians are the number of distributions K, the weight w_{i,t} of every ith Gaussian at time t, and the mean μ_{i,t} and covariance Σ_{i,t}. K is decided based on background multimodality; Stauffer and Grimson [3] set K from 3 to 5. The initialization of the weight, the mean, and the covariance matrix can be made by using a random function or by using an EM algorithm. After parameter initialization, initial foreground detection happens and then the parameters are updated. The ratio w_j/σ_j helps in ordering the individual Gaussian distributions. As the background is predominant in a scene, it has maximum weight and minimum variance compared to a foreground pixel; hence, it is assigned a heavy weight and a weak variance. The first N Gaussian distributions whose cumulative weight exceeds a specific threshold T are kept as background distributions, and the others are considered foreground distributions. The distribution parameters are updated according to the following equations:

N = arg min_n ( Σ_{k=1}^{n} w_k > T )    (2)

w_{k,t} = (1 − α) w_{k,t−1} + α M_{k,t}    (3)

μ_t = (1 − ρ) μ_{t−1} + ρ X_t    (4)

σ_t² = (1 − ρ) σ_{t−1}² + ρ (X_t − μ_t)^T (X_t − μ_t)    (5)
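As a minimal single-pixel, one-dimensional sketch of Eqs. (3)-(5) (illustrative; the helper names are ours): the matched distribution (M_{k,t} = 1) updates its weight, mean, and variance, while unmatched distributions (M_{k,t} = 0) only decay their weight. In [3], ρ = α·η(X_t | μ_k, σ_k); here it is passed in directly for clarity.

```python
def update_matched(w, mu, var, x, alpha, rho):
    """Update the matched Gaussian (M_k,t = 1) for pixel value x."""
    w_new = (1.0 - alpha) * w + alpha                        # Eq. (3) with M_k,t = 1
    mu_new = (1.0 - rho) * mu + rho * x                      # Eq. (4)
    var_new = (1.0 - rho) * var + rho * (x - mu_new) ** 2    # Eq. (5), scalar case
    return w_new, mu_new, var_new

def update_unmatched(w, alpha):
    """Unmatched Gaussians only decay their weight (M_k,t = 0 in Eq. (3))."""
    return (1.0 - alpha) * w
```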
3 Canny Edge Detector The Canny edge detector is an operator that detects edges with the help of a multi-stage algorithm and can detect a wide range of edges in images. It extracts structural information from different objects. Edges characterize boundaries and therefore are very helpful in object identification. They indicate the areas with strong intensity contrasts.
S. Sahoo et al.
The Canny edge algorithm is an optimal edge detector and maximizes the chance that pixels with different orientations are detected as edges. The Canny edge detector works in multiple stages. First, it removes noise by preliminary image smoothing. Then, in the next step, the gradient of the image is extracted to indicate regions with high spatial derivatives. The algorithm then traces along these regions and removes any pixel that is not at a maximum using non-maximum suppression, which thins the edges in the gradient array. The Gaussian filter has to be convolved with the image to remove unwanted image noise:

G(m, n) = (1 / (2πσ²)) e^{−(m² + n²) / (2σ²)}    (6)

where (m, n) indicates the pixel location, and the other symbols have their usual meaning. After noise removal, the edge strength is obtained by computing the image gradient. Generally, the Sobel operator [14] is used to compute the 2D spatial gradient, and the absolute gradient magnitude is computed by

|G| = √(G_m² + G_n²)
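Equation (6) can be sampled on a small odd-sized grid to build the discrete smoothing kernel. The sketch below normalizes the sampled values to sum to 1, which absorbs the 1/(2πσ²) factor, as is usual for discrete filters; the function name is ours.

```python
import math

def gaussian_kernel(size, sigma):
    """Sample Eq. (6) on a size x size grid centred at the origin, then normalise."""
    half = size // 2
    k = [[math.exp(-(m * m + n * n) / (2.0 * sigma * sigma))
          for n in range(-half, half + 1)]
         for m in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]
```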
The Sobel operators are a pair of 3 × 3 masks used for convolution. One is utilized for calculating the gradient in the X-direction (vertical) and the other for calculating the gradient in the Y-direction (horizontal). The Sobel G_m and G_n masks are represented in Fig. 1. After the computation of the gradients, we need to find the direction of the gradient, i.e.,

θ = tan⁻¹(G_m / G_n)
This edge direction is relative to the image axes and has to be carefully mapped to one of four quantized directions: 0, 45, 90, and 135 degrees. Non-maximum suppression is applied after computing the edge directions: the gradient magnitude of each pixel is compared with that of the two neighbouring pixels perpendicular to the gradient, i.e., along the edge direction. A pixel is suppressed (set to 0) if its value is less than either neighbour; otherwise the higher pixel value is kept as the edge and the other two are suppressed with a pixel value of 0. Finally, hysteresis is used to reduce and remove any breakage in the edge contour.

Fig. 1 (a) Sobel operator mask G_m, (b) Sobel operator mask G_n:

G_m = [ −1  0  1 ;  −2  0  2 ;  −1  0  1 ]        G_n = [ 1  2  1 ;  0  0  0 ;  −1  −2  −1 ]
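The suppression step above can be sketched for one quantized direction; the other three cases (45, 90, and 135 degrees) differ only in which two neighbours are compared. This pure-Python illustration handles the case where the two comparison pixels are the horizontal neighbours:

```python
def nms_one_direction(mag):
    """Keep a pixel only if its gradient magnitude is a local maximum
    with respect to its left and right neighbours; otherwise set it to 0."""
    h, w = len(mag), len(mag[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            if mag[y][x] >= mag[y][x - 1] and mag[y][x] >= mag[y][x + 1]:
                out[y][x] = mag[y][x]
    return out
```

A ridge of magnitudes is thinned to its single strongest pixel, which is exactly the edge-thinning effect described above.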
4 Results and Discussions We have implemented the Gaussian mixture model on scenes which have been processed using the Canny edge detector. MOG can handle the movable regions, and at the same time, the Canny edge detector generates the optimal boundary and edges of the moving object in the scene. Together they compute the foreground object's edges and also help in minimizing unnecessary background changes happening in the scene. Figures 2 and 3 demonstrate the optimally detected object edges for different frames. We have used the standard PETS 2000 and PETS 2001 datasets for result computation. We have checked our results on different frames, and we can see that the object detection is accurate, without any background noise. The object can be reconstructed back from these edge features using the neighborhood reconstruction concept.
Fig. 2 PETS 2000 dataset with segmented edges (panels a–f)
Fig. 3 PETS 2001 dataset with segmented edges (panels a–f; frames 59, 81, and 103)
5 Conclusion Our work is intended for extracting objects accurately in videos. Object extraction becomes complicated in complex scenes because background noise is detected along with the moving object. Here in our work, we have considered edges to be the object features that represent a moving object. We detect the object edges accurately in video frames using the Canny edge detector-based Gaussian mixture model. The background noise edge feature is not so prominent and hence becomes suppressed in the Gaussian mixture model, thereby removing noise and accurately detecting objects. The idea is to reconstruct the object from these edge features using the concept of a connected neighborhood. Reconstruction has to be carefully done to avoid the detection of unnecessary pixels while utilizing connected neighborhood pixels for generating the moving object.
References

1. N. Friedman, S. Russell, Image segmentation in video sequences: a probabilistic approach, in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (Morgan Kaufmann Publishers Inc., 1997), pp. 175–181
2. A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc.: Ser. B (Methodol.) 39(1), 1–22 (1977)
3. C. Stauffer, W.E.L. Grimson, Adaptive background mixture models for real-time tracking, in Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2. IEEE (1999), pp. 246–252
4. H. Kim, R. Sakamoto, I. Kitahara, T. Toriyama, K. Kogure, Robust foreground extraction technique using background subtraction with multiple thresholds. Opt. Eng. 46(9), 097004 (2007)
5. Z. Zivkovic, Improved adaptive Gaussian mixture model for background subtraction, in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), vol. 2. IEEE (2004), pp. 28–31
6. M. Greiffenhagen, V. Ramesh, H. Niemann, The systematic design and analysis cycle of a vision system: a case study in video surveillance, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2. IEEE (2001)
7. V. Jain, B.B. Kimia, J.L. Mundy, Background modeling based on subpixel edges, in 2007 IEEE International Conference on Image Processing, vol. 6. IEEE (2007), pp. VI-321
8. Y.L. Tian, M. Lu, A. Hampapur, Robust and efficient foreground analysis for real-time video surveillance, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1. IEEE (2005), pp. 1182–1187
9. G. Stijnman, R. van den Boomgaard, Background estimation in video sequences (2000)
10. H.L. Ribeiro, A. Gonzaga, Hand image segmentation in video sequence by GMM: a comparative analysis, in 2006 19th Brazilian Symposium on Computer Graphics and Image Processing. IEEE (2006), pp. 357–364
11. P. Tang, L. Gao, Z. Liu, Salient moving object detection using stochastic approach filtering, in Fourth International Conference on Image and Graphics (ICIG 2007). IEEE (2007), pp. 530–535
12. S.Y. Yang, C.T. Hsu, Background modeling from GMM likelihood combined with spatial and color coherency, in 2006 International Conference on Image Processing. IEEE (2006), pp. 2801–2804
13. T. Bouwmans, F. El Baf, B. Vachon, Background modeling using mixture of Gaussians for foreground detection-a survey. Recent Patents Comput. Sci. 1(3), 219–237 (2008)
14. N. Kanopoulos, N. Vasanthavada, R.L. Baker, Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circ. 23(2), 358–367 (1988)
Design of EEG Based Classification of Brain States Using STFT by Deep Neural Network Rahul Agrawal and Preeti Bajaj
Abstract The Brain–Computer Interface provides a communication link for the physically disabled person who suffers from severe brain injury related to brain stroke and has lost the ability to speak. It helps them to connect with the outside world. The electroencephalogram (EEG) is an efficient signal measurement method to record brain activity, which carries information that can be used as a source of communication. In the proposed work, the EEG signal is used as an input source that is preprocessed and decomposed into smaller segments by time-frequency (T-F) approaches like the fast Fourier transform and the short-time Fourier transform. Both methods act as feature extraction methods, followed by training of the data using a deep neural network. The result is then classified into three communication messages, which will help to solve the speech impairment problem of disabled persons. Keywords Brain–computer interface (BCI) · Electroencephalography (EEG) · Short-time Fourier transform · Deep neural network
1 Introduction Out of the total population of 121 Crore in India, 2.68 Cr (2.21%) are suffering from one or the other type of disability. Of the 2.68 Cr affected population, 1.18 Cr (44%) are females and 1.5 Cr (56%) are males. In the population of disabled persons, 1.84 Cr (69%) come from rural areas and 0.81 Cr (31%) are from urban areas. The highest number of disabled persons is found in the age range of 10–19 yr, which is 46.2 Lakh (17%). Further, the range of 20–29 yr contributes nearly (16%)

R. Agrawal (B) Research Scholar, Department of Electronics Engineering, G H Raisoni College of Engineering, Nagpur, India e-mail: [email protected]
P. Bajaj Vice Chancellor, Galgotias University, Greater Noida and Professor (on lien), Department of Electronics Engineering, G H Raisoni College of Engineering, Nagpur, India e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_66
and 21% for the age above 60 yr of the total population of disabled persons [1]. Figure 1 shows the statistics of the percentage of total disability in India. Problems in seeing, hearing, movement, speech, mental illness, mental retardation, and multiple disabilities are the various types of disability by which persons are affected [1–4]. Also, many neurodegenerative diseases like amyotrophic lateral sclerosis, Alzheimer's disease, paralysis, cerebral palsy, etc. are due to damage to the nervous system, spinal cord injury, or nerve cells losing function over time [5]. From the analysis given in Table 1, it is seen that 7% of the people are affected by a disability in speech, so there is a requirement for an assistive device that will unravel the problem of motor injury. The brain–computer interface gives the solution to the above problem by developing a communication channel for disabled persons to communicate with the outside world, thus solving the speech impairment problem of the affected person using electroencephalography. BCI is an emerging technology that exploits the electrical or neural activity in the brain of the affected patients. Various acquisition methods are used to capture the activities of the brain, like functional magnetic resonance imaging (fMRI), infrared imaging (IR), or positron emission tomography. Amongst all of them, the electroencephalograph (EEG) is found to be the most suitable for a noninvasive approach. The information to the external device from the human brain is given by the BCI system, which anticipates reviving those abilities. Such systems allow the bidirectional flow of electrical information from the brain to assist and map the motor imagery function in an efficient way. The vital objective of the current research work is to develop a physical system for an accurate and efficient means of communication for the disabled person.
It empowers them to become independent of any external support and improves their quality of life. In the literature, a deep convolutional neural network gives the classification of motor imagery tasks for some movements by using a time-frequency approach and the continuous wavelet transform (CWT) for feature extraction. The scalogram obtained by the CWT filter gives higher resolution, so CWT outperforms the other time-frequency approaches in the results. The CNN consists of convolution layers, a pooling layer, and fully connected layers followed by a classifier that predicts the motor movements [6]. Conventionally, the processing of the time-domain signal is decomposed into two stages, i.e., extraction of features and identification of patterns. J. Huang et al. proposed a method in which the one-dimensional time-domain ECG signal is converted into a two-dimensional spectrogram image after signal segmentation by using different methods of Fourier analysis; a deep CNN model then helps in the classification of arrhythmia data into five types [7]. D. Mzurikwao reviewed some parameters for feature extraction of the EEG signal in a time-domain representation, such as variance, Fourier and its extended transforms, wavelet transforms, mean, standard deviation, and power spectral density; still, it is difficult to extract the motion-related feature which is present in the EEG interference [8]. The PhysioNet database, which is available in the public domain, is commonly used for channel selection. The vital feature of a deep neural network is to have one or more hidden layers in which the number of hidden neurons varies; the more hidden neurons, the more complex the structure, but the more powerful it is to extract
Fig. 1 Percentage of total disability in India
Table 1 Percentage of individuals affected (% males and % females) by disability type

Disability type        % of individuals affected    % of males    % of females
Movement               20                           62            38
Hearing                19                           53            47
Seeing                 19                           52            48
Multiple disability    8                            55            45
Speech                 7                            56            44
Mental retardation     6                            58            42
Mental illness         3                            58            42
Others                 18                           55            45
features from the biopotential signals, i.e., informative data from the EEG signal. S. Radeva et al. use the electroencephalogram or other physiological measures; their system functions with smart applications to control and provide communication to the disabled person. The experimental setup of the electrophysiological signal provides an estimation of different mental tasks after noise filtering. The clustering and classification are
done with the help of a Bayesian network classifier and a pairwise classifier. The Bayesian network classifier was used after the feature selection to identify which tasks are being performed out of five mental tasks, viz. Base, Letter, Math, Count, and Rotate. The comparative result of the Bayesian classifier with the pairwise classifier shows that the variance of the former decays more slowly than that of the latter [9]. Chang-Hee Han et al. developed a BCI-based system to express the intention of the disabled person with eye gestures, so a scheme to provide stimulation to persons with impaired oculomotor function was implemented. Here an FFT-based system is used, which applies a direct frequency identification method to measure the intention of the user [10]. C. Ieracitano, working on EEG data focused on neurological disorders, uses statistical coefficients like skewness, kurtosis, and entropy as feature extraction parameters forming an input vector to a multi-layer perceptron network to classify the results into different neurological disorders. Based on the time-frequency approaches used, artifacts are removed from the EEG signal via clinical inspection and separation into non-overlapping segments [11]. Choong et al. proposed that a machine learning algorithm can be used in stroke emotion analysis to compare and analyze the emotions of stroke patients and normal people. Here the classification of the emotional electroencephalogram (EEG) is done between people affected by stroke and normal people. The EEG signal is used to extract the detrended fluctuation analysis feature for both classes with the help of the K-Nearest Neighbor (KNN) classifier, which assigns the nearest class with the help of a distance metric. The comparison between different distance metrics shows that the city block distance performed best amongst all the available methods [12].
2 System Overview The overall method of classifying the brain states, i.e., the electrical activity, with a deep neural network using an FFT/STFT-based procedure is shown in Fig. 2. The EEG
Fig. 2 Proposed system for STFT based classification (EEG database, classes 1–5 → feature extraction [FFT, STFT] → deep learning neural network → classifier → messages 1–3)
Fig. 3 Sample waveform of EEG data signal
database consists of five different classes of data with 100 segments each of 23.6 s duration. The original EEG time-series signals from the database [13] are fed to feature extraction using FFT and STFT, which represents them in the frequency domain to obtain the actual information of the EEG signal. Further, a deep learning neural network based model is used to classify the data into three classes, which are then converted into three communication messages.
2.1 Dataset In this section, the source of the EEG database is given, which is available online at https://epileptologie-bonn.de [13], Bonn University. Five datasets are available, viz. Z, O, N, F, S, each sampled with 4097 samples, having a duration of 23.60 s with a sampling rate of 173 Hz. The one-dimensional time-varying EEG data has the spectral bandwidth of the signal acquisition system as 0.5–85 Hz [14]. Figure 3 shows the raw EEG data acquired by the system.
2.2 Feature Extraction Fast Fourier Transform (FFT): The Fourier transform is used to represent the original-domain signal in the frequency domain. The FFT is an algorithm that calculates the discrete Fourier transform, which decomposes the signal into different frequency components. On the basis of these extracted frequency components, the feature values are given to the deep neural network. The Fourier transform equation is given by Haykin and Veen [15]:

Y_k = Σ_{n=0}^{N−1} y_n e^{−j2πkn/N}    (1)
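The DFT can be checked with a direct O(N²) evaluation in pure Python; in practice the FFT computes the same transform in O(N log N). The /N in the exponent follows the standard DFT definition.

```python
import cmath

def dft(y):
    """Direct evaluation of Y_k = sum_{n=0}^{N-1} y_n * exp(-j*2*pi*k*n/N)."""
    N = len(y)
    return [sum(y[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]
```

For a constant signal, all of the energy lands in the DC bin Y_0.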
Fig. 4 Waveform of decomposed EEG signal by FFT
where k varies from 0, 1, …, N − 1, y_n is the raw EEG signal, and Y_k is the computed FFT of the EEG signal y_n. The extracted decomposed EEG signal after applying the FFT is given in Fig. 4. Short-Time Fourier Transform (STFT): The EEG is a non-stationary data signal, and its instantaneous frequency varies with time. Hence, the changes occurring in the signal cannot be completely described in the frequency domain alone. Therefore, an enhanced methodology called the STFT is used, which is an advanced version of the discrete Fourier transform (DFT) [7]. For a finite window size, the equation for the short-time Fourier transform is given by Haykin and Veen [15]:

STFT{x[n]} = Z(m, ω) = Σ_{n=−∞}^{∞} x[n] w[n − m] e^{−jωn}    (2)
where x[n] denotes the electroencephalogram signal value, whose sampling rate was 173 Hz (cf. Sect. 2.1), and w[n] is the window function. In this research work, we used a window size of 512, and the window equation is given below.
w(n) = 0.5[1 − cos(2πn/(N − 1))],  0 ≤ n ≤ N − 1;  0 otherwise    (3)
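Equation (3) is the Hann window; a direct sketch (the function name is ours):

```python
import math

def hann(N):
    """Hann window of Eq. (3): w(n) = 0.5 * (1 - cos(2*pi*n / (N - 1)))."""
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * n / (N - 1))) for n in range(N)]
```

The window tapers to 0 at both ends and peaks at 1 in the middle, which reduces spectral leakage when each 512-sample frame is multiplied by it before the transform.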
The segmented EEG signal after applying STFT is given in Fig. 5.
3 Deep Neural Network Classifier In machine learning, a variety of subfields are available, amongst which the deep neural network is a very advanced method for classification and regression on time-series, image, and text data signals, etc. [16]. It is basically inspired by the biological system, in which rigorous training of the multiple layers is done for classification purposes. The
Fig. 5 Waveform of decomposed EEG signal by STFT
input data to the classifier are the extracted features of the EEG signal, whose characteristics are similar to time-series data, and the best method to classify this sequence data is to use long short-term memory (LSTM) networks [17]. Figure 6 shows the LSTM network, in which the first layer is a sequence input layer that takes the extracted feature values as input to the network [18, 19]. The following layer is the LSTM layer, which is a recurrent neural network layer that maps the input sequence to label classification with the help of various parameters such as the number of hidden neuron units, name, and value. The third layer is a fully connected layer, in which all the inputs from one layer are connected to every activation unit of the next layer. This fully connected layer multiplies each input by an attached weight and adds a bias vector; its aim is to combine the various extracted features to classify the data, and its size is always equal to the number of classes. The fourth is the softmax layer, which normalizes the output with an activation function; its outputs sum to one and can be used as probabilities by the next layer. The fifth and last layer is the classification layer, which uses the probability values given by the former layer's activation function and assigns the input to each class.

Fig. 6 LSTM architecture: sequence input layer → LSTM layer → fully connected layer → softmax layer → classification output layer
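The recurrence inside the LSTM layer can be illustrated with a single-unit, scalar-input cell step (a didactic sketch of the standard LSTM gating of [17], not the network trained in the paper; the weight layout and names are ours):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, W):
    """One step of a one-unit LSTM cell. W maps each gate name
    ('i', 'f', 'g', 'o') to (input weight, recurrent weight, bias)."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate value
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    c = f * c_prev + i * g         # new cell state
    h = o * math.tanh(c)           # new hidden state
    return h, c
```

The cell state c carries long-range information along the sequence, while the gates decide how much to keep, add, and emit at each step.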
4 Results

4.1 Evaluation Metrics

Table 2 shows the confusion matrix values in the time-frequency representation. Based on the event counts given in the table, we have calculated the evaluation metrics such as accuracy, precision, sensitivity, specificity, and F1-score, and Fig. 7 shows the confusion matrix of true class versus predicted class.

Table 2 Confusion matrix for three classes of data (STFT using deep neural network)

Events                   Class 1   Class 2   Class 3
True positive (TP)       69        101       65
False positive (FP)      4         5         3
False negative (FN)      3         0         9
True negative (TN)       171       141       170

Fig. 7 Confusion matrix of true class versus predicted class
Table 3 Performance parameters for three classes of data

Performance parameter    Class 1   Class 2   Class 3
Accuracy (%)             97        98        95
Precision (%)            95        95        96
Sensitivity (%)          96        100       88
Specificity (%)          98        97        98
F1 score                 0.95      0.98      0.92
4.2 Performance Parameter

The performance parameters are defined as below [6]:

Accuracy(%) = (TP + TN) / (TP + FP + FN + TN) × 100    (4)

Precision(%) = TP / (TP + FP) × 100    (5)

Sensitivity(%) = TP / (TP + FN) × 100    (6)

Specificity(%) = TN / (TN + FP) × 100    (7)

F1-Score = (2 × Precision × Recall) / (Precision + Recall)    (8)

where Precision = TP / (TP + FP) and Recall = TP / (TP + FN).

Based on the above five equations, viz. (4)–(8), the various performance parameters are calculated as shown in Table 3.
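Equations (4)-(8) can be computed directly from the confusion-matrix counts; plugging in the Class 1 counts from Table 2 (TP = 69, FP = 4, FN = 3, TN = 171) reproduces the Class 1 column of Table 3 after rounding. The function name is ours.

```python
def metrics(tp, fp, fn, tn):
    """Evaluation metrics of Eqs. (4)-(8), returned as fractions in [0, 1]."""
    precision = tp / (tp + fp)                        # Eq. (5)
    recall = tp / (tp + fn)                           # sensitivity, Eq. (6)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # Eq. (4)
        "precision": precision,
        "sensitivity": recall,
        "specificity": tn / (tn + fp),                # Eq. (7)
        "f1": 2.0 * precision * recall / (precision + recall),  # Eq. (8)
    }
```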
5 Conclusion In this paper, we evaluated the performance of deep neural networks on EEG signal classification. We extracted the features by FFT and STFT and classified the data into three classes, which correspond to three different communication messages depicting the thought of brain states. As a result, the average accuracy of the system is 97%, and the results are successfully validated using the DNN.
References

1. Accessed: Feb. 15, 2020. [Online]. Available: https://enabled.In/wp/disabled-population-inindia-as-per-census-2011-2016-updated/
2. Accessed: Jan. 10, 2018. [Online]. Available: https://mospi.nic.in/sites/default/_les/publication_reports/Disabled_persons_in_India_2016.pdf
3. L. Junwei, S. Ramkumar, G. Emayavaramban, D. Franklin Vinod, M. Thilagaraj, V. Muneeswaran, M. Pallikonda Rajasekaran, V. Venkataraman, A. Faeq Hussein, Brain computer interface for neurodegenerative person using electroencephalogram, in Special Section on New Trends in Brain Signal Processing and Analysis, vol. 7. IEEE Access (2019), pp. 2439–2452
4. A. Nair, N. Shashikumar, S. Vidhya, S.K. Kirthika, Chapter 22: Design of a silent speech interface using facial gesture recognition and electromyography. Springer Science and Business Media LLC (2017)
5. S. Palheriya, S.S. Dorle, R. Agrawal, Review on human-machine interface based on EOG. Int. J. Sci. Eng. Technol. Res. (IJSETR) 6(3), 317–319 (2017). ISSN: 2278-7798
6. S. Chaudhary, S. Taran, V. Bajaj, A. Sengur, Convolutional neural network based approach towards motor imagery tasks EEG signals classification. IEEE Sens. J. (2019)
7. J. Huang, B. Chen, B. Yao, W. He, ECG arrhythmia classification using STFT-based spectrogram and convolutional neural network, in Special Section on Data-Enabled Intelligence for Digital Health. IEEE Access, vol. 7 (2019), pp. 92871–92880
8. D. Mzurikwao, O.W. Samuel, et al., A channel selection approach based on convolutional neural network for multi-channel EEG motor imagery decoding, in 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering. IEEE (2019), pp. 195–202
9. S. Radeva, D. Radev, Human-computer interaction system for communications and control, in 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. IEEE (2015), pp. 2025–2030
10. C.-H. Han, H.-J. Hwang, J. Lim, C.-H. Im, Development of an "eyes-closed" brain-computer interface system for communication of patients with oculomotor impairment, in 35th Annual International Conference of the IEEE EMBS, Osaka, Japan, 3–7 July 2013. IEEE
11. C. Ieracitano, N. Mammone, A. Bramanti, S. Marino, A. Hussain, F.C. Morabito, A time-frequency based machine learning system for brain states classification via EEG signal processing, in International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019. IEEE (2019), pp. 2–8
12. C.W. Yean, W. Khairunizam, M.I. Omar, M. Murugappan, B.S. Zheng, S.A. Bakar, Z.M. Razlan, Z. Ibrahim, Analysis of the distance metrics of KNN classifier for EEG signal in stroke patients. IEEE (2018)
13. R.G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, C.E. Elger, Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state. Phys. Rev. E 64, 061907, 1–8 (2001)
14. S. Dehuri, A. Jagadev, S.-B. Cho, Epileptic seizure identification from electroencephalography signal using DERBFNs ensemble. Procedia Comput. Sci. (2013)
15. S. Haykin, B.V. Veen, Signals and Systems (Wiley, Hoboken, NJ, 1999)
16. Intelligent Data Communication Technologies and Internet of Things. Springer Science and Business Media LLC (2020)
17. S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
18. Y. Liu, G. Zhao, X. Peng, Deep Learning Prognostics for Lithium-Ion Battery Based on Ensembled Long Short-Term Memory Networks (IEEE Access, 2019)
19. S.S. Lekshmi, V. Selvam, M. Pallikonda Rajasekaran, EEG signal classification using principal component analysis and wavelet transform with neural network, in 2014 International Conference on Communication and Signal Processing (2014)
Swarm Intelligence-Based Feature Selection and ANFIS Model Parameter Optimization for ASCV Risk Prediction and Classification Paulin Paul , B. Priestly Shan , and O. Jeba Shiney
Abstract In the absence of Indian ethnic-specific cardiovascular (CV) risk prediction tools, machine learning models with artificial intelligence (AI) techniques are beneficial. This study focuses on the comparison of two intelligent CV risk prediction and classification models. The study has used both traditional and non-traditional CV risk markers to identify the atherosclerotic cardiovascular (ASCV) risk status at an early stage. To handle the missing data, we have used multiple imputation (MI) using the Gaussian copula (GC) method. This work has studied two popular swarm intelligence (SI) techniques for optimal feature subset selection and for tuning the neuro-fuzzy learning process for ASCV risk prediction. In the proposed model, selection of the optimal input features and ASCV risk prediction are implemented using the Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) algorithms. Optimal feature selection was done using the fitness function-based evaluation of a wrapper-based multi-support vector machine (multi-SVM) classifier. Secondly, the optimal features are fed to the adaptive neuro-fuzzy inference system (ANFIS), whose parameters are optimized using PSO and GWO, denoted as ANFIS_PSO and ANFIS_GWO, for ASCV risk prediction. Finally, the risk predicted by the SVM_PSO-ANFIS_PSO and SVM_GWO-ANFIS_GWO models is classified using a multi-SVM classifier and compared to identify the emerging robust model. The proposed framework is validated in MATLAB using Kerala-based clinical data. The final SVM_PSO-ANFIS_PSO-multi-SVM model has shown 88.41% (training) and 95% (testing) accuracy, 79.02% (training) and 89.47% (testing) sensitivity, and 91.84% (training) and 97.47% (testing) specificity; the PSO model outperforms the SVM_GWO-ANFIS_GWO-multi-SVM model, showing higher performance variables.

P. Paul (B) Sathyabama Institute of Science and Technology, Chennai, India e-mail: [email protected]
B. P. Shan · O. J. Shiney Galgotias University, Greater Noida, India e-mail: [email protected]
O. J. Shiney e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_68
Keywords Feature selection · ANFIS · PSO · GWO · SVM · CVD · Risk prediction
1 Introduction Cardiovascular (CV) diseases are still a leading cause of death globally as well as in India, and Kerala has a high proportion of lifestyle-related risk factors [1]. In the absence of customized ethnic-specific Indian CV risk prediction tools based on epidemiological studies, machine learning models are a good option. These models are a new non-invasive way to detect risk by quantitatively predicting the chance of CV risk in the near future, at an early stage. The conventional CV risk predictions can be updated with more prognostic non-traditional atherosclerotic cardiovascular (ASCV) risk markers along with the traditional risk factors (TRF). This can provide better support for doctors than using the TRF markers alone. Developing such a risk prediction model requires systematic preprocessing of the medical datasets for effective performance of the algorithm used. The two main stages of data preprocessing are data imputation, for handling the missing data in large datasets, and optimal feature reduction, for managing complex datasets. The presence of missing data affects the learning algorithm's performance in terms of processing speed and accuracy. Though an unbiased approach to missing data estimation is difficult, adequate methods can be used to reduce the possible consequences of missing data. Multiple imputation (MI) using the Gaussian copula has comparatively good estimation results with commendable coverage and small bias [2]. The features used in the model should adequately describe the model. Selection of an optimal feature subset can reduce the computation time and increase the performance. Several standard meta-heuristic SI optimization techniques are used for optimal attribute subset selection, risk prediction, classification, and diagnosis in different domains.
These include the genetic algorithm (GA) [3], whale optimization algorithm (WOA) [4], artificial bee colony (ABC) optimization [5], ant colony optimization (ACO) [6], cuckoo search (CS) [7], firefly algorithm (FA) [8], particle swarm optimization (PSO) [9], and grey wolf optimization (GWO) [10]. PSO and GWO are effective optimization algorithms for problems with unknown search spaces; both rely on the collective intelligence of a group of simple agents to find an optimal solution. The objective here is to compare the performance of the proposed risk prediction models, SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM, validated using Kerala population data. In this paper, wrapper-based optimized attribute selection using PSO and GWO (SVMPSO and SVMGWO) is used to find an optimal CV risk feature subset. A multi-support vector machine (multi-SVM) classifier evaluates the fitness of both feature selection models. The adaptive neuro-fuzzy inference system (ANFIS), optimized using PSO and GWO (ANFISPSO and ANFISGWO), is used for ASCV risk prediction. Finally, the multi-SVM classifier is used to classify the predicted
Swarm Intelligence-Based Feature Selection and ANFIS Model …
ASCV risk of the subjects into six ASCV risk categories: Very Low, Low, Intermediate, Intermediate-High, High, and Very High. The rest of the paper is organized as follows: the second section presents the mathematical modelling of the PSO and GWO techniques; the third describes SVMPSO-ANFISPSO and SVMGWO-ANFISGWO ASCV risk prediction and the fitness function evaluation; the fourth describes the experimental setup used in the study; and the fifth presents the results of ASCV risk prediction and subsequent classification, evaluated and compared to identify an emerging optimal robust model.
2 Swarm Intelligence Techniques

2.1 Particle Swarm Optimization (PSO)

The mathematical modelling of the PSO algorithm is based on the studies by Eberhart and Kennedy [11].

Mathematical Modelling. The initial population is initialized with N particles at random positions in the search space, denoted as:

X_1^d = [X_1^d, Y_1^d]; X_2^d = [X_2^d, Y_2^d]; X_3^d = [X_3^d, Y_3^d]; ...

Here, each particle is identified by an (X, Y) pair and is stored as a vector. The vector X_1^d is the location of the first particle in the swarm at iteration 'd'. In general, any particle in the search space is identified as shown in Eq. (1):

X_i^d = [X_i^d, Y_i^d, Z_i^d, ...]    (1)

where 'i' is the particle index. The positional vector of any swarm particle is associated with a velocity, defined by the movement direction and speed and indicated as the velocity vector V_i^d. The velocity vector is composed of three terms: the current velocity, the tendency towards the personal best (PBest) solution, and the tendency towards the team's global best (GBest) solution. The next velocity of a particle is calculated as shown in Eq. (2):

V_1^{d+1} = 2 r_1 V_1^d + 2 r_2 (P_1^d − X_1^d) + 2 r_3 (G^d − X_1^d)    (2)
Here, V_1^{d+1} is the next velocity of the particle, V_1^d is the current velocity, and P_1^d is the personal best (PBest) solution of the particle, so (P_1^d − X_1^d) is the distance to the PBest solution. The value G^d is the global best solution, and (G^d − X_1^d) is the distance to the GBest solution. The three velocity terms are controlled by the random variables r_1, r_2, and r_3, each multiplied by the constant two. Using the next velocity V_1^{d+1}, the next position of the particle is calculated as in Eq. (3):

X_1^{d+1} = X_1^d + V_1^{d+1}    (3)

Here, the next position X_1^{d+1} equals the current position X_1^d plus the next velocity V_1^{d+1}. Each moving particle tracks the best solution it has found in the search space. The inertia weight introduced in the PSO algorithm [12] modifies Eq. (2) as shown in Eq. (4):

V_1^{d+1} = w V_1^d + c_1 r_2 (P_1^d − X_1^d) + c_2 r_3 (G^d − X_1^d)    (4)
where w is the inertia weight and c_1, c_2 are the acceleration coefficients of the last two velocity terms. The term w V_1^d is the inertia component, c_1 r_2 (P_1^d − X_1^d) is the cognitive component, and c_2 r_3 (G^d − X_1^d) is the social component, calculated using the current position and the swarm's best position. The term G^d is the best optimal solution achieved so far. The particle movement is controlled by tuning the c_1 and c_2 coefficients. The inertia weight w tunes the balance between exploration and exploitation and is linearly decreased over the range [0.9, 0.4].
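The updates in Eqs. (3) and (4) can be sketched numerically. The following is a minimal illustration, not the authors' MATLAB implementation; the sphere objective, bounds, swarm size, and random seed are chosen here for demonstration, while c1 = 1.5 and c2 = 2.0 and the linearly decreasing inertia follow the settings quoted in the text:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1, c2, rng):
    """One PSO update: Eq. (4) for velocity, Eq. (3) for position."""
    r_cog = rng.random(X.shape)   # plays the role of r2 in Eq. (4)
    r_soc = rng.random(X.shape)   # plays the role of r3 in Eq. (4)
    V_next = w * V + c1 * r_cog * (pbest - X) + c2 * r_soc * (gbest - X)
    return X + V_next, V_next     # Eq. (3)

def sphere(x):                    # toy fitness to minimize (not from the paper)
    return np.sum(np.asarray(x) ** 2, axis=-1)

rng = np.random.default_rng(0)
N, dim, iters = 5, 2, 100
X = rng.uniform(-5, 5, (N, dim))  # random initial positions
V = np.zeros((N, dim))
pbest = X.copy()
gbest = pbest[np.argmin(sphere(pbest))].copy()

for it in range(iters):
    w = 0.9 - (0.9 - 0.4) * it / (iters - 1)   # inertia decreasing 0.9 -> 0.4
    X, V = pso_step(X, V, pbest, gbest, w, c1=1.5, c2=2.0, rng=rng)
    improved = sphere(X) < sphere(pbest)        # update personal bests
    pbest[improved] = X[improved]
    gbest = pbest[np.argmin(sphere(pbest))].copy()

print(sphere(gbest))   # should be very close to 0
```

The pbest/gbest bookkeeping outside `pso_step` is what lets the cognitive and social terms pull the swarm towards previously found good solutions.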
2.2 Grey Wolf Optimization (GWO)

The GWO algorithm by Mirjalili [13] mimics the leadership hierarchy and hunting behaviour of grey wolves. The main phases are: tracking, chasing, and approaching the prey; pursuing, encircling, and harassing the prey; and lastly attacking the prey. With dominance decreasing from top to bottom, the most powerful wolves of the pack are the Level 1 leaders, called alphas (α); Level 2 wolves are the subordinates to the alphas, called betas (β); Level 3 wolves are subordinate to α and β and are called deltas (δ); and Level 4 wolves, the lowest-ranking members of the pack, are called omegas (ω). The main phases of GWO include mathematical modelling of prey encircling, hunting, and attacking (exploitation).
Mathematical Modelling. The GWO optimization model ranks the fittest solutions in order as alpha (α), beta (β), and delta (δ), with all the remaining wolves in the pack treated as omegas (ω). In prey encircling, the fitness function D and the positional vector X of the search agents α, β, δ, and ω are calculated using the two coefficient vectors A and C defined in Eqs. (5) and (6):

A = 2 a · r_1 − a    (5)

C = 2 · r_2    (6)

For each iteration, the vector a is linearly decreased from 2 to 0, and the random vectors r_1 and r_2 are in the range [0, 1]. The prey-encircling behaviour of the grey wolves is modelled in Eqs. (7) and (8):

D = |C · X_p(t) − X(t)|    (7)

X(t + 1) = X_p(t) − A · D    (8)
Here, 't' is the current iteration and A and C are the coefficient vectors defined in Eqs. (5) and (6). X_p indicates the position vector of the prey, and X is the position vector of any grey wolf. The initial position around the prey is updated to a more optimal position nearer the prey. Various places around the current position of the best agent can be reached by adjusting the A and C vectors and the random vectors r_1, r_2. The vector C supports exploration through random values in the range [0, 2]; the random behaviour and exploration of the prey are favoured by setting this random weight as C > 1 or C < 1. After encircling the prey, the positions of the omegas are updated based on the positions of the best agents (α, β, δ). Equations (9) and (10) define the fitness values and positions of the best search agents (α, β, δ) using Eqs. (7) and (8), and the next position X(t + 1) is given in Eq. (11):

D_α = |C_1 · X_α − X|;  D_β = |C_2 · X_β − X|;  D_δ = |C_3 · X_δ − X|    (9)

X_1 = X_α − A_1 · D_α;  X_2 = X_β − A_2 · D_β;  X_3 = X_δ − A_3 · D_δ    (10)

X(t + 1) = (X_1 + X_2 + X_3) / 3    (11)
The exploitation phase (attacking the prey) is modelled by decreasing the vector a, so that A fluctuates in the interval [−2a, 2a] over the iterations. The update of the parameter a for exploration and exploitation (decreasing from 2 to 0) is shown in Eq. (12):

a = 2 − t · (2 / max_itern)    (12)

Here, max_itern is the total number of iterations and 't' is the current iteration number. Convergence and divergence of the search agents are globally determined by the random vector A: |A| > 1 drives the agents to diverge in search of prey (exploration), while |A| < 1 forces them to converge towards the prey (attack).
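Equations (5) to (12) can be combined into a compact numerical sketch. This is an illustrative Python stand-in for the authors' MATLAB code; the sphere objective, search bounds, and seed are assumptions, while the 14 wolves and 50 iterations match the attribute-selection settings in Table 2:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=14, max_iter=50, seed=0):
    """Minimal grey wolf optimizer following Eqs. (5)-(12)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_wolves, dim))          # wolf positions
    for t in range(max_iter):
        a = 2 - t * (2 / max_iter)                   # Eq. (12): a goes 2 -> 0
        order = np.argsort([fitness(x) for x in X])  # rank the pack
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        X_new = np.empty_like(X)
        for i, x in enumerate(X):
            cand = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a      # Eq. (5)
                C = 2 * rng.random(dim)              # Eq. (6)
                D = np.abs(C * leader - x)           # Eqs. (7) and (9)
                cand.append(leader - A * D)          # Eqs. (8) and (10)
            X_new[i] = np.mean(cand, axis=0)         # Eq. (11)
        X = X_new
    return min(X, key=fitness)

best = gwo(lambda x: np.sum(x ** 2), dim=2)          # toy sphere objective
print(best)
```

Because a shrinks each iteration, |A| tends to fall below 1 late in the run, switching the pack from exploration to convergence around the three leaders.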
3 Proposed Methodology

The input dataset is preprocessed using Gaussian copula-based multiple imputation (MI) to manage the missing data; MI performance is evaluated using a fitness function that minimizes the RMSE. The study proposes two models, SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM, to determine the utility of intelligent attribute selection, ASCV risk prediction, and classification. Optimal attributes are selected from the preprocessed dataset using the multi-SVMPSO. ASCV risk prediction is performed on the optimal attribute set using the optimized ANFISPSO. Finally, multi-SVM-based classification assigns the predicted risk to the Very Low, Low, Intermediate, Intermediate-High, High, and Very High ASCV risk categories. The obtained results are compared with multi-SVMGWO-based optimal attribute selection, ANFISGWO-based risk prediction, and multi-SVM-based ASCV risk classification. The PSO- and GWO-optimized feature selections are evaluated using an accuracy-based fitness function (see Sect. 3.1) with a wrapper-based multi-SVM classifier. The optimally reduced feature sets with maximum accuracy are fed to the respective optimized neuro-fuzzy models, referred to as ANFISPSO and ANFISGWO. The ANFIS performance is tuned using a fitness function (see Sect. 3.2) that minimizes the root mean square error (RMSE) of ASCV risk prediction. The resulting risk of the subjects is classified into six ASCV risk classes, and the models are compared on overall accuracy, sensitivity, and specificity to identify the best-performing model. The general flow of the proposed SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM models is shown in Fig. 1.
Fig. 1 Flowchart for the proposed SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM models
3.1 Fitness Function for Optimal Feature Selection Using SVMPSO and SVMGWO

The PSO- and GWO-optimized selection of the best CV risk features in the search space is evaluated using the SVM. The best solution is an optimal feature set combining a reduced feature subset with high attribute selection accuracy. The fitness function in Eq. (13) is therefore minimized:

Fitness = misAcc + 0.001 · len    (13)

where misAcc is the misclassification rate, calculated as 1 − Accuracy (with the Accuracy obtained from the SVM classifier), and len is the number of selected features. Minimizing this fitness maximizes the attribute selection accuracy while penalizing larger feature subsets.
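Eq. (13) is small when both the misclassification rate and the subset size are small. A direct transcription in Python (the accuracy and feature-count inputs below are just illustrative values; 13 features at 85.19% accuracy is the SVMPSO result reported later in Table 3):

```python
def feature_fitness(accuracy, n_selected):
    """Eq. (13): misclassification rate plus a small penalty on the
    number of selected features; lower values are better."""
    mis_acc = 1.0 - accuracy          # misAcc = 1 - Accuracy
    return mis_acc + 0.001 * n_selected

print(feature_fitness(0.8519, 13))    # approx. 0.1611
```

The 0.001 weight makes subset size a tie-breaker: two subsets with equal accuracy differ in fitness only through their lengths.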
3.2 Fitness Function for Prediction Accuracy Using ANFISPSO and ANFISGWO

The optimal fitness function is based on the RMSE as defined in Eq. (14): the smaller the RMSE, the higher the fitness, so the objective is to minimize the RMSE for improved network performance. The problem representation with input variables for ANFISPSO and ANFISGWO requires minimization of the RMSE:

F(v) = Minimize(RMSE)    (14)

where v is the input vector variable set. The RMSE value is calculated using Eq. (15):

RMSE = sqrt( (1/n) Σ_{p=1}^{n} (p_ik − d_ik)^2 )    (15)

Here n is the number of outputs, p_ik is the actual output of the i-th input unit for the k-th sample, and d_ik is the desired output of the i-th input unit for the k-th sample.
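Eq. (15) can be sketched directly; the vectors below are hypothetical predicted and desired outputs, not values from the study:

```python
import numpy as np

def rmse(pred, target):
    """Eq. (15): root mean square error over n outputs."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean((pred - target) ** 2)))

print(rmse([0.2, 0.5, 0.9], [0.0, 0.5, 1.0]))   # approx. 0.1291
```

In the proposed models this value is the quantity the PSO/GWO wrapper drives down when tuning the ANFIS parameters.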
4 Experimental Setup and Dataset

The proposed study was implemented in MATLAB R2018a running on Windows 10. Records of 594 participants, collected from the medical archives of clinical locations in Ernakulam district, Kerala, were used for validation. The traditional and non-traditional atherosclerotic risk variables used in the study are as described in [14] and are listed in Table 1. The parameter settings used for the SVMPSO-ANFISPSO and SVMGWO-ANFISGWO models are shown in Table 2.
5 Results and Discussion

As shown in Table 3, this study evaluated two optimized intelligent models, SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM, covering attribute reduction, ASCV risk prediction, and classification. The performance of both proposed models was evaluated using accuracy, sensitivity, and specificity. The Gaussian copula MI approach was implemented following the study described in [15], and its fitness function evaluated to a mean RMSE of 0.301. Both models were evaluated on the same dataset. In the PSO-based model, SVMPSO attribute reduction obtained 13 optimal features with 85.19% accuracy, 72.78% sensitivity, and 90.12% specificity.
Table 1 Initial set of attributes used in the study

No. | CV attribute | Description of attribute
1 | Age | Patient age in years
2 | Gender | 0 for Male and 1 for Female
3 | SBP | Systolic Blood Pressure in mmHg
4 | HDL-C | High-Density Lipoprotein Cholesterol
5 | TC | Total Cholesterol in mg/dl
6 | Non-HDL-C | Non-High-Density Lipoprotein Cholesterol in mg/dl
7 | FH | Familial Hypercholesterolemia
8 | FBS Score | Fasting Blood Sugar score in mg/dl
9 | Diab_Medi | Medication used for diabetes
10 | Smoking | Cigarettes smoked per day
11 | BMI | Body Mass Index (kg/m2)
12 | HTN_Treat | Hypertension medication used
13 | CVD_fh | Family history of cardiovascular disease
14 | AF | Atrial Fibrillation
15 | CVD_exist | Presence of pre-existing cardiovascular disease
16 | RA | Rheumatoid Arthritis
17 | CKD | Chronic Kidney Disease
18 | cIMT | carotid Intima-Media Thickness
19 | cTPRS | carotid Total Plaque Risk Score
Table 2 Parameter settings in the optimized attribute reduction and ANFIS CV risk prediction

Optimization used | Parameters | Attribute selection using multi-SVM | Risk prediction using ANFIS
PSO | Population size | 40 | 25
PSO | Cognitive parameter (c1) | 1.5 | 1
PSO | Social parameter (c2) | 2.0 | 2.0
PSO | Initial inertia weight (w) | 1 | 1
PSO | Inertia weight damping ratio | 0.99 | 0.99
PSO | Max iterations | 100 | 1000
GWO | No. of wolves | 14 | 25
GWO | No. of iterations | 50 | 1000
GWO | Problem dimension | 19 | 10
Table 3 Performance features with SVMPSO-ANFISPSO-Multi-SVM and SVMGWO-ANFISGWO-Multi-SVM

CV attribute reduction | PSO-based model | GWO-based model
Model | SVMPSO | SVMGWO
Total selected | 13 | 10
Features selected | Age, Gender, SBP, TC, Smoking, Diab_Medi, FH, CVD_fh, CVD_exist, CKD, AF, cIMT, cTPRS | Age, SBP, Smoking, FH, Diab_Medi, CVD_exist, RA, CKD, cIMT, cTPRS
Accuracy (Test) | 85.19% | 85.47%
Sensitivity (Test) | 72.78% | 67.07%
Specificity (Test) | 90.12% | 85.85%

ASCV risk prediction | PSO-based model | GWO-based model
Prediction model used | ANFISPSO | ANFISGWO
Fitness function (RMSE) | 0.09512, 0.11594 | 0.11707, 0.12684
Accuracy (Train) | 79.44% | 65.42%
Accuracy (Test) | 67.80% | 61.02%
Sensitivity (Test) | 61.02% | 51.21%
Specificity (Test) | 70.0% | 60.0%

ASCV risk classification | PSO-based model | GWO-based model
Classification model used | Multi-SVMPSO | Multi-SVMGWO
Accuracy (Train, Test) | 88.41%, 95% | 81.50%, 95%
Sensitivity (Train, Test) | 79.02%, 89.47% | 68.21%, 93.75%
Specificity (Train, Test) | 91.84%, 97.47% | 86.72%, 95.45%
ANFISPSO for ASCV risk prediction obtained 79.44% (train) and 67.80% (test) accuracy, with 61.02% sensitivity and 70.0% specificity. Using GWO-based optimization, SVMGWO obtained 10 optimal features with 85.47% accuracy, 67.07% sensitivity, and 85.85% specificity, whereas ANFISGWO risk prediction obtained comparatively lower accuracy, 65.42% (train) and 61.02% (test), with 51.21% sensitivity and 60.0% specificity. With RMSE values below 0.5 (see Table 3), the models reflect good prediction ability. The PSO-based classification with multi-SVM shows better performance, with 88.41% (train) and 95% (test) accuracy, 79.02% (train) and 89.47% (test) sensitivity, and 91.84% (train) and 97.47% (test) specificity. The CV risk variables Age, Systolic Blood Pressure, Smoking, Familial Hypercholesterolemia, intake of diabetes medication, existing CV disease, Chronic Kidney Disease, carotid Intima-Media Thickness score, and carotid Total Plaque Risk Score are the principal features contributing to ASCV risk. The performance of SVMPSO and SVMGWO for attribute reduction is comparable; however, the ANFISPSO model outperforms the ANFISGWO model in ASCV risk prediction. Overall, the PSO-based SVMPSO-ANFISPSO-Multi-SVM model outperforms the GWO-based SVMGWO-ANFISGWO-Multi-SVM model in ASCV risk prediction and risk classification.
6 Conclusion

Machine learning models with AI are useful techniques for risk prediction and classification at clinical locations where trained doctors are scarce. We have implemented and compared optimized ASCV risk prediction models using the PSO and GWO algorithms to identify an emerging model for ASCV risk prediction and classification, validated on Kerala-based population data. Optimized wrapper-based CV feature selection, ANFIS-based ASCV risk prediction, and multi-SVM-based ASCV risk classification were modelled to find the best emerging model. It is observed that the PSO-based model outperformed the GWO-based model in terms of accuracy, sensitivity, and specificity. The robustness of the model needs further investigation using a few other datasets.
References

1. P. Paulin, Cardiovascular risk prediction using JBS3 tool: a Kerala based study. Curr. Med. Imaging 16(1) (2020). https://doi.org/10.2174/1573405616666200103144559
2. M.H. Florian, Multiple imputation using Gaussian copulas. Sociol. Methods Res. 1–52 (2018)
3. A.K. Paul, Genetic algorithm based fuzzy decision support system for the diagnosis of heart disease, in 5th International Conference on Informatics, Electronics and Vision (ICIEV) (IEEE, 2016), pp. 145–150
4. M.A. Makhlouf, Dimensionality reduction using an improved whale optimization algorithm for data classification. I.J. Mod. Educ. Comput. Sci. 7, 37–49 (2018)
5. B. Subanya, Feature selection using artificial bee colony for cardiovascular disease classification, in 2014 International Conference on Electronics and Communication Systems (ICECS), pp. 1–6 (2014)
6. C. Huang, ACO-based hybrid classification system with feature subset selection and model parameters optimization. Neurocomputing 73, 438–448 (2009)
7. S. Moameri, Diagnosis of coronary artery disease via a novel fuzzy expert system optimized by cuckoo search. Int. J. Eng. 31, 2028–2036 (2018)
8. N.C. Long, A highly accurate firefly based algorithm for heart disease prediction. Expert Syst. Appl. 6, 8221–8231 (2015)
9. Y. Khourdifi, Heart disease prediction and classification using machine learning algorithms optimized by particle swarm optimization and ant colony optimization. Int. J. Intell. Eng. Syst. 228, 242–252 (2019)
10. T. Al Qasem, Feature selection method based on grey wolf optimization for coronary artery disease classification, in Recent Trends in Data Science and Soft Computing, IRICT 2018 (Springer, 2019), pp. 257–266
11. J. Kennedy, Particle swarm optimization, in Proceedings of the IEEE International Conference on Neural Networks (IEEE, 1995), pp. 1942–1948
12. Y. Shi, Modified particle swarm optimizer, in 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (IEEE, 1998), pp. 69–73
13. S. Mirjalili, Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
14. P. Paulin, Relative estimate of revised cardiovascular risk combining traditional and non-traditional image based CV markers: a Kerala based study. Curr. Med. Imaging 16(1) (2020). https://doi.org/10.2174/1573405616666200218125539
15. R. Houari, Missing data analysis using multiple imputation in relation to Parkinson's disease, in ACM International Conference Proceeding Series (BDAW, 2016), pp. 1–6
Discrimination of Hemorrhage in Fundus Images Using Shape and Texture-based Descriptors

Jeba Derwin, Tamil Selvi, Priestly Shan, Jeba Singh, and S. N. Kumar
Abstract Annual screening for diabetic retinopathy is essential for diabetic patients. Among the various annotations of diabetic retinopathy, hemorrhage is one of its major signs in the retinal vasculature. This paper proposes a new four-step automated method for hemorrhage detection in fundus images. In the preprocessing stage, the green band of the input fundus image is extracted and normalized to remove background noise. The following stages extract the hemorrhage candidates and compute features for them; the combined texture- and shape-based features provide better detection. In the final stage, a random forest classifier is used to separate hemorrhage from non-hemorrhage regions in the retinal image. Moreover, the system does not require vessel segmentation, which adds an advantage over state-of-the-art approaches. The two public datasets DIARETDB1 and MESSIDOR are used to analyze the performance of the proposed system.

Keywords Diabetic retinopathy · Fundus image · Hemorrhage · Normalization · Laplacian of Gaussian · Random forest
J. Derwin · J. Singh Arunachala College of Engineering for Women, Kanyakumari, TamilNadu, India e-mail: [email protected] J. Singh e-mail: [email protected] T. Selvi National Engineering College, Kovilpatti, Tamil Nadu, India P. Shan (B) Galgotias University, Greater Noida, Delhi, NCR, India e-mail: [email protected] S. N. Kumar Amal Jyothi College of Engineering, Kanjirapally, Kottayam, Kerala, India e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_69
J. Derwin et al.
1 Introduction

Diabetic retinopathy (DR) is a medical condition arising from the occlusion and leakage of retinal blood vessels. It is the leading cause of blindness in people before the age of 50 in developed countries [1, 2]. A World Health Organization survey projects that by the end of 2045 the number of people with diabetes mellitus (DM) will grow tremendously, to 629 million [3]. Hence, the demand for treating DM patients also increases significantly. In this context, early detection and treatment of DR are essential to avoid vision loss. DR is broadly classified as non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR), and clinicians grade these stages as mild NPDR, moderate NPDR, severe NPDR, and PDR. Mild NPDR shows up as an early indication of DR: blockages in small vessels, termed microaneurysms. Later, the blocked vessels break and leak into the retina, forming roundish blob-like structures named hemorrhages (HM), as portrayed in Fig. 1. Severe NPDR displays the indications of exudate formation because of leakage of proteins from the retinal vessels. In addition, in the PDR stage, the retina changes its original position due to the growth of new blood vessels. Ophthalmologists use fundus images to examine the presence of microaneurysms, HM, exudates, and other indications of DR. The rate of DR patients is increasing worldwide, which calls for proper treatment to avert vision loss; computer-aided approaches are therefore indispensable in the diagnosis and treatment of HM. However, despite developments in these approaches, the lack of prediction accuracy drives researchers to introduce new methods for HM detection. In this article, texture and shape descriptors are used to extract features, which adds an advantage over the existing methods. In [3], an automatic system was developed for the detection of hemorrhages in DR images.
Fig. 1 Mild NPDR

The pre-processing stage comprises green channel extraction and contrast-limited adaptive histogram equalization. For blood vessel extraction, morphological
operators are used, and for candidate extraction, Otsu thresholding with morphological operators is employed. For feature extraction, texture, wavelet, and color features are considered, and classification is performed by a random forest classifier. A novel splat feature classification technique was employed for the detection of hemorrhages in DR images [4]. Deep learning neural network architectures were found to be efficient in the detection of ocular diseases in fundus images, producing better results than the classical algorithms [5]. A CNN architecture was proposed in [6] to detect the three signs of DR: exudates, hemorrhages, and microaneurysms. The softmax outputs of the layers are used to generate a probability map for the three signs; the novelty of the architecture is the automatic detection of DR without requiring a massive dataset. EyeWeS, a combination of a deep learning neural network and multiple instance learning, was proposed for the detection of diabetic retinopathy [7]. In [8], a detailed study was done on the different types of deep learning neural networks for the detection of microaneurysms, hemorrhages, and exudates. An automatic model based on the circular Hough transform was employed for the detection of hemorrhages in DR images [9]. An efficient motion pattern generation technique was proposed for the automatic detection of hemorrhages in DR images, with sensitivity and specificity of 97% and 98%, respectively [10]. Region growing with threshold optimization by grey wolf optimization was employed for the detection of hemorrhages in DR images, with classification done by an ANFIS classifier [11]. Iterative thresholding based on firefly/particle swarm optimization with a support vector machine/linear regression classifier was employed for the detection of hemorrhages in DR images [12].
2 Proposed System

The proposed method includes the following stages: (1) pre-processing, (2) candidate extraction, (3) feature extraction, and (4) classification. The flow diagram of the proposed system is depicted in Fig. 2.
Fig. 2 Proposed flow diagram
Fig. 3 Pre-processed image a input image, b green channel image, c normalized green channel image
2.1 Pre-processing

The pre-processing stage is essential for fundus images, since the illumination and color of the acquired image are non-uniform. It is a two-step process: green channel extraction and color normalization. The green channel of the retinal image shows more variation in brightness and contrast than the image background, and red-lesion information is found mostly in the green channel; hence, extracting the green band of the input fundus image helps in the detection of hemorrhage. Since acquired retinal images vary in contrast and illumination, a Gaussian filter of window size 5 × 5 is applied with the parameter σ chosen as 1, which smoothens and normalizes the input fundus image. The images obtained in the pre-processing stage are shown in Fig. 3.
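The two pre-processing steps can be sketched without image-processing libraries. This is a dependency-free numpy illustration under the stated settings (5 × 5 window, σ = 1); the random test image and edge-padding choice are assumptions, not from the paper:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def preprocess(rgb):
    """Step 1: extract the green channel; step 2: smooth it with a
    5x5 Gaussian (sigma = 1) as a simple normalization."""
    green = rgb[..., 1].astype(float)
    k = gaussian_kernel()
    pad = np.pad(green, 2, mode="edge")
    out = np.zeros_like(green)
    h, w = green.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 5, j:j + 5] * k)
    return out

img = np.random.default_rng(0).integers(0, 256, (32, 32, 3))  # toy RGB image
smoothed = preprocess(img)
print(smoothed.shape)
```

Because the kernel sums to one, smoothing preserves the overall brightness while suppressing pixel-level noise, which is what the later candidate-extraction step relies on.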
2.2 Candidate Extraction

The color-normalized image highlights the dark regions of the fundus image. Hemorrhages are associated with the leakage of small blood vessels and appear as round red dot-like spots. Because of this resemblance, the Laplacian of Gaussian (LOG) operator [13] is proposed to extract the HM candidates. The LOG provides a maximum response when the size of an object matches the operator's scale. Here, we apply LOG operators of size 5 × 5, 7 × 7, and 9 × 9 to the pre-processed image. The maximum LOG is obtained by comparing the responses across the LOG window sizes, which furnishes the dark regions of the fundus image. These dark regions highlight the HM candidates, such as roundish blobs and blood vessels. The resulting LOG operator images and HM candidates are shown in Fig. 4a–d.
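The multi-scale maximum described above can be sketched as follows. This is an illustrative numpy version; the per-window σ (tied to window size as size/5) and the synthetic dark blob are assumptions, since the paper does not state the σ used at each scale:

```python
import numpy as np

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel, shifted to zero mean so a
    flat region gives zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve(img, k):
    """Naive same-size convolution with edge padding."""
    p = k.shape[0] // 2
    pad = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def max_log_response(img):
    """Pixel-wise maximum of LOG responses at 5x5, 7x7 and 9x9 windows."""
    responses = [convolve(img, log_kernel(s, sigma=s / 5.0)) for s in (5, 7, 9)]
    return np.max(responses, axis=0)

img = np.ones((21, 21))
img[9:12, 9:12] = 0.0            # a small dark blob, like a hemorrhage
resp = max_log_response(img)
```

Dark round regions produce a strong positive response at the scale matching their size, so thresholding `resp` picks out the HM candidates.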
Fig. 4 a LOG operator (5 × 5), b LOG operator (7 × 7), c LOG operator (9 × 9), d maximum LOG operator
2.3 Feature Extraction

The detection of HM candidates is followed by feature extraction: three intensity features and 209 shape descriptors are extracted from each candidate region. The intensity features include the sample mean, standard deviation, and extend, computed using the expressions:

Sample mean: μ = (1/N) Σ f(x_i, y_i)    (1)

Standard deviation: σ = sqrt( Σ (f(x_i, y_i) − μ)^2 / (N − 1) )    (2)
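Eqs. (1) and (2) transcribe directly; the 2 × 2 candidate region below is a made-up example, not data from the datasets:

```python
import numpy as np

def intensity_features(region):
    """Sample mean (Eq. 1) and sample standard deviation (Eq. 2)
    of the pixel intensities f(x_i, y_i) in a candidate region."""
    v = np.asarray(region, dtype=float).ravel()
    mu = v.sum() / v.size                              # Eq. (1)
    sigma = np.sqrt(np.sum((v - mu) ** 2) / (v.size - 1))  # Eq. (2)
    return mu, sigma

mu, sigma = intensity_features([[10, 12], [14, 16]])
print(mu, sigma)   # mean 13.0, sample std approx. 2.582
```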
Extend is obtained by computing the signal power in the normalized image after band-pass filtering. The shape features include speeded-up robust features (SURF), a blobness measure (BM), and HOG. The blobness measure is the absolute difference between the responses of the fundus image at two extreme time points T1 and T2; the two time points represent spatio-temporal changes and are detected by applying the LOG operator at several scales to each time-point image and comparing the results. The 64 SURF features are extracted for each 5 × 5 pixel patch [14].

Blobness measure: BM(T1, T2) = | max_σ ∇²_norm(σ) * T1 − max_σ ∇²_norm(σ) * T2 |    (3)

where

∇²_norm(σ) = σ² ∇² G(x_i, y_i; σ)

∇² G(x_i, y_i; σ) = ∂² G(x_i, y_i; σ) / ∂x² + ∂² G(x_i, y_i; σ) / ∂y²
Fig. 5 Detected hemorrhages
The histogram of oriented gradients (HOG) is computed for each 5 × 5 pixel patch, producing 144 features [15]. Therefore, a total of 3 intensity features and 209 appearance and shape descriptors are extracted and fed to the RF classifier.
2.4 Classification

A total of 212 features are used to train the RF classifier [16], which needs two parameters: (1) the number of classification trees (k) and (2) the number of predictor variables tried at each split (m). In our work, k is chosen as 11 and m as 6. It is necessary to optimize the parameters k and m to minimize the generalization error. As each RF tree grows, it uses the best split among the m predictor variables at every node, and the split value is chosen as 6. On the other hand, reducing m while maintaining the strength of every single tree improves classification by reducing the correlation between the trees (Fig. 5).
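The classifier configuration above can be sketched with scikit-learn as an illustrative stand-in (the paper does not specify its implementation, and the 200 × 212 feature matrix and labels below are synthetic); k maps to `n_estimators` and m to `max_features`:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: 200 candidate regions x 212 features, 2 classes
# (hemorrhage vs. non-hemorrhage), with a simple separable labelling.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 212))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# k = 11 trees, m = 6 predictors tried at each split, as in the paper.
clf = RandomForestClassifier(n_estimators=11, max_features=6, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy of the fitted forest
```

With only 6 of 212 features considered per split, individual trees are decorrelated, which is exactly the effect of reducing m described in the text.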
3 Results and Discussion

The two public datasets DIARETDB1 [17] and MESSIDOR [18] are used to analyze the performance of the proposed system. The 1200 fundus images in the MESSIDOR dataset, acquired at a pixel resolution of 1388 × 876, are categorized into four grades, DR0, DR1, DR2, and DR3, according to the severity of DR. In the MESSIDOR dataset, 546 images show no signs of DR; the remaining 654 images contain all signs of DR, and among these, 84 images exhibit the signs of moderate NPDR. From a clinical point of view, it is important to estimate per-image performance for the proposed system. The parameters sensitivity, specificity, and accuracy are estimated using the standard expressions. The receiver operating characteristic (ROC) curve is plotted as true positive rate against false positive rate, as shown in Fig. 6.
Fig. 6 ROC curves (true positive rate versus false positive rate) for the MESSIDOR and DIARETDB1 datasets
The proposed methodology obtained a sensitivity of 98.45%, specificity of 96.07%, and accuracy of 98.02% on the DIARETDB1 database. Furthermore, it achieved a sensitivity of 99.10%, specificity of 97.84%, and accuracy of 99.12% on the MESSIDOR database. From the ROC curve, the area under the curve (AUC) is evaluated as 0.932 for DIARETDB1 and 0.944 for MESSIDOR. Moreover, the proposed system attains better performance measures than other texture-based methods [19, 20].
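The standard expressions behind these figures, and the AUC estimate from an ROC curve, can be sketched as follows; the confusion-matrix counts and ROC points below are hypothetical, not the papers' data:

```python
import numpy as np

def sens_spec_acc(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

def auc_trapezoid(fpr, tpr):
    """Area under an ROC curve (TPR vs. FPR) by the trapezoidal rule."""
    f, t = np.asarray(fpr, float), np.asarray(tpr, float)
    order = np.argsort(f)                      # ensure increasing FPR
    f, t = f[order], t[order]
    return float(np.sum((f[1:] - f[:-1]) * (t[1:] + t[:-1]) / 2))

# Hypothetical counts for a 200-image evaluation:
print(sens_spec_acc(98, 2, 96, 4))             # (0.98, 0.96, 0.97)
print(auc_trapezoid([0.0, 0.2, 1.0], [0.0, 0.9, 1.0]))
```

An AUC near 1 means the classifier ranks hemorrhage images above non-hemorrhage images almost everywhere along the operating range, which is what the reported 0.932 and 0.944 values indicate.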
4 Conclusion

Computer-aided diagnostic systems based on image processing are widely used to assist ophthalmologists by decreasing the diagnostic time. In this paper, an approach based on texture and shape feature extraction for HM detection is presented. The green channel normalization of the fundus image increases the visibility of small retinal changes. Also, a simple and effective feature extraction method for detecting spatio-temporal changes of fundus images improves the accuracy of HM detection. On the publicly available DIARETDB1 and MESSIDOR datasets, the method furnishes better sensitivity, specificity, and accuracy for subtle retinal changes in color fundus images.
References

1. H.R. Taylor, J.E. Keeffe, World blindness: a 21st century perspective. Br. J. Ophthalmol. 85, 261–266 (2001)
2. S. Wild, G. Roglic, A. Green, R. Sicree, H. King, Global prevalence of diabetes: estimates for the year 2000 and projections for 2030. Diabetes Care 27, 1047–1053 (2004)
3. K. Ogurtsova, R.D. da Rocha Fernandes, Y. Huang, IDF diabetes atlas: global estimates for the prevalence of diabetes for 2015 and 2040. Diabetes Res. Clin. Pract. 128, 40–50 (2017)
4. N. Kaur, S. Chatterjee, M. Acharyya, J. Kaur, N. Kapoor, S. Gupta, A supervised approach for automated detection of hemorrhages in retinal fundus images, in 2016 5th International Conference on Wireless Networks and Embedded Systems (WECON) (IEEE, 2016), pp. 1–5
5. L. Tang, M. Niemeijer, J.M. Reinhardt, M.K. Garvin, M.D. Abramoff, Splat feature classification with application to retinal hemorrhage detection in fundus images. IEEE Trans. Med. Imaging 32(2), 364–375 (2012)
6. Y. Elloumi, M. Akil, H. Boudegga, Ocular diseases diagnosis in fundus images using a deep learning: approaches, tools and performance evaluation, in Real-Time Image Processing and Deep Learning, vol. 10996 (International Society for Optics and Photonics, 2019), p. 109960T
7. P. Khojasteh, B. Aliahmad, D.K. Kumar, Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 18(1), 288 (2018)
8. P. Costa, T. Araújo, G. Aresta, A. Galdran, A.M. Mendonça, A. Smailagic, A. Campilho, EyeWeS: weakly supervised pre-trained convolutional neural networks for diabetic retinopathy detection, in 2019 16th International Conference on Machine Vision Applications (MVA) (IEEE, 2019), pp. 1–6
9. N. Asiri, M. Hussain, H.A. Abualsamh, Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey. arXiv preprint arXiv:1811.01238 (2018)
10. A. Biran, P.S. Bidari, K. Raahemifar, Automatic method for exudates and hemorrhages detection from fundus retinal images. Int. J. Comput. Inf. Eng. 10(9), 1599–1602 (2016)
11. R. Murugan, An automatic detection of hemorrhages in retinal fundus images by motion pattern generation. Biomed. Pharmacol. J. 12(3), 1433–1440 (2019)
12. L. Godlin Atlas, K. Parasuraman, Detection of retinal hemorrhage from fundus images using ANFIS classifier and MRG segmentation. Biomed. Res. 29(7) (2018). https://doi.org/10.4066/biomedicalresearch.29-18-281
13. K. Adem, M. Hekim, S. Demir, Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms. Turkish J. Electr. Eng. Comput. Sci. 27(1), 499–515 (2019)
14. T. Lindeberg, Feature detection with automatic scale selection. Int. J. Comput. Vision 30(2), 79–116 (1998)
15. H. Bay et al., Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110(3), 346–359 (2008)
16. N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1 (IEEE, 2005), pp. 886–893
17. L. Seoud, T. Hurtut, J. Chelbi, F. Cheriet, J.M.P. Langlois, Red lesion detection using dynamic shape features for diabetic retinopathy screening. IEEE Trans. Med. Imaging 35, 1116–1126 (2016)
18. T. Kauppi, V. Kalesnykiene, J.-K. Kämäräinen, L. Lensu, I. Sorri, A. Raninen, R. Voutilainen, H. Uusitalo, H. Kälviäinen, J. Pietilä, The DIARETDB1 diabetic retinopathy database and evaluation protocol, in BMVC (2007)
19. Decencière et al., Feedback on a publicly distributed image database: the MESSIDOR database. Image Anal. Stereol. 33, 231–234 (2014)
Discrimination of Hemorrhage in Fundus Images …
20. D. Jeba Derwin, S. Tamil Selvi, O. Jeba Singh, Secondary observer system for detection of microaneurysms in fundus images using texture descriptors. J. Digit. Imaging 32(1) (2019)
21. D. Jeba Derwin, S. Tamil Selvi, O. Jeba Singh, Discrimination of microaneurysm in color retinal images using texture descriptors. Signal Image Video Process. 7(5) (2019)
Modeling Approach for Different Solar PV System: A Review

Akhil Nigam and Kamal Kant Sharma
Abstract Renewable energy sources play a vital role in the production of electricity due to their reliable, cost-effective, and sustainable configurations. Many such sources have been installed to suit different operating characteristics and climatic conditions. Researchers have also identified numerous experimental directions for achieving better system performance under the influence of different parameters. In power generation, solar energy has been adopted as a renewable source because of its clean and green technology and its ability to offset a country's energy shortages. This paper deals with different modeling approaches under different climatic conditions, namely solar irradiance and temperature. A review of different types of solar PV cell architectures is presented along with their characteristics.

Keywords Solar PV cell · Solar irradiance · Converter · MATLAB/SIMULINK
1 Introduction

Renewable energy sources are playing an important role because they draw on a free energy resource and have low maintenance costs. Sources such as solar PV, wind, and hydro have overcome the problems of centralized power plants. Solar PV technologies are small and highly modular, and they can be deployed virtually anywhere, unlike traditional power generation technologies. So, unlike traditional power plants using gas, oil, and coal, solar PV incurs no fuel costs and has relatively low operation and maintenance costs [1]. In addition, solar energy has also been integrated with non-renewable energy sources in order to reduce the cost of fossil fuels and improve efficiency. Solar photovoltaic systems are leading this revolution by capturing sunlight, converting it into electricity, and transforming it from DC into AC. The steady-state model is also designed

A. Nigam (B) · K. K. Sharma
Chandigarh University, Mohali, India
e-mail: [email protected]
K. K. Sharma
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_70
to predict power generation; such steady-state models mostly use weather conditions such as temperature and irradiance [2]. Solar energy sources account for 49 percent of installed capacity additions, and solar will be the global leader in terms of capacity by 2040. Solar energy has steadily reduced the proliferating energy crisis problems [3].
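A steady-state, weather-driven output estimate of the kind referred to above can be sketched as a function of irradiance and cell temperature. The module rating and temperature coefficient below are illustrative assumptions, not values from the cited reports.

```python
def pv_power_estimate(g, t_cell, p_stc=250.0, gamma=-0.004):
    """Steady-state PV module output in watts.

    g      : plane-of-array irradiance in W/m^2
    t_cell : cell temperature in degrees C
    p_stc  : rated power at STC (1000 W/m^2, 25 C); assumed 250 W here
    gamma  : power temperature coefficient per degree C; assumed -0.4 %/C
    """
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# At STC the model returns the rated power; a hot cell at lower
# irradiance produces noticeably less.
p_stc_out = pv_power_estimate(1000, 25)   # rated output
p_hot_out = pv_power_estimate(800, 45)    # derated output
```

This linear derating form is the common first-order approximation behind the steady-state models the text mentions; real models add spectral, soiling, and inverter losses.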
2 Solar PV System

Solar energy has become one of the most widely used technologies for power generation. Solar PV modules are modeled in order to study the different aspects of solar PV cells. Solar cells are made of semiconductor materials such as amorphous silicon deposited using thick-film methods [4]. A power conversion stage, such as a pulse-width-modulated (PWM) DC/DC SEPIC or Cuk converter, may be connected between the solar panel and the load [5]. A solar PV generation system can also be implemented using a fuzzy regression model to find the characteristic relation between power and voltage [6]. Since photovoltaic modules are connected in series to produce the required output voltage, a novel circuit referred to as a generation control circuit for solar PV systems has been defined [7]. Different types of semiconductor materials, such as monocrystalline, polycrystalline, microcrystalline, and copper indium diselenide, are used in photovoltaic (PV) systems. A PV system comprises components such as cells, modules, and arrays for producing electricity, along with regulating and monitoring subsystems that improve efficiency. Photovoltaic systems are rated in kilowatts peak (kWp). A solar battery system may also be implemented to determine the behavior under load and different irradiance conditions [8]. Solar energy is thus playing a vital role in the electricity sector, and by 2040 it will be the largest renewable energy source. Solar PV systems have also been integrated with fuel cells and ultracapacitors for sustained power generation to meet load demands [9]. Renewable energy reports show the global production of electricity by renewable sources from 2008 to 2018 across various countries. This electricity production is still increasing through the latest technologies and is projected to grow by 26%, reaching about 1500 GW.
Hence, solar PV is the single largest source of additional expansion potential, followed by onshore wind and hydropower. Several authors have worked on solar PV systems to obtain the best dynamic performance by employing different maximum power point tracking techniques [10] for higher efficiency. Using real weather conditions, solar PV systems can be operated so that better stability and efficiency are achieved [11]. Since low-cost installation of solar PV systems is also required, consumers can use different modules, cells, and PV array systems with an optimal capacity factor [12]. In some cases, voltage-based and current-based MPPT techniques are compared to determine which is more reliable for better utilization of power [13]. Where solar PV systems must serve lighting applications, fuzzy logic-based systems can
be preferred [14]. Similarly, advanced techniques such as artificial neural networks with a real-time tracking controller can be employed to obtain the maximum efficiency of a solar PV system [15]. For load leveling, an interleaved dual boost converter can operate the solar PV system at the maximum power point, and its performance can be compared with other boost converters [16]. Maximum power point tracking may also use a short current pulse of the photovoltaic system to find an optimum operating point [17]. To reduce the complexity of a two-stage solar PV system design, a single stage may be preferred, which results in small size, low cost, and high efficiency [18]. The problem of mismatch between the solar PV array and the load can be mitigated by using a digital signal processor [19]. The performance of the entire system can be optimized by comparing the incremental conductance and instantaneous conductance of the PV array [20]. There are many types of photovoltaic systems, such as the photo-emissive system, which converts sunlight into electric energy [21]. Some solar PV systems supply AC and DC loads via an inverter and are simulated under changing temperature and solar irradiation [22]. To obtain the solar array current, a sliding-mode observer has been used and fed into the MPPT stage for the generation of the reference voltage [23]. Using a neural network, a radial basis function has been applied to emulate the current–voltage characteristics of a solar PV array [24]. The design of solar PV systems is often based on standard solar modules intended for low-power applications [25]. The performance of a solar PV array is also affected by shading, which alters the I–V characteristics and may lead to low output [26].
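The I–V characteristics discussed throughout this section are commonly derived from a single-diode equivalent circuit, I = Iph − I0·[exp((V + I·Rs)/(n·Vt)) − 1] − (V + I·Rs)/Rsh. A minimal numerical sketch follows; all parameter values are illustrative assumptions, not data from any cited work.

```python
import math

def single_diode_current(v, iph=8.0, i0=1e-9, n=1.3, rs=0.005, rsh=100.0,
                         vt=0.02585):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the cell current I at terminal voltage V, by bisection on the
    monotonically decreasing residual. Parameter values are illustrative."""
    def residual(i):
        return (iph - i0 * math.expm1((v + i * rs) / (n * vt))
                - (v + i * rs) / rsh - i)

    lo, hi = -iph, iph + 1.0  # residual is positive at lo, negative at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Short-circuit current is close to Iph; current falls off near open circuit.
i_sc = single_diode_current(0.0)
i_near_voc = single_diode_current(0.7)
```

Sweeping `v` with this function reproduces the familiar I–V and P–V curves, and re-running it with scaled `iph` (irradiance) or shifted `vt` (temperature) shows the climatic dependence the review repeatedly refers to.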
2.1 Modeling Aspects of Solar PV Cells

Many authors have proposed different solar PV cell models considering different converters and PV array ratings. The maximum operating power point of a solar PV system changes with simultaneous changes in temperature and solar radiation, so apart from conventional techniques, microprocessor-based techniques have been preferred for greater flexibility [27]. E. Koutroulis et al. have proposed a new MPPT system that uses a buck-type DC-DC converter monitored by a microprocessor-based unit [28]. Trishan Esram et al. have compared different techniques for maximum power point tracking of a photovoltaic array with multiple variations in implementation [29]. A.M.A. Mahmoud et al. have proposed a Cuk converter whose duty cycle is adjusted for tracking [30]. Jae-hyun Yoo et al. have compared buck and buck-boost converters using power-voltage and current-voltage curves [31]. Minwan Park et al. have performed simulations with real-time weather data and compared the results with the real output of an established PV array [32]. Katsutoshi Ujiie et al. have investigated the static and dynamic behavior of PV systems [33]. Hiroshi Yamashita et al.
have proposed a model of a PV array system and compared it with the proposed simulation PV model [34]. Ching-Tsan Chiang et al. have proposed a CMAC-GBF-based solar PV model simulated under different climatic conditions [35]. Yushaizad Yusof et al. have simulated a PV array system using the incremental conductance algorithm [36]. Mummadi Veerachary has demonstrated a PV array system using a SEPIC converter with improved efficiency [37]. K. S. Phani Kiranmai has proposed an MPPT-based PV system simulated in PSIM software [38]. J. Ghaisari et al. have proposed a PV system with an MPPT controller and PWM boost converter, with simulations carried out in PSCAD software [39]. Montie A. Vitorino et al. have compared rated and experimental PV array systems in simulation [40]. Kuei-Hsiang Chao et al. have proposed a PSIM-based PV system simulated under different temperatures and solar irradiances [41]. Dorin Petreus et al. have proposed four photovoltaic cell models simulated under the influence of light and temperature [42]. Siamak Mehrnami et al. have introduced a new index for PV systems that shows a linear relation with current and reaches zero at the MPP [43]. A. Elkholy et al. have proposed a mathematical model of a PV system simulated under the influence of irradiation and temperature [44]. Yuncong Jiang et al. have proposed an improved MATLAB Simulink model of a PV system and compared it with others [45]. Fatima Zahra Amatoul et al. have presented an approach for modeling and controlling a grid-connected PV system simulated in the MATLAB environment [46]. G. Bhuvaneswari et al. have proposed a PV model and compared it with other existing models under STC and real-time conditions [47]. A. Safari et al. have analyzed a comprehensive formulation of standalone and grid-connected PV systems under different irradiances and temperatures [48].
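Many of the MPPT studies cited above implement some variant of perturb and observe. A minimal hill-climbing sketch of that loop is given below; the toy P-V curve, step size, and iteration count are illustrative placeholders, not any cited author's implementation.

```python
def perturb_and_observe(measure_power, v_start=15.0, step=0.1,
                        iterations=200):
    """Hill-climbing P&O MPPT: perturb the operating voltage, keep the
    perturbation direction if the measured power rose, and reverse it if
    the power fell. `measure_power` stands in for reading the array's
    output power at a given reference voltage."""
    v = v_start
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy P-V curve with its maximum at 17.5 V (illustrative, not a real array).
curve = lambda v: -(v - 17.5) ** 2 + 60.0
v_mpp = perturb_and_observe(curve)
```

As the fuzzy-logic and neural-network papers in this review point out, the fixed step makes plain P&O oscillate around the MPP and respond slowly to fast irradiance changes, which is what the adaptive variants try to fix.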
Satarupa Bal et al. have performed a comparative analysis of single-diode, two-diode, and simplified-diode models under different irradiation and temperature [49]. Md. Aminul Islam et al. have designed a PV model to simulate its characteristics based on the CS6P-250M module [50]. Mamta Suthar et al. have compared single-diode and two-diode models using the mathematical equations of a 37 W PV cell [51]. Habbati Bellia et al. have simulated a PV cell in MATLAB taking both series and shunt resistance into account under the influence of irradiance and temperature [52]. Ravinder Kumar Kharb et al. have proposed an adaptive neuro-fuzzy inference-based PV model simulated at constant temperature and varying irradiance [53]. Oladimeji Ibrahim et al. have designed a solar PV cell simulated with the perturb and observe algorithm under the influence of solar irradiance and temperature [54]. Md Faysal Nayan et al. have simulated a PV cell model under the influence of irradiance and temperature and observed the effects on series resistance, shunt resistance, and ideality factor [55]. Ali Chikh et al. have proposed an adaptive neuro-fuzzy-based solar PV module evaluated under different climatic conditions [56]. Bijit Kumar Dey et al. have simulated a PV cell model in MATLAB considering the ohmic losses represented by the resistances [57] (Table 1). Biraja Prasad Nayak et al. have compared perturb and observe and fuzzy-based MPPT algorithms for a solar PV cell at different time instants [58]. Nikita Gupta et al. have designed a solar PV cell using a sensitivity function simulated in the MATLAB environment [59]. Ranita Sen et al. have proposed a solar PV
Table 1 Different aspects of modeling for solar PV systems with their outcomes

S. No | Author & year | Modeling technique | Outcome
1 | A.M.A. Mahmoud, 2000 | Fuzzy logic control | Controlled all functions performed in the PV system and achieved maximum output at all insolation levels
2 | Jae Hyun Yoo, 2001 | Pulse width modulation | Simulation shows better results in terms of tracking error and response time
3 | Katsutoshi Ujiie, 2002 | Momentarily short calibration method | Static characteristics are obtained from dynamic characteristics by employing MPPT
4 | Hiroshi Yamashita, 2003 | Novel simulation technique | Simulation of the PV generation system shows stable and efficient voltage control
5 | Yushaizad Yusof, 2004 | Incremental conductance | Simulation performed using C programming; gives good boosting capability
6 | Mummadi Veerachary, 2005 | SIMULINK method | Model developed in the SIMULINK environment; improves converter efficiency
7 | K. S. Phani Kiranmai, 2006 | Pulse width modulation | Proposed scheme shows linear results with different solar irradiation
8 | J. Ghaisari, 2007 | PWM boost converter | Maximum power transmission capability improved with better efficiency
9 | Siamak Mehrnami, 2009 | Incremental conductance | Provides better results under different climatic conditions
10 | Saad Ahmad, 2010 | Incremental conductance and perturb & observe | Comparison between thin-film technology and conventional silicon cells in terms of I-V characteristics
11 | Fatima Zahra Amatoul, 2011 | Perturb & observe | Simulation performed in Simulink; performance of the inverter and MPPT improved
12 | Sandeepan Majumdar, 2012 | Incremental conductance | Battery cost reduced by reducing battery bank capacity
13 | Mamta Suthar, 2013 | Newton-Raphson method | Comparison of different electrical mathematical equations using single and double diodes
14 | Ravinder Kumar Kharb, 2014 | Neuro-fuzzy | Maximum power point tracked at different temperatures and irradiances of the PV module
15 | Oladimeji Ibrahim, 2015 | Perturb & observe | Percentage deviation of PV output power from the ideal is 10 percent; output power increased
16 | Hemant Patel, 2016 | Perturb & observe, incremental conductance, and constant voltage | Mathematical model of the PV system compared with and without MPPT; P&O gives better efficiency
17 | B. P. Nayak, 2017 | Fuzzy logic and perturb & observe | Fuzzy logic performs better than perturb & observe in terms of response time and efficiency
18 | Suneel Raju Pendem, 2018 | Honey-comb configuration | Maximum power is generated under all shading conditions as compared to series or series-parallel configurations
model under different solar irradiance and temperature using a boost converter in MATLAB Simulink [60]. Enrica Scolari et al. have presented a solar PV module evaluated using pyranometer estimations [61]. Suneel Raju Pendem et al. have simulated a solar PV module under partial shading conditions with varying solar irradiance and temperature [62]. Valentin Milenov et al. have designed different types of solar PV modules simulated under different operating conditions of solar irradiance and temperature [63]. S. Mohammadreza Ibrahimi et al. have proposed a new flexible particle swarm optimization algorithm that improves the performance of the entire system [64]. Mohamed Boussaada et al. have built a generator PV simulator that reproduces all the characteristics of a solar PV system under different conditions [65]. Tao Ma et al. have compared monocrystalline and polycrystalline solar PV cells simulated under different climatic conditions in the MATLAB environment [66]. Bibin Raj V. S. et al. have proposed a sub-MPPT method to achieve better MPPT performance by using a deadbeat controller [67]. Table 2 shows a review of different types of solar PV
Table 2 Review of different types of solar PV systems with their observations

S. No | Author & title | Observation
1 | A.M.A. Mahmoud et al. (2000), Fuzzy logic implementation for photovoltaic maximum power tracking | Simulation study of a fuzzy logic controller using a Cuk converter in a stand-alone photovoltaic system
2 | Jae-hyun Yoo et al. (2001), Analysis and Control of PWM Converter with V-I Output Characteristics of Solar Cell | Comparison between buck and buck-boost converters
3 | Minwan Park et al. (2001), A Novel Simulation Method for PV Power Generation Systems Using Real Weather Conditions | Simulation on real field weather data compared with real output
4 | Katsutoshi Ujiie et al. (2002), Study on Dynamic and Static Characteristics of Photovoltaic Cell | Static and dynamic behavior of the PV system investigated
5 | Hiroshi Yamashita et al. (2002), A Novel Simulation Technique of the PV Generation System using Real Weather Conditions | Comparison between the real output voltage and the proposed simulation
6 | Ching-Tsan Chiang et al. (2003), Modeling a photovoltaic power system by CMAC-GBF | Solar PV model built on the CMAC-GBF platform and simulated under different climatic conditions
7 | Yushaizad Yusof et al. (2004), Modeling and Simulation of Maximum Power Point Tracker for Photovoltaic System | Simulation performed with the incremental conductance algorithm
8 | Mummadi Veerachary (2005), Power Tracking for Nonlinear PV Sources with Coupled Inductor SEPIC Converter | Voltage-based power tracking of a PV array using a SEPIC converter demonstrated
9 | K. S. Phani Kiranmai et al. (2006), A Single-Stage Power Conversion System for the PV MPPT Application | Proposed MPPT model simulated in PSIM software
10 | J. Ghaisari et al. (2007), An MPPT Controller Design for Photovoltaic (PV) Systems Based on the Optimal Voltage Factor Tracking | PV system analyzed with and without the MPPT strategy
11 | Montie A. Vitorino et al. (2007), Using the model of the solar cell for determining the maximum power point of photovoltaic systems | Comparison between experimental and rated PV systems
12 | Kuei-Hsiang Chao et al. (2008), Modeling and Fault Simulation of Photovoltaic Generation Systems Using Circuit-Based Model | PSIM-based PV system evaluated under different irradiances and temperatures
13 | Dorin Petreus et al. (2008), Modelling and simulation of photovoltaic cells | Four photovoltaic models presented and simulated under light and temperature influence
14 | Siamak Mehrnami et al. (2009), A Fast Maximum Power Point Tracking Technique for PV Powered Systems | Introduces a new index showing a linear relation with operating-point current, reaching zero at the MPP
15 | A. Elkholy et al. (2010), A New Technique for Photovoltaic Module Performance Mathematical Model | Mathematical model of a PV array formulated under the influence of irradiation and temperature
16 | Yuncong Jiang et al. (2010), Improved Solar PV Cell Matlab Simulation Model and Comparison | Improved MATLAB simulation model of a PV cell shown and compared with others
17 | Fatima Zahra Amatoul et al. (2011), Design Control of DC/AC Converter for a Grid Connected PV System with Maximum Power Tracking using Matlab/Simulink | Modeling and control of a grid-connected PV system performed in MATLAB
18 | G. Bhuvaneswari et al. (2011), Development of a Solar Cell Model in Matlab for PV Based Generation System | Proposed PV model compared with existing models under STC as well as real-time conditions
19 | A. Safari et al. (2011), Inc. Cond. MPPT Method for PV Systems | Comprehensive analysis of standalone and grid-connected PV systems
20 | Satarupa Bal et al. (2012), Comparative Analysis of Mathematical Modeling of Photo-Voltaic (PV) Array | Comparative analysis of single-diode, two-diode, and simplified single-diode models
21 | Md. Aminul Islam et al. (2013), Modeling Solar Photovoltaic Cell and Simulated Performance Analysis of a 250W PV Module | Solar PV cell designed to simulate its characteristics based on the CS6P-250 module
22 | Mamta Suthar et al. (2013), Comparison of Mathematical Models of Photo Voltaic (PV) Module and Effect of Various Parameters on its Performance | Comparison of electrical equivalent mathematical models for single- and two-diode models
23 | Habbati Bellia et al. (2014), A detailed modeling of photovoltaic module using MATLAB | PV cell model simulated using both series and shunt resistance under irradiance and temperature
24 | Ravinder Kumar Kharb et al. (2014), Modeling of solar PV module and maximum power point tracking using ANFIS | Adaptive neuro-fuzzy inference system based PV system formulated
25 | Oladimeji Ibrahim et al. (2015), Matlab/Simulink Model of Solar PV Array with Perturb and Observe MPPT for Maximizing PV Array Efficiency | Solar PV cell simulated using the perturb and observe MPPT algorithm under different irradiances and temperatures
26 | Md Faysal Nayan et al. (2015), Modelling of Solar Cell Characteristics Considering the Effect of Electrical and Environmental Parameters | Effect of temperature and solar irradiance observed on series resistance, shunt resistance, and ideality factor
27 | Ali Chikh et al. (2015), An Optimal Maximum Power Point Tracking Algorithm for PV Systems With Climatic Parameters Estimation | Adaptive neuro-fuzzy solar PV module established and simulated under different climatic conditions
28 | Bijit Kumar Dey et al. (2016), Mathematical Modelling and Characteristic Analysis of Solar PV Cell | Solar PV cell model simulated considering ohmic losses in the resistances
29 | Biraja Prasad Nayak et al. (2017), Design of MPPT Controllers and PV Cells Using MATLAB Simulink and Their Analysis | Comparison of P&O and fuzzy-based MPPT algorithms for a solar PV cell
30 | Nikita Gupta et al. (2017), Sensitivity and reliability models of a PV system connected to grid | Sensitivity function of a solar PV cell and boost converter simulated through MATLAB
31 | Ranita Sen et al. (2017), Modeling of PV array using P&O algorithm in Boost Converter | Solar PV module simulated under different solar irradiances and temperatures using a boost converter
32 | Enrica Scolari et al. (2018), Photovoltaic Model-Based Solar Irradiance Estimators: Performance Comparison and Application to Maximum Power Forecasting | Solar PV module evaluated using pyranometer-based estimations
33 | Suneel Raju Pendem et al. (2018), Modeling, simulation and performance analysis of solar PV array configurations (Series, Series-Parallel and Honey-Comb) to extract maximum power under Partial Shading Conditions | Solar PV module designed under partial shading conditions employing different irradiances and temperatures
34 | Valentin Milenov et al. (2019), Modeling of Electrical Characteristics of Various PV Panels | Different solar PV systems modeled and characterized under different operating conditions
35 | S. Mohammadreza Ibrahimi et al. (2019), Parameters identification of PV solar cells and modules using flexible particle swarm optimization algorithm | Flexible particle swarm optimization proposed to improve the performance of the existing algorithm
36 | Mohamed Boussaada et al. (2019), Emulating and Amplifying an I-V Panel Based on an Electrical Model of a PV Cell | Generator PV simulator built to emulate all the characteristics of a solar PV module
37 | Tao Ma et al. (2019), An improved and comprehensive mathematical model for solar photovoltaic modules under real operating conditions | Monocrystalline and polycrystalline silicon solar PV cells compared and simulated through the MATLAB environment
38 | Bibin Raj V. S. et al. (2020), Design and development of new control technique for standalone PV system | Sub-MPPT method proposed to achieve the best MPPT performance using a deadbeat controller
systems with their observations and gives future considerations for modeling solar PV systems. Figures 1 and 2 below show the generating capacity from renewable and non-renewable energy sources such as solar and wind from 2008 to 2018. Since the
Fig. 1 Power Generating Capacity through renewable and non-renewable sources (solar PV and wind power) respectively from 2008 to 2018 [68]
Fig. 2 Power Generating Capacity through renewable and non-renewable sources (like hydro, bio-power, geothermal and ocean power) respectively from 2008 to 2018 [68]
Fig. 3 Global power generating capacity from 2008 to 2018, comparing non-renewable sources (GW), hydropower (GW), and combined wind power, solar PV, geothermal, and ocean power (GW) [68]
utilization of renewable energy sources is increasing due to their free fuel energy and low maintenance cost, these sources have improved efficiency and respond faster than conventional energy sources. Energy production has increased and was expected to reach up to 43 GW by 2020. These sources have thus overcome the problems of conventional sources and reached remote areas (Fig. 3). There may be challenges for renewable energy sources in the future, such as consistency and plausibility issues. India needs a more comprehensive approach in order to achieve its target of 100 GW by 2022, of which it is still 72 GW short. Solar photovoltaic technology involves complex cost issues and demands advanced technologies to overcome manufacturing and installation problems. Second, many environmental concerns disturb its performance and reduce efficiency. Next, due to a lack of knowledge, rural people are often unaware of the potential of solar energy. Competition from other solar markets is also a very serious concern that blocks further development. Because the different semiconductor materials are made using chemicals, environmental issues may arise from the presence of toxic elements. Solar energy is also not consistent at all times, and the consistency needed for integration with other energy sources is not always possible. Using various conventional and non-conventional energy sources, a smart grid can be implemented that reduces the cost of the system and delivers energy of higher quality; for this, an AMI system is preferred, which transfers data to consumers using smart meters [69].
3 Conclusion

This paper has reviewed different architectural aspects of solar PV cells under the influence of climatic conditions such as solar irradiance and temperature. The review has presented the different characteristics among power, voltage, and current at different irradiances and temperatures, obtained by employing different types of converters. Different ratings of solar PV module systems have been employed through the years. From this, researchers may gain knowledge about different solar PV
module systems for their research interests. The solar industry has been adopting the latest technologies by using different solar cell materials and improving their efficiencies. Many technical and environmental challenges and the future role of solar energy have also been discussed.

Acknowledgement I would like to thank my supervisor Dr. Kamal Kant Sharma, who has provided me insight and given valuable and constructive suggestions in writing this review paper.
Modeling Approach for Different Solar PV …
A. Nigam and K. K. Sharma
An Empirical Study on Gender–Age Influence on Direct-To-Consumer Promotion of Pharmaceutical Products Jaya Rani, Saibal K. Saha, Vivek Pandey, and Ajeya Jha
Abstract Health care, since time immemorial, has been treasured in an ethical bubble, and rightly so, because a lack of ethics in health care can have devastating effects. One aspect of these ethics is to discourage patients' direct access to healthcare information. The law does not permit healthcare information to be shared with patients. The arrival of Internet-based technologies has disrupted this legal position completely: today, healthcare information is available to us online. It is found to have distinct positive aspects, despite the stigma it carried earlier. How do people view it? Do their beliefs about the positive outcomes of this access to healthcare information differ across age and gender? These questions have been answered by this study. Four hundred patients were administered an instrument measuring their beliefs regarding the positive outcomes of access to healthcare information. Low but significant differences across age and gender were found in this respect. Keywords Positive outcomes · Direct-to-consumer promotion · Pharmaceutical products · Age · Gender
1 Introduction Health care since the very beginning has been enshrined in an ethical cocoon and rightly so because lack of ethics in health care can have devastating effects. One aspect of the ethics is to discourage medication practices beyond healthcare professionals. J. Rani · S. K. Saha · V. Pandey · A. Jha (B) Sikkim Manipal Institute of Technology, Sikkim Manipal University, Sikkim, India e-mail: [email protected] J. Rani e-mail: [email protected] S. K. Saha e-mail: [email protected] V. Pandey e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_71
Physicians alone have the right to prescribe medicines. To maintain this exclusivity, even healthcare information is sought to be kept with physicians alone. The law does not permit healthcare information to be shared with patients. The rise of information technology, however, has made this principle an exercise in futility. Today, healthcare information is literally at patients' fingertips. This phenomenon has attracted the interest of many researchers, who identify distinct positive and negative outcomes of direct-to-consumer promotion (DTCP). Many researchers have explored the positive influence of DTCP in general, and age- and gender-based differences in this respect, which are discussed subsequently.
2 Positive Aspects Important researchers exploring the positive aspects of DTCP include [1–37]. Perri and Nelson found that 82% of patients believe they get the right kind of health information through the Internet [1]. Singh and Smith [35] reported that customers claim to be more aware of and educated about health-related issues due to DTCP [6, 25].
3 Age Researchers exploring the age influence of DTCP include [1–32]. Most researchers find age-based differences in access to health information, with younger people having greater access to online information. There are differing views on the influence of age on the positive perception of DTCP: some believe younger people are less positive about it compared with middle-aged and old patients. Turget [21], however, found that younger people place more trust in online health information.
4 Gender Influence Differences have also been found across gender. Differences in access were reported earlier, but later researchers found that this gap has gradually disappeared. With greater access, particularly in developing countries, women now have free access to online health information [20, 21, 23, 36–45]. Mitchell et al. reported gender to be the most noteworthy demographic variable in this respect [43]. Bidmon et al. found that women have a more positive attitude toward the Internet than men [39]. Nam et al. found men less positive toward DTCP [44]. Ghia et al. found that the influence of DTCP is equal on both genders [46].
The influence of age and gender has been explored extensively, but the combined influence of age and gender needs further exploration. The purpose of this paper is to investigate the age–gender influence on positive perceptions of DTCP.
5 Methodology The study relies on an empirical approach and has a conclusive design. The hypothesis framed is that patients do not reflect any significant gender- and age-based difference in considering DTCP to be positive. Data has been collected from ordinary people having health-based issues that necessitate them to seek allopathic medications. Data has been collected from 400 patients across east India. The research instrument created for the study comprises ten statements related to positive outcomes of DTCP, each measured on a 5-point Likert scale. ANOVA has been used to measure the significance of differences across age and gender; p-values below 0.05 indicate a significant difference.
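The paper does not state the software used for its analysis, so the following is only a minimal sketch of the test the methodology describes: a one-way ANOVA over the six age–gender groups for a single Likert statement. SciPy is an assumption, and the group labels, sample sizes, and responses below are illustrative, not the authors' data.

```python
# Sketch of the ANOVA described in the methodology (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Six age-gender cells, as in Table 1: (18-30, 31-50, above 50) x (M, F).
groups = ["18-30/M", "18-30/F", "31-50/M", "31-50/F", "50+/M", "50+/F"]

# Synthetic 5-point Likert responses for one statement (~400 patients total).
responses = {g: rng.integers(1, 6, size=66) for g in groups}

# One-way ANOVA across the six groups; a p-value below 0.05 marks a
# significant age-gender difference for this statement.
f_stat, p_value = stats.f_oneway(*responses.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```

In the paper's setting, this test would be repeated once per statement, yielding the ten F-values and significance levels reported in Table 1.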
6 Result and Discussion In this study, age–gender differences regarding the perceived positive outcomes have been focused upon. Results in this respect are given in Table 1, which provides the succinct details of the survey results. Significant differences (p-value below 0.05) are evident for statements 3, 4, 5, 6, 8, 9, and 10. The null hypothesis is therefore retained only for the statements: (i) Online healthcare information is trustworthy, (ii) Online healthcare information is clear, and (iii) Online healthcare information has made me healthier. Interpretation of the results for the statements showing significant age–gender influence is as follows:

i. Online healthcare information is empowering: A significant mean value difference across gender is seen in the age group 31–50, whereas age differences across 18–30 and above 50 are also stark. The gender difference in the 31–50 age group could be due to the greater need women feel during this age because of the maternity issues involved; they need information more than their male companions. Similarly, older people have greater medical needs than younger people, and health information at their fingertips is obviously empowering for them.

ii. Online healthcare information helps in diagnosing a disease: Again we find gender differences in the age group 18–30, where the mean value is 2.83 for males and 3.6 for females. This is repeated in the age groups 31–50 and 51 and above. Females, therefore, believe more strongly that Internet information helps in better diagnosis. Age-based differences between the groups 18–30 and 31–50 for males are also significant.
Table 1 Patient (age–gender): positive. ANOVA F-values and significance (Sign) for the ten statements; the full table also reports mean scores (µ) of male (M) and female (F) respondents in the age groups 18–30, 31–50, and above 50.

Sl. No.  Statement                                                         F-value  Sign
1        Online healthcare information is trustworthy                      0.3      0.8
2        Online healthcare information is clear                            2.8      0.1
3        Online healthcare information is empowering                       23.3     0
4        Online healthcare information helps in diagnosing a disease       2.1      0.041
5        Online healthcare information clearly mentions side effects       1.1      0.04
6        Online healthcare information permits taking preemptive steps     0.8      0.04
7        Online healthcare information has made me healthier               0.2      0.9
8        Online healthcare information is educative                        14.7     0
9        Online healthcare information helps me to live a healthier life   16.8     0
10       Online healthcare information is very assuring                    2.4      0.048
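The decision rule applied to Table 1 is a simple significance screen. As a worked check of the text's claim, the snippet below applies the p < 0.05 threshold to the p-values reported against statements 1–10 (as paired here with the statements; the pairing follows the in-text list of significant statements):

```python
# Significance screen used in the paper: an age-gender difference is
# significant when the ANOVA p-value falls below 0.05. Keys are statement
# numbers 1-10; values are the p-values reported in Table 1.
p_values = {1: 0.8, 2: 0.1, 3: 0.0, 4: 0.041, 5: 0.04,
            6: 0.04, 7: 0.9, 8: 0.0, 9: 0.0, 10: 0.048}

significant = sorted(s for s, p in p_values.items() if p < 0.05)
print(significant)  # -> [3, 4, 5, 6, 8, 9, 10]
```

The result matches the statements the text identifies as showing significant age–gender differences.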
Differences again can be explained on the basis of divergent medical needs across gender and age.

iii. Online healthcare information clearly mentions side effects: In the age group 18–30, the mean for male respondents is 3.73 and for female respondents 3.96. In the age group 31–50, the mean values for males and females are 4.05 and 4.12, respectively. In the group 51 and above, the mean for male respondents is 4.05 and for female respondents 4.43. As age increases, side effects become more relevant; this is natural, as older people carry a higher share of diseases. Women rate this information more highly than men across age groups, significantly so in the higher age group, possibly due to their greater sensitivity to side effects.

iv. Online healthcare information permits taking preemptive steps: We find no appreciable difference across gender, but differences do appear across age groups. It appears that the older you are, the more strongly you believe that online healthcare information helps in taking preemptive steps to maintain health. Perhaps the greater vulnerability of older people to health issues is again the root of this.

v. Online healthcare information is educative: From the table we can see that, except for males in the age groups 18–30 and 31–50, respondents generally agree that information provided on the Internet is highly educative. Differences across age exist again because old-age health issues occur frequently, are varied, and are often chronic.

vi. Online healthcare information helps me to live a healthier life: From the table, it is apparent that in the age group 31–50 the mean value for males is 4.04 and for females 4.37, confirming a significant difference. The same holds for the age group 51 and above. In the age group 18–30, male respondents do not agree with the statement, as their mean value is 2.88. This is explained again by the lower health-related vulnerability of young people.

vii. Online healthcare information is very assuring: Low but significant age and gender differences, particularly in the younger age group, are obvious. Women have greater healthcare issues, mostly related to gynecological complications.
7 Conclusion and Implications It is evident that age and gender do account for differences on the positive aspects of DTCP. The differences, however, are low or moderate. This is primarily driven by differences in health issues across age and gender: men are less vulnerable to health issues than women, and across age, younger people have fewer health-based problems. The implications of this study are for marketers, physicians, patients, and regulatory bodies. Marketers must appreciate the greater need of older people for health information that is timely, accurate, complete, credible, and unbiased. The same holds true for women. It needs emphasis that many of the disease and product Web sites are managed by pharmaceutical companies themselves, often anonymously. They, consequently, have a great accountability to be ethical to the core.
They must weed out the not-so-ethical pharmaceutical marketers among them. Policy-makers must develop their own information systems that are neutral yet effective. This is not easy, as even the so-called neutral online sources are vulnerable to manipulation by powerful vested interests. Physicians must gear up to be more thoughtful to the requirements of women and geriatric patients, who are emerging as prominent consumers of health care, including health information. Perhaps patients themselves need to be more cautious and enabled to be proactive for their health. This calls for the empowerment of family members too. This is happening gradually: older men, the most indifferent so far, are showing greater concern for their healthcare complications [47, 48].
References 1. M. Perri, A.A. Nelson, An exploratory analysis of consumer recognition of direct-to-consumer advertising of prescription medications. J. Health Care Market. 7, 9–17 (1987) 2. A. O’Brain, Point-of-care advertising is under-appreciated, but why? DTC Perspective 7(1), 31–33 (2008) 3. R.A. Bell, R.L. Kravitz, M.S. Wilkes, Direct-to-consumer prescription drug advertising and the public. J. Gen. Intern. Med. 14, 651–657 (1999) 4. K.J. Bozic, A.R. Smith, S. Hariri et al., The ABJS Marshall Urist Award—the impact of direct-to-consumer advertising in orthopaedics. Clin. Orthop. Relat. Res. 458, 202–219 (2007) 5. A. Deshpande, A. Menon, M. Perri, G. Zinkhan, Direct-to-consumer advertising and its utility in health care decision making: a consumer perspective. J. Health Commun. Int. Perspect. 9, 499–513 (2004) 6. E. Murray, B. Lo, L. Pollack, K. Donelan, K. Lee, Direct-to-consumer advertising: public perceptions of its effects on health behaviors, health care, and the doctor-patient relationship. J. Am. Board Fam. Pract. 17, 6–18 (2004) 7. J.S. Weissman, D. Blumenthal, A.J. Silk, K. Zapert, M. Newman, R. Leitman, Consumers’ reports on the health effects of direct-to-consumer drug advertising. Health Aff. W3, 82–95 (2003) 8. L.J. Burak, A. Damico, College students’ use of widely advertised medications. J. Am. Coll. Health 49, 118–121 (2000) 9. S.M. Choi, W.N. Lee, Understanding the impact of direct-to-consumer (DTC) pharmaceutical advertising on patient-physician interactions—adding the web to the mix. J. Advertising 36, 137–149 (2007) 10. B. Datti, M.W. Carter, The effect of direct-to-consumer advertising on prescription drug use by older adults. Drugs Aging 23, 71–81 (2006) 11. D.E. DeLorme, J. Huh, L.N. Reid, Age differences in how consumers behave following exposure to DTC advertising. Health Commun. 20, 255–265 (2006) 12. D.E. DeLorme, J. Huh, L.N. Reid, A. Soontae, The state of public research on over-the-counter drug advertising. Int. J. 
Pharmaceutical Healthcare Market. 4(3), 208–231 (2010) 13. D.E. DeLorme, J. Huh, Seniors’ uncertainty management of direct-to-consumer prescription drug advertising usefulness. Health Commun. 24, 494–503 (2009) 14. M. Herzenstein, S. Misra, S.S. Posavac, How consumers’ attitudes toward direct-to-consumer advertising of prescription drugs influences ad effectiveness, and consumer and physician behavior. Market. Lett. 15(4), 201–212 (2004) 15. J. Huh, D.E. DeLorme, L.N. Reid, The third-person effect and its influence on behavioral outcomes in a product advertising context: the case of direct-to-consumer prescription drug advertising. Commun. Res. 31, 568–599 (2004)
16. R.H. Kon, M.W. Russo, B. Ory, P. Mendys, R.J. Simpson, Misperception among physicians and patients regarding the risks and benefits of statin treatment: the potential role of direct-to-consumer advertising. J. Clin. Lipidol. 2, 51–57 (2008) 17. B. Lee, C.T. Salmon, H.J. Paek, The effects of information sources on consumer reactions to direct-to-consumer (DTC) prescription drug advertising—a consumer socialization approach. J. Advertising 36, 107–119 (2007) 18. J.S. Marinac, L.A. Godfrey, C. Buchinger, C. Sun, J. Wooten, S.K. Willsie, Attitudes of older Americans toward direct-to-consumer advertising: predictors of impact. Drug Inf. J. 38, 301–311 (2004) 19. A.M. Menon, A.D. Deshpande, M. Perri, G.M. Zinkhan, Consumers' attention to the brief summary in print direct-to-consumer advertisements: perceived usefulness in patient-physician discussions. J. Public Policy Market. 22, 181–191 (2003) 20. M. Joseph, D.F. Spake, D.M. Godwin, Aging consumers and drug marketing: senior citizens' views on DTC advertising, the medicare prescription drug programme and pharmaceutical retailing. J. Med. Market. 8(3), 221–228 (2008) 21. T.C. Elif, Online Health Information Seeking Habits of Middle Aged and Older People: A Case Study (Master's Thesis) (2010) 22. C.H. Rajani, A study to explore scope of direct to consumer advertisement (DTCA) of prescription drugs in India. Int. J. Market. Human Resource Manage. (IJMHRM) 3(1). ISSN 0976-6421 (Print), ISSN 0976-643X (Online) (2012) 23. S. Vats, Impact of direct to consumer advertising through interactive internet media on working youth. Int. J. Business Admin. Res. Rev. 1(2), 88–99 (2014) 24. J. Reast, D. Palihawadana, H. Shabbir, The ethical aspects of direct-to-consumer advertising of prescription drugs in the United Kingdom: physician versus consumer views. J. Advertising Res. 48(3), 450–464 (2008) 25. B.A. Liang, T. Mackey, Direct-to-consumer advertising with interactive Internet media: global regulation and public health issues.
JAMA 305(8), 824–825 (2011) 26. D.L. Frosch, D. Grande, D.M. Tarn, R.L. Kravitz, A decade of controversy: balancing policy with evidence in the regulation of prescription drug advertising. Am. J. Public Health 100(1), 24–32 (2010) 27. N. Sumpradit, R.P. Bagozzi, F.J. Ascione, Give me happiness or take away my pain: explaining consumer responses to prescription drug advertising. Cogent Bus. Manage. 2(1) (2015). https://doi.org/10.1080/23311975.2015.1024926 28. J.K. Prigge, B. Dietz, C. Homburg, W.D. Hoyer, L. Burton Jr, Patient empowerment: a cross-disease exploration of antecedents and consequences. Int. J. Res. Mark. 32(4), 375–386 (2015) 29. J.C. Bélisle-Pipon, B. Williams-Jones, Drug familiarization and therapeutic misconception via direct-to-consumer information. J. Bioethical Inquiry 12(2), 259–267 (2015) 30. J.G. Ball, D. Manika, P. Stout, Causes and consequences of trust in direct-to-consumer prescription drug advertising. Int. J. Advertising 35(2), 216–247 (2016) 31. C. Pechmann, J.R. Catlin, The effects of advertising and other marketing communications on health-related consumer behaviors. Curr. Opin. Psychol. 10, 44–49 (2016) 32. C. Adams, F.L. Gables, Direct-to-consumer advertising of prescription drugs can inform the public and improve health. JAMA Oncol. 2(11), 1395–1396 (2016) 33. J. Rani, A. Jha, Impact of age on online healthcare information search: a study on Indian patients. Asian J. Manage. 6, 17–24 (2015). ISSN 0976-495X 34. J. Rani, S. Saibal, S.K. Mukherjee, V. Pandey, A. Jha, Gender Differences in Patients' Assessment of Positive Outcomes of Online Direct-To-Consumer Promotion (2019a) 35. T. Singh, D. Smith, Direct-to-consumer prescription drug advertising: a study of consumer attitudes and behavioural intentions. J. Consumer Market. 22(7), 369 (2005) 36. M. Joseph, G. Stone, J. Haper, E. Stockwell, K. Johnson, J. Huckaby, The effect of manufacturer-to-consumer prescription drug advertisements: an exploratory investigation. J. Med. Market.
5(3), 233–244 (2005) 37. E. Kontos, K.D. Blake, W.S. Chou, A. Prestin, Predictors of eHealth usage: insights on the digital divide from the health information national trends survey 2012. J. Med. Internet Res. 16(7), e172 (2014)
38. E. Renahy, I. Parizot, P. Chauvin, Determinants of the frequency of online health information seeking: results of a Web-based survey conducted in France in 2007. Informatics Health Soc. Care 5, 25–39 (2010) 39. S. Bidmon, R. Terlutter, Gender differences in searching for health information on the internet and the virtual patient-physician relationship in Germany: exploratory results on how men and women differ and why. J. Med. Internet Res. 17(6), e156 (2015) 40. S. Fox, L. Rainie, J. Horrigan, A. Lenhart, T. Spooner, M. Burke, O. Lewis, C. Carter, The online healthcare revolution: how the web helps Americans take better care of themselves. Pew Internet and American Life Project. Washington, D.C. (Online) (2000) 41. R. Gauld, S. Williams, Use of the internet for health information: a study of Australians and New Zealanders. Informatics Health Soc. Care 34(3), 149–158 (2009) 42. Y.Y. Yan, Online health information seeking behavior in Hong Kong: an exploratory study. J. Med. Syst. 34, 147–153 (2010) 43. V.W. Mitchell, P. Boustani, The effects of demographic variables on measuring perceived risk, in Proceedings of the 1993 Academy of Marketing Science (AMS) Annual Conference. (Springer International Publishing, 2015), pp. 663–669 44. S. Nam, P. Manchanda, P. Chintagunta, The effect of signal quality and contiguous word of mouth on consumer acquisition for a video-on-demand service. Market. Sci. 29(4), 690–700 (2010) 45. J. Rani, S.K. Mukherjee, A. Jha, S. Bibeth, An Empirical Note on Positive Aspects of Online Direct to Consumer Promotion of Pharmaceutical Products (2019b) 46. C. Ghia, R. Jha, G. Rambhad, Impact of pharmaceutical advertisements. J. Young Pharm. 6(2), 58–62 (2014) 47. B. Gough, M.T. Connor, Barriers to healthy eating amongst men:a qualitative analysis. Soc. Sci. Med. 62, 387–395 (2006) 48. N. Richardson, The ‘buck’ stops with me’—reconciling men’s lay conceptualisations of responsibility for health with men’s health policy. Health Sociol. Rev. 
19, 419–436 (2010)
An Investigation of Inclusion of Marginalized People in Skill Development Mission, Sikkim

Anita Gupta, Neeta Dhusia, and Ajeya Jha
Abstract The economic growth of India does not indicate the social development of all. About 93% of the country's labor force is in the unorganized sector, which contributes almost 60% to the country's GDP. In the tiny hill State of Sikkim, almost 68,000 people (10%) are engaged in the informal sector. Economic and social development is lopsided, as infrastructure including health and education is mostly concentrated in Gangtok (East District). The current paper is based on primary research in the State of Sikkim involving 600 respondents from amongst the various stakeholders, and attempts to examine the differences in the perceptions of stakeholders regarding the factors for enhancing inclusion of marginalized people in the State of Sikkim for implementing skill development programmes. Data has been collected from various stakeholders including students, parents, teachers, policy-makers, and industry managers, and multiple sampling methods have been used to select the respondents. The factors are: reaching the skilling programmes to the doorsteps in remote areas, enhancing training in the traditional sector, involvement of mass media, separate allocation of funds for skilling, quoting success stories of the underprivileged locals, and involvement of village panchayats.

Keywords Inclusion · Unorganized sector · Informal sector · Skill development
A. Gupta · N. Dhusia · A. Jha (B)
Sikkim Manipal Institute of Technology, Sikkim Manipal University, Sikkim, India
e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_72

1 Introduction

The average age of the population in India is estimated at around 29 years by 2020, as against 40 years in the USA, 46 years in Europe, and 47 years in Japan. This demographic advantage is predicted to be present until 2040. India is faced with the mammoth task of skilling the youth, especially in the unorganized sector, to move towards an inclusive society and reduce the glaring inequalities in the economy which prevail despite the financial and economic growth of the country. This sector group includes unpaid family workers, casual laborers, households engaged in small industries, contractual
and migrant laborers, school dropouts and failures, adults without skills, farmers and artisans in rural areas, and also women. There are various roadblocks in the process of skill development in the country, more so in the case of the North East Region (NER), where the task of skilling and consequently providing gainful employment is more difficult than in other regions of the country. Most studies on skill development in the NER are based on secondary data. Primary data-based research studies are fewer due to factors like difficult topography, fierce geographical conditions and seclusion, cultural barriers, and, most of all, a lack of dedicated research scholars and teams. Furthermore, the studies explore the demand for skilling rather than understanding the aspirations of the marginalized people, and most explore the impact of skilling in urban areas for those working in the organized sector. Due to the aforesaid reasons, it has been difficult to gain a holistic understanding of the skilling scenario. The State of Sikkim in the North East has its own unique identity, and therefore studies of this State cannot be clubbed together with others for a deeper insight into the issues of skill implementation. A few studies have been done on Sikkim earlier, like the NEFDI Comprehensive Study of Sikkim, the HRD report of the Department of HRDD, research on eco-tourism, the NSDC skill gap study, and studies on homestays and even the arts and crafts of Sikkim, focusing on the problems and the way forward. The current study is based on primary research and examines the differences in the perceptions of various stakeholders about factors for enhancing inclusion of marginalized people in the State of Sikkim for implementing skill development programmes.
2 Literature Review

2.1 Skilling Programmes for Remote Areas

One scholar has highlighted that HRD, to bring about inclusion, should adopt a 'selective support and development' approach to enable, develop and empower the disadvantaged groups of society [2]. This view has been reinforced in another paper, which discusses the capability requirements for developing a 'socially relevant' HR profession [29]. Research conducted in remote areas of Australia analyzes the role of mining companies voluntarily partnering in the sustainability of the affected communities [6]. Feedback studies undertaken in some backward districts of Madhya Pradesh revealed that satellite education in these remote areas led to a better learning environment in the schools [5]. Another article focuses on the positive impact of ICT and information systems in creating opportunities for disadvantaged people to escape social exclusion [22]. The same idea is explored to examine the possibilities of social and e-inclusion through the use of Internet technologies [15].
One of the papers addresses policy issues pertaining to the inadequacy and absence of institutions to develop rural industrial clusters in India, and the neglect of the inclusive innovation dimension of the informal sector [9].
2.2 Skilling in the Traditional Sector

The semi-skilled and the unskilled are losing control over their natural resources and traditional livelihoods while gaining work opportunities in the new emerging forms of employment [27]. That study explored the impact of globalization on the traditional sectors, particularly on the inclusion of women. Another book furnishes vast and deep research insights into key aspects of the training process for women in the informal sector [7]. It is felt that skill training for women can promote productivity and employment and improve standards of living [12]. Jhabvala and Sinha [19] analysed the impact of liberalization and globalization on the working conditions of women in the traditional sector and point to the declining employment opportunities for them [13]. The impact of innovations and skills on the traditional sector has been explored and appropriate policy interventions identified [18]. It has also been observed that traditional skilling methods are coming under great pressure due to changing information technologies and globalization, and that a balance between technology and the individual needs to be maintained [14]. Liberalisation and its after-effects have been a subject of great debate: while proponents point to declining levels of poverty, opponents insist the opposite has happened, with poverty increasing and employment opportunities and access to social services declining. The article [19] looks at the micro sector, the world of the unorganised woman worker, and analyses the varied impact that liberalisation and globalisation have had on her working conditions.
A decline in employment opportunities has seen a simultaneous 'casualisation' and growing 'feminisation' of the workforce, with the concomitant ills of low wages and declining job security.
2.3 Separate Allocation of Skilling Funds

One of the articles [31] points out that a few of the challenges could be mitigated by allocating CSR funds for the skill development of disadvantaged sections; such funds could also be utilized to build infrastructure locally so that students do not have to travel far for training. Some CSR funds could be used for incubation, or as soft loans for the weaker sections to set up small enterprises or develop technological innovations, to help carry skill development to the remotest corners of the country. Another paper explores the evolution of institutional arrangements in the SME sector in India to facilitate interactive learning by finding new modes of financing investment and skill upgradation [10]. In the health sector, the changing health environment requires a special focus on separate funding [20].
2.4 Involvement of Mass Media

One of the reports examines the impact of global media networks on higher education across borders [8]. Another study explores how ICT initiatives have promoted the integration of immigrants and ethnic minorities into the mainstream, facilitating social integration and providing employment opportunities [26]. One study assesses the importance of print media as a strategy to promote brand awareness, as well as its effectiveness, in some lifelong learning opportunity programmes offered by Malaysian universities [11]. Other research suggests that the broadcast media can be utilized as an important awareness-raising tool for human resource development programmes [3]. Another paper discusses how IT is used as an innovative tool to address human resource shortages in the health sector in the Pacific [28].
2.5 Quoting Success Stories of the Underprivileged

A report by the McKinsey Center for Government draws success stories from more than 100 employment-oriented initiatives in 25 countries, after surveying more than 8000 youth, focusing on best practices in skill development and how these can be emulated by others [23]. Another article discusses the reforms that could increase the success rates of community colleges and provide democratic opportunities to disadvantaged people [17]. The non-profit sector is also promoting skill programmes for the tribes in India: cases of under-represented women in several villages of Gujarat who have empowered themselves by making use of education and training are reviewed, and these women are a source of inspiration for others [30].
2.6 Involvement of Village Panchayat for Mobilization

A World Bank report argues that participatory community development and decentralization as a mechanism, if done right, can remedy civil society failures and bring about inclusion. It reviews several such projects to understand the correlation and the arenas for improvement in monitoring and evaluation [21]. A case study from Haryana, India, highlights the importance of self-help groups (SHGs) as a tool for reducing poverty. The formation of SHGs in Haryana has facilitated transforming rural India into a powerful society through microfinance; it promotes inclusion, especially for women, and works towards achieving the Millennium Development Goals [24]. A similar study was done in six districts of Tamil Nadu [32]. Another paper, based on studies in three countries, examined the role of local communities and institutions in integrated rural development and capacity building [33].
2.7 Review of General Factors for Promoting Inclusion

Extensive analysis and synthesis of the literature on skill development initiatives has been done in the past. Most of it highlights the view that upscaling of skilling implementations by the government, in collaboration with industrial and private institutions, can fulfill the inclusive growth objectives of the nation. One study explores the positive role of community development in skill enhancement [16], and another article examines the impact of using ICT in 40 projects, by both the public and private sectors, to reduce the disadvantage experienced by the more excluded groups in society [22]. The literature has also been reviewed on the impact of teachers' attitudes towards special-needs youth in building an inclusive society [1]. Parents, counselors, and professionals likewise propagate the inclusive theory of integrating students with disabilities into general education for their holistic development [4].
3 Methodology

The nature of this work is exploratory and empirical, using descriptive statistics, based on data collected from the responses of various stakeholders associated with the skill development process in Sikkim. The research design is conclusive. The objective of the research is to identify the factors for enhancing inclusion of marginalized people in the State Skill Mission. The sampling frame includes all four districts of Sikkim, and the sampling population includes students undergoing skill training under various centrally and state-sponsored schemes in different districts, parents, trainers, industry HR heads in Sikkim, and officials associated with the skill department. The sample size constitutes the following:
• Teachers teaching skill courses: 100 in number
• Industry partners providing employment: 50 in number
• Government officials and administrators associated with skill initiatives in the State: 40 in number

Both random and judgmental sampling methods have been adopted for this study. The officials of the Skill Development & Entrepreneurship Department, Govt. of Sikkim, and the training providers/stakeholders were selected on a judgmental basis, and trainees and trainers on a random basis from a defined group. Data collection was done in three phases:

a. First phase: focus group discussions and a pilot study were conducted with stakeholders involved in skill development, and based on their responses to open-ended questions the factors promoting inclusion were identified.
b. Second phase: data was collected through structured interview schedules. These schedules were distributed amongst the sampled stakeholders and their responses collected. For the students and their parents, interpreters were employed who could explain the questions in the native (Nepali) language. Unwillingness to respond was extremely low, at around 1.8%, mostly from students and parents.
c. Third phase: the survey method. The filled questionnaires were collected after eliciting responses from the stakeholders, either the same day in a classroom situation or on a few selected days later.

The statements were framed on a five-point Likert scale, and the confidentiality of the respondents' data has been maintained. Reliability of the data has been measured using Cronbach's α, which for the six variables has been found to be 0.823; this may be considered very good, as any value of Cronbach's α above 0.6 is considered acceptable. Validity was tested on the basis of common-method-bias analysis, centred on Harman's one-factor test with an exploratory factor analysis (EFA).
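The reliability figure above follows the standard Cronbach's α formula; a minimal sketch on simulated Likert responses (the data below is illustrative, not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 5-point Likert responses for six variables (SPDS..VPM):
# a shared attitude component plus item-level noise, clipped to the scale
rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 6))
responses = np.clip(base + noise, 1, 5)

print(round(cronbach_alpha(responses), 3))  # the study reports 0.823 on its own data
```

The function returns 1.0 when all items are perfectly correlated and falls towards 0 as the items become unrelated, which is why a value of 0.823 for six items indicates good internal consistency.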
The EFA solution of the unrotated principal factor analysis of the survey revealed the presence of six factors with an eigenvalue greater than 1.0. The largest factor did not account for the majority of the variance in the variables (23%). Therefore, in accordance with Podsakoff and Organ (1986, p. 536), no general factor is apparent. Face validity was checked using experts' opinions on whether or not the statements measure what they are intended to measure. Five experts were consulted to assign values ranging from 1 to 5, with 5 indicating the highest possible concurrence between the statements' wordings and what they intend to measure. Only the variables assigned a minimum value of 4 by all the experts have been included in the study. The variables, along with their codes, are given in Table 1. A subsidiary objective is to test whether significant variations exist across select demographic variables.
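Harman's one-factor check described above can be approximated in code by asking how much of the total variance the leading unrotated factor captures; the sketch below uses the leading eigenvalue of the item correlation matrix as a PCA-style proxy, on simulated data (not the study's responses):

```python
import numpy as np

def largest_factor_share(items: np.ndarray) -> float:
    """Approximate share of total variance captured by the first unrotated factor,
    via the leading eigenvalue of the item correlation matrix (a PCA proxy)."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)   # returned in ascending order
    return eigvals[-1] / corr.shape[0]

# Simulated 5-point Likert data for six items
rng = np.random.default_rng(7)
responses = rng.integers(1, 6, size=(100, 6)).astype(float)

share = largest_factor_share(responses)
# Common-method bias is suspected only when a single factor dominates
# (e.g. explains the majority of variance); the study reports 23%.
print(f"largest factor share: {share:.0%}")
```

If one factor explained most of the variance, the same method-related source would likely be driving all responses; a largest-factor share of 23% is well below that threshold.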
Table 1 Variables and respective codes

S. No.  Proposed intervention                                                 Code
1       Skilling programmes being reached at their doorsteps in remote areas  SPDS
2       Enhancing training in the traditional sector                          ETTS
3       Involvement of mass media                                             IMM
4       Separate allocation of skilling funds for them                        SASF
5       Quoting success stories of the underprivileged                        QSSU
6       Involvement of village panchayat for mobilization                     VPM
4 Result and Findings

a. Table 2 provides the respective scores on interventions to enhance inclusion of marginalized people in the State Skill Mission.

Overall, the most important intervention emerges to be the involvement of village panchayats in mobilization for skill development programmes, with an overall score of 396, followed in second rank by separate allocation of funds for skilling (389). The third rank goes to quoting success stories of underprivileged people who have successfully undertaken skilling and benefited from it (383). The fourth rank is attained by enhancing training in the traditional sectors (371), the fifth by skilling programmes reaching the doorsteps of the beneficiaries (366), and the lowest by involvement of mass media (355).

Teachers value separate allocation of funds as the measure most promoting skill development (324), followed by involvement of panchayats (319). Enhancing training in the traditional sector was ranked 3rd, skilling reaching the doorsteps of the beneficiaries 4th, quoting success stories of the underprivileged 5th, and involvement of mass media 6th.

Industry partners value involvement of village panchayats the highest (136) as a major factor promoting inclusion in the skill sector, closely followed by separate allocation of funds (135) and enhancing training in the traditional sector (130), with quoting success stories of the underprivileged and skilling reaching the doorsteps of the beneficiaries sharing the same score (126). Involvement of mass media (110) is valued lowest.

Table 2 Inclusion interventions, overall values

Code   Overall  Teachers  Industry  Policy-makers
SPDS   366      307       126       34
ETTS   371      308       130       43
IMM    355      300       110       40
SASF   389      324       135       36
QSSU   383      306       126       40
VPM    396      319       136       39
Table 3 Inclusion interventions, overall ranking

Code   Overall  Teachers  Industry  Policy-makers
SPDS   5        4         4         5
ETTS   4        3         3         1
IMM    6        6         5         2
SASF   2        1         2         4
QSSU   3        5         4         2
VPM    1        2         1         3
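The ranks in Table 3 follow mechanically from the scores in Table 2; for instance, the overall column can be derived as:

```python
# Overall scores from Table 2; sorting them descending reproduces
# Table 3's "Overall" ranking column
scores = {"SPDS": 366, "ETTS": 371, "IMM": 355, "SASF": 389, "QSSU": 383, "VPM": 396}

ordered = sorted(scores, key=scores.get, reverse=True)
ranks = {code: i + 1 for i, code in enumerate(ordered)}
print(ranks)  # VPM 1, SASF 2, QSSU 3, ETTS 4, SPDS 5, IMM 6
```

The same procedure applied to the teacher, industry, and policy-maker columns yields the remaining columns of Table 3.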
Policy-makers value enhancing training in the traditional sector as the highest factor in promoting inclusion (43), followed by the 2nd rank assigned jointly to involvement of mass media and quoting success stories of the underprivileged (40). The 3rd rank is assigned to involvement of village panchayats for mobilization for skill programmes (39), the 4th to separate allocation of funds (36), and the 5th (34) to programmes reaching the doorsteps of the marginalized sections for better outreach.

Since the sample sizes for the different categories of respondents vary greatly, the respective rankings of the inclusion interventions (Table 3), more than the raw values, may provide a better picture. From the comparative ranking table, we find that the most important factor for inclusion as perceived by all is involvement of village panchayats in skill development. This indicates that decentralization and involvement of communities at local levels can facilitate inclusion effectively. The 2nd overall rank is assigned to separate allocation of funds for skilling, followed by the 3rd rank to quoting success stories of the underprivileged. The 4th rank goes to enhancing training in the traditional sectors, and the 5th and 6th ranks to skilling reaching the doorsteps of youth and involvement of mass media, respectively. Most teachers believed that separate allocation of funds is the most important factor promoting inclusion and involvement of mass media the least important. The industry partners, however, perceived involvement of village panchayats as the most important contributing factor, while sharing the teachers' opinion about involvement of mass media. The policy-makers ranked training in the traditional sector as the most significant factor and reaching the skilling programmes in the remote areas as least important.

b.
Significance of differences across stakeholder groups vis-à-vis inclusion interventions.

Hypothesis HO1 is framed as: there is no significant difference in the expressed beliefs held by stakeholders vis-à-vis inclusion interventions (Table 4). The data has been subjected to ANOVA to determine whether significant differences exist amongst the stakeholders regarding the factors for enhancing inclusion of marginalized people in the State of Sikkim for implementing skill development programmes. For variable 13a, the means are almost similar for the teachers and the industry partners but much lower for the policy-makers. Across variables 13a to 13f, the teachers show little difference of opinion.

Table 4 Inclusion interventions across stakeholders

       Teachers      Industry partners   Policy-makers
Code   Mean   STD    Mean   STD          Mean   STD
SPDS   3.49   1.02   3.71   0.94         2.83   1.53
ETTS   3.46   0.85   3.82   1.06         3.58   0.51
IMM    3.41   0.83   3.44   0.98         3.33   1.23
SASF   3.64   0.73   3.97   0.90         3.00   0.95
QSSU   3.48   0.68   3.71   0.91         3.33   1.15
VPM    3.63   0.81   4.12   0.86         3.25   1.14

The mean
Table 5 ANOVA of inclusion interventions across stakeholders

Code   F-value  p-value  Null hypothesis
SPDS   3.06     0.05     Rejected
ETTS   2.06     0.13     Accepted
IMM    0.06     0.94     Accepted
SASF   6.75     0.00     Rejected
QSSU   1.41     0.25     Accepted
VPM    6.03     0.00     Rejected
is highest for the industry partners for variable 13d; however, it is significantly lower for the policy-makers (Table 5). From the table, we find that the null hypothesis of no significant difference is rejected for variables 13a, 13d, and 13f; for the rest it is accepted. We therefore infer that differences exist partially across the inclusion variables, and the null hypothesis that there is no significant difference in the expressed beliefs held by stakeholders towards the inclusion variables is rejected. There is a considerable difference of opinion on the following factors:

a. Skilling programmes being reached at their doorsteps in remote areas
b. Separate allocation of skilling funds for them
c. Involvement of village panchayat for mobilization.

From the aforesaid tables, it is implied that the involvement of village panchayats in skill mobilization, followed by separate allocation of funds for skilling, are the most significant factors contributing to an inclusive society in Sikkim, as perceived by both the teachers and the industry partners. Decentralization of authority and finances and involvement of local communities by the government in such programmes can bring about higher enrolments. More investment and funding should go into the implementation of skill schemes at the ground level, and success stories of the less privileged should be quoted often to inspire others to enroll in skill programmes. A group discussion with the World Bank team funding skill development focused on district skill action plans based on the aspirations and needs of rural youth and on identifying projects for inclusion in the districts.
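The per-variable F-tests reported in Table 5 follow the standard one-way ANOVA recipe; a sketch with simulated group scores (the values below are illustrative, not the study's raw data):

```python
from scipy.stats import f_oneway

# Simulated 5-point Likert scores on one intervention for the three
# stakeholder groups (illustrative values only)
teachers = [4, 4, 3, 4, 5, 3, 4, 4, 3, 4]
industry = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]
policy   = [3, 2, 3, 4, 3, 2, 3, 3, 2, 3]

f_stat, p_value = f_oneway(teachers, industry, policy)
# Reject the null hypothesis of equal group means when p_value < 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

In this simulated case the policy group's mean is clearly lower, so the F-statistic is large and the null hypothesis would be rejected, mirroring the pattern the study finds for SPDS, SASF, and VPM.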
Table 6 t-tests of inclusion interventions across gender

Code   Gender  Mean   STD    t-value  p-value  Null hypothesis
SPDS   M       3.59   0.86   1.27     0.21     Accepted
       F       3.35   1.27
ETTS   M       3.65   0.71   1.18     0.24     Accepted
       F       3.46   1.07
IMM    M       3.51   0.82   1.45     0.15     Accepted
       F       3.28   0.98
SASF   M       3.66   0.82   −0.07    0.95     Accepted
       F       3.67   0.85
QSSU   M       3.58   0.79   0.95     0.34     Accepted
       F       3.45   0.79
VPM    M       3.75   0.85   0.56     0.58     Accepted
       F       3.67   0.93
c. Significance of differences across gender vis-à-vis inclusion interventions.

Hypothesis HO2 is framed as: there is no significant difference in the expressed beliefs held across genders vis-à-vis inclusion interventions (Table 6). From the above table, it is observed that the null hypothesis for all the inclusion interventions has been accepted, suggesting that there is no significant difference across gender for the given factors responsible for promoting inclusion in society. The perceptions are the same across the sexes, indicative of non-discrimination between men and women.

d. Significance of differences across districts vis-à-vis inclusion interventions.

Hypothesis HO3 is framed as: there is no significant difference in the expressed beliefs held across districts vis-à-vis inclusion interventions. The null hypothesis for all the given variables has been accepted, implying that there is no significant difference in the perceived factors for inclusion by the stakeholders across the districts of Sikkim. These opinions are the same across the districts and have no correlation with the regions.
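The per-variable gender comparisons in Table 6 are two-sample t-tests; a sketch with simulated male and female scores (illustrative values, not the study's data; Welch's variant is used here, which does not assume equal variances):

```python
from scipy.stats import ttest_ind

# Simulated 5-point Likert scores for male and female respondents
# on one intervention (illustrative values only)
male   = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]
female = [3, 4, 4, 4, 3, 5, 4, 3, 4, 4]

t_stat, p_value = ttest_ind(male, female, equal_var=False)  # Welch's t-test
# The null hypothesis of no gender difference is retained when p_value >= 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```

With nearly equal group means, the t-statistic is small and the p-value large, matching the "Accepted" outcome reported for every variable in Table 6.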
5 Discussion and Implications

A study was conducted in South Sikkim, a mountain region located in the Indian Eastern Himalaya, to understand the bottlenecks and challenges for inclusion of the North East societies (Sundas 2015). It revealed that mobilization and counselling are an arduous task in the hilly state, particularly in the interiors of North and West Sikkim. Much of the youth needs to be sensitised towards skilling, as awareness levels are still very low in the remote regions. Sikkim to a large extent hosts an isolated population that exhibits very distinct socio-cultural traits and hence merits a separate study. The state is also important as the present government is implementing revolutionary approaches to bring about economic change, including bringing the entire population above the poverty line. Skill development is a key dimension for enhancing the employability of otherwise unemployable youth across the scale. The NSDC skill gap study of 2012 indicates that reaching skilling to the remote areas has been a great challenge because of the rough terrain, unfavourable climate, and inaccessibility. Industry partners believe that involvement of village panchayats and separate skilling funds can go a long way in training the disadvantaged sections of society. In a meeting on 23rd February 2018 with the World Bank delegates to Sikkim, the district officials shared the view that there should be decentralization of power and authority; funds should be granted to them to develop district skill plans according to the sectoral needs of each district and area. In rural Sikkim, the problem of educated unemployment is higher, implying an urban-rural divide (Reimeingam 2014). Therefore, inclusion of society is possible only when skilling interventions have a broader outreach.

• The opinions of males and females are the same pertaining to the building of an inclusive society in Sikkim. The secondary literature suggests that there are no gender disparities in the North East States, and the perceptions of the stakeholders regarding the inclusive factors are the same across the sexes.
• The perception of teachers, policy-makers, and industry partners was the same across the districts for the enhancement of inclusion of marginalized sections of society in skill development.
• This reflects that Sikkim as a State needs to earmark funds exclusively for skill infrastructure and training.
6 Conclusion

From the results and discussion, we observe that involvement of village panchayats in skill mobilization is the prime factor, as perceived by the industry partners and the teachers, which can bring about an inclusive society. The second most significant factor is separate allocation of funds for skilling, which needs to be addressed. On the contrary, the policy-makers ranked training in the traditional sector as the most significant factor, followed by quoting the success stories of the underprivileged in the skill sector. This suggests that the Government needs to review its strategy to enhance inclusion: emphasize the involvement of village panchayats, allocate skilling funds separately, and highlight success stories of the underprivileged. Enhancing traditional training may also pay fair dividends in this respect. The perception of the stakeholders, irrespective of their gender and across the districts, was the same for the various factors which can promote inclusion. The industry partners and the teachers are aligned in their view that decentralization of funds and involvement of the community is a must to enhance inclusion. It also reflects that Sikkim as a State needs to spend more on local-level institutions to facilitate the acquisition of skills by the disadvantaged sections of society.

The policy implications of the study include involvement of the village panchayat for mobilization of stakeholders and resources to enhance the inclusion of youth from the marginalized sections, and allocating separate funds for this activity. This implies that a micro strategy, as against a mass strategy, may provide better results. Village panchayats need to be geared up to play this new role of facilitating skill training for their youth. This includes their ability to remain abreast of skill updates, identify resource persons, and conduct skill training. They may play a critical role in motivating the youth to take up such training, as well as to be ready to relocate if needed. Policy-makers, however, believe that skilling in the traditional sector could be the best way to enhance inclusion of marginalized people. While they may be correct, as the traditional sector is known to the rural people who constitute the bulk of the marginalized population in the state, they may modify their views once familiar with the aspirations of the new generation, who are perhaps aware of the limitations of the traditional sector in a rapidly changing world where new skills are rapidly replacing the older ones.
References

1. E. Avramidis, B. Norwich, Teachers' attitudes towards integration/inclusion: a review of the literature. Eur. J. Special Needs Educ. 17(2), 129–147 (2002)
2. G. Banerji, S. Pillai, Inclusive human resource development: a HRD trajectory for development of disadvantaged groups. NHRD Netw. J. 7(1), 42–51 (2014)
3. N. Butcher, Using the Mass Broadcast Media to Build Awareness around Human Resource Development within the Reconstruction and Development Programme (1994)
4. S.L. Carpenter, M.E. King-Sears, S.G. Keys, Counselors + educators + families as a transdisciplinary team = more effective inclusion for students with disabilities. Professional School Counseling 2(1), 1 (1998)
5. S. Chaudhary, S. Garg, On using satellite-based networks for capacity building and education for all: a case study of Rajiv Gandhi project for EduSat-supported elementary education. Educ. Res. Rev. 5(4), 155–165 (2010)
6. L. Cheshire, A corporate responsibility? The constitution of fly-in, fly-out mining companies as governance partners in remote, mine-affected localities. J. Rural Stud. 26(1), 12–20 (2010)
7. M.S. Chikara, Skilling women: for work in informal sector. Vision 16(3), 226 (2012)
8. S. Cunningham, S. Tapsall, Y. Ryan, L. Stedman, K. Bagdon, T. Flew, New media and borderless education: a review of the convergence between global media networks and higher education provision. Eval. Investigations Program 97, 22 (1998)
9. K. Das, Indian rural clusters and innovation: challenges for inclusion. Econ. Manage. Financial Markets 6(1) (2011)
10. K. Das, K.J. Joseph, On Learning, Innovation and Competence Building in India's SMEs: Challenges Ahead. Gujarat Institute of Development Research (2010)
11. C. Devaney, R. Crosse, Parenting Support and Parental Participation Work Package Final Report: Tusla's Programme for Prevention, Partnership and Family Support (2018)
12. N. Diwakar, T. Ahamad, Skills development of women through vocational training. Int. Message Appl. Res. 1, 79–83 (2015)
13. D. Dutta, Effects of globalisation on employment and poverty in dualistic economies: the case of India. Econ. Glob. Soc. Conflicts Labour Environ. Issues 28(412.79), 167 (2004)
14. J. Farrow, Management of change: technological developments and human resource issues in the information sector. J. Managerial Psychol. 12(5), 319–324 (1997)
15. T. Fischer, J. Cullen, G. Geser, W. Hilzensauer, D. Calenda, M. Hartog, Social software for social inclusion: aspirations, realities and futures, in Conference Proceedings, International Information Management Corporation (2011)
16. W. Frisby, S. Millar, The actualities of doing community development to promote the inclusion of low income populations in local sport and recreation. Eur. Sport Manage. Q. 2(3), 209–233 (2002)
17. S. Goldrick-Rab, Challenges and opportunities for improving community college student success. Rev. Educ. Res. 80(3), 437–469 (2010)
18. C. Jack, D. Anderson, N. Connolly, Innovation and skills: implications for the agri-food sector. Education + Training 56(4), 271–286 (2014)
19. R. Jhabvala, S. Sinha, Liberalisation and the woman worker. Economic and Political Weekly, 2037–2044 (2002)
20. M. Lung, F.C. Somerville, Planning and resource allocation in NHS colleges of health studies: 1. Br. J. Nurs. 2(20), 1037–1040 (1993)
21. G. Mansuri, V. Rao, Localizing Development: Does Participation Work? The World Bank (2012)
22. L. Phipps, New communications technologies: a conduit for social inclusion. Inf. Commun. Soc. 3(1), 39–68 (2000)
23. M. Mourshed, D. Farrell, D. Barton, Education to Employment: Designing a System that Works. McKinsey Center for Government (2013)
24. M. Panwar, J. Kumar, Self Help Groups (SHGs) of Women in Haryana: A Social Work Perspective (2012)
25. B. Pozzoni, N. Kumar, A Review of the Literature on Participatory Approaches to Local Development for an Evaluation of the Effectiveness of World Bank Support for Community-Based and Driven Development Approaches (2005)
26. C. Redecker, A. Haché, C. Centeno, Using Information and Communication Technologies to Promote Education and Employment Opportunities for Immigrants and Ethnic Minorities. Joint Research Centre, European Commission (2010)
27. S. Sarkar, Globalization and women at work: a feminist discourse, in International Feminist Summit "Women of Ideas: Feminist Thinking for a New Era", Southbank Convention Center, Townsville, 17–20 June 2007 (2007), p. 17
28. N. Sarkis, L. Mwanri, The role of information technology in strengthening human resources for health: the case of the Pacific open learning health network. Health Educ. 114(1), 67–79 (2013)
29. M. Shukla, From 'Corporate-Centric' to 'Socially Relevant' HR: A Concept Note
30. S. Sindhi, Prospects and challenges in empowerment of tribal women. IOSR J. Humanities Soc. Sci. (JHSS) 6(1), 46–55 (2012)
31. S.N. Tara, N.S. Kumar, Skill development in India: in conversation with S. Ramadorai, Chairman, National Skill Development Agency & National Skill Development Corporation; former CEO, MD and Vice Chairman, Tata Consultancy Services. IIMB Manage. Rev. 28(4), 235–243 (2016)
32. A.V.V.S. Subbalakshmi, Role of Self Help Groups in Growth of Rural Women Entrepreneurship Through Microfinance
33. C.M. Wijayaratna, Role of local communities and institutions in integrated rural development, in Role of Local Communities and Institutions in Integrated Rural Development, pp. 34–62 (2004)
Kumar, A Review of the Literature on Participatory Approaches to Local Development for an Evaluation of the Effectiveness of World Bank Support for Community-based and Driven Development Approaches (2005) 26. C. Redecker, A. Haché, C. Centeno, Using Information and Communication Technologies to Promote Education and Employment Opportunities for Immigrants and Ethnic Minorities. Joint Research Centre European Commission (2010) 27. S. Sarkar, Globalization and women at work: a feminist discourse, in International Feminist Summit,“Women of Ideas: Feminist Thinking of for a New Era”, Southbank Convention Center, Townsville, 17–20 Haziran 2007 (2007, July), p. 17 28. N. Sarkis, L. Mwanri, The role of information technology in strengthening human resources for health: the case of the Pacific open learning health network. Health Educ. 114(1), 67–79 (2013) 29. M. Shukla, From ‘Corporate-Centric’to ‘Socially Relevant’HR: A Concept Note 30. S. Sindhi, Prospects and challenges in empowerment of tribal women. IOSR J. Humanities Soc. Sci. (JHSS) 6(1), 46–55 (2012) 31. S.N. Tara, N.S. Kumar, Skill development in India: In conversation with S. Ramadorai, Chairman, National Skill Development Agency & National Skill Development Corporation; former CEO, MD and Vice Chairman, Tata Consultancy Services. IIMB Manage. Rev. 28(4), 235–243 (2016) 32. A.V.V.S. Subbalakshmi, Role of Self Help Groups in Growth of Rural Women Enterepreneur Ship Through Microfinance 33. C.M. Wijayaratna, Role of local communities and institutions in integrated rural development, in Role of Local Communities and Institutions in Integrated Rural Development, pp. 34–62 (2004)
A Fuzzy Logic Approach for Improved Simulation and Control of Washing Machine System Variables

Tejas V. Bhatt, Akash Kumar Bhoi, Gonçalo Marques, and Ranjit Panigrahi
Abstract To automate the washing process, the determination and measurement of system variables is a challenging task. This paper describes the arrangement of different sensors used to detect system variables such as the dirtiness of laundry, the quantity of laundry, humidity, and the water level inside the drum. Accordingly, the fuzzy controller automatically selects the controlling variables: washing time, amount of detergent, water level, and drying time. The main contribution of this paper is to present the simulation of system variables, obtained through various sensors incorporated in the washing machine design, using the MATLAB fuzzy logic toolbox.

Keywords Fuzzy logic · Fuzzy control · Sensors · Simulation
T. V. Bhatt (B) Department of Biomedical Engineering, U. V. Patel College of Engineering, Ganpat University, Gujarat, India e-mail: [email protected]
A. K. Bhoi Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Sikkim, India e-mail: [email protected]
G. Marques Instituto de Telecomunicações, Universidade da Beira Interior, Covilhã, Portugal e-mail: [email protected]
R. Panigrahi Department of Computer Applications, Sikkim Manipal University, Sikkim, India e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021 A. K. Bhoi et al. (eds.), Advances in Systems, Control and Automations, Lecture Notes in Electrical Engineering 708, https://doi.org/10.1007/978-981-15-8685-9_73

1 Introduction

The fundamental problem with the earliest washing machines was the manual adjustment of the washing time, the number of rinses, the quantity of detergent, and the water level according to the different input variables. Machines available in the current era are limited to the manual selection of a few parameters, such as water
Fig. 1 System arrangement of the washing machine
level and quantity of detergent; such machines are automatic but not smart [1]. To overcome these problems, various sensors are used to sense the input parameters, and the fuzzy controller decides the ideal program for the washing process: water level, washing time, drying time, quantity of detergent, and number of rinses. Figure 1 represents the system arrangement of the fuzzy-controlled washing machine. The main objective of the paper is to develop a fully automatic washing machine by auto-adjustment or auto-selection of the different variables [2]. Some of these variables are available in existing washing machines, which do not, however, provide automatic selection of detergent quantity [3]. This research presents the automatic selection of detergent quantity on the basis of the degree of dirtiness and the other input variables. The fuzzy controller, as conceived, is based on sensing the input parameters of laundry quantity, degree of dirtiness, water level, and humidity to determine the controlled variables of washing time, detergent quantity, water level, and drying time. The dirtiness is continuously sensed by detecting the water transparency using an optical sensor [4]. The amount of laundry is sensed through a strain gauge beam supporting the laundry drum [5]. The dryness is detected using a humidity sensor placed inside the drum [6]. Water level control is obtained by a pressure-operated switch connected to an airtight hose. The arrangement and location of the different sensors are proposed by the authors to implement the management of the washing and drying process in cycles of operations [7]. Sensors similar to those proposed are already in use in other industrial applications. Each of the input and output variables is assigned fuzzy membership values in the small, medium, and high ranges to obtain an appropriate action from the fuzzy controller [8]. The MATLAB software has been used by the authors.
This software provides an enhanced toolbox to design and simulate fuzzy logic systems. Simulation results, obtained in the form of the rule viewer and the surface viewer, enable visualization of the system performance and of the fuzzy controller design.
2 Materials and Methods

The design objective is to select different sensors whose materials function properly in the machine environment under various conditions. An optical sensor is used for dirtiness detection, and a resistive-type strain gauge measures the weight. For the humidity sensor, a layer of polyimide material is deposited on an alumina substrate. A differential pressure level switch determines the water level.

The methodology used in this research article is a knowledge-oriented fuzzy rule-based system. It correlates the input and output system variables on the basis of different hedges, such as small, medium, and large. All the variables are connected with the AND logical operator. The centroid method is used for the defuzzification process, which determines the system output.
3 Sensors for System Variables

3.1 Dirtiness Detection Sensor

A colloid solution is produced when dirt particles from the laundry enter the water due to the washing action. When a beam of light passes through a colloid, according to the Tyndall effect, the light scattered in the colloid is proportional to the colloid particle concentration. The laundry dirt particles range in size between 1 and 1000 nm. Figure 2 represents the basic sensing elements for dirtiness detection in the sensor assembly. The sensor assembly comprises an LED as a light source and a phototransistor as an optical receiver. The input is the dirtiness, which changes the emitter voltage of the phototransistor. The light reaching the phototransistor produces an emitter voltage that changes from V1 for clean water to V2 for dirty water, given by the following equation:

V2 = V1 e^(−x(β2 − β1))    (1)

where x is the optical sensor path length, β1 is the light absorption factor of clean water, and β2 is the light absorption factor of dirty water. The logarithm of the ratio V2/V1 is proportional to the difference of the light absorption factors (β2 − β1). Figure 3 represents the variation of the sensor output signal as dirt particles are removed from the laundry and form a water colloid. It also shows the effect of rinsing. The water inlet valve is made to operate for water feeding when the drum is empty, and the signal is maximum. The feed valve closes when the water reaches the desired level; the water level sensing pressure switch closes the valve. During washing, the laundry dirt mixes with the water and forms a colloid solution. The scattered light through the colloid
Fig. 2 Assembly of the dirtiness detection sensor
Fig. 3 The response of the optical sensor
water decreases continuously, reducing the sensor output signal, which reaches its minimum when the maximum of dirt particles has been transferred into the water.
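Equation (1) can be inverted to recover the absorption-factor difference, and hence a dirtiness estimate, from the measured voltage ratio. The following is a minimal sketch of that relationship; the voltage, path length, and absorption values are hypothetical and not taken from the paper.

```python
import math

def sensor_output(v1, x, beta_clean, beta_dirty):
    # Eq. (1): emitter voltage drops exponentially with the extra absorption of dirty water
    return v1 * math.exp(-x * (beta_dirty - beta_clean))

def absorption_difference(v1, v2, x):
    # Invert Eq. (1): (beta2 - beta1) = -ln(V2/V1) / x
    return -math.log(v2 / v1) / x

v1 = 5.0    # clean-water emitter voltage in volts (hypothetical)
x = 0.02    # optical sensor path length in metres (hypothetical)
v2 = sensor_output(v1, x, beta_clean=1.0, beta_dirty=11.0)
diff = absorption_difference(v1, v2, x)   # recovers beta2 - beta1 = 10.0
```

The recovered difference is what the fuzzy controller would map onto the low, medium, and high dirtiness sets.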
Fig. 4 Strain gauge for weight measurement
3.2 Quantity Detection Sensor

Strain gauge sensing is used for load detection in the washing drum. The drum is supported on a four-arm strain gauge beam. When the drum load comes onto the beam support, the resistance of strain gauges Rsg1 and Rsg4 increases and that of Rsg2 and Rsg3 decreases [9]. Figure 4 represents the four-arm bridge circuit. An unbalanced bridge output voltage is generated as the laundry quantity on the support beam changes. Using four strain gauges increases the output sensitivity and compensates for temperature effects on the gauges. The strain gauge bridge output fixes the fuzzy parameter of laundry quantity.
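The four-arm bridge behaviour can be sketched numerically as follows. The excitation voltage, nominal gauge resistance, and arm arrangement below are illustrative assumptions, not values from the paper; the point is that opposite resistance changes in adjacent arms add up at the output.

```python
def bridge_output(v_ex, r0, delta_r):
    """Full Wheatstone bridge: under load, Rsg1 and Rsg4 increase by delta_r
    while Rsg2 and Rsg3 decrease by delta_r."""
    rsg1 = rsg4 = r0 + delta_r
    rsg2 = rsg3 = r0 - delta_r
    v_a = v_ex * rsg2 / (rsg1 + rsg2)   # half-bridge divider: Rsg1 over Rsg2
    v_b = v_ex * rsg4 / (rsg3 + rsg4)   # half-bridge divider: Rsg3 over Rsg4
    return v_b - v_a                    # = v_ex * delta_r / r0 for this arrangement

# 0.1% resistance change on a 350-ohm gauge with 5 V excitation -> 5 mV output
out = bridge_output(v_ex=5.0, r0=350.0, delta_r=0.35)
```

With no load (delta_r = 0) the bridge is balanced and the output is zero, which is the temperature-compensation property mentioned above.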
3.3 Humidity Detection Sensor

The humidity sensor is constructed from polyimide material. Figure 5 represents a resistive humidity sensor for the detection of laundry dryness. Its ohmic resistance is inversely related to humidity. As the humidity content of the drum space changes, the sensor resistance changes significantly, and this output is used to ascertain the dryness of the laundry [10]. The sensor is fabricated by depositing a humidity-absorbing polyimide layer on an alumina or silicon substrate.
Fig. 5 Resistive humidity sensor
3.4 Level Detection Sensor

A pneumatic pressure switch is effectively used for water level detection and control [11]. Figure 6 shows the arrangement of the pressure sensing level switch fitted at the upper end of the airtight hose that runs from the washer drum to the water level control switch. As the water fills the drum, air is pushed up in the hose, so the air pressure in the hose depends on the water level in the drum. When the air pressure in the hose reaches the switch setting, the switch operates and shuts off the water inlet valve. The laundry quantity determines the required water level, which is detected via the air pressure in the hose. The pressure switch operates when the desired level is reached.
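A simple way to see the control action: the air pressure at the top of the hose tracks the hydrostatic head of the water (about 9.81 kPa per metre of water), and the switch closes the inlet valve once that pressure reaches its setpoint. The setpoint and fill increment below are illustrative assumptions.

```python
KPA_PER_METRE = 9.81  # hydrostatic pressure of water per metre of depth

def fill_to_setpoint(setpoint_kpa, step_m=0.01):
    """Simulate filling: water rises until the hose pressure trips the switch."""
    level_m = 0.0
    while level_m * KPA_PER_METRE < setpoint_kpa:  # valve stays open below setpoint
        level_m += step_m                          # inlet valve admits more water
    return level_m                                 # level at which the valve shuts

level = fill_to_setpoint(2.0)   # about 0.21 m of water for a 2 kPa setpoint
```

In the fuzzy system, the setpoint itself is chosen from the low, medium, or high water-level sets selected by the laundry quantity.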
4 Fuzzy System Controller

Fuzzy logic can be defined as a logical system that generalizes classical two-valued logic to support reasoning under uncertainty. Figure 7 represents the process flow diagram
Fig. 6 Arrangement of pressure sensing level switch
Fig. 7 Process flow diagram of a fuzzy logic controller
of the fuzzy logic controller: the process starts with fuzzification, passes through the fuzzy inference engine, and ends with defuzzification.
4.1 Fuzzification Process

Input data received from the external world is always crisp and is converted into fuzzy sets by fuzzification [12]. The membership functions are applied to the measurements, and the degree of truth of each piece of evidence is calculated. The washing machine system contains input parameters and controlling variables with different predicates: the quantity of laundry may be small, medium, or large, and the quantity of detergent low, medium, or high; similar predicates are taken for the other variables. In fuzzy set theory, membership is a matter of degree, so it is necessary to determine the possible levels for the input and output parameters. The translation of real-world levels to fuzzy values is calculated using membership functions. There are various kinds of membership functions, such as triangular, trapezoidal, Gaussian, and bell. The trapezoidal membership function builds a trapezoid shape, and its definition is:

trapmf(x, [a b c d]) = 0,                x ≤ a
                       (x − a)/(b − a),  a ≤ x ≤ b
                       1,                b ≤ x ≤ c
                       (d − x)/(d − c),  c ≤ x ≤ d
                       0,                d ≤ x

The trapezoidal curve is a function of the vector x and depends on four scalar parameters (a, b, c, and d) [13]. The a and d parameters set the feet of the trapezoid, and the b and c parameters set its shoulders. The membership values for the system variables are as follows:
4.2 Input Variables

Laundry quantity: small (0 1 2 3), medium (2 3.5 4.5 5), large (4.5 6 8).
Laundry dirtiness: low (0 20 32 45), medium (38 55 68 80), high (70 82 90 100).
Humidity: low (15 25 30 40), medium (35 45 55 65), high (60 70 80 90).

4.3 Output Variables

Washing time: small (0 5 8 12), medium (10 14 18 22), large (20 23 28 30).
Detergent quantity: low (0 30 60 80), medium (70 90 120 140), high (130 160 180 200).
Water level: low (0 6 15 23), medium (20 28 33 38), high (35 40 46 50).
Drying time: small (0 1 1.5 2.5), medium (2 3.5 4.5 6), large (5 6.5 7 8).
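The trapezoidal membership function and the variable definitions above translate directly into code. Below is a small pure-Python sketch of trapmf together with the input-variable sets; note that the three-parameter "large" laundry set is treated here as a triangle (b = c), which is an assumption, since the text lists only three values for it.

```python
def trapmf(x, abcd):
    """Trapezoidal membership: feet at a and d, shoulders at b and c."""
    a, b, c, d = abcd
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)      # rising edge
    return (d - x) / (d - c)          # falling edge

# Input-variable membership parameters from Sect. 4.2
LAUNDRY_QUANTITY = {
    "small": (0, 1, 2, 3),
    "medium": (2, 3.5, 4.5, 5),
    "large": (4.5, 6, 6, 8),   # three-parameter set read as a triangle (assumption)
}
DIRTINESS = {
    "low": (0, 20, 32, 45),
    "medium": (38, 55, 68, 80),
    "high": (70, 82, 90, 100),
}
```

For example, a 1.5-kg load is fully "small" (membership 1.0), while 50% dirtiness is partially "medium" (12/17, about 0.71).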
4.4 Rule Evaluation

The fuzzy rules are practical If-Then declarations [14]. These rules generally take the form "If X is a, then Y is b", where X and Y are propositions containing linguistic variables [15]. X is denominated the premise, and Y the consequence, of the rule. The use of linguistic variables and fuzzy If-Then rules therefore exploits the tolerance for imprecision and uncertainty. Moreover, if there are N rules, each with k premises, the i-th rule has the following form:

If x1 is Xi,1 AND x2 is Xi,2 AND … AND xk is Xi,k, then Y is Bi

Table 1 presents the fuzzy rules for washing time selection, and Table 2 describes the fuzzy rules for detergent quantity selection. The input and output of
Table 1 Fuzzy rules for washing time

Dirtiness   Quantity
            Small     Medium    Large
Low         Small     Small     Medium
Medium      Medium    Medium    Large
High        Medium    Large     Large

Table 2 Fuzzy rules for detergent quantity

Dirtiness   Quantity
            Small     Medium    Large
Low         Low       Medium    Medium
Medium      Medium    Medium    High
High        Medium    High      High
both Tables 1 and 2 are connected by the AND logical operator. This yields a combination of nine different rules, one of which is as follows:

RULE: IF laundry quantity is small AND dirtiness is low, THEN washing time is small and detergent quantity is low.

Table 3 represents the fuzzy rules for the water level with respect to the quantity of laundry. It yields three rules:

RULE 1: IF laundry quantity is small, THEN water level is low.
RULE 2: IF laundry quantity is medium, THEN water level is medium.
RULE 3: IF laundry quantity is large, THEN water level is high.

Similarly, Table 4 represents the fuzzy rules for drying time with respect to the quantity of laundry and humidity. The input variables are connected by the AND logical operator [16]. This also yields a combination of nine different rules, one of which is as follows:

RULE: IF laundry quantity is small AND humidity is low, THEN drying time is small.

Left-hand side (LHS) and right-hand side (RHS) computation are the critical parts of rule evaluation [17]. The LHS is located before the THEN operator and produces a degree of fulfillment for each rule. The RHS fuzzy statement is located after the THEN operator and is responsible for changing the shape of the membership functions of the output variables according to the degree of fulfillment produced by the LHS.

Table 3 Fuzzy rules for water level

Input variable    Output variable
Quantity          Water level
Small             Low
Medium            Medium
Large             High
Table 4 Fuzzy rules for drying time

Humidity    Quantity
            Small     Medium    Large
Low         Small     Small     Medium
Medium      Medium    Medium    Large
High        Medium    Large     Large
Figure 8 represents two antecedents connected by the AND logic operator, for a laundry quantity of 2.5 kg and a dirtiness degree of 50%. The AND operator represents the intersection of the fuzzy sets, and the result is a degree of fulfillment [18]. The membership of 2.5 kg in the small quantity set is 0.4, and in the medium quantity set it is 0; the membership of 50% dirtiness in the low set is 0, and in the medium set it is 0.67. Since AND is the intersection, the degree of fulfillment is the minimum of the antecedent memberships:

DOF = min(membership value of laundry quantity, membership value of dirtiness)

The shape of the membership functions of the output variables is clipped according to the degree of fulfillment. Figure 9 represents the right-hand side computation for the medium membership function of wash time at a degree of fulfillment of 0.4. Figure 10 shows the right-hand side computation for the medium membership function of detergent quantity at a degree of fulfillment of 0.4.
Fig. 8 Degree of fulfillment for rule 2
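The min-based degree-of-fulfillment calculation can be sketched directly; the membership values 0.4 and 0.67 are the ones quoted in the text for 2.5 kg and 50% dirtiness.

```python
def degree_of_fulfillment(*antecedent_memberships):
    # AND of rule antecedents = intersection of fuzzy sets = minimum
    return min(antecedent_memberships)

mu_quantity_small = 0.4     # membership of 2.5 kg in 'small' (value from the text)
mu_dirtiness_medium = 0.67  # membership of 50% dirtiness in 'medium' (value from the text)
dof = degree_of_fulfillment(mu_quantity_small, mu_dirtiness_medium)   # 0.4
```

This 0.4 is the clipping level applied to the output sets in the right-hand side computation.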
Fig. 9 Right-hand side computation for wash time
Fig. 10 Right-hand side computation for detergent quantity
4.5 Defuzzification Process

The defuzzification of the output linguistic variables into crisp values is the last step in a fuzzy system [19]. The center-of-area, or centroid, method is a standard defuzzification method.
Crisp output = Σi μ(xi) · xi / Σi μ(xi)

where μ(xi) is the aggregated membership value at position xi of the output universe.
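Over a sampled output universe, the centroid formula reduces to a weighted average, as in this minimal sketch:

```python
def centroid(xs, mus):
    """Center-of-area defuzzification: sum(mu*x) / sum(mu) over sampled points."""
    denominator = sum(mus)
    if denominator == 0:
        return 0.0                      # no rule fired; no preferred output
    return sum(m * x for x, m in zip(xs, mus)) / denominator

# Symmetric aggregated membership centred on 2 -> crisp output 2.0
xs = [0, 1, 2, 3, 4]
mus = [0.0, 0.5, 1.0, 0.5, 0.0]
crisp = centroid(xs, mus)
```

For asymmetric aggregated shapes, such as the clipped trapezoids produced by rule evaluation, the centroid shifts toward the region of higher membership.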
Figure 11 represents the crisp output for a wash time. Moreover, Fig. 12 represents the crisp output for detergent quantity. The dark line shows the crisp defuzzification result by using the centroid method [20].
Fig. 11 Centroid defuzzification for wash time
Fig. 12 Centroid defuzzification for detergent quantity
5 Simulation

The rule viewer shows a roadmap of the complete fuzzy inference method [21]. Figure 13 represents the rule viewer as part of the simulation result. The four small plots across the top of the figure show the antecedents and consequents of the first rule. Each row of plots is a rule, and each column is a variable. The rule viewer also presents how the shape of certain membership functions impacts the overall output. The entire output surface of the system, i.e., the surface viewer for the output variable washing time with respect to the two input variables laundry quantity and laundry dirtiness, is presented in Fig. 14. It represents a three-dimensional view of the data; the view can also be repositioned to obtain different three-dimensional perspectives, and it quantifies the relationships of the system fuzzy variables for performance evaluation [22]. Tables 5, 6 and 7 present the test results for the output variables at different values of the input variables. Table 5 correlates various input values of laundry quantity and dirtiness, which determine the selection of washing time and detergent quantity. As the quantity of laundry increases, the required processing time also increases. Subsequently, the resultant outputs of Tables 6 and 7 show that the water level and drying time increase with the quantity of laundry. Existing household washing machines can handle an average load of 6–7 kg; this system can handle up to an 8-kg load.
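The whole pipeline, fuzzification, min-AND rule firing over the Table 1 rule base, max aggregation, and centroid defuzzification, can be reproduced outside MATLAB as a pure-Python sketch. The membership parameters and rules are those given in Sects. 4.2-4.4; the sampling step and output universe bounds are assumptions.

```python
def trapmf(x, abcd):
    a, b, c, d = abcd
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

QUANTITY = {"small": (0, 1, 2, 3), "medium": (2, 3.5, 4.5, 5), "large": (4.5, 6, 6, 8)}
DIRTINESS = {"low": (0, 20, 32, 45), "medium": (38, 55, 68, 80), "high": (70, 82, 90, 100)}
WASH_TIME = {"small": (0, 5, 8, 12), "medium": (10, 14, 18, 22), "large": (20, 23, 28, 30)}

# Table 1 rule base: (quantity, dirtiness) -> washing time
RULES = {
    ("small", "low"): "small",     ("medium", "low"): "small",     ("large", "low"): "medium",
    ("small", "medium"): "medium", ("medium", "medium"): "medium", ("large", "medium"): "large",
    ("small", "high"): "medium",   ("medium", "high"): "large",    ("large", "high"): "large",
}

def wash_time(quantity_kg, dirtiness_pct, step=0.05):
    xs = [i * step for i in range(int(30 / step) + 1)]   # output universe, 0-30 min
    aggregated = [0.0] * len(xs)
    for (q, d), out in RULES.items():
        # degree of fulfillment: min over the two antecedents
        dof = min(trapmf(quantity_kg, QUANTITY[q]), trapmf(dirtiness_pct, DIRTINESS[d]))
        for i, x in enumerate(xs):                       # clip (min), then aggregate (max)
            aggregated[i] = max(aggregated[i], min(dof, trapmf(x, WASH_TIME[out])))
    num = sum(m * x for x, m in zip(xs, aggregated))
    den = sum(aggregated)
    return num / den if den else 0.0

light = wash_time(1.5, 20)   # lightly loaded, fairly clean laundry
heavy = wash_time(4.4, 82)   # heavier, very dirty laundry
```

For (1.5 kg, 20%) this yields roughly 6 min and for (4.4 kg, 82%) roughly 25 min, consistent in trend with the first and fifth rows of Table 5.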
Fig. 13 Simulation result: Rule viewer of MATLAB fuzzy logic toolbox
Fig. 14 Simulation result: Surface viewer of MATLAB fuzzy logic toolbox

Table 5 Test result for washing time and detergent quantity

Input variables                   Output variables
Laundry quantity   Dirtiness      Washing time   Detergent quantity
1.5                20             6.2            42.1
2                  40             8.55           54.9
2.7                25             6.11           77.1
3.2                70             16             105
4.4                82             25.2           167
5.9                44             23.2           154
6.7                60             25.1           166
7.5                43             21.2           142

Table 6 Test result for water level

Input variable        Output variable
Laundry quantity      Water level
0.9                   11.1
2.1                   12.3
2.5                   17.5
4.95                  40.9
3.2                   29.5
4.1                   30.9
4.8                   37.3
5.5                   42.7
Table 7 Test result for drying time

Input variables                   Output variable
Laundry quantity   Humidity       Drying time
1.5                50             4
2.3                70             4.48
3.4                35             1.25
4.7                70             6.57
5.3                42             6.57
3.5                76             6.59
4.91               40             5.64
7.1                40             6.55
6 Conclusion

Fuzzy logic control of the washing machine, implemented by sensing the laundry input parameters, enables the controller to decide the controlling variables for washing and drying laundry of varied quantity and degree of dirtiness. An important feature of the system is that it detects the required quantity of detergent automatically on the basis of the input parameters and decides the number of cycles. Since the methodology uses an optical sensor, heavy dirt may deposit a layer on the sensor and degrade system performance. It is possible to add more variables and increase the system size as future requirements demand. The fuzzy controller optimizes the washing machine's performance for a quality wash of the laundry.
References

1. H.T. Kashipara, T.V. Bhatt, Designing an intelligent washing machine using fuzzy logic, in International Conference on Trends in Intelligent Electronic Systems, Sathyabama University (2007), pp. 758–761
2. Yörükoğlu, E. Altuğ, Estimation of unbalanced loads in washing machines using fuzzy neural networks. ASME Trans. Mechatronics 18(3), 1182–1190 (2013)
3. P. Tandale, S. Shivpuje, S. Ladkat, K. Simran, Design of washing machine for cleaning of small components. Int. J. Emerging Eng. Res. Technol. 3(4), 30–36 (2015)
4. B. Spasojevic, Fuzzy optical sensor for washing machine, in 7th Seminar on Neural Networks Application in Electrical Engineering, University of Belgrade (2004), pp. 237–245
5. P. Tutak, Application of strain gauges in measurements of strain distribution in complex objects. J. Appl. Comput. Sci. Math. 6(2), 135–145 (2014)
6. K. Shinghal, A. Noor, N. Srivastava, R. Singh, Intelligent humidity sensor for wireless sensor network agricultural application. Int. J. Wireless Mobile Netw. 3(1), 118–128 (2011)
7. M. Wang, Research on the washing machine design improvement of specific consumption groups, in 6th International Forum on Industrial Design (2018), pp. 1–4
8. H.J. Zimmermann, Fuzzy Set Theory and Its Applications, 4th edn. (Springer, 2006), pp. 9–63
9. S. Soloman, Sensors Handbook (McGraw Hill, 1998), pp. 5.7–5.14
10. P.R. Wiederhold, Water Vapor Measurement: Methods and Instrumentation (CRC Press, 1997), pp. 70–78
11. L. Dun, R. Hu, W. Zheng, Research and design of a new type of spray washing machine, in IOP Conference Series: Materials Science and Engineering (2019), pp. 1–6
12. Ghelli, H. Hagras, G. Aldabbagh, A fuzzy logic-based retrofit system for enabling smart energy-efficient electric cookers. IEEE Trans. Fuzzy Syst. 23(6), 1984–1997 (2015)
13. D. Nagarajan, M. Lathamaheswari, J. Kavikumar, E. Deenadayalan, Interval type-2 fuzzy logic washing machine. Int. J. Fuzzy Logic Intell. Syst. 223–233 (2019)
14. W. Mei, Formalization of fuzzy control in possibility theory via rule extraction. IEEE Access 7, 90115–90124 (2019)
15. R.C. Berkan, S.L. Trubatch, Fuzzy Systems Design Principles, 1st edn. (IEEE Press, 2000), pp. 83–129
16. Vidal, F. Esteva, L. Godo, On modal extensions of product fuzzy logic. J. Logic Comput. 27(1), 299–336 (2017)
17. T.M. Tererai, J. Mugova, C. Mbohwa, in Proceedings of the International Conference on Industrial Engineering and Operations Management, Bogota, Colombia, October 25–26, 2017 (CRC Press, 1997), pp. 355–366
18. H.T. Kashipara, T.V. Bhatt, Fuzzy modeling and simulation for regulating the dose of anesthesia, in International Conference on Control, Automation, Communication and Energy Conservation, Perundurai, Tamilnadu (2009), pp. 1–5
19. R.M. Milasi, M.R. Jamali, C. Lucas, Intelligent washing machine: a bioinspired and multiobjective approach. Int. J. Control Autom. Syst. 5(4), 436–443 (2007)
20. J. Yen, R. Langari, Fuzzy Logic: Intelligence, Control, and Information, 3rd edn. (Pearson Education, 2005), pp. 46–77
21. K. Zhang, J. Zhan, W.-Z. Wu, Novel fuzzy rough set models and corresponding applications to multi-criteria decision making. Fuzzy Sets Syst. 383, 92–126 (2020)
22. T. Patel, T. Haupt, T. Bhatt, Fuzzy probabilistic approach for risk assessment of BOT toll roads in Indian context. J. Eng. Des. Technol. 18(1), 251–269 (2019)