Machine Intelligence and Soft Computing: Proceedings of ICMISC 2021 (Advances in Intelligent Systems and Computing, 1419) 9811683638, 9789811683633

This book gathers selected papers presented at the International Conference on Machine Intelligence and Soft Computing (ICMISC 2021).


English Pages 210 [198] Year 2022


Table of contents :
Conference Committee Members
Preface
Contents
About the Editors
Hybrid Fruit Fly Optimization for EHR Applications in Cloud Environments for Load Balancing Optimization
1 Introduction
2 Literature Survey
3 Proposed Methodology
3.1 Technology Requirements
3.2 Implementation
4 Results
5 Conclusion
References
Distributed Machine Learning—An Intuitive Approach
1 Introduction
2 Challenges with Machine Learning Computing
3 What to Distribute?
4 How to Distribute?
5 Challenges in Distribution
6 Case Study: Support Vector Machine
7 Conclusions
References
Prediction of Wall Street Through AI Approach
1 Introduction
2 Literature Survey
3 Methodology
3.1 Yahoo Finance
3.2 Dataset Input
3.3 Data Visualization
3.4 Algorithms Used for Prediction
3.5 Performance Evaluation
4 Conclusion
References
A Hybrid Intelligent Approach for Early Identification of Different Diseases in Plants
1 Introduction
1.1 Types of Diseases in Classification
2 Related Work
3 Proposed Work
4 Feature Extraction
5 Role of Clustering and Classification
6 Algorithm
7 Results and Discussion
8 Conclusion
References
Analyzing Comments on Social Media with XG Boost Mechanism
1 Introduction
1.1 Sentiment Analysis Techniques
1.2 Lexicon Based Approach
1.3 Comparison
2 Literature Review
3 Results and Discussions
4 Conclusion
References
Distributed Edge Learning in Emerging Virtual Reality Systems
1 Introduction
2 Literature Review
3 Method
3.1 VR Systems
3.2 MEC
3.3 A Model for VR Systems Based on Edge Computing
3.4 Simulation Analysis
4 Results and Discussions
4.1 Comparison of Task Offloading Performance Among Models
5 Conclusion
References
A Review on Deep Learning-Based Object Recognition Algorithms
1 Introduction
2 Literature Review
3 Results and Analysis
4 Conclusion and Future Work
References
A Study on Human–Machine Interaction in Banking Services During COVID-19
1 Introduction
1.1 Research Problem
1.2 Objectives of the Study
1.3 Research Methodology
2 Review of Literature
3 Theoretical Framework
3.1 What Are Human–Machine Interactions?
3.2 Advantages of HMI [16]
3.3 Disadvantages of Human–Machine Interface [16]
4 Results and Discussions
5 Conclusion
References
A Study on Review of Application of Blockchain Technology in Banking Industry
1 Introduction
2 What is a Blockchain?
2.1 What is Distributed Ledger Technology? [3]
2.2 Transaction in Blockchain
2.3 Growth in Blockchain Technology
3 Applications of Blockchain in Various Sectors
4 Blockchain in Banking
5 Banks Adopting Blockchain Technology
6 Results and Discussion
7 Conclusion
References
Face Expression Recognition in Video Using Hybrid Feature Extractor and CNN-LSTM
1 Introduction
2 Related Work
2.1 Literature Review
3 Methodology
3.1 Preprocessing of Video and Face Detection
3.2 Extraction of the Hybrid Features
3.3 FER Using Hybrid CNN-LSTM Classifier
4 Results and Discussion
4.1 Evaluation Metrics
4.2 Result Analysis
5 Conclusion
References
Two-Node Tandem Communication Networks to Avoid Congestion
1 Introduction
2 Queuing Model
3 Optimal Strategies of the Representation
4 Numerical Illustration and Sensitivity Analysis
5 Sensitivity Investigation
6 Conclusion
References
Detection of Fake Profiles in Instagram Using Supervised Machine Learning
1 First Section
1.1 Proposed Methodology
2 Literature Survey
2.1 Dataset Information
2.2 Data Collection
2.3 Dataset Annotation
3 Experimental Results
4 Conclusion
References
Review on Space and Security Issues in Cloud Computing
1 Introduction
2 Cloud Storage Models
3 Safety and Security Issues of Cloud Storage Space
4 Conclusion
References
Anomaly Detection in Solar Radiation Forecasting Using LSTM Autoencoder Architecture
1 Introduction
2 Literature Survey
3 Proposed Work
4 Results and Implementation
5 Conclusion
References
Medical Image Denoising Using Non-Local Means Filtering
1 Introduction
2 Related Work
2.1 Method
3 Results
4 Conclusion
References
Skin Disease Detection Using Machine Learning Techniques
1 Introduction
2 Related Work Analysis
3 Proposed Work
3.1 Modules
4 Results
5 Conclusions
References
Detecting False Data Attacks Using KPG-MT Technique
1 Introduction
2 Related Works
3 Proposed System
4 Results and Discussions
5 Conclusion
References
Utilization of Machine Learning Techniques in Pharma
1 Introduction
2 Disease Diagnosis or Identification of Disease
3 The Discovery of Drugs
4 The Prediction of Diseases
5 Research on Clinical Trials
6 Treatment Related to Personnel
7 Health Records Storing on Electronic Models
8 Conclusion
References
New Event Detection for Web Recommendation Using Web Mining
1 Introduction
2 Literature Review
3 Proposed Methodologies
4 Conclusion
References
An Automatic Convolution Neural Network-Based Framework for Robust Classification of Breast Cancer Histopathological Images
1 Introduction
2 Literature Survey
2.1 Breast Cancer Prediction Using Data Mining Method
2.2 By Means of Machine Learning Instrument in Categorization of Breast Cancer
2.3 Analytic Correctness of Dissimilar Machine Learning Algorithms for Breast Cancer Danger Computation
2.4 By Means of Machine Learning Algorithms for Breast Cancer Danger Forecast and Analysis
2.5 Machine Learning with Request in Breast Cancer Analysis and Forecast
3 System Implementation
3.1 Proposed System
3.2 Feature Extraction
4 Conclusion
References
Big Data Analytics Services in Health Care: An Extensive Review
1 Introduction
2 Related Work
2.1 Data Mining in Medical Sector
2.2 Privacy Issues of Big Data in the Medical Sector
2.3 Privacy Problem Solutions in Medical Sector
3 Methodology
3.1 Security Measures of Healthcare Using Big Data
3.2 Privacy Assessment Model
3.3 Architecture
4 Experimental Results
5 Discussion
6 Conclusion
References
Facial Expression Recognition Model Using Deep CNN and Hybrid Feature Selection Pre-processing Technique
1 Introduction
2 Related Work
3 Proposed Model
3.1 Model Description
3.2 Model Data Flow
3.3 Proposed Algorithm
4 Experimentation and Results
4.1 Dataset and Experimental Setup
4.2 Graphical Representation of Results
4.3 Comparison Table
References
Coronavirus (COVID-19) Detection and Classification Using High Resolution Computed Tomography (HR-CT) Imageries
1 Introduction
2 Methods
3 Experimental Results
4 Conclusion
References
Author Index

Advances in Intelligent Systems and Computing 1419

Debnath Bhattacharyya Sanjoy Kumar Saha Philippe Fournier-Viger   Editors

Machine Intelligence and Soft Computing Proceedings of ICMISC 2021

Advances in Intelligent Systems and Computing Volume 1419

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Nikhil R. Pal, Indian Statistical Institute, Kolkata, India Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba Emilio S. Corchado, University of Salamanca, Salamanca, Spain Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil Ngoc Thanh Nguyen , Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. Indexed by DBLP, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST). All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

More information about this series at https://link.springer.com/bookseries/11156

Debnath Bhattacharyya · Sanjoy Kumar Saha · Philippe Fournier-Viger Editors

Machine Intelligence and Soft Computing Proceedings of ICMISC 2021

Editors Debnath Bhattacharyya Department of Computer Science and Engineering Koneru Lakshmaiah Education Foundation Vaddeswaram, India

Sanjoy Kumar Saha Department of Computer Science and Engineering Jadavpur University Kolkata, India

Philippe Fournier-Viger Shenzhen University, University Town Shenzhen, China

ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-981-16-8363-3 ISBN 978-981-16-8364-0 (eBook) https://doi.org/10.1007/978-981-16-8364-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Conference Committee Members

General Chair Paul S. Pang, Federation University, Australia Debnath Bhattacharyya, Koneru Lakshmaiah Education Foundation, India

Advisory Board Paul S. Pang, Unitec Institute of Technology, New Zealand Tai-hoon Kim, Jiao Tong University, Shanghai, China Sabah Mohammed, Lakehead University, Ontario, Canada Jinan Fiaidhi, Lakehead University, Ontario, Canada Y. Byun, Jeju National University, South Korea Amiya Bhaumick, LUC KL, Malaysia Divya, LUC KL, Malaysia Dipti Prasad Mukherjee, ISI Kolkata, India Sanjoy Kumar Saha, Jadavpur University, Kolkata, India Sekhar Verma, IIIT Allahabad, India C.V. Jawhar, IIIT Hyderabad, India Pabitra Mitra, IIT Kharagpur, India Joydeep Chandra, IIT Patna, India Y. Byun, Jeju National University, Jeju Island, South Korea Sonia Djebali, DVRC Léonard de Vinci, Research Center, Paris, France


Editorial Board Debnath Bhattacharyya, Koneru Lakshmaiah Education Foundation, India Sanjoy Kumar Saha, Jadavpur University, Kolkata, India Philippe Fournier-Viger, Shenzhen University, Shenzhen, Guangdong, China

Program Chair Bahman Javadi, Western Sydney University, Australia

Management Co-Chairs Bahman Javadi, Western Sydney University, Australia Hari Kiran Vege, KLEF, India

Finance Committee Venkata Naresh Mandhala, Koneru Lakshmaiah Education Foundation, India

Technical Programme Committee Sanjoy Kumar Saha, Professor, Jadavpur University, Kolkata, India Hans Werner, Associate Professor, University of Munich, Munich, German Goutam Saha, Scientist, CDAC, Kolkata, India Samir Kumar Bandyopadhyay, Professor, University of Calcutta, India Ronnie D. Caytiles, Associate Professor, Hannam University, Republic of Korea Y. Byun, Professor, Jeju National University, Jeju Island, Republic of Korea Alhad Kuwadekar, Professor, University of South Walse, UK Bapi Gorain, Professor, LUC, KL, Malaysia Poulami Das, Assistant Professor, Heritage Institute of Technology, Kolkata, India Indra Kanta Maitra, Associate Professor, BPPIMT, Kolkata, India Divya Midhun Chakravarty, Professor, LUC, KL, Malaysia F. C. Morabito, Professor, Mediterranea University of Reggio Calabria, Reggio Calabria RC, Italy Bidyut Gupta, Professor, Southern Illinois University Carbondale, Carbondale, IL 62901, USA


Nancy A. Bonner P., University of Mary Hardin-Baylor, Belton, TX 76513, USA Alfonsas Misevicius, Professor, Kaunas University of Technology, Lithuania Ratul Bhattacharjee, AVP, AxiomSL, Singapore Lunjin Lu, Professor and Chair, Computer Science and Engineering, Oakland University, Rochester, MI 48309-4401, USA Ajay Deshpande, CTO, Rakya Technologies, Pune, India Debasri Chakraborty, BIET, Suri, West Bengal, India Bob Fisher, Professor, The University of Edinburgh, Scotland Alexandra Branzan Albu, University of Victoria, Victoria, Canada Maia Hoeberechts, Associate Director, Ocean Networks Canada, University of Victoria, Victoria, Canada MHM Krishna Prasad, Professor, UCEK, JNTUK Kakinada, India Edward Ciaccio, Professor, Columbia University, New York, USA Yang-sun Lee, Professor, Seokyung University, South Korea Yun-sik Son, Professor, Dongguk University, South Korea Jae-geol Yim, Professor, Dongguk University, South Korea Jung-yun Kim, Professor, Gachon University, South Korea Mohammed Usman, King Khalid University, Abha, Saudi Arabia Xiao-Zhi Gao, University of Eastern Finland, Finland Tseren-Onolt Ishdorj, Mongolian University of Science and Technology, Mongolia Khuder Altangerel, Mongolian University of Science and Technology, Mongolia Jong-shin Lee, Professor, Chungnam National University, South Korea Jun-kyu Park, Professor, Seoul University, South Korea Wang Jin, Professor, Changsha University of Science and Technology, China Goreti Marreiros, IPP/ISEP, Portugal Mohamed Hamdi, Professor, Supcom, Tunisia Jaroslav Frnda, University of Zilina, Slovakia Shumaila Javaid, Shaanxi Normal University, China

Keynote Speakers Debashis Ganguly, Software and Hardware Modeling Engineer, Apple, Pittsburgh, Pennsylvania, USA Shumaila Javaid, Shanghai Research Institute for Intelligent Autonomous Systems,Tongji University, Shanghai, China Ratul Bhattacharjee, Assistant. Vice President, AxiomSL, Singapore Yvette Gonzales, Iloilo Science and Technology University, Philippines Saptarshi Das, Pennsylvania State University, University Park, Pennsylvania, USA Lalit Garg, Computer Information Systems, Faculty of Information & Communication Technology, University of Malta, Malta Djebali Sonia, ESILV—Ecole Supérieure d’Ingénieurs Léonard de Vinci, Paris, France Pelin Angin, Middle East Technical University, Ankara, Turkey


Philippe Fournier-Viger, Shenzhen University, Shenzhen, Guangdong, China Bahman Javadi, Western Sydney University, Greater Sydney Area, Australia Zhihan Lv, Qingdao University, Qingdao, Shandong, China

Preface

Knowledge in the engineering sciences grows by sharing research ideas with others, and a conference is one of the best ways to present an idea, discuss its future scope, and add energy toward building a strong and innovative future. With this in mind, the International Conference on Machine Intelligence and Soft Computing (ICMISC-2021) was organized to bring together contributions related to Electrical and Electronics Engineering, Information Technology, and Computer Science. The conference is not confined to a specific topic or region: authors from anywhere in the world were welcome to present ideas in similar, mixed, or related technologies, because an idea can change the future and its implementation can build it. KLEF offers a strong platform to carry such ideas out into the world, and the organizers have given their best in every related aspect, from supporting authors to protecting the confidentiality of submissions. The review process was double-blind and managed through EasyChair. Finally, we pay the highest regard to the Koneru Lakshmaiah Education Foundation, Guntur, Andhra Pradesh, India, for extending support for the technical and financial management of ICMISC-2021. Best Wishes from Guntur, India Kolkata, India Shenzhen, China

Debnath Bhattacharyya Sanjoy Kumar Saha Philippe Fournier-Viger


Contents

Hybrid Fruit Fly Optimization for EHR Applications in Cloud Environments for Load Balancing Optimization . . . . . . . . . . . . . . . . . . . . . . P. S. V. S. Sridhar, Sai Sameera Voleti, Venkata Naresh Mandhala, and Debnath Bhattacharyya

1

Distributed Machine Learning—An Intuitive Approach . . . . . . . . . . . . . . . Nandan Banerji

9

Prediction of Wall Street Through AI Approach . . . . . . . . . . . . . . . . . . . . . . V. Rajesh, Udimudi Satish Varma, Ch. S. Sashidhar Reddy, V. Dhiraj, Udimudi Prasanna Swetha, and S. K. Hasane Ahammad

17

A Hybrid Intelligent Approach for Early Identification of Different Diseases in Plants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bhanu Prakash Doppala, Vamsi Bandi, N. Thirupathi Rao, and Debnath Bhattacharyya

25

Analyzing Comments on Social Media with XG Boost Mechanism . . . . . S. Naga Mallik Raj, Eali. Stephen Neal Joshua, K. Swathi, S. Neeraja, and Debnath Bhattacharyya

33

Distributed Edge Learning in Emerging Virtual Reality Systems . . . . . . . Jingyi Wu, Hongxiang Jia, Qiqi Hu, Jiakang Sun, Kunpeng Song, Yao Xu, and Zhihan Lv

39

A Review on Deep Learning-Based Object Recognition Algorithms . . . . Mohan Mahanty, Debnath Bhattacharyya, and Divya Midhunchakkaravarthy

53

A Study on Human–Machine Interaction in Banking Services During COVID-19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Archana Acharya

61


A Study on Review of Application of Blockchain Technology in Banking Industry . . . 69
T. Archana Acharya and P. Veda Upasan

Face Expression Recognition in Video Using Hybrid Feature Extractor and CNN-LSTM . . . 79
Priyanka Anil Gavade, Vandana Bhat, Jagadeesh Pujari, and Venkata Naresh Mandhala

Two-Node Tandem Communication Networks to Avoid Congestion . . . 89
Bhanu Prakash Doppala, Debnath Bhattacharyya, N. Thirupathi Rao, and Hye-jin Kim

Detection of Fake Profiles in Instagram Using Supervised Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 N. Thirupathi Rao, Debnath Bhattacharyya, and Tai-hoon Kim Review on Space and Security Issues in Cloud Computing . . . . . . . . . . . . . 109 B. Dinesh Reddy, Debnath Bhattacharyya, and Hye-jin Kim Anomaly Detection in Solar Radiation Forecasting Using LSTM Autoencoder Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 N. Thirupathi Rao, Debnath Bhattacharyya, and Hye-jin Kim Medical Image Denoising Using Non-Local Means Filtering . . . . . . . . . . . 123 B. Dinesh Reddy, Debnath Bhattacharyya, N. Thirupathi Rao, and Tai-hoon Kim Skin Disease Detection Using Machine Learning Techniques . . . . . . . . . . . 129 Sk. Meeravali, Debnath Bhattacharyya, N. Thirupathi Rao, and Tai-Hoon Kim Detecting False Data Attacks Using KPG-MT Technique . . . . . . . . . . . . . . 137 Eali Stephen Neal Joshua, Debnath Bhattacharyya, N. Thirupathi Rao, and Hye-Jin Kim Utilization of Machine Learning Techniques in Pharma . . . . . . . . . . . . . . . 145 Bandi Vamsi, Debnath Bhattacharyya, and Hye-Jin Kim New Event Detection for Web Recommendation Using Web Mining . . . . 153 Eali Stephen Neal Joshua, Debnath Bhattacharyya, and Tai-Hoon Kim An Automatic Convolution Neural Network-Based Framework for Robust Classification of Breast Cancer Histopathological Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 S. NagaMallik Raj, Debnath Bhattacharyya, Eali Stephen Neal Joshua, and Tai-Hoon Kim Big Data Analytics Services in Health Care: An Extensive Review . . . . . . 167 Bandi Vamsi, Bhanu Prakash Doppala, and Nakka Thirupathi Rao


Facial Expression Recognition Model Using Deep CNN and Hybrid Feature Selection Pre-processing Technique . . . . . . . . . . . . . . . . . . . . . . . . . . 173 Sree Venkat Paruchuri, B. Akhilandeswari, K. Jay Vardhan, P. V. V. S. Srinivas, and T. Ashish Narayan Coronavirus (COVID-19) Detection and Classification Using High Resolution Computed Tomography (HR-CT) Imageries . . . . . . . . . . . . . . . 183 Anil B. Gavade, Rajendra B. Nerli, Ashwin Patil, and Shridhar Ghagane Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

About the Editors

Prof. (Dr.) Debnath Bhattacharyya is a Professor in the Computer Science and Engineering Department, Koneru Lakshmaiah Education Foundation (known as K.L. Deemed to be University), Guntur, Andhra Pradesh, India, and is presently an Invited International Professor at Lincoln University College, KL, Malaysia. Dr. Bhattacharyya received his Ph.D. (Tech., Computer Science and Engineering) from the University of Calcutta, Kolkata, India. He has been a member of IEEE since 2010 and is a Life Member of CSI, India. He is an editor of many international journals (indexed by Scopus, SCI, and Web of Science) and has published 168 Scopus-indexed papers and 128 Web of Science papers. His research interests include security engineering, pattern recognition, biometric authentication, multimodal biometric authentication, data mining, and image processing.

Dr. Sanjoy Kumar Saha is currently a Professor in the Department of Computer Science and Engineering, Jadavpur University, Kolkata, India. He received his BE and ME from Jadavpur University and completed his Ph.D. at IIEST Shibpur, West Bengal, India. His research interests include image, video and audio data processing, physiological sensor signal processing, and data analytics. He has published more than a hundred articles in reputed international journals and conferences, has guided eleven Ph.D. students, and holds four US patents. He is a member of the IEEE Computer Society, the Indian Unit for Pattern Recognition and Artificial Intelligence, and ACM. He has served TCS Innovation Lab, Kolkata, India, as an advisor for the signal processing group.

Philippe Fournier-Viger (Ph.D.) is a professor at Shenzhen University (Shenzhen, China). Five years after completing his Ph.D., he came to China and became a full professor at the Harbin Institute of Technology (Shenzhen), after obtaining a national talent title from the National Science Foundation of China. He has published more than 290 research papers in international conferences and journals, which have received more than 6400 citations. He is associate editor-in-chief of the Applied Intelligence journal and editor-in-chief of Data Science and Pattern Recognition. He is the founder of the popular SPMF open-source data mining library, which provides more than 190 algorithms for identifying various types of patterns in data and has been used in more than 830 papers. He is a co-organizer of the UDML series of workshops on utility pattern mining at KDD 2018, ICDM 2019, and ICDM 2020.

Hybrid Fruit Fly Optimization for EHR Applications in Cloud Environments for Load Balancing Optimization P. S. V. S. Sridhar , Sai Sameera Voleti , Venkata Naresh Mandhala , and Debnath Bhattacharyya

Abstract Cloud computing is considered one of the most innovative technologies, having gained attention in almost every domain, including manufacturing, IT, education, and the automotive industry. It continues to advance rapidly for clear reasons: customer well-being, security, and privacy. It has established its position in the healthcare market through the storage of electronic patient records and cloud-based diagnostic frameworks. However, the expanding use of cloud-based technology often creates uncertainty in resource management and leads to high power consumption, poor performance, and increased operating costs. The proposed method is used to achieve optimal resource utilization while lowering energy consumption and cost in a suitable simulation environment, and the outcome of the current proposal provides a better course of action. The preliminary results demonstrate that the proposed algorithm outperforms existing load-balancing algorithms. Keywords Cloud Computing · Electronic Health Records · Load balancing · Fruit fly Optimization Algorithm (FOA) · Simulated Annealing (SA) · Energy consumption

1 Introduction There has been an enormous shift in the generation, utilization, storage, and sharing of healthcare data. From traditional storage to the digitalization of healthcare data, the healthcare industry has made significant progress in improving its data management practices [1, 2]. The widespread adoption of cloud computing in healthcare goes well beyond storing data on cloud architecture. Healthcare providers are now using this technology to gain efficiencies, improve workflows, lower the costs associated with healthcare delivery, and personalize care plans to improve outcomes [3, 4]. P. S. V. S. Sridhar · S. S. Voleti (B) · V. N. Mandhala · D. Bhattacharyya Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_1


The methodology uses a threshold value to identify the load on a virtual machine [5, 6]. If a particular VM is overloaded, the task is removed and allotted to another identified virtual machine based on the deadline of the task execution. In this paper, a hybrid fruit fly optimization algorithm combined with a simulated annealing approach is proposed to improve the convergence speed and the quality of the solution [4, 7].

2 Literature Survey This section gives a succinct summary of the research carried out in this area, briefly surveying similar approaches and the variety of techniques used to study the problem. Various articles in the literature concentrate on load balancing and the fruit fly optimization algorithm (FOA) in cloud computing. A study on load balancing algorithms in cloud computing presented a grouping of task scheduling and load balancing algorithms into seven distinct classes: Hadoop MapReduce load balancing, agent-based load balancing, natural phenomena-based load balancing, application-oriented load balancing, general load balancing, network-aware load balancing, and workflow-specific load balancing, which in the literature fall under two domains depending on the system state and on who initiated the process. From every class, the various algorithms are gathered and their advantages and limitations are recorded [8, 9]. Another method proposed a dynamic load balancing algorithm using ant colony optimization in grid computing; this work associates the pheromone with the resources, with the major objective of balancing the workload and improving resource utilization [10, 11]. A firefly algorithm for load balancing has also been proposed in cloud computing. In that approach the algorithm redistributes the load, and the results demonstrate that the method improves performance in terms of task migration and job arrival rate while reducing computational time [12, 13].


3 Proposed Methodology 3.1 Technology Requirements 3.1.1

Fruit Fly Optimization Algorithm

The fruit fly is superior to related species in vision and osphresis (sense of smell). The fruit fly search process is described next, and Algorithm 1 defines the FOA procedure.

Basic Fruit Fly Algorithm Using the Foraging Iteration Method. The algorithm simulates the foraging behavior of fruit flies in order to find an optimal solution to the objective function.

Step 1: Initialize the parameters. Set the population size Sizepop and the maximum number of generations Maxgen, and set the initial population position (X_axis, Y_axis).

Step 2: In the osphresis (smell-based) search, each fruit fly randomly generates a search direction and distance. A random value (RV) gives the search distance, and the position of each individual is updated accordingly:

x_i = X_axis + RV
y_i = Y_axis + RV

Step 3: Since the precise location of the food is unknown, the distance (Dist_i) between each fruit fly and the coordinate origin is determined, and the flavor concentration judgment value (S_i) is then calculated:

Dist_i = sqrt(x_i^2 + y_i^2)
S_i = 1 / Dist_i

Step 4: Substitute the concentration judgment value (S_i) into the fitness function of the flavor concentration, so that the individual smell concentration Smell_i of each fruit fly can be obtained:

Smell_i = Fitness(S_i)

Step 5: Identify the individual with the best flavor concentration within the drosophila population:

[bestSmell, bestIndex] = min(Smell)

Step 6: Record the best flavor concentration, and let the other individuals in the population travel to that location:

SmellBest = bestSmell
X_axis = x(bestIndex)
Y_axis = y(bestIndex)

Step 7: The termination condition is to check whether the optimal position concentration is better than that of the previous generation and whether the maximum number of iterations has been reached.
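To make the seven steps above concrete, the following is a minimal Python sketch of the basic FOA loop. It is only an illustration: the objective function, search radius, and population settings are assumptions for demonstration, not the EHR load-balancing fitness used in the paper.

```python
# Minimal sketch of the basic fruit fly optimization loop described in Steps 1-7.
# The objective function and parameter values are illustrative assumptions.
import math
import random

def fitness(s):
    # Example objective: smaller is better (assumed for illustration).
    return (s - 0.5) ** 2

def foa(sizepop=30, maxgen=100, search_radius=1.0):
    # Step 1: initialize the swarm location.
    x_axis, y_axis = random.random(), random.random()
    best_smell = float("inf")

    for _ in range(maxgen):
        smells, positions = [], []
        for _ in range(sizepop):
            # Step 2: random search direction and distance around the swarm location.
            xi = x_axis + random.uniform(-search_radius, search_radius)
            yi = y_axis + random.uniform(-search_radius, search_radius)
            # Step 3: distance to the origin and smell concentration judgment value.
            dist = math.sqrt(xi ** 2 + yi ** 2) or 1e-12
            si = 1.0 / dist
            # Step 4: evaluate the smell concentration with the fitness function.
            smells.append(fitness(si))
            positions.append((xi, yi))
        # Step 5: pick the individual with the best (minimum) smell concentration.
        gen_best = min(range(sizepop), key=lambda i: smells[i])
        # Steps 6-7: if it improves on the best so far, the swarm flies there.
        if smells[gen_best] < best_smell:
            best_smell = smells[gen_best]
            x_axis, y_axis = positions[gen_best]
    return best_smell, (x_axis, y_axis)

if __name__ == "__main__":
    print(foa())
```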

3.2 Implementation The basic aim of the project is to ensure load balancing in the servers that maintain electronic health records (EHR). In this regard, the whole concept is considered by classifying the servers basically into three different ones. Server 1: Deals with out-patient data. Server 2: For physician data/use. Server 3: Pharmacy. The flow of the project goes with building three virtual machines using the technical requirements mentioned in the above section. Step 1: Virtual machine creation: In this step, an introductory interface is built that indicates virtual machine creation and clearing choices. Step 2: An interface that helps in creating a virtual machine. According to the load and tasks, the virtual machines are created. Step 3: An additional VM creation. An additional VM is built in order to divert the tasks from one server to another (Fig. 1).
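The overload check and task-diversion idea described in these steps can be sketched as follows. This is a toy illustration only: the server names, capacities, and threshold logic are hypothetical, and the actual system in the paper is implemented as a JAVA/SWING interface backed by SQL.

```python
# Toy sketch of the overload check and task-diversion idea described above.
# Server names, capacities and the threshold are hypothetical assumptions.
servers = {
    "server1_outpatient": {"tasks": 0, "capacity": 10},
    "server2_physician":  {"tasks": 0, "capacity": 10},
    "server3_pharmacy":   {"tasks": 0, "capacity": 10},
    "additional_vm":      {"tasks": 0, "capacity": 10},  # VM used to divert overflow
}

def assign_task(preferred: str) -> str:
    """Assign a request to the preferred server, diverting it if overloaded."""
    server = servers[preferred]
    if server["tasks"] < server["capacity"]:
        server["tasks"] += 1
        return f"connected to {preferred}"
    # Overload detected: divert to the least-loaded alternative VM.
    fallback = min(servers, key=lambda name: servers[name]["tasks"])
    if servers[fallback]["tasks"] >= servers[fallback]["capacity"]:
        return "Overload"            # every server is saturated
    servers[fallback]["tasks"] += 1
    return f"{preferred} overloaded, diverted to {fallback}"

# Example: a burst of pharmacy requests eventually spills over to another VM.
for _ in range(12):
    print(assign_task("server3_pharmacy"))
```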

4 Results This section presents the experimental outcomes of the proposed method. Figure 2 shows the result of a user request form: whenever a user requests certain details related to the pharmacy, the virtual machine connects to one of the created servers by checking the number of tasks assigned to each server and the task traffic (Fig. 2). If the server is overloaded, an "Overload" message is shown (Fig. 3). The results indicate that the overload detected in the earlier cases has been balanced by checking the available capacity of the other servers, so that connections for the electronic health records are established efficiently.


Fig. 1 Additional virtual machine creation

Fig. 2 An interface that displays connection established to server 3

Fig. 3 An interface that shows connection refused by server

5 Conclusion In this work, basic load balancing concepts are presented for cloud servers that maintain Electronic Health Records (EHR). The paper demonstrates load balancing among different servers, implemented on an interface using JAVA, SWING, and SQL concepts. This interface can now be successfully used in hospitals of any kind for electronic data storage in a cloud environment.

References 1. Vinati Kamani, A. (2019). 5 ways cloud computing is impacting healthcare. Retrieved from https://www.healthitoutcomes.com/doc/ways-cloud-computing-is-impacting-health care-0001. 2. Maguluri, S. T., Srikant, R., & Ying, L. (2012, March). Stochastic models of load balancing and scheduling in cloud computing clusters. In 2012 Proceedings IEEE Infocom (pp. 702–710). IEEE. 3. Cloud Computing in Healthcare. (2020). Retrieved from https://www.foreseemed.com/blog/ cloud-computing-in-healthcare. 4. Li, Y., & Han, M. (2020). Improved fruit fly algorithm on structural optimization. Brain Informatics, 7(1), 1–13. 5. Wang, L., & Zhang, X. L. (2017). Research progress of fruit fly optimization algorithm. Control Theory & Applications, 34(5), 557–563. 6. Hu, J., Chen, P., Yang, Y., Liu, Y., & Chen, X. (2019). The fruit fly optimization algorithms for patient-centered care based on interval trapezoidal type-2 fuzzy numbers. International Journal of Fuzzy Systems, 21(4), 1270–1287. 7. Lawanyashri, M., Balusamy, B., & Subha, S. (2017). Energy-aware hybrid fruitfly optimization for load balancing in cloud environments for EHR applications. Informatics in Medicine Unlocked, 8, 42–50. 8. Ghomi, E. J., Rahmani, A. M., & Qader, N. N. (2017). Load-balancing algorithms in cloud computing: A survey. Journal of Network and Computer Applications, 88, 50–71.


9. Neghabi, A. A., Navimipour, N. J., Hosseinzadeh, M., & Rezaee, A. (2018). Load balancing mechanisms in the software defined networks: A systematic and comprehensive review of the literature. IEEE Access, 6, 14159–14178. 10. Goyal, S. K., & Singh, M. (2012). Adaptive and dynamic load balancing in grid using ant colony optimization. International Journal of Engineering and Technology, 4(4), 167–174. 11. Pan, W. T. (2012). A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowledge-Based Systems, 26, 69–74. 12. Florence, A. P., & Shanthi, V. (2014). A load balancing model using firefly algorithm in cloud computing. Journal of Computer Science, 10(7), 1156. 13. Loheswaran, K. (2021). An upgraded fruit fly optimisation algorithm for solving task scheduling and resource management problem in cloud infrastructure. IET Networks, 10(1), 24–33.

Distributed Machine Learning—An Intuitive Approach Nandan Banerji

Abstract The scope for automated systems by enabling artificial intelligence has grown rapidly over the last decade and this growth has been stimulated by advances in machine learning techniques by exploiting hardware acceleration. In order to improve the quality of predictions and utilize machine learning solutions feasible for more complex applications, a tangible amount of training data is necessary. Although relatively tiny machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computation power of computing machinery, there is a need for distributing the machine learning workload across different machines. That leads to a switch from the centralized into a distributed system. These distributed systems present new challenges; first and foremost, the efficient parallelism of the training process and the creation of a coherent model. This article provides an extensive overview of the current state-of-the-art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available. Keywords Classical machine learning · Distributed machine learning

1 Introduction The vast and swift development of new age technologies since th last decade has led to an unparalleled growth of data science and engineering. Subsequently, the Machine Learning (ML) approaches are used heavily to analyze data sets and set up the decision support systems. Sometimes, the algorithmic solution fails to produce N. Banerji (B) Department of CSE, National Institute of Technology, Durgapur, India Department of AI and DS, KLEF (Deemed to be University), Green Fields, Guntur, Andra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_2


feasible solutions due to the nature of the problems. Mostly, in the field of real-time applications like Internet of Things(IoT) or some high computing video or audio analytic applications where the live or stream data is being fed into the system for immediate decision-making, needs tremendous and high-performance computing infrastructures. Often, we train the data over a long duration of time as the data are mostly “Big-data” in nature. The term “Big-data” does not mean massive or huge in true sense, it’s actually data having complex relationships among its different attributes or features. Also, it is not evident that always the required infrastructure becomes available to all of us or sometimes if it is available with any party, it may charge a lot for availing the infra. So, the challenge is not only with the infrastructure but with the cost of affording it. Also with the smart deployment of devices over the globe, we can find some devices around us and form a club or federation to create a virtual platform for executing ML algorithms for decision-making. The approach will likely be beneficial for training and designing such complex Artificial Intelligent(AI) systems [1, 2]. So, considering the above situation, we can design some solutions in a distributed manner with the modified ML algorithms to reduce the cost of training by means of time and resource to judiciously use the available nearby infrastructures as a cooperative federation as an alternative to the centralized legacy systems. To design such a system that mostly depends on data, we first have concern about the data. Inherently, for enterprise applications, data are distributed among different sites. In a nutshell, we can say that the data are geographically distributed. So, the concern of data, travelling from one location to another needs enormous security. Besides it, the routing of data needs a huge communication overhead; which sometimes may slowdown the performance of the system by encountering delays associated with data transfer. With modern day applications, the main trade-off is with infrastructural necessity and data security. The concern to minimize it motivates to design distributed solutions for ML algorithms [3, 4]. The paper is intended to focus on the issues and associated challenges with distributed approach. Also, the article is enriched with a case study by attempting the same problem with two approaches: Centralized and the distributed ML. Hope it will be beneficial as a guide toward distributed ML approaches. It’s a very emerging field of research, a very few articles are available on the web. We tried to mention that in different portions of our article [5, 6]. Rest of the paper is organized as below. Research challenges associated with the distributed approach have been elaborated in Sect. 2. Sections 3 and 4, respectively, discuss data distribution and its different techniques. The challenges associated with distributing the tasks are covered in Sect. 5. Section 6 illustrates the case study and results as a comparison. Finally, Sect. 7 concludes the discussion.

2 Challenges with Machine Learning Computing A rapid escalation of smart infrastructural growth has paved the way for AI and ML to be used in complex applications. The problems are mostly data-dependent; while a variety of algorithms have evolved, the data representations used are mostly identical in their structural perspective. Finding an optimized solution for a given problem requires a lot of mathematics, especially linear algebraic computation. The cost of such solutions mostly comes from transforming one data format into another, such as matrices, vectors, and tensors, and it also includes the cost of arithmetic operations such as multiplication (which is computationally costly compared to addition) and the transformation of matrices into different forms. Over the decades, active research has sought ways to reduce the load of such costly operations associated with ML techniques. Accelerated hardware has taken us up to a certain limit, but to move beyond it we must find other ways to improve the computational efficiency of ML algorithms, so that results can be obtained within stringent deadlines at the cost of an average (definitely not high-end) computing infrastructure. A distributed approach can be such a solution, as an alternative to a centralized one. The devices around us can form a cooperative, distributed virtualization infrastructure, so that the demand for resources can be met while the cost of setting up a high-performance computing infrastructure is reduced. The distributed infrastructure can even be an ad hoc one: at any time, we can create an ad hoc infrastructure with the capable devices around us and use it for ML decision-making. This can be seen as an extended and elastic ML infrastructure for new-age decision support systems. Compared with conventional computing infrastructure such as supercomputers, grids, and cloud virtualization techniques, machine learning computing needs something different for the best execution of its workload, as most of the algorithms are processed (rather, trained) with enormous amounts of data. Accelerated hardware and GPUs are most common in this case, and a parallel approach can make it better. Scalable solutions are also possible if we create on-demand infrastructure with the resources available around us, which leads to better utilization of system resources and reduces the chance of both over-utilization and under-utilization.

3 What to Distribute? As per a Generic idea of machine learning, firstly, the data is partitioned into two halves; training data and test data sets, respectively, though the ratio is purely problem-specific. The model also undergoes two phases, training phase and prediction phase. During training phase, the ML model is trained and optimized using available training data set. Also, that tunes up the considered hyperparameters (a hyperparameter is a value to control the training process). During training, a huge amount of training data is passed through the model for training. This needs enormous computing power as well as time. Then the trained model is deployed to provide predictions for new data fed into the system; which is known as prediction phase. The Training phase involves training a machine learning model while the Prediction phase is used for deploying the trained model in practice. The ready model accepts new data as input and produces a prediction as output. The training and prediction phases are not harmonious. Also, incremental learning combines the training phase and inference phase and continuously trains the model by using new data from the

Fig. 1 Data- and model-centric design

prediction phase. Now for distribution, a question comes, what to distribute data or model. As per the research community, there are two intrinsically different ways of distributing the problem across all machines. Popularly known as “data-centric approach” and “model-centric approach” or a combination of both (Fig. 1). In data-centric approach, the total data set is functionally divided into several shares. In the next section, we will discuss the details of this approach. In the “model-centric” approach, exact copies of the entire data sets are processed by the distributed coworker nodes that operate on different parts of the model. The model therefore acts as an aggregation of all distributed counterparts. The “modelcentric” approach cannot automatically be applied to every machine learning algorithm, because it is suggested by experts that the model parameters cannot be split up randomly.

4 How to Distribute? In this study, we focus on the data-distributed approach. As most of the community has pointed out, distributing the data is the obvious route for distributed ML, but there are many challenges associated with it. In the data-centric approach, the total data set is functionally divided into several shares, and each of the distributed worker nodes is given one share/portion of the whole data set. The worker nodes apply the same ML model to different data set fragments. Sometimes the models on the worker nodes can also be varied to achieve better prediction accuracy, though that is a separate piece of work, popularly known as landscape analysis, which may be covered in the future. In the data-centric approach, we must ensure that the partitioning of the data is lossless. There are two broad categories of data partitioning: vertical and horizontal. In the vertical approach, the data set is partitioned column-wise, and each portion acts like a subset of the features of the whole data set. In the horizontal approach, the partition is row-wise, so all the worker nodes get a part of the samples with all available features; unlike the vertical case, here the smaller data chunks preserve all the relationships among variables, similar to the original large data set.
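As an illustration of the two partitioning schemes, the following is a minimal pandas/NumPy sketch. The synthetic data set, column names, and number of workers are assumptions made for demonstration only.

```python
# Minimal sketch of horizontal (row-wise) and vertical (column-wise) partitioning.
# The synthetic data set and the number of workers are assumptions.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 5),
                  columns=["f1", "f2", "f3", "f4", "label"])
n_workers = 4

# Horizontal partition: every worker gets a subset of the rows, all features kept.
horizontal_shares = np.array_split(df, n_workers)

# Vertical partition: every worker gets a subset of the columns (plus the label).
feature_groups = np.array_split(df.columns[:-1], n_workers)
vertical_shares = [df[list(cols) + ["label"]] for cols in feature_groups]

# Simple validity check (see the next section): compare each chunk's mean
# against the mean of the full data set.
for i, chunk in enumerate(horizontal_shares):
    print(i, np.allclose(chunk.mean(), df.mean(), atol=0.05))
```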

5 Challenges in Distribution The challenges are associated with the distribution patterns. First, the partition of the data must preserve internal dependencies, must be reconstructible, and must be lossless in all perspectives. In the horizontal approach, since the smaller chunk data sets retain all the columns, these properties can be guaranteed. In the vertical partition approach, however, the number of columns is reduced, so the dependencies may not be preserved. For such data sets we often apply principal component analysis (PCA) to find unnecessary, burdensome features and eliminate them if necessary. A simple validity check is to compare the mean of the actual large data set with the mean of each smaller chunk: if the mean does not change, or changes very little, we consider that chunk a valid representative of the actual data set. Whether vertical or horizontal, there is another issue of how large a share each worker should get, i.e., in what ratio the data should be partitioned. A number of approaches are mentioned below (Fig. 2); a small sketch of these sharing policies is given after this list.

• Equal-share approach: A very democratic way of distribution in which all workers get the same amount of data. This is best if all the worker nodes have the same infrastructure, but it may not be a good choice for heterogeneous worker nodes.
• Weighted-share approach: A portion of the data set is given to each worker based on its capability. This is a very practical approach, but sometimes the share calculation fails to match the workers' actual capacities, which may lead to a "favor the favorite" phenomenon.
• Random-share approach: Purely random assignment, better suited to ad hoc and homogeneous worker nodes (Figs. 3 and 4).
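The sharing policies listed above can be sketched as follows; the worker capacities and the synthetic data set are assumptions for illustration.

```python
# Minimal sketch of the three share policies (equal, weighted, random).
# Worker capacities and the data set are assumptions for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 4), columns=list("abcd"))
capacities = {"worker1": 4, "worker2": 2, "worker3": 1, "worker4": 1}

def equal_share(data, workers):
    # Same number of rows for every worker.
    return dict(zip(workers, np.array_split(data, len(workers))))

def weighted_share(data, caps):
    # Rows proportional to each worker's declared capacity.
    weights = np.array(list(caps.values()), dtype=float)
    bounds = np.cumsum(weights / weights.sum() * len(data)).astype(int)[:-1]
    return dict(zip(caps, np.split(data, bounds)))

def random_share(data, workers, seed=0):
    # Shuffle first, then split evenly.
    shuffled = data.sample(frac=1.0, random_state=seed)
    return dict(zip(workers, np.array_split(shuffled, len(workers))))

for name, share in weighted_share(df, capacities).items():
    print(name, len(share))   # e.g. worker1 gets roughly half of the rows
```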

Fig. 2 Data partition techniques: horizontal partition (row-wise) versus vertical partition (column-wise)

Fig. 3 Sample graph for training and test data (all data set)

Fig. 4 Sample graph for training and test data (with sample data share-1)

6 Case Study: Support Vector Machine The intention of this study is to give a simple and clear idea of the distributed machine learning approach and to compare the results of the classical (centralized) approach with its distributed counterpart. All results are obtained with standard data sets from a well-known web repository. As a case study of distributed machine learning, we took the support vector machine (SVM) model with a standard data set from Kaggle, Social Network Ads.csv, as our sample data set. Initially, the data set is tested with the classical SVM approach, and then we apply the data-centric model to keep the comparison simple to understand. The output is given for the classical SVM approach, and four instances of the data-centric approach were executed, though for limitations of space only one such output is included. The test was carried out with 4 data shares and yielded similar results. We find that the training accuracy on the total data set is 0.91, whereas on the data shares the training accuracy ranges between 0.88 and 0.92. It can be concluded that the accuracy may differ when the total data set is shared, but it stays within a narrow range, which means the work can indeed be divided into relatively smaller data chunks.
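A minimal sketch of this case study is given below: the same SVM classifier is trained on the full data set and on four horizontal shares, and the training accuracies are compared. The column names assumed for the Kaggle Social_Network_Ads.csv file (Age, EstimatedSalary, Purchased) and the preprocessing choices are illustrative and may differ from the exact setup used in the study.

```python
# Sketch of the case study: classical SVM on the full data set versus the same
# SVM trained on four horizontal data shares. Column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("Social_Network_Ads.csv")
X, y = df[["Age", "EstimatedSalary"]].values, df["Purchased"].values

def train_accuracy(X_part, y_part):
    X_tr, X_te, y_tr, y_te = train_test_split(X_part, y_part, test_size=0.25,
                                              random_state=0, stratify=y_part)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    return model.score(X_tr, y_tr)

# Classical (centralized) approach: one model over the whole data set.
print("full data:", round(train_accuracy(X, y), 2))

# Data-centric approach: four horizontal shares, one model per share.
shuffled = np.random.RandomState(0).permutation(len(X))
for i, idx in enumerate(np.array_split(shuffled, 4)):
    print(f"share-{i + 1}:", round(train_accuracy(X[idx], y[idx]), 2))
```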


7 Conclusions This study is an intuitive introduction to an emerging direction in machine learning: reducing the load on an individual machine by sharing it across a number of capable machines. In view of the increasing load on cloud infrastructure, researchers have proposed distributed approaches for machine learning algorithms. Of the two approaches mentioned here, the data-centric approach is relatively easier than the model-centric approach, although underlying challenges remain in terms of data shares, guarantees of lossless decomposition, ensembling of the results from the individual data shares, and more. In future work, we must focus on these issues so that ML algorithms can be distributed easily and in a cost- and technology-effective manner.

References 1. Yang, K., Jiang, T., Shi, Y., Ding, Z. (2020). Federated learning via over- the-air computation. https://arxiv.org/abs/1812.11750v3. 2. Zhu, G., Liu, D., Du, Y., You, C., Zhang, J., & Huang, K. (2018). Towards an intel- ligent edge: Wireless communication meets machine learning. arXiv preprint arXiv:1809.00343. 3. Haller, L., & Moraldo, H. (2021). Software Engineers, Google Research. Quickly training gameplaying agents with machine learning. Tuesday, June 29. https://ai.googleblog.com/. 4. Ko, A. Architecture of federated learning. 5. Smith, V., Chiang, C.-K., Sanjabi, M., & Talwalkar, A. S. (2017). Federated multi-task learning. In Proceedings of Neural Information Processing System (NeurIPS) (pp. 4424–4434). 6. Stoica, I., Song, D., Popa, R. A., Patterson, D., Mahoney, M. W., Katz, R., Joseph, A. D., Jordan, M., Hellerstein, J. M., Gonzalez, J. E., et al. (2017). A berkeley view of systems challenges for AI. arXiv preprint arXiv:1712.05855.

Prediction of Wall Street Through AI Approach V. Rajesh , Udimudi Satish Varma , Ch. S. Sashidhar Reddy , V. Dhiraj, Udimudi Prasanna Swetha , and S. K. Hasane Ahammad

Abstract The stock market is one of the main areas of a nation's economy. Stock prices are hard to forecast because of their unpredictable system dynamics, which makes this an intriguing domain to study. The present aim is to discover an ideal regressor algorithm that forecasts the future "Close Price" of stocks through a comparative study of various machine learning, deep learning, and time-series forecasting techniques, namely ARIMA, Random Forest Regressor, Linear Regression, LSTM (a particular type of RNN), and SVM Regressor. A comparison of these techniques is then carried out with respect to prediction accuracy and performance. After analyzing all the models separately, the LSTM is found to be the most precise for stock value forecasting. Keywords Financial exchange · Stock prediction · Regressor model · Support Vector machine · Random forest · Linear regression · ARIMA · LSTM

1 Introduction Stock exchanges are places where individuals and institutions can trade stocks. These transactions are carried out in a public setting [1]. It is believed that the volatility of the markets can affect the nation’s development [2]. The goal of this work is to predict the closing prices of the stock market records for the period 2012–2020. The predictions are made using the various regressor models that were used [3]. This work aims to analyze the performance of AI models when it comes to estimating the closing price of stocks.

V. Rajesh · S. K. Hasane Ahammad Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India U. S. Varma · Ch. S. Sashidhar Reddy · V. Dhiraj (B) · U. P. Swetha Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_3


2 Literature Survey Nonita Sharma [4] and her colleagues discussed the main idea of using past market data to predict the prices of financial exchange. The proposed model is more robust than SVM Regressor and can be utilized to design effective algorithms for financial market forecasts. Aakanksha Sharaff [3] and her colleagues [5] studied various stochastic models such as the Recurrent Neural Network, the Artificial Neural Network, and the ARIMA algorithm for stock market prediction. They concluded that the ANN has robust prediction capability. Honghai Yu et al. [6] proposed a dual-stage ANN design that combines SVM and Empirical mode Decomposition algorithms. This model was suggested for stock price predictions. This model has been developed to predict the stocks accurately using the nonlinear stock value pattern prediction techniques. It is more advantageous to develop an integrated model that takes into account different features determination. Srinath Ravikumar [7] and his coauthors [8] proposed a system that can be used for both classification and regression techniques. The former proposes that the system should be able to predict the price movements of stocks based on the previous day’s data. Ze Zhang et al. [9] developed a self-adaptive PSO (price prediction algorithm) based on the Elman [10] RNN framework to forecast the beginning of the financial market. The model is compared with the backpropagation network and their performance is evaluated. ARIMA and ANN models were used for the forecasting of the Indonesian stock market. The former is faster to forecast and has better learning ability [2, 11].

3 Methodology The proposed methodology can be divided into various stages. Essential Python libraries such as datareader, datapath, and scikit-learn are used to perform the various tasks, and the complete methodology of this research is explained in Fig. 1.

3.1 Yahoo Finance Yahoo Finance is a website that provides various types of financial news and information, including stock quotes, reports, and official statements.


Fig. 1 Proposed methodology

3.2 Dataset Input The data are taken from Yahoo Finance and are used to train the different machine learning models, which forecast the close price of Apple stock.
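A minimal sketch of pulling this data in Python, assuming the yfinance package as the Yahoo Finance client (the paper does not name the exact download library it used):

```python
# Download daily Apple OHLC data over the study window (dates are illustrative).
import yfinance as yf

data = yf.download("AAPL", start="2012-01-01", end="2020-12-31")
close = data["Close"]          # target series: daily close price
print(close.head())
```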

3.3 Data Visualization The Apple stock prices have been retrieved from the web resource since the 8th of November 2007. The data include the date of each record and the open, high, low, and close prices.

3.4 Algorithms Used for Prediction

3.4.1 Random Forest

There is no legitimate method for pruning information in decision trees, and Random Forest was introduced to avoid this issue. The Random Forest prediction is given by Eq. (1), and Fig. 2 shows the prediction of Apple stock:

$$ s(x) = h_0(x) + h_1(x) + \cdots + h_n(x) \qquad (1) $$


Fig. 2 Stock prediction by Random forest regressor

Fig. 3 Prediction of apple stock closing prices

In (1), $s(x)$ is the final prediction and $h_n(x)$ is the decision function of the $n$-th decision tree.
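As an illustration of this ensemble, the sketch below fits a Random Forest regressor on lagged close prices; predicting each day's close from the five previous closes is an assumption made for demonstration, not the paper's exact feature setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_lagged(close, n_lags=5):
    # Row j holds close[j..j+n_lags-1]; the target is close[j+n_lags].
    X = np.column_stack([close[i:len(close) - n_lags + i] for i in range(n_lags)])
    y = close[n_lags:]
    return X, y

X, y = make_lagged(close.values)           # `close` from the download sketch above
split = int(0.8 * len(X))                  # simple chronological train/test split
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:split], y[:split])
rf_pred = rf.predict(X[split:])            # forecasts of the close price
```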

3.4.2 Linear Regression

Linear regression is commonly used in prediction and in stock analysis. Figure 3 shows the prediction of Apple stock closing prices with a linear fit. Linear regression fits a line that lies close to all the data points, such that the sum of squared distances between the data points and the fitted line is minimal.
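A comparable sketch with scikit-learn's ordinary least-squares regressor, reusing the hypothetical lagged features from the Random Forest sketch:

```python
from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(X[:split], y[:split])       # same chronological split as before
lr_pred = lr.predict(X[split:])    # linear forecast of the close price
```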

3.4.3 LSTM

LSTM is a special type of RNN used in deep learning because large architectures can be trained effectively with it. The prediction accuracy of the LSTM model is higher than that of the other models, as can be seen in Fig. 4, which shows the predicted versus actual closing prices of Apple stock. LSTM is trained using backpropagation through time, which helps to overcome the vanishing-gradient problem (the shrinking of the gradient as it is propagated backward), a drawback of standard RNNs.

Fig. 4 Predicted versus actual closing prices of apple stock
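A minimal sketch of such an LSTM forecaster, assuming TensorFlow/Keras and the same hypothetical lagged-window features; the network size and training settings are illustrative, not the paper's exact configuration.

```python
import tensorflow as tf

# LSTM expects 3-D input: (samples, timesteps, features); each lag becomes a timestep.
X_seq = X.reshape(X.shape[0], X.shape[1], 1)

lstm_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(X.shape[1], 1)),  # 50 memory units (assumed size)
    tf.keras.layers.Dense(1),                               # regression output: next close
])
lstm_model.compile(optimizer="adam", loss="mse")
lstm_model.fit(X_seq[:split], y[:split], epochs=20, batch_size=32, verbose=0)
lstm_pred = lstm_model.predict(X_seq[split:]).ravel()
# In practice the series is usually scaled (e.g. min-max) before training.
```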


Fig. 5 Prediction of apple stocks by SVR

Fig. 6 Actual versus predicted price graph by ARIMA


3.4.4 SVM Regressor

Both classification and regression tasks can be implemented with the SVM model while keeping all of the principal features that characterize it (the maximal margin). Figure 5 shows the prediction of Apple stock by SVR. The SVM regressor uses almost the same principle as the SVM classifier, with a small set of changes.
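A brief scikit-learn sketch of the SVR variant on the same lagged features; the RBF kernel and regularization values are illustrative assumptions.

```python
from sklearn.svm import SVR

svr = SVR(kernel="rbf", C=100.0, epsilon=0.1)  # illustrative hyperparameters
svr.fit(X[:split], y[:split])
svr_pred = svr.predict(X[split:])
```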

3.4.5 ARIMA

The ARIMA algorithm converts non-stationary data into stationary data before it starts learning from the data. The ARIMA model has been used widely in finance and economics because it is robust, efficient, and has strong potential for short-term stock market forecasting. Figure 6 shows the prediction of Apple stock prices.
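A minimal ARIMA sketch using statsmodels; the (p, d, q) = (5, 1, 0) order is an illustrative assumption rather than the order used in the paper.

```python
from statsmodels.tsa.arima.model import ARIMA

cut = int(0.8 * len(close))                      # chronological split of the close series
train, test = close[:cut], close[cut:]
arima = ARIMA(train, order=(5, 1, 0)).fit()      # differencing (d=1) handles non-stationarity
arima_pred = arima.forecast(steps=len(test))     # multi-step close-price forecast
```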

3.5 Performance Evaluation The performance of the models used in financial market price forecasting is measured with accuracy metrics. MSE measures the average of the squared errors between the observed and predicted values, and can be expressed as follows:

$$ \text{MSE} = \frac{1}{s}\sum_{k=1}^{s}\left(x_k^{1} - x_k^{2}\right)^{2} \qquad (2) $$


Table 1 Prediction analysis

S.no  Algorithm name            MSE (Apple stock)  MAPE (Apple stock)
1     Random Forest Regressor   6.5189             0.9843
2     Linear Regression         4.1724             0.8324
3     Long short-term memory    3.2485             0.6523
4     SVM Regressor             4.196              0.8564
5     ARIMA                     4.161              0.8278

In (2), $s$ is the total number of data points (the number of days predicted), $x_k^{1}$ denotes the observed values, and $x_k^{2}$ the predicted values. MAPE is an accuracy measurement for a prediction model in statistics, and it is given by:

$$ \text{MAPE} = \frac{1}{s}\sum_{k=1}^{s}\left|\frac{O_k - P_k}{O_k}\right| \qquad (3) $$

In (3), $s$ is the total number of summation iterations, $O_k$ is the original value, and $P_k$ is the predicted value. The performance measures of the algorithms used for predicting the stock price values are given in Table 1. The models performed well on both the stationary and non-stationary sets of data, and ARIMA is also accurate for short-term predictions.
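Both metrics can be computed directly with NumPy; a small sketch, reusing the hypothetical hold-out predictions from the earlier snippets:

```python
import numpy as np

def mse(observed, predicted):
    return np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2)

def mape(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean(np.abs((observed - predicted) / observed))

print("LSTM  MSE:", mse(y[split:], lstm_pred), " MAPE:", mape(y[split:], lstm_pred))
```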

4 Conclusion AI algorithms were applied to stock market price prediction, and a comparative analysis was performed among the various algorithms. The LSTM and ARIMA models achieved the highest precision among the models evaluated.

References 1. Ariyo, A. A., Adewumi, A. O., & Ayo, C. K. (2014). Stock Price prediction using the arima model. In 2014 UKSim-AMSS 16th International Conference on Computer Modelling and


Simulation, Cambridge (pp. 106–112). https://doi.org/10.1109/UKSim.2014.67. 2. Sharaff, A., & Choudhary, M. (2018). Comparative analysis of various stock prediction techniques. In 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli (pp. 735–738). https://doi.org/10.1109/ICOEI.2018.8553825. 3. Zhang, Z., Shen, Y., Zhang, G., Song, Y., & Zhu, Y. (2017). Short-term prediction for opening price of stock market based on self-adapting variant PSO-Elman neural network. In 2017 8th IEEE International Conference on Software Engineering andService Science (ICSESS), Beijing (pp. 225–228). https://doi.org/10.1109/ICSESS.2017.8342901. 4. Sharma, A., Bhuriya, D., & Singh, U. (2017). Survey of stock market prediction using machine learning approach. In 2017 International conference of (ICECA), Coimbatore, (pp. 506–509). https://doi.org/10.1109/ICECA.2017.8212715. 5. Sharma, N., & Juneja, A. (2017). Combining of random forest estimates using LSboost for stock market index prediction. In 2017 2nd International Conference for Convergence in Technology (I2CT), Mumbai (pp. 1199–1202). https://doi.org/10.1109/I2CT.2017.8226316. 6. Ravikumar, S., & Saraf, P. Prediction of stock prices using machine learning (Regression, Classification) agorithms. In 2020 International Conference for Emerging Technology (INCET). https://doi.org/10.1109/INCET49848.2020.9154061. 7. Yuan, X., Yuan, J., Jiang, T., & Ain, Q. U. (2020). Integrated long-term stock selection models based on feature selection and machine learning algorithms for China stock market. IEEE Access, 8, 22672–22685. https://doi.org/10.1109/ACCESS.2020.2969293 8. Bhattacharjee, I., & Bhattacharja, P. (2019). Stock price prediction: a comparative study between traditional statistical approach and machine learning approach. In 2019 4th International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh (pp. 1–6). https://doi.org/10.1109/EICT48899.2019.9068850. 9. Zhang, Y., & Yang, S. (2019). Prediction on the highest price of the stock based on PSO-LSTM Neural Network. In 2019 3rd International Conference on Electronic Information Technology and Computer Engineering (EITCE), Xiamen, China (pp. 1565–1569). https://doi.org/10.1109/ EITCE47263.2019.9094982. 10. Wang, J., Sun, T., Liu, B., Cao, Y., & Wang, D. (2018). Financial markets prediction with deep learning. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL (pp. 97–104). https://doi.org/10.1109/ICMLA.2018.00022. 11. Wijaya, Y. B., Kom, S., & Napitupulu, T. A. (2010). Stock price prediction: Comparison of Arima and artificial neural network methods - An Indonesia Stock’s Case. In 2010 Second International Conference on Advances in Computing, Control, and Telecommunication Technologies, Jakarta (pp. 176–179). https://doi.org/10.1109/ACT.2010.45.

A Hybrid Intelligent Approach for Early Identification of Different Diseases in Plants Bhanu Prakash Doppala , Vamsi Bandi , N. Thirupathi Rao , and Debnath Bhattacharyya

Abstract Manual detection of plant illness at scale is practically impossible, and research is being carried out on many farm-related factors, such as organic farming and constant plant monitoring, toward the recognition of different diseases. This requires an enormous amount of work, plant disease expertise, and a substantial amount of time. We propose a clustering and convolutional neural network (CNN) model for accurate disease prediction. Identification of plant disease includes methods such as image acquisition, preprocessing, image segmentation, detection, and recognition of characteristics. This paper mainly examines the segmentation and feature-retrieval functions for two different plant diseases (pomegranate and potato) and achieves accuracies of 95% and 94%, respectively. Keywords Plant disease detection · K-Means clustering · CNN · Segmentation · Classification

1 Introduction To meet the high demand for producing food for more than 7 billion people, modern technologies play a prominent role, whereas threats to food security include periodic changes in the climate [1], the decline in pollinators (Report of the Plenary of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on the work of its fourth session, 2016), plant diseases, and others. Farming accounts for approximately 17% of total GDP and provides more than 60% of the population with employment [2]. The recognition of plant diseases therefore plays an important role throughout the agricultural climate. B. P. Doppala · V. Bandi · N. T. Rao Department of Computer Science and Engineering, Vignan's Institute of Information Technology (A), Visakhapatnam, India D. Bhattacharyya (B) Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_4


Fig. 1 a Late blight leaf, b potato early blight leaf, c bacterial blight of pomegranate leaf [4]

1.1 Types of Diseases in Classification Late blight in potato plants—Late blight is caused by Phytophthora infestans, a fungus-like oomycete microorganism. A sample leaf affected by late blight is displayed in Fig. 1a. Early blight in potato plants is displayed in Fig. 1b; the illness chiefly affects leaves and stems, but under favorable weather conditions, and if left unrestrained, it can contribute to significant defoliation and increase the prospect of tuber infection [3]. Bacterial blight appears as dark-colored irregular spots on leaves.

2 Related Work To identify the disorders, Selim et al. [5] used eleven statistical variables and the Support Vector Machine (SVM) classifier. The benefit of this is that it boosts the efficiency of the detection, identification, and classification process, and it is 93% accurate in terms of disease classification. Sherly et al. [6] looked at several forms of plant illnesses and different machine learning classification techniques for diagnosing diseases in


different plant leaves, as well as the benefits and drawbacks. Different algorithms for identifying and detecting bacterial, fungal, and viral plant leaf diseases were discussed in that work. In the past few years, various developments have arisen in the declining agriculture field and have provided a good source of income for farmers; one of them is image processing combined with machine learning algorithms. Pomegranate is a deciduous tree grown in arid and semi-arid regions [7]. It develops well in areas with temperatures of 25–35° and 500–800 mm annual rainfall. Diseases have caused huge losses in pomegranate cultivation in recent years. Microorganisms such as fungi, bacteria, and viruses are usually responsible for these diseases, which include bacterial blight, seed stain, fruit rot, and leaf spot [8]. Potato plants are straightforward to grow and are cultivated in virtually all parts of the world; however, many diseases affect potato plants, the most common being blight, fungal wilt, and Rhizoctonia canker. These diseases are easily identified and, if treated early enough, the plants may be saved; if they are not caught early enough, the entire plant has to be removed. The diseases are contagious and spread easily from plant to plant. The diseases causing substantial yield loss in potatoes are Phytophthora infestans (late blight) and Alternaria solani (early blight). Early detection of these diseases permits preventive measures and mitigates economic and production losses [9]. Our proposed work addresses this problem more effectively, with increased accuracy.

3 Proposed Work In our proposed system, which is represented in Fig. 2, we first perform image acquisition, in which the images are acquired from Internet sources. Secondly, image-processing techniques such as image enhancement and image segmentation are applied to the leaf to highlight the affected region and to eliminate noise from the provided image.

4 Feature Extraction Feature extraction is a vital and essential step for extracting regions of interest. In our proposed methodology, the basic features mean, variance, entropy, IDM, RMS, smoothness, skewness, kurtosis, contrast, correlation, energy, and homogeneity are calculated and considered as feature values, and a feature vector is created from these values. The segmentation method shows different values for images using the Gray-Level Co-occurrence Matrix (GLCM). The following mathematical formulas for feature extraction are used, where $g_{ij}$ denotes the $(i, j)$-th entry in the GLCM and


Fig. 2 System architecture

$L-1$ denotes the number of different grey levels. Contrast is a metric for measuring spatial frequency:

$$ \text{Contrast} = \sum_{i}\sum_{j} (i-j)^{2}\, g_{ij} \qquad (1) $$


Textural homogeneity is measured by energy; it reaches its maximum value when the grey-level distribution has the same form. It is given by

$$ \text{Energy} = \sum_{i}\sum_{j} g_{ij}^{2} \qquad (2) $$


Homogeneity calculates the tightness of the element distribution in the GLCM. It is given by

$$ \text{Homogeneity} = \sum_{i}\sum_{j} \frac{g_{ij}}{1+(i-j)^{2}} \qquad (3) $$

The mean value is written as

$$ M = \sum_{i=0}^{L-1} g(i)\, P(g(i)) \qquad (4) $$

and the standard deviation is defined as

$$ \sigma = \sqrt{\sum_{i=0}^{L-1} \bigl(g(i) - M\bigr)^{2}\, P(g(i))} \qquad (5) $$


The entropy of an image is a measure of its disorder or complexity; it is large when the image is not texturally homogeneous and many GLCM entries have small values. The term "entropy" refers to the amount of information in the image:

$$ \text{Entropy} = -\sum_{i=0}^{L-1} P(g(i))\, \log_{2} P(g(i)) \qquad (6) $$

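A minimal sketch of extracting these GLCM features in Python, assuming scikit-image (the paper itself computes them in MATLAB); note that graycomatrix is named greycomatrix in scikit-image versions before 0.19.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    # gray_img: 2-D uint8 array (grey-scale leaf image)
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ("contrast", "energy", "homogeneity", "correlation")}
    p = glcm[:, :, 0, 0]                                   # normalised co-occurrence matrix
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Eq. (6)
    return feats
```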

5 Role of Clustering and Classification The image-processing pipeline and the K-means clustering algorithm were used to diagnose five pathogens: early blight and late blight on potato leaves, and Alternaria, anthracnose, and bacterial blight on pomegranate leaves. K-means was used to cluster photos of the diseased leaves, and the clustered photos were then passed to a neural network (NN) classifier; the outcome was that the NN classifier was much more accurate. Using MATLAB software, training and testing of the leaves are done using the train and test files represented as .mat files. The classifier in our proposed work is a neural network consisting of 10 hidden layers, and there are three clusters of the leaves. The neural network used is a feed-forward network trained with backpropagation.
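A rough Python stand-in for this classifier, assuming scikit-learn's MLPClassifier with a single hidden layer of 10 units and GLCM feature vectors as input; X_feat and y_label are hypothetical arrays of per-image features and disease labels, not the paper's .mat files.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# X_feat: rows of GLCM feature vectors (e.g. from glcm_features above); y_label: disease class per leaf
X_train, X_test, y_train, y_test = train_test_split(X_feat, y_label,
                                                    test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```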

6 Algorithm Segmentation by K-means clustering operation. Input: pomegranate leaf image. Output: segmented clusters of the pomegranate leaf image. Step 1: Scan the input image. Step 2: Convert the input picture to a grey-scale image. Step 3: Apply enhancement. Step 4: Resize the image. Step 5: Apply the K-means clustering operation.


Step 6: Once all the points have been allocated to the clusters, measure the score by summing the squared Euclidean distances between every data point and its respective centroid:

$$ \text{Total distance} = \sum_{j=1}^{k}\sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^{2} \qquad (7) $$


Step 7: Define each cluster's new centroid by computing the mean of all points assigned to that cluster, $c_j = \sum x_i / n$, where $n$ is the number of points allocated to the cluster. Step 8: Represent the clustered image. Step 9: Produce the segmented output.
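A minimal sketch of this K-means colour segmentation with k = 3, assuming OpenCV and scikit-learn; the file names are placeholders.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("pomegranate_leaf.jpg")          # hypothetical input file
pixels = img.reshape(-1, 3).astype(np.float32)    # one row per pixel (B, G, R)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(img.shape).astype(np.uint8)
cv2.imwrite("segmented_leaf.jpg", segmented)      # each pixel replaced by its cluster centre
```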

7 Results and Discussion This research was carried out in MATLAB 2020a. When provided with the diseased leaf images, they are segmented using image-processing techniques and segregated into different clusters belonging to each disease. The co-occurrence functionality is calculated after mapping the R, G, B components of the given leaf image to the given threshold. Image classification is first achieved with the minimum-distance criterion using K-means clustering with k = 3, and then the characteristic features are extracted from the clusters. Table 1 presents the complete classification of leaves and diseases. The proposed system for disease prediction in both plants mostly depends on image processing, K-means, and neural networks, based on an input of 1814 leaves and five diseases. The number of images used for validation among the total number of leaves taken for the study is represented in Fig. 3. Implementation is done on a system with the characteristics mentioned in Table 2.

Table 1 Dataset for image classification of leaves and their diseases

Image class        Original images   Images used for validation
Healthy leaf       530               424
Bacterial blight   267               190
Early blight       287               230
Late blight        250               200
Alternaria         297               238
Anthracnose        183               147
Total images       1,814             1,429


Fig. 3 Graphical representation of image classification

Table 2 System specification used

Hardware specs     Characteristics
Memory             8 GB
Processor          Intel Core i7
Operating system   Windows 10
MATLAB             2020a

Fig. 4 Accuracies obtained for different kinds of diseases

The overall accuracy obtained for leaf disease analysis using image processing, CNN, and the K-means clustering algorithm is approximately 95% for disease detection on pomegranate leaves and 94% for potato leaf disease. The achieved testing accuracy is displayed in Fig. 4.

8 Conclusion The proposed hybrid classification algorithm identifies diseased plants more accurately. The presented model, with its use of GLCM features and colour homogeneity obtained by thresholding and morphology, is easy to implement as a part of


an entire disease detection system. A neural network algorithm is used to predict the diseases of both plants with accuracies of 94% and 95%, respectively.

References 1. Tai, A. P., Martin, M. V., & Heald, C. L. (2014). Threat to future global food security from climate change and ozone air pollution. Nature Clinical Practice Endocrinology & Metabolism, 4, 817–821. https://doi.org/10.1038/nclimate2317 2. “Indian agriculture economy.” Retrieved June 30, 2021, from https://statisticstimes.com/eco nomy/sectorwise-gdp-Contribution-ofIndia 3. https://www.gardeningknowhow.com/edible/vegetables/potato/potato-blight-diseases.htm. Retrieved from June 30, 2021. 4. https://www.krishisewa.com/articles/disease-management/398-pomegranate-diseases.html. Retrieved from June 30, 2021. 5. Selim Hossain, M., & Mumtahana Mou, R. (2018). Recognition and detection of tea leaf’s diseases using support vector machine. In International Colloquium on Signal Processing & its Applications, IEEE. 6. Sherly Puspha Annabel, L. (2019). Machine learning for plant leaf disease detection and classification—A review. In International Conference on Communication and Signal Processing, IEEE. 7. Jadhav, V. T. (2007). “Vision -2025”, National Research Centre on Pomegranate (Indian Council of Agricultural Research), August. 8. Khirade, S. D., & Patil, A. B. (2015). Plant disease detection using image processing. In 2015 International Conference on Computing Communication Control and Automation. 9. Islam, M., Anh Dinh, Wahid, K., & Bhowmik, P. (2017). Detection of potato dis-eases using image segmentation and multiclass Support Vector Machines. In 2017 IEEE 30th Canadian Conference on Electrical and Computer Engineering (CCECE).

Analyzing Comments on Social Media with XG Boost Mechanism S. Naga Mallik Raj , Eali. Stephen Neal Joshua , K. Swathi , S. Neeraja, and Debnath Bhattacharyya

Abstract This article looks at analyzing the sentiment of a tweet and the methods used to do so. This includes detecting emotions such as negative and positive, together with various textual traits such as emoticons, quotes, hashtags, mentions, etc. The Word2Vec model is used to improve performance by representing words as vectors. Machine learning algorithms such as Random Forest and Naive Bayes are applied on top of the Bag-of-Words representation. Finally, the XG Boost model is used with tuned parameters to determine the most effective classification model for sentiment analysis. Keywords Random forest · Naive bayes · Bag-of-words · XgBoost

1 Introduction Sentiment analysis determines whether a text is positive, negative, or neutral. The design of a text-analysis program includes natural language processing (NLP) and machine learning techniques that assign weighted values to entities, topics, subjects, and paragraphs in sentences or phrases. In simple terms, companies use it to understand customer emotions and what people are saying. Customer opinions can be found in tweets, comments, reviews, or anywhere else people express their feelings and opinions. Sentiment analysis is the area of software-based understanding of emotion that developers and executives in today's workplace need to understand. As with many other areas, advances in deep learning have improved S. N. M. Raj (B) · Eali. S. N. Joshua · K. Swathi Department of Computer Science and Engineering, Vignan's Institute of Information Technology, Visakhapatnam, Andra Pradesh, India S. Neeraja Department of Computer Science and Engineering, Pydah College of Engineering and Technology, Visakhapatnam, Andra Pradesh, India D. Bhattacharyya Department of Computer Science and Engineering, K L Deemed To Be University, KLEF, Guntur 522502, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_5


the analytical capabilities of such algorithms. We currently use natural language processing, statistics, and text analysis to categorize the polarity of a text into positive, negative, or neutral categories. The perspective can be the judgment, feelings, or evaluation of the writer [1]. The main problem in this field is polarity classification, where a review is classified as a positive or negative evaluation of an item (film, book, etc.). The assessment can be done in two ways:
a. Direct opinions: the text gives a positive or negative feeling about the product directly [2]. For example, "The service of this café is poor" conveys a direct opinion.
b. Comparison: the text compares the subject with similar items. For example, "The price and quality of cafe-a is better than that of cafe-b" presents a comparison.

1.1 Sentiment Analysis Techniques This section provides a brief description of the three methods of sentiment analysis considered in this paper. These popular methods (i.e., the most cited and widely used) cover a variety of strategies, such as the use of Natural Language Processing (NLP) to assign polarity, the use of Amazon Mechanical Turk (AMT) to create labels, the use of psychometric scales to identify established emotions, the use of supervised and unsupervised machine learning methods, and so on.

1.2 Lexicon Based Approach The lexicon-based methods of classification are based on the premise that the polarity of a piece of text can be traced down to its smallest units, the words. The idea is to count the number of positive and negative words in a text. If a text contains more positive words, it is given a positive score; if it contains more negative words, it is marked with a negative score; and if it contains an equal number of positive and negative words, it is given a neutral score. There are several ways to compile and create a lexicon.
(a) Dictionary-based approach: a small set of seed opinion words is collected manually with known orientations. Thereafter, synonyms and antonyms of these words are searched in corpora such as WordNet or a thesaurus and added to the set.
(b) Corpus-based approach: these methods rely on a corpus's vast syntactic and semantic patterns of opinions. Polarity is determined by context and may require a large labeled dataset.

The most common lexicon resources are SentiWordNet, WordNet, and ConceptNet; of these, SentiWordNet is the most used [3].
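As a toy illustration of the counting idea above, the sketch below uses two hypothetical mini word lists rather than an actual lexicon such as SentiWordNet.

```python
# Hypothetical mini-lexicons, for illustration only.
POSITIVE = {"good", "great", "excellent", "love", "better"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "worse"}

def lexicon_score(text):
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(lexicon_score("The service of this cafe is poor"))   # -> negative
```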


Table 1 Comparison of three approaches

Types | Classification | Advantages | Disadvantages
Machine learning approach | Supervised, unsupervised | Speed, accuracy, stability, the ability to "train" the algorithms, interpretation of results | Massive sources needed to function, data acquisition
Lexicon-based approach | Unsupervised learning | No previous training required, no adaptive learning, results are produced fast | Requirement of corpus/corpora
Hybrid-based approach | Supervised and unsupervised learning | Lexicon/learning symbiosis, detection and measurement of sentiment at the concept level, and less sensitivity to changes in subject domains | Noisy reviews


1.3 Comparison The effectiveness of the various sentiment analysis techniques is measured on the basis of accuracy. A brief comparison of the different techniques used in sentiment analysis is shown in Table 1. Applying the assumptions made by the various methods will produce different results; each method has its advantages and disadvantages [2, 3].

2 Literature Review The term sentiment analysis is used widely in a variety of applications these days, and there has been a surge in such technology. Bakshi et al. [4] proposed a system that is used to predict the behavioral pattern of tweets along with the market value of stocks. The main aim of that study was to speed up the whole process while maintaining accuracy [5]. Although the data are taken from a single organization, the datasets can be extended to several other countries as well because the approach is efficient and robust. Srivastava et al. [6] explain that in the traditional system, the stop-word-removal approach in the Naïve Bayes classifier and N-gram processing had limited effect on classification performance. In their study, however, three different datasets were incorporated using a five-step technique, i.e., collection of data, preprocessing, selection of features and their extraction, classification, and a score calculator with output presentation [4]. With the rise of Big Data datasets, it becomes necessary to develop a more advanced model for classifying knowledge. The outcome of this model is very impressive as it offers better efficacy and accuracy [6].


3 Results and Discussions This section presents the implementation and results. It covers every part of the implementation, from the libraries to the extended features that we have used to improve the work and output. Plotting is also performed for a better understanding of the work, along with the confusion matrix for the different models and algorithms [7].
(a) Libraries used for test cases: the datasets were predefined and processed using the pandas, numpy, and gensim libraries. The learning experiments were conducted using scikit-learn, and the plots were generated using the plotly library.
(b) Preprocessing steps used: the main focus of the preprocessing is to represent the data in Bag-of-Words format. The process involves several steps, discussed below (Fig. 1); a short code sketch is given after Fig. 1.
   i. Cleansing stage: in order to clean up and filter the datasets, a class named "Twitter Cleanup" was made. It has methods that perform different tasks such as removing URLs, usernames, unavailable data, and many more; the cleaning was performed using regular-expression strings [8].
   ii. Tokenization and stemming: the "nltk" library is used to process the text. Tokenization is done with the nltk.word_tokenize method; after that, stemming is performed with the help of the Porter Stemmer.
   iii. Bag-of-Words: the data are then represented in the Bag-of-Words format.

Fig. 1 Common word plotting and sentiments
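A minimal sketch of the cleansing, tokenization/stemming, and Bag-of-Words steps, assuming NLTK and scikit-learn; the cleanup function below is an illustrative stand-in for the paper's "Twitter Cleanup" class, and the sample tweets are toy data.

```python
import re
import nltk
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

nltk.download("punkt", quiet=True)           # tokenizer models used by word_tokenize
stemmer = PorterStemmer()

def clean_tweet(text):
    text = re.sub(r"http\S+", "", text)      # remove URLs
    text = re.sub(r"@\w+", "", text)         # remove usernames/mentions
    return text.lower().strip()

def tokenize_and_stem(text):
    return [stemmer.stem(tok) for tok in nltk.word_tokenize(clean_tweet(text))]

tweets = ["I love this phone @user http://t.co/x", "Worst service ever!!!"]  # toy data
bow = CountVectorizer(tokenizer=tokenize_and_stem)   # Bag-of-Words representation
X_bow = bow.fit_transform(tweets)
```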

Table 2 Comparing ML algorithms in terms of accuracy

Machine learning algorithm   Naïve Bayes   Random Forest   XG Boost
Accuracy                     0.57          0.58            0.60


Comparing different algorithms, we found that XGBoost is slightly better than other ML algorithms in terms of confusion matrix and accuracy as shown in Table 2 [7, 9].
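A brief sketch of how such a comparison can be run on the Bag-of-Words features, assuming the scikit-learn and xgboost packages; X_bow is the matrix from the preprocessing sketch and y_sent is a hypothetical vector of 0/1 sentiment labels.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

models = {
    "Naive Bayes": MultinomialNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "XG Boost": XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1),
}
for name, model in models.items():
    acc = cross_val_score(model, X_bow, y_sent, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```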

4 Conclusion From the experiment, it has been found that a lot of preprocessing is required in order to analyze the sentiment of tweets. By doing so, it is easy to examine which algorithm has better efficacy and performance. The main hurdle in sentiment analysis is obtaining appropriate data for machine processing of the text. Representations like Bag-of-Words, although they give good accuracy, cannot be used in every scenario; hence, it becomes important to add some additional features to enhance the overall capability. The Word2vec algorithm has certainly raised the standard in prediction analysis, as it handles skewed data that is useful for differentiating negative cases. Besides this, in order to improve the classification, the dataset should be enlarged with several combinations of different words and must contain emoticons to express sentiment. This will surely improve the performance of the model and help to analyze with a higher accuracy rate.

References 1. Mishra, N. et al. (2012, October). Classification of opinion mining techniques. International Journal of Computer Applications, 56(13), 1–6. 2. Mudinas, D., & Zhang, M. (2012). Levene, Combining lexicon and learning based approaches for concept level sentiment analysis. Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining, ACM, New York, NY, USA, Article, 5, 1–8. 3. Liu, B. (2012). Sentiment analysis and opinion mining (pp.18–19, 27–28, 44–45, 47, 90–101). Morgan and Claypool Publishers, May. 4. Bakshi, R. K., Kaur, N., Kaur, R., & Kaur, G. (2016). Opinion mining and sentiment analysis. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi (pp. 452–455). 5. Yue, L., Chen, W., Li, X., et al. (2019). A survey of sentiment analysis in social media. Knowledge Information System, 60, 617–663. https://doi.org/10.1007/s10115-018-1236-4 6. Srivastava, A., Singh, V., & Drall, G. (2019). Sentiment analysis of Twitter data: A hybrid approach. International Journal of Healthcare Information Systems and Informatics, 12, 1–16. https://doi.org/10.4018/IJHISI.2019040101 7. Sailunaz, K., &Alhajj, R. (2019). Emotion and sentiment analysis from Twitter text. Journal of Computational Science, 36. https://doi.org/10.1016/j.jocs.2019.05.009. 8. Bhavsar, H., Manglani, R. (2019). Sentiment analysis of Twitter data using Python. International Research Journal of Engineering and Technology (IRJET). 9. Kumar, A., & Jaiswal, A. (2019). Systematic literature review of sentiment analysis on Twitter using soft computing techniques. Concurrency and Computation: Practice and Experience, 32, e5107. https://doi.org/10.1002/cpe.5107

Distributed Edge Learning in Emerging Virtual Reality Systems Jingyi Wu, Hongxiang Jia, Qiqi Hu, Jiakang Sun, Kunpeng Song, Yao Xu, and Zhihan Lv

Abstract An edge computing-based model of Virtual Reality (VR) systems is built to improve the application of distributed edge learning in emerging VR systems employing artificial intelligence and edge computing technologies; this model is then simulated and analyzed. The analysis of instantaneity reveals that the higher the maximum downlink power, the smaller the system delay. A comparative analysis of offloading performance suggests that the system converges more stably under the replicator dynamics, and that the load grows as server processing capacity increases. The proposed model is compared with other classic strategy models; the comparison results show that the system cost of the proposed model is lower and its operating efficiency is better. Therefore, the proposed model has excellent transmission performance and operating efficiency, which can provide an experimental basis for the operation and data transmission of VR systems. Keywords Virtual reality system · Edge computing · Artificial intelligence · Offloading strategy

1 Introduction As computer communication technologies advance, traditional information applications undergo tremendous changes. For example, the traditional single-machine Virtual Reality (VR) cannot satisfy the users. Hence, distributed VR technology is proposed. As VR technology rapidly progresses, its applications have expanded to various areas, such as military, aerospace, information management, construction, and remote control in dangerous and harsh environments, showing excellent prospects [1]. Therefore, scholars focus on the transformation of VR systems from traditional single-machine operations to informationization and intelligence to improve the performance of VR systems.

J. Wu · H. Jia · Q. Hu · J. Sun · K. Song · Y. Xu · Z. Lv (B) School of Computer Science and Technology, Qingdao University, Qingdao, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_6


Computer technology is the core of VR systems. The latest high-tech achievements, such as 3D image technology, sensor technology, panoramic technology, natural interaction technology, and multimedia, are incorporated with various devices to build a 3D VR world, enabling participants to have immersive sensory experiences [2]. In a VR world, participants observe object changes from multiple angles and perform visual interactions via interactive devices. Recently, as network communication technology boosts, the demand for VR systems has increased. While pursuing multi-angle observation of object changes and visual interactions, scholars also aim at the real-time interaction of multiple users and the sharing of the same VR world [3, 4]. Therefore, improving the privacy and reliability of VR systems becomes crucial. Computing-intensive and delay-sensitive applications continue to increase in the era of the Internet of Everything. Traditional cloud computing provides services to users in the centralized computing manner, which increases latency. Mobile Edge Computing (MEC) reduces latency and distance by deploying servers at the edges of the network and transferring computing and analysis tasks to the data source for processing [5]. In MEC, task scheduling is indispensable from task execution. Hence, designing a useful task scheduling strategy is critical. MEC servers require edge computing devices to perform as many tasks collaboratively as possible. If the edge computing device is selfish, performing tasks in coordination with MEC servers will increase its cost. Without the willingness to coordinate computing, the edge computing devices and MEC servers will have inconsistent targets. MEC servers become the coordinator of the collaborative computing system rather than the subject of task execution. Therefore, the computing performance of MEC servers is improved, which alleviates the traffic pressure and solves the scalability problem of privacy [6, 7]. In order to solve the delay problems of VR systems, the system performance can be improved. Finally, the operating efficiency of the system is ensured via a suitable balance strategy. In summary, as Artificial Intelligence (AI) technologies continue to advance, human beings can interact and learn freely in a virtual environment, which has significant economic and social values. Besides, learning in and interacting with the VR world is also an urgent goal of AI technologies. Therefore, a model for VR systems is built utilizing edge computing, analyzed via simulation experiments, and compared with other algorithms to prove its performance, in an effort to provide experimental references for the operation of VR systems.

2 Literature Review As information technology (IT) progresses promptly, VR, as a new interactive method, is also advancing rapidly. Scholars all over the world have researched VR technologies. Ge et al. [8] proposed a Multi-path Cooperative Routing (MCR) solution suitable for the augmented reality and/or virtual reality (AR/VR) wireless transmission of 5G small cells; then, they analyzed the delay of the MCR solution; simulation results showed that in 5G small cells, the delay of the MCR solution was shorter


than other traditional single-path routing solutions [8]. Serafin et al. [9] proved the rationality of object-based sound effect simulation and its spatial propagation [9]. Schneider et al. [10] investigated how physical keyboards were applied to VR-based immersive head-mounted monitors; they designed a set of input and output mappings utilized for reconfiguring the virtual display of the physical keyboards and the final design space: a photo browser, a Whack-A-Mole game, a secure password input, and a virtual touch bar; finally, they investigated the feasibility of these applications and found that these applications were available in VR; the limitations and possibilities of remapping input and output characteristics of physical keyboards in VR were discussed, and future research directions were pointed out [10]. In order to apply cellular connection and wireless connection to VR systems, Hu et al. [11] identified the principal driving factors of VR systems from the viewpoint of applications and test cases; then, they mapped human perception demands to the four stages of VR technology advancement; finally, simulation experiments proved the effectiveness of the proposed solution; compared with video transmission in cellular networks, VR transmission could better meet the unique service requirements [11]. Since its proposal, MEC has attracted widespread attention worldwide. Currently, research on MEC focuses on task scheduling, content caching, and collaboration. Xiong et al. [12] proposed a prototype of a blockchain system supporting MEC to promote the application of blockchain in the mobile Internet of Things (IoT) in the future; besides, experimental results proved that the proposed prototype was correct [12]. Lyu et al. [13] designed an integrated architecture of the cloud, MEC, and IoT; a lightweight request-acceptance framework was proposed to solve the scalability problem; without interdevice coordination, the framework could run on IoT devices and computing servers separately by encapsulating the latency requirement in the offloading request; simulation results revealed that the proposed selective offloading solution could meet the delay requirements of different services and reduce the energy consumption of IoT devices [13]. Ning et al. [14] built an energy-saving scheduling framework of MEC-based Internet of Vehicles (IoV) for the heterogeneous demands of IoV in communication, computing, and storage, aiming at minimizing the energy consumption of Roadside Unit (RSU) under task delay constraints; finally, the validity of the framework was verified in terms of energy consumption, delay, and task blocking possibility [14]. Zeng et al. [15] proposed to apply massive Multiple Input Multiple Output (MIMO) to MEC; they found that massive MIMO could support more users to shed at the same time, thereby reducing queuing delay, and ultimately, the overall response time of MEC; simulation found that adopting more antennas could reduce system delay and energy consumption [15]. The above results suggest that designing a VR operating system, which can realize real-time interaction of multiple users to work efficiently between multiple servers, is necessary. Therefore, MEC is applied to VR systems to build a distributed VR systems model, which is vital to the advancement of communication technology in the future.


3 Method 3.1 VR Systems Computer technology is the core of VR systems. The latest high-tech achievements, such as 3D image technology, sensor technology, panoramic technology, natural interaction technology, and multimedia, are various devices to build a 3D VR world, enabling participants to have immersive sensory experiences. Except for traditional mouse, keyboard, and other interactive methods, distributed VR systems also realize the synchronization between virtual characters through human gesture sensors and data gloves, thereby realizing the interaction between reality and virtuality. In the distributed VR simulation system, the server is responsible for the control functions of the VR model’s motion simulation, hydraulic system simulation, collision detection, and Human–Computer Interaction (HCI) [16, 17]. The distributed VR system achieves real-time HCI via model dynamic simulation and multiclient response requirements. As a comprehensive, integrated technology, VR involves computer graphics, HCI technology, sensing technology, and AI. VR utilizes computers for generating vivid 3D sensations, allowing participants to experience and interact with the virtual world via appropriate devices [18]. Therefore, VR systems refer to a particular environment generated by a computer. Human beings can “project” themselves into this environment, and enjoy an immersive experience by operating and controlling the environment via various devices. The composition and characteristics of VR systems are shown in Fig. 1 [19]. In VR systems, multisensitivity refers to the visual perception, auditory perception, force perception, tactile perception, and motion perception of general computer technologies. The ideal VR technology should have all human perceptions. Immersion, the sense of presence, refers to the degree of reality that VR users feel like the protagonists in the VR environment. The ideal VR environment should confuse users in distinguishing between true and false. Thus, users can devote themselves to the 3D virtual environment created by the computer. Interactivity indicates users’ ability to manipulate objects in the simulated environment, and how natural the environmental feedback feels. Interactivity has instantaneity. For example, users can hold virtual objects in the simulated environment with their hands, including the feeling of handgrip and object weight. Conceptuality emphasizes that VR technology should have a broader imagination, which can broaden the scope of human cognition, not only can reproduce the real environment but also imagine the unknown environment that does not exist objectively [20, 21]. The applications of VR technology are vast, such as urban planning, art education, and the military industry. For example, the pursuit of visualization is the most urgent in urban planning. VR technology can be widely employed in all aspects of urban planning. VR can bring users a strong and vivid sensory impact when presenting planning schemes. Hence, users can enjoy an immersive experience. Also, project data can be collected at any time in a real-time VR environment via the data interface, which is convenient for the planning, designing, bidding, and management of large

Fig. 1 The composition and characteristics of VR systems

Imagination Interactivity

and sophisticated projects; besides, applying VR technology provides convenience for designing and managing auxiliary designs and plan reviews [22]. During the operation of VR systems, the server must respond to client requests in real-time due to a large number of computing tasks, causing overloads of some servers but the idleness of other servers; in turn, the uneven loads of servers result in wastes of resources and reduced system performance [23]. Therefore, upgrading and improving VR systems are vital.

3.2 MEC As an emerging technology, MEC can deploy computing and storage resources at the edges of the mobile networks, or on the side of the wireless access networks that are close to the users, thereby providing computing services. Since its proposal in 2013, MEC has been improved to integrate the characteristics of contemporary mobile networks. Figure 2 illustrates that MEC “sinks” the services and functions of the cloud data center to the network nodes, providing services for users on the edges. The services include computing, storage, and communication. In this way, the network operations and delays are reduced, which ultimately improves the quality of user services [24]. MEC has the following characteristics during its operation: (1) nodes are deployed at the edges of the network, which facilitates users to access local network resources



Fig. 2 The network structure of MEC

while promoting the advancement of IoT. (2) MEC is closer to users, which can promote new services by acquiring various user information for analysis and processing. (3) The edge computing devices are close to the end devices; thus, the delay is reduced and the response speed is increased, thereby alleviating network congestion. (4) Real-time network data, such as wireless-network information, are adopted to provide corresponding services. (5) The location information of the associated IoT devices is determined according to the underlying command information, which lays a foundation for services based on geographic information [25]. MEC considers a convex optimization problem with equality constraints. A critical element of dual ascent is introducing dual variables and alternately iterating the original variables and the dual variables; in this way, the two sets of variables can reach their optimal values simultaneously. Under the assumption of strong duality, optimal solutions to both the original problem and the dual problem are acquired simultaneously. However, without strict convexity of the function, the dual ascent method cannot be applied directly [26]. Specifically, the problem is

$$ \min_{x} f(x) \quad \text{s.t.} \quad Ax = b \qquad (1) $$


In (1), the variable $x \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $f: \mathbb{R}^n \to \mathbb{R}$ is a convex function. Transforming (1) into the Lagrangian form:

$$ L(x, y) = f(x) + y^{T}(Ax - b) \qquad (2) $$


The dual function takes the form

$$ g(y) = \inf_{x} L(x, y) = -f^{*}(-A^{T}y) - b^{T}y \qquad (3) $$


In (3), inf refers to the infimum of the function, $y$ is the Lagrangian multiplier or dual variable, and $f^{*}$ is the convex conjugate of $f$. Therefore, (3) is transformed into solving the dual problem:


$$ \max_{y} g(y) \qquad (4) $$


In (4), the variable $y \in \mathbb{R}^m$. If the original problem has strong duality, the optimal solution of the original problem is equivalent to that of the dual problem. Once the optimal solution $y^{*}$ to the dual problem is obtained, the optimal solution $x^{*}$ to the original problem is obtained via (5):

$$ x^{*} = \arg\min_{x} L(x, y^{*}) \qquad (5) $$



Once the original objective $f$ is strictly convex, a unique minimizer of $L(x, y^{*})$ will exist. The dual problem is solved using the gradient ascent method, and the original problem is then solved. If the function $g$ is differentiable, the residual of the equality constraint is obtained by solving $x^{+} = \arg\min_{x} L(x, y)$:

$$ \nabla g(y) = Ax^{+} - b \qquad (6) $$


In (6), $\nabla g(y)$ is the gradient of $g$. Specifically, the solution proceeds as follows. First, the primal variable is updated at the $(k+1)$-th iteration:

$$ x^{k+1} = \arg\min_{x} L(x, y^{k}) \qquad (7) $$



Then the dual variable is updated at the $(k+1)$-th iteration:

$$ y^{k+1} = y^{k} + \alpha^{k}\,(Ax^{k+1} - b) \qquad (8) $$


In (8), $\alpha^{k} > 0$ is the step-size parameter. If an appropriate step size is set, the original problem and the dual problem converge to the optimal solution simultaneously via alternate iteration of the variables. Since the objective function is separable across dimensions, dual ascent can split the problem into multiple sub-problems and update the parameters; this decomposed form is dual decomposition [27]. Because the objective function $f$ is separable, the variable can be decomposed as $x = (x_1, \cdots, x_N)$ with $x_i \in \mathbb{R}^{n_i}$, and the original function is

$$ f(x) = \sum_{i=1}^{N} f_i(x_i) \qquad (9) $$



The matrix is decomposed into $A = [A_1, \cdots, A_N]$; then

$$ Ax = \sum_{i=1}^{N} A_i x_i \qquad (10) $$




The Lagrangian function is then transformed into

$$ L(x, y) = \sum_{i=1}^{N} L_i(x_i, y) = \sum_{i=1}^{N} \left( f_i(x_i) + y^{T} A_i x_i - \frac{1}{N}\, y^{T} b \right) \qquad (11) $$





The update of $x$ thus consists of $N$ sub-problems that can be parallelized. Specifically, the $i$-th primal variable is updated at the $(k+1)$-th iteration:

$$ x_i^{k+1} = \arg\min_{x_i} L_i(x_i, y^{k}) \qquad (12) $$



The dual variable is updated at the $(k+1)$-th iteration:

$$ y_i^{k+1} = y_i^{k} + \alpha^{k}\left(A_i x_i^{k+1} - b\right) \qquad (13) $$


The traditional dual decomposition method is used for solving linear programming problems. Dual decomposition can also be used for non-differentiable optimization problems, for example via subgradient methods. MEC's dual decomposition promotes the advancement of distributed optimization methods. The task scheduling of MEC is shown in Fig. 3 [28].
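As a concrete toy illustration of Eqs. (7)–(13), the sketch below applies dual decomposition to a small separable quadratic problem with a single equality constraint; the problem data and step size are illustrative assumptions, not values from the paper.

```python
# min sum_i (x_i - c_i)^2  s.t.  a^T x = b, solved by dual decomposition.
import numpy as np

a = np.array([1.0, 2.0, 3.0])      # one equality constraint: a^T x = b
b = 6.0
c = np.array([1.0, 0.0, -1.0])     # f_i(x_i) = (x_i - c_i)^2 is separable
alpha = 0.1                        # step size alpha^k (kept constant here)
y = 0.0                            # dual variable

for _ in range(200):
    # x-update: each x_i minimises (x_i - c_i)^2 + y * a_i * x_i independently (cf. Eq. 12)
    x = c - 0.5 * y * a
    # dual update: gradient ascent on g using the constraint residual (cf. Eq. 13)
    y = y + alpha * (a @ x - b)

print("x* =", x, " constraint residual =", a @ x - b)
```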



Fig. 3 Schematic diagram of MEC task scheduling

Distributed Edge Learning in Emerging Virtual Reality Systems Private memory data

Private memory data Input data

Task unit (calculation program)

Task 1

47

Output data

Input data

Task 2

...

Task monitor and scheduler (node server)

Task unit (calculation program)

Output data

Task n

Task monitoring and scheduling data area

Hardware equipment of virtual reality system


Fig. 4 The framework of the model for VR systems based on edge computing

3.3 A Model for VR Systems Based on Edge Computing The distributed VR simulation system is analyzed. The system server is in charge of the virtual model’s motion simulation, hydraulic system simulation, collision detection, and HCI functions. During the operation, the system will receive many computing tasks. The server must respond to the requests of multiple clients in real time. Hence, some servers are prone to overload while other servers are idle, resulting in severe waste of resources, which ultimately affects system performance. Therefore, MEC is adopted to distribute a load of servers in different locations. During operation, tasks in the VR systems are divided into multiple units and distributed to the servers of each node. The central server monitors each node server in real time. The framework of the model for VR systems based on edge computing is shown in Fig. 4.

3.4 Simulation Analysis The MATLAB network simulation platform was employed for simulation experiments. Hypothetically, VR users are randomly distributed in small cells, the wireless channel is fading, and d denotes the distance from VR users to the base station.


Table 1 Tools for model construction

Tool                    Version
Simulation platform     MATLAB
Matrix transportation   Numpy 1.12.6; Pandas 0.23.0
Programming language    Python 3.2
Development platform    PyCharm
Operating system        Linux

The operating environment for all experiments is the Windows 10 operating system, 3.0 GHz processor, 4 GB internal memory, and CORE-i7 Central Processing Unit (CPU), 4720HQ and 2.6 GHz. The specific modeling tools are shown in Table 1.

4 Results and Discussions 4.1 Comparison of Task Offloading Performance Among Models

Fig. 5 Convergence analysis of the proposed model under replicator dynamics


Figures 5 and 6 illustrate the convergence curves of the proposed model under the replicator dynamics and the Logit dynamics, respectively. Under replicator dynamics, the different models converge stably at the second iteration; despite slight fluctuations, convergence resumes shortly after deviations. Under Logit dynamics, the entire iteration is a process of constant trial-and-error and learning. Therefore, both replicator and Logit dynamics are applicable for user devices that can change their strategy according to probability; nevertheless, the replicator dynamics are more consistent with actual user devices.



Fig. 6 Convergence analysis of the proposed model under Logit dynamics



Fig. 7 Analysis of the relationship between server-side load and server processing capacity



Figure 7 shows the relationship between server-side load and server processing capacity. As the server processing capacity increases, the load increases. As the λ value increases, the user becomes more sensitive to the server processing capacity, and the swing of the server-side load becomes more considerable. Figure 8 shows the analysis of the convergence of task offloading with iterations. When the iteration time is 10, different servers all converge. However, due to the different additional loads of each server, the user load of each server is different, but the difference is not notable.


Fig. 8 Analysis of convergence of task offloading with iterations



5 Conclusion In conclusion, the proposed edge computing-based model has excellent transmission performance, low system costs, and exceptional operating efficiency, and it can provide an experimental basis for the operation and data transmission of VR systems. However, some deficiencies were found in the experimental process. For example, the data used in the simulation are real data; if performance were tested with real users, privacy might be leaked. Therefore, in the future, technologies such as differential privacy or Generative Adversarial Networks (GANs) will be utilized for data distortion, thereby protecting the privacy of user data, which is particularly vital for improving the system's security performance.

References 1. Liu, M., & Liu, Y. (2017). Price-based distributed offloading for mobile-edge computing with computation capacity constraints. IEEE Wireless Communications Letters, 7(3), 420–423. 2. Al-Shuwaili, A., & Simeone, O. (2017). Energy-efficient resource allocation for mobile edge computing-based augmented reality applications. IEEE Wireless Communications Letters, 6 (3), 398–401. Author, F., Author, S., Author, T.: Book title. 2nd edn. Publisher, Location (1999). 3. Taleb, T., Dutta, S., Ksentini, A., Iqbal, M., & Flinck, H. (2017). Mobile edge computing potential in making cities smarter. IEEE Communications Magazine, 55(3), 38–43. 4. Xu, X., Zhang, X., Gao, H., Xue, Y., Qi, L., & Dou, W. (2019). BeCome: Blockchain-enabled computation offloading for IoT in mobile edge computing. IEEE Transactions on Industrial Informatics, 16(6), 4187–4195. 5. Zhang, K., Mao, Y., Leng, S., He, Y., & Zhang, Y. (2017). Mobile-edge computing for vehicular networks: A promising network paradigm with predictive off-loading. IEEE Vehicular Technology Magazine, 12(2), 36–44. 6. Tran, T. X., Hajisami, A., Pandey, P., & Pompili, D. (2017). Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges. IEEE Communications Magazine, 55(4), 54–61.


7. Xu, J., Wang, S., Bhargava, B. K., & Yang, F. (2019). A Blockchain-Enabled trustless crowd-intelligence ecosystem on mobile edge computing. IEEE Transactions on Industrial Informatics, 15(6), 3538–3547. 8. Ge, X., Pan, L., Li, Q., Mao, G., & Tu, S. (2017). Multipath cooperative communications networks for augmented and virtual reality transmission. IEEE Transactions on Multimedia, 19(10), 2345–2358. 9. Serafin, S., Geronazzo, M., Erkut, C., Nilsson, N. C., & Nordahl, R. (2018). Sonic interactions in virtual reality: State of the art, current challenges, and future directions. IEEE Computer Graphics and Applications, 38(2), 31–43. 10. Schneider, D., Otte, A., Gesslein, T., Gagel, P., Kuth, B., Damlakhi, M. S., ... & Muller, J. (2019). Reconviguration: Reconfiguring physical keyboards in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 25 (11), 3190-3201. 11. Hu, F., Deng, Y., Saad, W., Bennis, M., & Aghvami, A. H. (2020). Cellular-connected wireless virtual reality: Requirements, challenges, and solutions. IEEE Communications Magazine, 58(5), 105–111. 12. Xiong, Z., Zhang, Y., Niyato, D., Wang, P., & Han, Z. (2018). When mobile blockchain meets edge computing. IEEE Communications Magazine, 56(8), 33–39. 13. Lyu, X., Tian, H., Jiang, L., Vinel, A., Maharjan, S., Gjessing, S., & Zhang, Y. (2018). Selective offloading in mobile edge computing for the green internet of things. IEEE Network, 32(1), 54–60. 14. Ning, Z., Huang, J., Wang, X., Rodrigues, J. J., & Guo, L. (2019). Mobile edge computingenabled Internet of vehicles: Toward energy-efficient scheduling. IEEE Network, 33(5), 198– 205. 15. Zeng, M., Hao, W., Dobre, O. A., Ding, Z., & Poor, H. V. (2020). Massive MIMO-Assisted mobile edge computing: exciting possibilities for computation offloading. IEEE Vehicular Technology Magazine, 15(2), 31–38. 16. Yang, X., Chen, Z., Li, K., Sun, Y., Liu, N., Xie, W., & Zhao, Y. (2018). Communicationconstrained mobile edge computing systems for wireless virtual reality: Scheduling and tradeoff. IEEE Access, 6, 16665–16677. 17. Pan, Y., Chen, M., Yang, Z., Huang, N., & Shikh-Bahaei, M. (2018). Energy-efficient NOMAbased mobile edge computing offloading. IEEE Communications Letters, 23(2), 310–313. 18. Lipton, J. I., Fay, A. J., & Rus, D. (2017). Baxter’s homunculus: Virtual reality spaces for teleoperation in manufacturing. IEEE Robotics and Automation Letters, 3(1), 179–186. 19. Park, J., Popovski, P., & Simeone, O. (2018). Minimizing latency to support VR social interactions over wireless cellular systems via bandwidth allocation. IEEE Wireless Communications Letters, 7(5), 776–779. 20. Yu, D., Fan, K., Zhang, H., Monteiro, D., Xu, W., & Liang, H. N. (2018). PizzaText: Text entry for virtual reality systems using dual thumbsticks. IEEE Transactions on Visualization and Computer Graphics, 24(11), 2927–2935. 21. Coogan, C. G., & He, B. (2018). Brain-computer interface control in a virtual reality environment and applications for the internet of things. IEEE Access, 6, 10840–10849. 22. Bastug, E., Bennis, M., Médard, M., & Debbah, M. (2017). Toward interconnected virtual reality: Opportunities, challenges, and enablers. IEEE Communications Magazine, 55(6), 110– 117. 23. Gupta, A., Cecil, J., Pirela-Cruz, M., & Ramanathan, P. (2019). A virtual reality enhanced cyber-human framework for orthopedic surgical training. IEEE Systems Journal, 13(3), 3501– 3512. 24. Tang, L., & He, S. (2018). Multi-user computation offloading in mobile edge computing: A behavioral perspective. 
IEEE Network, 32(1), 48–53. 25. Chen, M., & Hao, Y. (2018). Task offloading for mobile edge computing in software defined ultra-dense network. IEEE Journal on Selected Areas in Communications, 36(3), 587–597. 26. Zhou, Z., Feng, J., Tan, L., He, Y., & Gong, J. (2018). An air-ground integration approach for mobile edge computing in IoT. IEEE Communications Magazine, 56(8), 40–47.


27. Hao, Y., Chen, M., Hu, L., Hossain, M. S., & Ghoneim, A. (2018). Energy efficient task caching and offloading for mobile edge computing. IEEE Access, 6, 11365–11373. 28. Wan, L., Sun, L., Kong, X., Yuan, Y., Sun, K., & Xia, F. (2019). Task-driven resource assignment in mobile edge computing exploiting evolutionary computation. IEEE Wireless Communications, 26(6), 94–101.

A Review on Deep Learning-Based Object Recognition Algorithms Mohan Mahanty, Debnath Bhattacharyya, and Divya Midhunchakkaravarthy

Abstract Object recognition, once a sci-fi prediction of a tech future, is now widely used in computer vision tasks. Computer vision plays a significant role in developing object recognition applications such as biometric regulation, image retrieval, security, machine inspection, medical imaging, and digital watermarking. It also plays a substantial role in autonomous automobiles, supporting safe driving by detecting objects and road signals. Computer vision is an essential aspect of machine learning and deep learning and enables new medical diagnostic methods for analyzing X-rays and internal body scans. In this paper, we discuss several deep learning architectures for object recognition, including ConvNets, Region-based CNN, Fast R-CNN, Faster R-CNN, and the YOLO family; the way objects are recognized changes from model to model. Computer-Aided Medical Diagnosis (CAMD) has brought a drastic change to real-time gland (tissue) recognition and assists radiologists in interpreting a medical pathology image or video in seconds. After analyzing various object recognition algorithms, we provide a detailed report on the different algorithms used to recognize objects in images, videos, and live feeds from webcams. Keywords Object recognition · Deep learning · CNN

M. Mahanty (B) · D. Midhunchakkaravarthy Department of Computer Science and Multimedia, Lincoln University College, Kuala Lumpur, Malaysia e-mail: [email protected] D. Midhunchakkaravarthy e-mail: [email protected] D. Bhattacharyya Department of Computer Science and Engineering, K L Deemed To Be University, KLEF, Guntur 522502, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_7


1 Introduction
Computer vision plays a significant role in modern computing. It is the science of building artificial systems that extract useful information from image (digitized) data. The learning process of such systems resembles human learning, in that they learn from the things they see and observe. From an engineering perspective, the goal is to learn and automate the tasks that the human visual system performs, which involves strategies for acquiring knowledge, refining the acquired data as required, and analyzing it to extract meaningful information and understand digital images. Image recognition is a machine learning method that works in a way loosely analogous to the human cerebral system. Object recognition refers to an assortment of related tasks for recognizing objects in videos or images: the algorithm has to identify and remember the objects in a scene, locate and track their exact areas, and label them precisely. Nowadays, machine learning is the main thrust behind technological advancements, enabling many improvements in image labeling and object recognition [1]. Machine learning algorithms learn patterns from old data to better analyze and recognize patterns in new data. However, because of the way data are acquired, machine learning may produce highly error-susceptible results, and it struggles with large data sets such as images and videos. Deep learning [2] was introduced to overcome these problems. Deep learning, also called a deep neural network, can learn from unstructured data in an unsupervised manner. Compared with classical machine learning, most deep learning models are far less error-susceptible in object recognition. The principal uses of object recognition are in stock photography, visual search for improved product discoverability, clinical examination devices investigating the possibility of reusing drugs for new ailments, and consumer recommendation apps used on business platforms.

2 Literature Review
Object recognition allows us to detect and recognize objects in images and videos. With object detection, a computer can quickly identify and localize objects, count the number of objects in a scene, and track their exact locations while accurately labelling them. Object detection is a complex problem in machine learning, but deep learning can detect objects effectively using various types of Convolutional Neural Networks [3]. Pulkit Kumar et al. proposed a novel FCNN, U-SegNet [4], for brain tissue segmentation; its backbone combines the U-Net and SegNet deep learning models. In 2017, Bin Liu et al. proposed a model that applied different neural networks to classification using R-CNN [5]; the combination of Faster R-CNN and an RPN can effectively perform the object detection task.


Ali Farhadi et al. [6] proposed YOLO, a variant of existing object detection methodologies that predicts class probabilities and the corresponding bounding boxes with a single neural network pass over the image, although it makes comparatively more localization errors. Ajeet Ram Pathak et al. [7] surveyed how deep learning, and CNNs in particular, are applied to object detection. Singh et al. proposed a new training scheme for object recognition named SNIPER [8]. Ross Girshick et al. built on earlier deep convolutional networks [9] for object classification to develop Fast R-CNN; compared with the earlier work, Fast R-CNN improved detection accuracy as well as training and testing speed. The Faster R-CNN [10] architecture, the zenith of the R-CNN family, detects and recognizes objects efficiently. Yu Liu et al. proposed an improved Faster R-CNN [11], considered the best architecture for detecting objects at the time. Yousong Zhu et al. proposed CoupleNet [12], which combines a region proposal subnetwork with classification subnetworks for accurate object detection. In 2018, Aleksis Pirinen et al. proposed a visual recognition model based on deep reinforcement learning [13] using an object detector and a sequential Region Proposal Network (RPN); their trained Deep Reinforcement Learning (DRL) agent accumulates class-specific evidence over time and integrates context to detect accurately. In 2019, Tian et al. proposed the FCOS (Fully Convolutional One-Stage) object detector [14], which treats detection analogously to semantic segmentation through per-pixel prediction. Ghiasi et al. combined different backbone models of RetinaNet to design NAS-FPN [15], which decreases latency and increases accuracy compared with other modern models. ResNeSt [16], proposed by Zhang et al. in 2019, is a new split-attention variant used for object detection. Yanghao Li et al. suggested Scale-Aware Trident Networks [17] for detecting objects. Bottom-up object detection [18], ThunderNet [19], and generalized object detection [20] models have been proposed recently for more accurate real-time detection. Jingdong Wang et al. proposed high-resolution networks [21] that can be used for human pose estimation. EfficientDet [22], SpineNet [23], and YOLOv4 [24] are well-proven models for real-time object detection. Even though many alternative CNN-based object detectors exist, there are still some shortcomings, such as being used mostly in recommendation systems. Nevertheless, these detectors reduce the dependence on very large GPUs and improve accuracy by exploiting a wide range of features, helping the classifier and detector work better.
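As a concrete illustration of how one of the surveyed detector families is typically used in practice, the sketch below runs a pretrained Faster R-CNN from torchvision on a single image. This is not code from any of the reviewed papers: the image path is a placeholder, and the exact weights argument can differ between torchvision versions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Faster R-CNN detector pretrained on COCO (newer torchvision
# releases prefer a `weights=` argument over `pretrained=True`).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("example.jpg").convert("RGB")   # placeholder image path
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep detections above a confidence threshold and print them
keep = predictions["scores"] > 0.5
for box, label, score in zip(predictions["boxes"][keep],
                             predictions["labels"][keep],
                             predictions["scores"][keep]):
    print(label.item(), round(score.item(), 3), box.tolist())
```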

3 Results and Analysis
In this paper, we analyzed the performance of various deep learning models based on three test cases. First, we consider object detection on the COCO dataset [25, 26] and the PASCAL VOC 2007 dataset [27, 28]. The COCO dataset contains around 1.5 million object instances across 80 object types. Using metrics such as mAP (mean average precision), FPS, and inference time, we analyzed the performance of various deep learning architectures applied to the COCO dataset for real-time object detection [26]. As shown in Eq. (1), mean average precision, a metric widely used for object recognition in images, is defined as

$$\text{MAP} = \frac{1}{Q}\sum_{q=1}^{Q} \text{AveP}(q) \qquad (1)$$

where Q is the number of queries (object classes) and AveP(q) is the average precision for query q.
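A minimal numerical sketch of Eq. (1): mAP is simply the mean of the per-class (per-query) average precision values. The AP values below are invented for illustration.

```python
# Mean average precision (Eq. 1): mAP = (1/Q) * sum_q AveP(q)
ave_precision = {          # assumed per-class AP values, for illustration only
    "person": 0.72,
    "car": 0.65,
    "dog": 0.58,
    "bicycle": 0.44,
}
Q = len(ave_precision)
mAP = sum(ave_precision.values()) / Q
print(f"mAP over {Q} classes: {mAP}")   # approximately 0.60
```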

Table 1 summarizes the performance of the various deep learning models, evaluated using the metrics mAP, FPS, and inference time. YOLOv4-512 has higher processing throughput than the other object detection models, and the inference time of YOLO is also significantly lower. The inference-time metric gives the time taken by the algorithm or model to produce predictions from the moment the input is given, including the time needed to process the input. From Table 2, we can see that the YOLO deep learning model performs best in terms of FPS, while BlitzNet512(S8), with the highest mAP, performs best in terms of accuracy. Depending on multiple factors, such as feature extraction, input image resolution, bounding-box encoding, data augmentation, training and testing data sets, and the localization loss function, YOLO generates more accurate results. Figure 1 shows that, considering the performance of the various models in object detection, YOLOv2 gives the best performance.

Table 1 Real-time object detection with COCO data set
S. no. | Method | mAP | FPS | Inference time (ms)
1 | YOLOv4-512 | 43 | 83 | 12
2 | YOLOv4-608 | 43.5 | 62 | 16
3 | CSPResNeXt50-PANet-SPP | 33.4 | 58 | 17
4 | TTFNet | 35.1 | 54.4 | 18.4
5 | SSD512-HarDNet85 | 35.1 | 39 | 25
6 | CenterNet HarDNet-85 | 42.4 | 38 | 26
7 | YOLOv3-418 | 31 | 34 | 29
8 | CornerNet-Squeeze | 34.4 | 33 | 30
9 | SpineNet-49 | 46.7 | 29 | 34.3
10 | CenterNet DLA-34+DCNv2 | 39.2 | 28 | 35
11 | YOLOv3-608 | 33 | 20 | 50
12 | CenterNet Hourglass-104 | 42.1 | 7.8 | 128.2
13 | NAS-FPN AmoebaNet | 48.3 | 3.6 | 278.9
14 | NAS-FPNLite MobileNetV2 | 25.7 | 3 | 285
15 | Mask R-CNN X-152–32×8d | 40.3 | 3 | 333
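The FPS and inference-time columns of Table 1 are two views of the same quantity; assuming the inference time is reported in milliseconds, FPS is roughly 1000 divided by it. The short check below, using a few rows copied from Table 1, makes the relationship explicit.

```python
# Consistency check: FPS is approximately 1000 / inference_time_ms
rows = {
    "YOLOv4-512": (83, 12),            # (reported FPS, inference time in ms)
    "YOLOv4-608": (62, 16),
    "SpineNet-49": (29, 34.3),
    "Mask R-CNN X-152-32x8d": (3, 333),
}
for name, (fps, ms) in rows.items():
    print(f"{name}: reported {fps} FPS, 1000/{ms} ms = {1000 / ms:.1f} FPS")
```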

Table 2 Object detection on PASCAL VOC 2007
S. no. | Model | FPS | mAP
1 | YOLO | 46 | 63.4
2 | BlitzNet512(S4) | 24 | 79.1
3 | BlitzNet512(S8) | 19.5 | 81.5
4 | R-FCN | 9 | 80.5
5 | Faster R-CNN | 7 | 73.2

Fig. 1 Performance of the models in object detection (MS COCO dataset) (horizontal bars of inference time and FPS for Fast R-CNN, Faster R-CNN variants (VGG-16, ResNet), YOLO, SSD 300, SSD 500, and YOLOv2 at input sizes 288×288 to 544×544)

Figure 2 presents the results of testing various object detection models on the PASCAL VOC 2012 dataset, using the mAP metric to measure accuracy. YOLOv3 concentrated on increasing the YOLO model's speed by reducing the time taken to produce results, which has become a significant advantage over other architectures.

Fig. 2 Performance of various models on the PASCAL VOC 2012 dataset (bars: mAP and time)


4 Conclusion and Future Work
Object detection has created a significant leap in computer vision and in its research momentum. Traditional machine learning is not sufficient for object detection, as the problem is complex and quite challenging. By feeding images taken from cameras and live feeds to deep learning architectures, objects can be detected efficiently even with a limited number of samples. Object detection using deep learning is used in face detection, face recognition, video surveillance, stock-level analysis, inventory management, traffic control, and medical imaging. Effective object detection from real-time live feeds is used mainly in developing self-driving cars, the future of the automobile industry. We have reviewed various deep learning architectures; each follows a different design, but ultimately every model aims to detect and recognize objects in videos or images. In medical imaging, deep learning can efficiently identify abnormalities in glands by analyzing pathology images and helps pathologists in the diagnosis of diseases.

References 1. Cioffi, R., Travaglioni, M., Piscitelli, G., Petrillo, A., & De Felice, F. (2020). Artificial intelligence and machine learning applications in smart production. MDPI AG (vol. 12, no. 2, p. 492). https://doi.org/10.3390/su12020492. 2. https://www.cs.toronto.edu/hinton/absps/NatureDeepReview.pdf. Accessed 14 Aug 2020. 3. A Comprehensive Guide to CNN. https://towardsdatascience.com/a-comprehensive-guide-con volutional-neural-networks-the-eli5-way-3bd2b1164a53. Accessed 14 Aug 2020. 4. Kumar, P., Nagar, P., Arora, C., & Gupta, A. (2018). U-Segnet: Fully convolutional neural network based automated brain tissue segmentation tool. In Proceedings - International Conference on Image Processing, ICIP, pp. 3503–3507. https://doi.org/10.1109/ICIP.2018. 8451295. 5. Liu, B., Zhao, W., & Sun, Q. (2017). Study of object detection based on Faster R-CNN. In Proceedings - 2017 Chinese Automation Congress, CAC 2017, vol. 2017-January, pp. 6233– 6236. https://doi.org/10.1109/CAC.2017.8243900. 6. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2020). You only look once: unified, real-time object detection. http://pjreddie.com/yolo/. Accessed 14 Aug 2020. 7. Pathak, A. R., Pandey, M., & Rautaray, S. (2018). Application of deep learning for object detection. In Procedia Computer Science, 132. https://doi.org/10.1016/j.procs.2018.05.144. 8. Singh, B., Najibi, M., & Davis, L.S. (2018). SNIPER: Efficient multi-scale training0 Accessed 24 Sep 2020. https://arxiv.org/abs/1805.09300. 9. Girshick, R. (2015). Fast R-CNN. In 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 1440–1448. https://doi.org/10.1109/ICCV.2015.169. 10. Ren, S., He, K., Girshick, R., & Sun, J. (2020). Faster R-CNN: Towards real-time object detection with region proposal networks. http://image-net.org/challenges/LSVRC/2015/res ults. Accessed 14 Aug 2020. 11. Liu, Y. (2018). An improved faster R-CNN for object detection. In Proceedings - 2018 11th International Symposium on Computational Intelligence and Design, ISCID 2018, vol. 2, pp. 119–123. https://doi.org/10.1109/ISCID.2018.10128.


12. Zhu, T., Zhao, C., Wang, J., Zhao, X., Wu, Y., & Lu, H. (2017). CoupleNet: Coupling global structure with local parts for object detection. In Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, pp. 4146–4154. https://doi.org/10.1109/ICCV. 2017.444. 13. Pirinen, A., & Sminchisescu, C. (2018). Deep reinforcement learning of region proposal networks for object detection. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6945–6954. https://doi.org/10.1109/CVPR.2018.00726. 14. Tian, Z., Shen, C., Chen, H., & He, T. (2019). FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE International Conference on Computer Vision, vol. 2019-October, pp. 9626–9635. https://doi.org/10.1109/ICCV.2019.00972. 15. Ghiasi, G., Lin, T. Y., Le, Q. V. (2019). NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2019-June, pp. 7029–7038. https://doi.org/10. 1109/CVPR.2019.00720. 16. Zhang, H., Wu, C., Zhang, Z., Zhu, Y., Zhang, Z., Lin, H., Sun, Y., He, T., Mueller , J., Manmatha, R., Li, M., & Smola, A. (2020). ResNeSt: Split-attention networks, 2004, 08955. 17. Li, Y., Chen, Y., Wang, N., & Zhang, Z. (2019). Scale-aware trident networks for object detection, 1901, 01892. 18. Zhou, X., Zhuo, J., & Krahenbuhl, P. (2019). Bottom-up object detection by grouping extreme and center points. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2019-June, pp. 850–859. https://doi.org/10.1109/CVPR. 2019.00094. 19. Qin, Z., et al., (2019). ThunderNet: Towards real-time generic object detection on mobile devices. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), vol. 2019October, pp. 6717–6726. https://doi.org/10.1109/ICCV.2019.00682. 20. Bhargava, P. (2019). On generalizing detection models for unconstrained environments. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 4296–4301. https://doi.org/10.1109/ICCVW.2019.00529. 21. Wang, J., Sun, K., & Cheng, T., et al. (2020). Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. https:// doi.org/10.1109/tpami.2020.2983686. 22. Tan, M., Pang, R., & Le, Q. V. (2020). EfficientDet: Scalable and efficient object detection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 10778–10787. https://doi.org/10.1109/CVPR42600.2020.01079. 23. Du, X., et al., (2020). SpineNet: Learning scale-permuted backbone for recognition and localization. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 11589–11598. https://doi.org/10.1109/CVPR42600.2020.01161. 24. Bochkovskiy, A., Wang, C.-Y., & Liao, H.=Y.M. (2020). YOLOv4: Optimal speed and accuracy of object detection. http://arxiv.org/abs/2004.10934. Accessed 24 Oct 2020. 25. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C. L. (2020). Microsoft COCO: Common objects in context. 26. COCO Benchmark (Real-Time Object Detection) | real-time-object-detection-on-coco. Accessed 24 Oct 2020. 27. Everingham, M., Van Gool, L., Williams, C. K. I., Winn J., & Zisserman A. (2007). The PASCAL Visual Object Classes challenge 2007, VOC2007)} results. In PASCAL VOC 2007 Benchmark (Object Detection). 
Accessed 09 Nov 2020. 28. Redmon, J., & Farhadi, A. (2020). YOLOv3: An incremental improvement. Accessed 23 Sep 2020.

A Study on Human–Machine Interaction in Banking Services During COVID-19 T. Archana Acharya

Abstract A pandemic situation like COVID-19 calls for safe transactions: free from touch, physical movement, shared platforms, and shared devices, and possible from home. The safer the transaction, the greater the customer satisfaction. On one hand, banks have become the lifeblood of everyday life, because payments for everything from groceries to gold or a house now demand human–machine interaction rather than manual payment. On the other hand, banking services should ensure ease, convenience, comfort, and security in every transaction so that transactions are fast and delivered rapidly. The level of customer satisfaction shapes the future platform of banks. The present study highlights the importance of human–machine interaction in pandemic situations. The research is based on primary and secondary data and examines the relationship between banking services involving human–machine interactions and the level of customer satisfaction. SPSS software is used to analyze the data. The results of the study are positive, leading to the conclusion that the twenty-first century demands that transactions be based entirely on human–machine interaction, which also defines the future status of banks' existence. Keywords Human–machine interaction · e-banking services · SPSS software

1 Introduction
COVID-19, the black swan, has had a significant impact on the economy. Its exponential spread is leading to behavioral and structural changes in the world of business in general and in the banking sector in particular. The pandemic has disrupted physical transactions and created demand for digital transactions, posing a great challenge to banking services across their key functions [1]. Digital transactions work on the basis of human–machine interaction. Human–machine interactions are nothing but simplified processing done by humans
T. Archana Acharya (B) Vignan's Institute of Information Technology (A), Duvvada, Visakhapatnam, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_8


using electronic gadgets. Generally, the input is given by humans and the output is generated by computers. Senses such as vision, hearing, and touch are the main sources of human input, and output is produced through effectors, which include the voice, fingers, eyes, and the position of the body and head [2]. The banking industry is the backbone of the world economy, supporting all sectors through functions such as letters of credit, loans and advances, and bank guarantees, but the present scenario demands a digital mode of transaction, forcibly bringing a metamorphosis in the whole world of transactions [3]. Today, driven by technology and customers, modern banking is transforming its modes of transaction exponentially, despite challenges such as competition among public sector, private sector, and foreign banks, rising customer expectations, and shrinking margins. In the current scenario, it becomes inevitable for banks to adopt new modes of transaction, with customer satisfaction at stake. Technology-based delivery channels like internet banking, mobile banking, telebanking, ATMs, and POS terminals have turned the pandemic into a win-win situation by providing services without the physical transfer of cash [4]. The present study examines the determinants of the quality of these banking services and their impact on customer satisfaction.

1.1 Research Problem
COVID-19, the black swan, has turned the whole world upside down, creating a panic situation that has forcibly changed the mode of transactions. The need of the hour is fast, easy, convenient, and safe transactions, so that in the case of monetary transactions the spread of the virus can be reduced to a minimum. The present research provides a platform for understanding customers' satisfaction levels so that the quality of banking services can be improved.

1.2 Objectives of the Study
1. To study the concept of human–machine interaction
2. To understand the important determinants of banking services
3. To evaluate the impact of COVID-19 on human–machine interactions and the level of customer satisfaction.

1.3 Research Methodology
The study is based on primary data. The data were collected from customers of two banks, one public sector bank and one private sector bank, through a structured questionnaire using a survey method. Keeping the pandemic situation in consideration, the questionnaire was circulated among various groups of customers through Google Forms and included the relevant variables. The questions were designed on a 5-point Likert scale (1 = strongly disagree; 2 = disagree; 3 = neither disagree nor agree; 4 = agree; 5 = strongly agree) and focused mainly on safety concerns with respect to the transaction and on usability during the pandemic situation. Data analysis was carried out using SPSS software.

2 Review of Literature
The COVID-19 pandemic adversely impacted the world economy in general and the Indian economy in particular. The Government of India initiated a lockdown and various policies to control the spread of the virus [5]. One study analyzed pre- and post-COVID-19 economic conditions in India with reference to the policies initiated; the results showed that the pandemic shock affected many parts of the Indian economy [6]. The economic effect of the outbreak of COVID-19 in India was studied through three research questions: the effect of COVID-19 on different sectors of the Indian economy, the effect on bilateral trade relations between China and India, and health-system performance during the pandemic [7]. The economic loss of India due to the pandemic was estimated using a linear input–output model, and the results indicated a loss corresponding to 10–30% of GDP [8]. Traditional banking is in a transition stage, paving the way for e-banking, which offers greater benefits based on saving time. In spite of the challenges of competition and demanding customer service, banks are trying hard to maintain service quality to stand in the world market [9]. Services on the e-platform provide bank customers with the four Cs: convenience, control, choice, and cost reduction. For elderly customers, they provide the convenience of saving time and avoiding unnecessary travel, and for countryside customers e-banking services are a necessity. It has been concluded that customers can access their financial information, transact anytime and anywhere, save a lot of time, and have complete control in managing their financial activities [10, 11]. E-banking is self-service by customers; fewer resources are required for banks' operational activities, so transaction and production costs are minimized [12]. Study results show that banks' performance can be improved in terms of reduced operating expenses, asset growth, and portfolio enhancement with e-banking applications [13]. The number of e-banking services is increasing with the growing demands of the market, and bank clients are satisfied with these services; to increase usability further, awareness programmes for e-banking products are suggested [14]. The e-banking service provided by banks is becoming an assurance of service quality to customers and a way to compete with rivals. It can thus be summarized from the literature review that bank customers require convenient, high-quality services at every point in time.


Fig. 1 Human–machine interaction. Source [17]

The present study has been taken up to examine the impact of COVID-19 on the usability of human–machine interactions through e-banking services.

3 Theoretical Framework
3.1 What Are Human–Machine Interactions?
Human–Machine Interaction (HMI) describes the interface between users and computers. In simple terms, the user instructs the machine (computer) what is to be done, and the system generates feedback. The human–machine interface is mainly concerned with supporting usability when designing computer systems; such designs help improve utility, effectiveness, efficiency, safety, and usability [15]. An HMI is a software program that helps the user interact with a device or system. Advances in technology and innovation have brought newly designed applications to many settings: household appliances, automobiles, smartwatches, smartphones, machines such as ATMs, etc. [16] (Fig. 1).

3.2 Advantages of HMI [16]
Improved productivity, satisfaction (pursuit of happiness), enhanced data saving/recording, Internet of Things integration, data translation, and reduced hardware cost.


3.3 Disadvantages of Human–Machine Interface [16]
The disadvantages include:
1. Security
2. Poor interface design.

4 Results and Discussions
Table 1 gives the details of the demographic variables of the respondents of both banks. The surveyed respondents included a larger share of females (66.23%) than males (33.77%) in both banks. Around 60% of them were young adults below 35 years of age, followed by the remaining age groups. On average, 69% of the respondents had a monthly income between 10,000 and 50,000, with only around 10% in the income group above 50,000. By qualification, 73% of the respondents were graduates and above. The employment of the respondents was distributed across the public sector (34%), the private sector (16%), business (22%), and others, including unemployed people such as students and housewives (27%). Most of the respondents, around 74%, held two or more accounts.

Table 1 Demographics analysis (excerpt)
Gender (Bank 1): Male 33.77 | Female 66.23
Age >56: Bank 1 3.25 | Bank 2 1.27 | Avg 2.26
Income >50,000: Bank 1 8.44 | Bank 2 12.10 | Avg 10.27
No. of bank accounts >3: Bank 1 4.55 | Bank 2 3.18 | Avg 3.87
Source: Primary data

Table 2 shows the details of human–machine interactions in e-banking services. During COVID-19, most transactions were based on human–machine interactions: the frequency of usage is about 88% in both banks.

Table 2 Human–machine interactions in e-banking services
Particulars | Bank 1 | Bank 2 | Avg
Frequency of human–machine interactions during COVID-19
  Frequent user | 85.71 | 91.08 | 88.40
  Not a frequent user | 14.29 | 8.92 | 11.60
Usefulness of human–machine interactions during the pandemic situation
  Useful | 96.10 | 93.63 | 94.87
  Not useful | 3.90 | 6.37 | 5.13
Human–machine interactions are cost and time saving during COVID-19
  Yes | 90.26 | 90.45 | 90.35
  No | 9.74 | 9.55 | 9.65
Human–machine applications are trustworthy
  Yes | 97.40 | 96.82 | 97.11
  No | 2.60 | 3.18 | 2.89
Preferences for types of human–machine interactions during COVID-19
  Individual gadgets like mobiles | 96.75 | 94.90 | 95.83
Source: Primary data
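The Avg column of Table 2 above is, to within rounding, the mean of the two banks' percentages; a one-line check using the frequent-user row (the residual difference presumably comes from the averages having been computed on unrounded survey counts):

```python
# Avg column of Table 2 is approximately the mean of the two bank percentages
bank1, bank2 = 85.71, 91.08        # frequent-user row of Table 2
print((bank1 + bank2) / 2)         # ~88.395, i.e. the 88.40 reported in the Avg column
```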

About 94% of the respondents of both banks affirmed the usefulness of human–machine interactions during the pandemic, and 90% stated that the interactions saved time and helped reduce costs. In difficult times the banks have built trust: on average, 97% of the respondents opined that the interactions were trustworthy. Because of the pandemic and the critical conditions, most respondents preferred their individual gadgets for transactions, as public machines were vulnerable to infection. Table 3 shows the respondents' level of satisfaction, recorded on a 5-point Likert scale. Ninety per cent of both banks' respondents were highly satisfied with the transactions done through human–machine interactions during COVID-19. With respect to payments, the respondents expressed that the transactions were safe. The services were quick, saved a lot of time, and did not require any expert knowledge to operate; the process is simple. In terms of ease of use, the respondents found the interactions quick, efficient, productive, clear, and understandable.
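The percentage column of Table 3 below is the mean Likert score expressed as a share of the 5-point maximum, i.e., percentage = actual satisfaction / 5 × 100. The study used SPSS; the plain-Python sketch below merely reproduces that arithmetic for a few of the items.

```python
# Satisfaction percentage = mean Likert score / maximum (5) * 100
items = {                      # a few rows of Table 3 (mean scores)
    "Payment of bills": 4.83,
    "Quick payments": 4.87,
    "Easy to use": 4.29,
}
for item, mean_score in items.items():
    print(f"{item}: {mean_score / 5 * 100:.1f}%")   # 96.6%, 97.4%, 85.8%
```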

Table 3 User's satisfaction using human–machine interfaces during COVID-19
Particulars | Maximum satisfaction | Actual satisfaction | Percentage
Level of satisfaction in payments
  Payment of bills | 5 | 4.83 | 96.6
  Mobile and dish TV recharge | 5 | 4.86 | 97.2
  Payment of online shopping | 5 | 4.38 | 87.6
  Payment at point of sale | 5 | 3.79 | 75.8
  Transfer of cash for other works | 5 | 4.66 | 93.2
Level of satisfaction in services
  Quick payments | 5 | 4.87 | 97.4
  Spending time is low | 5 | 4.76 | 95.2
  e-payments are error-free | 5 | 4.28 | 85.6
  e-payments require less skills | 5 | 3.76 | 75.2
  e-payment process involves low trouble | 5 | 4.68 | 93.6
Level of satisfaction in ease of use
  Quick transaction | 5 | 4.89 | 97.8
  Productivity increases | 5 | 4.77 | 95.4
  Effectiveness increased | 5 | 4.34 | 86.8
  Easy to use | 5 | 4.29 | 85.8
  Website is clear and understandable | 5 | 4.56 | 91.2
Average | 5 | 4.51 | 90.3
Source: Primary data

5 Conclusion
It can be concluded from the study that human–machine interactions have become the platform for any economic activity, and the customers of both the public sector bank and the private sector bank preferred machine-based transactions to

manual transactions, since in the pandemic situation it is safer to transact through machines or personal gadgets. It is strongly recommended that the future of any transaction rest on human–machine interactions.

References 1. https://www2.deloitte.com/in/en/pages/financial-services/articles/weathering-thecovid19part2.html. Accessed 22 June 2021. 2. https://www.researchgate.net/publication/224927543_Human-Computer_Interaction. Accessed 18 June 2021. 3. http://ceur-ws.org/Vol-2786/Paper45.pdf. Accessed 19 June 2021. 4. Surulivel, N., & Selvabaskar, A. (2017). Human-Machine interaction in banking industry with special reference to ATM services. In Proceedings of the International Conference on Intelligent Sustainable Systems (ICISS 2017). IEEE Xplore Compliant - Part Number:CFP17M19-ART, ISBN:97801-5386-1959-9 5. Dev, S. M., & Sengupta, R. (2020). Covid-19: Impact on the Indian economy, Indira Gandhi Institute of Development Research, Mumbai April.


6. Rakshit, B., & Basistha, D. (2020). Can India stay immune enough to combat COVID-19 pandemic? An economic query. Journal of Public Affairs, 20(4), e2157. 7. Kanitkar, T. (2020). The COVID-19 lockdown in India: Impacts on the economy and the power sector. Global Transitions, 2, 150–156. 8. Veena, R., & Anupam, R. (2018). Current practices of e-banking technology: Study of service quality in Tricity (Chandigarh, Mohali & Panchkula). Journal of Commerce & Accounting Research, 7(2), 56–76. 9. Nikolaos, B., & Marinos, T. (2013). SOA adoption in e-banking. Journal of Enterprise Information Management, 26(6), 719–739. 10. Sannes, R. (2001). Self-service banking: Value creation models and information exchange. Informing Science, 4(3), 12–23. 11. Witman, P. D., & Poust, T. L. (2008). Balances and accounts of online banking users: A study of two US financial institutions. International Journal of Electronic Finance, 2(2), 197–210. 12. Dandapani, K., Karels, G. V., & Lawrence, E. R. (2008). Internet banking services and credit union performance. Managerial Finance, 34(6), 437–447. 13. Sraeel, H. (1996). Creating real value propositions with virtual banking. Bank Systems and Technology, 33(8), 6–8. 14. Amato-McCoy, D. M. (2005). Creating virtual value. Bank Systems and Technology, (5), 22–27. 15. Wu, J., Hsia, T., & Heng, M. S. (2006). Core capabilities for exploiting electronic banking. Journal of Electronic Commerce Research, 7(2), 111–123. 16. http://www.advantages-disadvantages-human-machine-interface.html. Accessed 20 July 2021. 17. https://www.researchgate.net/figure/Example-of-human-machine-interaction_fig1_318890215. Accessed 20 July 2021.

A Study on Review of Application of Blockchain Technology in Banking Industry T. Archana Acharya and P. Veda Upasan

Abstract Financial innovation and economic transformation driven by the development of the Internet on one side, and the issues of transparency, security, and integrity of data on the other, are the most important concerns in digital transformation. The banking industry is passing through a metamorphosis as customers become more digitized. This paper highlights blockchain technology: the concept, the need for it, its architecture, and its application in the banking industry. To validate the application of blockchain technology in the banking industry, primary data were collected through a structured questionnaire capturing the perceptions of the stakeholders. The results confirm that the new technology promises clear and transparent transactions through the distributed ledger system. Keywords Financial innovations · Transparency · Security · Blockchain technology · Distributed ledger · Banking industry

1 Introduction
The whole world is on a continuous track of upgradation and innovation. The question is: in which direction? Development has moved from manual systems to mechanized systems so as to eliminate human error, and the impact of this innovation raises issues of transparency, security, and data integrity [1]. Banks are the pillars of the economy. The technological impact on traditional banking has led to modern banking, where transactions demand transparency, security, and data integrity. Security concerns are one of the major challenges, as they curtail the application of technology. As demand grows, it becomes imperative for banks to adopt new technologies.
T. Archana Acharya (B) Vignan's Institute of Information Technology (A), Duvvada, Visakhapatnam, Andhra Pradesh, India P. Veda Upasan Andhra University College of Engineering (A), Andhra University, Visakhapatnam, Andhra Pradesh, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 D. Bhattacharyya et al. (eds.), Machine Intelligence and Soft Computing, Advances in Intelligent Systems and Computing 1419, https://doi.org/10.1007/978-981-16-8364-0_9


Blockchain is the new rider carrying this track of innovation into new perspectives. The present study focuses on the application of blockchain technology in the banking industry, highlighting the issues of transparency, security, and data integrity. Various applications of the concept are listed in general, and applications in the banking sector in particular [2], to give the big picture of future real-time applications of the new technology in the banking industry.

2 What is a Blockchain?
Blockchain refers to Distributed Ledger Technology (DLT), in which digital assets are accessible to every participant. The three important features of blockchain are immutability, transparency through decentralized distribution, and cryptographic hashing.

2.1 What is Distributed Ledger Technology? [3]
Distributed ledger technology (DLT) refers to a digital system that records asset transactions; specifically [3], it keeps the details of all transactions concerning an asset recorded in multiple places at the same time [4], as shown in the following figure (Fig. 1).

Fig. 1 Centralized ledger versus distributed ledger [5]


Fig. 2 Distributed ledger technology [6]

2.1.1 Properties of Distributed Ledger Technology (DLT)
The important properties of DLT are:
Distributed: all participants in the network hold a copy of the ledger, establishing complete transparency.
Immutable: no participant can change any record; validated records are irreversible.
Time-stamped: every transaction is recorded on a block with its time.
Unanimous: the validity of each record is agreed by all participants of the network.
Anonymous: the identity of the participants is either anonymous or pseudonymous.
Secure: all records are individually encrypted.
Programmable: each blockchain is programmable through smart contracts (Fig. 2).
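The hashing, chaining, and time-stamping properties listed above can be illustrated with a minimal hash-chained ledger in Python. This is a toy sketch using only the standard library (no network, consensus, or digital signatures), not the architecture of any production blockchain.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a time-stamped block whose hash covers its contents and
    the previous block's hash, chaining the records together."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A tiny ledger of two transactions
genesis = make_block({"tx": "account A opened"}, prev_hash="0" * 64)
block1 = make_block({"tx": "A pays B 100"}, prev_hash=genesis["hash"])

# Tampering with an earlier block breaks the chain: its recomputed hash
# no longer matches the prev_hash stored in the next block.
genesis["data"]["tx"] = "A pays B 1000"
payload = json.dumps({k: genesis[k] for k in ("timestamp", "data", "prev_hash")},
                     sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest() == block1["prev_hash"])  # False
```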

2.2 Transaction in Blockchain See Fig. 3.

2.3 Growth in Blockchain Technology
The new technology, still in its infancy, has already reached 14 countries that are exploring or developing official cryptocurrencies. The banking industry is experimenting with blockchain technology, and stock markets and social websites such as LinkedIn are working with the new technology to create platforms for secure, transparent transactions (Fig. 4).


Fig. 3 Transaction in blockchain [7]

Fig. 4 Growth of blockchain technology [8]

3 Applications of Blockchain in Various Sectors
The following are applications of blockchain in various sectors: Bitcoin: digital currency; Spotify: music, artists, and tracks; Maersk: information exchange in the global supply chain; Aeternity: video gaming and FinTech; Matchpool: cryptocurrency payments, connecting parties in the network; Siemens: platform for transactive energy; Loyyal: travel and hospitality, employee incentives and credit-card rewards, grade hosting; SimplyVital Health: health data management; De Beers: Tracer, diamonds; Circle: buying and selling bitcoins, peer-to-peer payments; BASF: fridge-to-farm recording and analysis of livestock; BitGive: charities, financial information for donors; Ubiquity: record keeping in real estate; MediLedger: pharma companies and drug distributors; AIA Insurance: bancassurance network; Guts: identifying fraud in ticket prices [9].


4 Blockchain in Banking
In the digitalized era, customer satisfaction has become the vision that every organization strives to attain. The main uses of blockchain technology in banking, and the problems they address, are: faster payments; clearance and settlement systems; buying and selling assets; fundraising; credit and loans; trade finance, that is, all financial activities related to international trade and commerce; digital identity verification; accounting and auditing; hedge funds, investment partnerships between a fund manager and a group of investors that maximize investor returns while minimizing risk; and peer-to-peer (P2P) transfers, which move funds online from bank accounts or credit cards to another person [10].

5 Banks Adopting Blockchain Technology
According to an IBM report, banks are adopting blockchain 'far faster' than originally thought. A survey conducted in 2017 found that 15% of 200 global banks intended to roll out blockchain products at full scale. Banks rolling out commercial blockchain products include Goldman Sachs, Bank of America, R3, Barclays, HSBC, Woori Bank, Royal Bank of Canada, JPMorgan, Australia and New Zealand Banking Group, and Fujitsu, as well as the State Bank of India (BankChain, with 27 members across India and the Middle East) and the Monetary Authority of Singapore (MAS). In addition, over 40 central banks, including those of France, Germany, Hong Kong, and Sweden, are working with the technology [11].

6 Results and Discussion
To validate the study, the perceptions of the respondents were collected. The respondents include stakeholders, analysts from the banking industry, developers from the blockchain industry, technical students and scholars from academia working in the field of blockchain, and faculty members and researchers in the respective fields. The questions were designed on a 5-point Likert scale (1 = strongly disagree; 2 = disagree; 3 = neither disagree nor agree; 4 = agree; 5 = strongly agree). SPSS software was used for data analysis. Table 1 presents the demographic analysis of the study. Of the total respondents, males contributed 57% and females 43%. Most of the respondents, around 82%, belong to academia (researchers and faculty); the remainder come from the blockchain industry and from analysts in the banking industry. The age-wise distribution shows that most respondents, around 87%, are young adults below 40 years. The experience-wise distribution shows that the respondents who have less than one-year


Table 1 Demographic analysis
Gender | Percentage | Respondents | Percentage
Male | 57.36 | Developers from blockchain industry | 7.75
Female | 42.64 | Analysts from banking industry | 6.98
 | | Technical students and scholars from academia | 29.46
 | | Faculties in blockchain | 33.33
 | | Researchers in blockchain | 22.48
Age | Percentage | Experience | Percentage