Advances in Intelligent Systems and Computing 1180
Ajith Abraham · Mrutyunjaya Panda · Subhrajit Pradhan · Laura Garcia-Hernandez · Kun Ma Editors
Innovations in Bio-Inspired Computing and Applications Proceedings of the 10th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2019) held in Gunupur, Odisha, India during December 16–18, 2019
Advances in Intelligent Systems and Computing Volume 1180
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **
More information about this series at http://www.springer.com/series/11156
Editors
Ajith Abraham, Scientific Network for Innovation and Research Excellence, Machine Intelligence Research Labs (MIR), Auburn, WA, USA
Mrutyunjaya Panda, Utkal University, Bhubaneswar, Odisha, India
Subhrajit Pradhan, Department of Electronics Engineering, GIET University, Gunupur, Odisha, India
Laura Garcia-Hernandez, Area of Project Engineering, University of Cordoba, Córdoba, Spain
Kun Ma, School of Information Science and Engineering, University of Jinan, Jinan, Shandong, China
ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-3-030-49338-7 ISBN 978-3-030-49339-4 (eBook) https://doi.org/10.1007/978-3-030-49339-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Welcome Message
Welcome to the 10th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2019) and the 9th World Congress on Information and Communication Technologies (WICT 2019). The conferences are held at GIET University, Gunupur, Odisha, India, during December 16–18, 2019. In 2018, IBICA and WICT were held in Kochi, India. The aim of IBICA is to provide a platform for world research leaders and practitioners to discuss the full spectrum of current theoretical developments, emerging technologies, and innovative applications of bio-inspired computing. Bio-inspired computing is currently one of the most exciting research areas, and it is continuously demonstrating exceptional strength in solving complex real-life problems. WICT 2019 provides an opportunity for researchers from academia and industry to meet and discuss the latest solutions, scientific results, and methods in the usage and applications of ICT in the real world. Innovations in ICT allow us to transmit information quickly and widely, propelling the growth of new urban communities and linking distant places and diverse areas of endeavor in productive new ways that were unimaginable a decade ago. Thus, the theme of this World Congress is “Innovating ICT For Social Revolutions.” IBICA–WICT 2019 brings together researchers, engineers, developers, and practitioners from academia and industry working in all interdisciplinary areas of intelligent systems, nature-inspired computing, big data analytics, and real-world applications to exchange and cross-fertilize their ideas. The themes of the contributions and scientific sessions range from theories to applications, reflecting a wide spectrum of the coverage of intelligent systems and computational intelligence areas. IBICA 2019 received submissions from 10 countries, and each paper was reviewed by at least five reviewers in a standard peer review process. Based on the recommendations of 5 independent referees, 19 papers were finally presented during the conference (acceptance rate of 35%). WICT 2019 received submissions from 12 countries, and each paper was reviewed by at least five reviewers in a standard peer review process. Based on the recommendations of 5 independent referees, 20 papers were finally presented during the conference (acceptance rate of 36%). The conference proceedings are published by Springer in the Advances in Intelligent
Systems and Computing series, which is now indexed by ISI Proceedings, DBLP, SCOPUS, etc. Many people have collaborated and worked hard to make IBICA–WICT 2019 a successful conference. First, we would like to thank all the authors for submitting their papers to the conference and for their presentations and discussions during the conference. Our thanks go to the Program Committee members and reviewers, who carried out the most difficult work by carefully evaluating the submitted papers. Our special thanks to the following plenary speakers for their exciting plenary talks:
• Prof. Arturas Kaklauskas, Vilnius Gediminas Technical University, Lithuania
• Prof. Patrick Siarry, Université Paris-Est Créteil, Paris, France
• Prof. Yukio Ohsawa, University of Tokyo, Japan
• Prof. Laura García-Hernández, University of Cordoba, Spain
• Dr. Kingsley Okoye, Tecnologico de Monterrey, Writing Lab, TecLabs, Mexico
• Dr. Ladislav Zjavka, VSB-Technical University of Ostrava, Czech Republic
We express our sincere thanks to the session chairs and local organizing committee chairs for helping us to formulate a rich technical program. We are thankful to the Chairman, Secretary, Registrar, and other administrative officers of GIET University for hosting IBICA–WICT 2019. Enjoy reading the proceedings.

Ajith Abraham and Subhrajit Pradhan (General Chairs)
Maria Leonilde Varela and Mrutyunjaya Panda (Program Chairs)
Organization
IBICA–WICT 2019 Organization

Chief Patrons
Satya Prakash Panda (President), GIET University, Gunupur, India
Chandra Dhwaj Panda (Vice President), GIET University, Gunupur, India
Jagadish Panda (Director General), GIET University, Gunupur, India

Patrons
Gautam Gosh (Vice Chancellor), GIET University, India
N. V. Jagannadha Rao (Registrar), GIET University, India
K. Senthil Kumar (Principal), School of Engineering, GIET University, India

Honorary Chairs
Subal Kar, GIET University, Gunupur, India
Niranjan Das, GIET University, Gunupur, India
Sudahnsu Sekhara Nayak, GIET University, Gunupur, India
Santunu Kumar Nayak, GIET University, Gunupur, India

General Chairs
Ajith Abraham, Machine Intelligence Research Labs, USA
Subhrajit Pradhan, GIET University, Gunupur, India
Program Chairs
Maria Leonilde Varela, University of Minho, Portugal
Mrutyunjaya Panda, Utkal University, India

Web Chair
Kun Ma, University of Jinan, China

Publication Chair
Isabel Jesus, Instituto Superior de Engenharia do Porto, Portugal

Publicity Chairs
Sanju Tiwari (Chair), Universidad Politécnica de Madrid, Spain
Pradeep Laxkar, ITM Universe, Vadodara, India
Sudarshan Nandy, Amity University, Kolkata, West Bengal, India
Marjana Prifti Skenduli, University of New York, Tirana, Albania
IBICA Program Committee
Intan Ermahani A. Jalil, Universiti Teknikal Malaysia Melaka (UTeM), Malaysia
Ajith Abraham, Machine Intelligence Research Labs (MIR Labs)
Arun Agarwal, Institute of Technical Education and Research (ITER)
Salvador Alcaraz, Miguel Hernandez University
Flora Amato, University of Naples, Federico II
Juan Jesús Barbarán, University of Granada
Nassereddine Bouchaib, FST Settat
Alberto Cano, Virginia Commonwealth University
Paulo Carrasco, Univ. Algarve
Joan-Josep Climent, Universitat d'Alacant
Pedro Coelho, UERJ
Satchidananda Dehuri, Department of Information and Communication Technology, Fakir Mohan University, Vyasa Vihar, Balasore-756019, Orissa, India
Said El Hajji, Faculté des Sciences, Université Mohammed V, Rabat, Morocco
Amparo Fuster-Sabater, Institute of Applied Physics (C.S.I.C.), Serrano 144, 28006 Madrid, Spain
Mauro Gaggero, National Research Council of Italy
Abdelkrim Haqiq, FST, Hassan 1st University, Settat
Chian C. Ho, National Yunlin University of Science and Technology
Wladyslaw Homenda, Warsaw University of Technology, Warsaw, Poland
Tzung-Pei Hong, Department of Computer Science and Information Engineering, National University of Kaohsiung
Donato Impedovo, Dipartimento di Informatica-UNIBA
Zahi Jarir, Computer Science Department, Faculty of Sciences Semlalia, Cadi Ayyad University, Marrakech
Kun Ma, University of Jinan
Ana Madureira, Departamento de Engenharia Informática
João Paulo Magalhaes, ESTGF, Porto Polytechnic Institute
Constantino Malagón, Nebrija University
Ficco Massimo, Second University of Naples (SUN)
Alessio Merlo, DIBRIS-University of Genoa
Jose M. Molina, Universidad Carlos III de Madrid
Luiz Satoru Ochi, Fluminense Federal University
Ghizlane Orhanou, Faculty of Sciences, Mohammed V University in Rabat
Rosaura Palma-Orozco, CINVESTAV-IPN
Clay Palmeira, Université François Rabelais Tours
Mrutyunjaya Panda, Reader, Department of Computer Science, Utkal University, Vani Vihar, Bhubaneswar, Odisha, India
Carlos Pereira, ISEC
Atta Rahman, Imam Abdulrahman Bin Faisal University, Dammam, KSA
Luis Enrique Sanchez Crespo, University of Castilla-la Mancha
Borja Sanz, S3Lab-University of Deusto
Suwin Sleesongsom, Department of Aeronautical Engineering and Commercial Pilot, International Academy of Aviation Industry, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
Haresh Suthar, PIET
Sanju Tiwari, National Institute of Technology Kurukshetra
Leonilde Varela, University of Minho
Jose Vicent, Universidad de Alicante
Yong Wang

IBICA Additional Reviewers
Dentamaro, Vincenzo
Gandhi, Niketa
Gaurav, Devottam
Massimo, Ficco
Roig, Pedro Juan
Sarfraz, Mohammad

WICT Program Committee
Ajith Abraham, Machine Intelligence Research Labs (MIR Labs)
Laurence Amaral, Federal University of Uberlandia
Mohamed Ben Halima, REGIM-Lab.: REsearch Groups in Intelligent Machines, University of Sfax, ENIS, BP 1173, Sfax, 3038, Tunisia
Alberto Cano, Virginia Commonwealth University
Oscar Castillo, Tijuana Institute of Technology
Isaac Chairez, UPIBI-IPN
Lee Chang-Yong, Kongju National University
Phan Cong-Vinh, Nguyen Tat Thanh University
Gloria Cerasela Crisan, "Vasile Alecsandri" University of Bacau
Haikal El Abed, German International Cooperation (GIZ) GmbH
El-Sayed M. El-Alfy, King Fahd University of Petroleum and Minerals
Carlos Fernandez-Llatas, Universitat Politècnica de València
Amparo Fuster-Sabater, Institute of Applied Physics (C.S.I.C.), Serrano 144, 28006 Madrid, Spain
Xiao-Zhi Gao, Aalto University
Alexander Gelbukh, Instituto Politécnico Nacional
Jerzy Grzymala-Busse, University of Kansas
Thomas Hanne, University of Applied Sciences Northwestern Switzerland
Biju Issac, Teesside University
Vitalii Ivanov, Sumy State University
Kyriakos Kritikos, Institute of Computer Science, FORTH
Vijay Kumar, VIT University, Vellore
Simone Ludwig, North Dakota State University
Kun Ma, University of Jinan
Ana Madureira, Departamento de Engenharia Informática
Jolanta Mizera-Pietraszko, Wroclaw University of Technology
Paulo Moura Oliveira, UTAD University
Diaf Moussa, UMMTO
Ramzan Muhammad, Maulana Mukhtar Ahmad Nadvi Technical Campus
Akila Muthuramalingam, KPR Institute of Engineering and Technology
C. Alberto Ochoa-Zezatti, Universidad Autónoma de Ciudad Juárez
Varun Ojha, University of Reading
Mrutyunjaya Panda, Reader, Department of Computer Science, Utkal University, Vani Vihar, Bhubaneswar, Odisha, India
Konstantinos Parsopoulos, University of Ioannina
Carlos Pereira, ISEC
Eduardo Pires, UTAD University
Keun Ho Ryu, Chungbuk National University
Neetu Sardana, Jaypee Institute of Information Technology
Hirosato Seki, Osaka University
Mohammad Shojafar, University of Surrey
Patrick Siarry, Université de Paris 12
Antonio J. Tallón-Ballesteros, University of Huelva
Shing Chiang Tan, Multimedia University
Sanju Tiwari, National Institute of Technology Kurukshetra
Justyna Trojanowska, Poznan University of Technology
Jih Fu Tu, Department of Electronic Engineering, St. Johns University
Eiji Uchino, Yamaguchi University
Leonilde Varela, University of Minho
Gai-Ge Wang, School of Computer Science and Technology, Jiangsu Normal University
Lin Wang, University of Jinan

WICT Additional Reviewers
Das Sharma, Kaushik
Gaurav, Devottam
Goyal, Ayush
Contents
Towards the Speed Enhancement of Association Rule Mining Algorithm for Intrusion Detection System . . . 1
Sarbani Dasgupta and Banani Saha

Image Retrieval Using Bat Optimization and Image Entropy . . . 10
Shashwati Mishra and Mrutyunjaya Panda

Logistic Regression on Hadoop Using PySpark . . . 19
Krishna Kumar Mahto and C. Ranichandra

Analysis of Pre-processing Techniques for Odia Character Recognition . . . 27
Mamatarani Das and Mrutyunjaya Panda

Cluster-Based Under-Sampling Using Farthest Neighbour Technique for Imbalanced Datasets . . . 35
G. Rekha and Amit Kumar Tyagi

Vehicle Detection and Classification: A Review . . . 45
V. Keerthi Kiran, Priyadarsan Parida, and Sonali Dash

Methods for Automatic Gait Recognition: A Review . . . 57
P. Sankara Rao, Gupteswar Sahu, and Priyadarsan Parida

Comparative Performance Exploration and Prediction of Fibrosis, Malign Lymph, Metastases, Normal Lymphogram Using Machine Learning Method . . . 66
Subrato Bharati, Md. Robiul Alam Robel, Mohammad Atikur Rahman, Prajoy Podder, and Niketa Gandhi

Decision Forest Classifier with Flower Search Optimization Algorithm for Efficient Detection of BHP Flooding Attacks in Optical Burst Switching Network . . . 78
Mrutyunjaya Panda, Niketa Gandhi, and Ajith Abraham
Review and Implementation of 1-Bit Adder in CMOS and Hybrid Structures . . . 88
Bhaskara Rao Doddi and V. Leela Rani

Design and Analysis of LOL-P Textile Antenna . . . 98
Y. E. Vasanth Kumar, K. P. Vinay, and M. Meena Kumari

Analytical Study of Scalability in Coastal Communication Using Hybridization of Mobile Ad-hoc Network: An Assessment to Coastal Bed of Odisha . . . 107
Sanjaya Kumar Sarangi and Mrutyunjaya Panda

Effect of Environmental and Occupational Exposures on Human Telomere Length and Aging: A Review . . . 120
Jasbir Kaur Chandani, Niketa Gandhi, and Sanjay Deshmukh

A Review on VLSI Implementation in Biomedical Application . . . 130
Nagavarapu Sowmya and Shasanka Sekhar Rout

Comparative Analysis of a Dispersion Compensating Fiber Optic Link Using FBG Based on Different Grating Length and Extinction Ratio for Long Haul Communication . . . 139
Padmini Mishra, Shasanka Sekhar Rout, G. Palai, and L. Spandana

Unsupervised Learning Method for Mineral Identification from Hyperspectral Data . . . 148
P. Prabhavathy, B. K. Tripathy, and M. Venkatesan

Short Term Load Forecasting Using Empirical Mode Decomposition (EMD), Particle Swarm Optimization (PSO) and Adaptive Network-Based Fuzzy Interference Systems (ANFIS) . . . 161
Saroj Kumar Panda, Papia Ray, and Debani Prasad Mishra

Energy Conservation Perspective for Recharging Cell Phone Battery Utilizing Speech Through Piezoelectric System . . . 169
Ashish Tiwary, Yashraj, Amar Kumar, and Mandeep Biruly

A Progressive Method Based Approach to Understand Sleep Disorders in the Adult Healthy Population . . . 178
Vanita Ramrakhiyani, Niketa Gandhi, and Sanjay Deshmukh

Semantic-Based Process Mining: A Conceptual Model Analysis and Framework . . . 188
Kingsley Okoye

Educational Process Intelligence: A Process Mining Approach and Model Analysis . . . 201
Kingsley Okoye and Samira Hosseini
Design and Development of a Mobile App as a Learning Strategy in Engineering Education . . . 213
Yara C. Almanza-Arjona, Leonel A. Miranda-Camargo, Salvador E. Venegas-Andraca, and Beatriz E. García-Rivera

Beyond Things: A Systematic Study of Internet of Everything . . . 226
K. Sravanthi Reddy, Kavita Agarwal, and Amit Kumar Tyagi

Flower Shaped Patch with Circular Defective Ground Structure for 15 GHz Application . . . 243
Ribhu Abhusan Panda, Priya Kumari, Janhabi Naik, Priyanka Negi, and Debasis Mishra

Classification of Seagrass Habitat Using Probabilistic Neural Network . . . 250
Anand Upadhyay, Prajna Tantry, and Aarohi Varade

Early Brain Tumor Detection Using Random Forest Classification . . . 258
Anand Upadhyay, Umesh Palival, and Sumit Jaiswal

Tree Age Detection Using Pruning Technique . . . 265
Anand Upadhyay, Shweta Maurya, and Siddharth Tripathi

Learning Analytics: The Role of Information Technology for Educational Process Innovation . . . 272
Kingsley Okoye, Julius T. Nganji, and Samira Hosseini

Uplink and Downlink Spectral Efficiency Estimation for Multi Antenna MIMO User . . . 285
Prajoy Podder, Subrato Bharati, Md. Robiul Alam Robel, Md. Raihan-Al-Masud, and Mohammad Atikur Rahman

Toward Public Opinion Monitoring System of Large-Scale Data with Lambda Architecture . . . 295
Weijuan Zhang, Yue Lu, and Kun Ma

Fault Tolerance in Cloud Computing - An Algorithmic Approach . . . 307
Md. Robiul Alam Robel, Subrato Bharati, Prajoy Podder, Md. Raihan-Al-Masud, and Sanjoy Mandal

2-Element Pentagon Patch Array for 25 GHz Applications . . . 317
Ribhu Abhusan Panda, Madhusmita Kuldeep, Varanasi Swatishree, Gudla Sruthi, Udit Narayan Mohapatro, Pawan Kumar Nayak, and Debasis Mishra

Design of Two Slot Multiple Input Multiple Output UWB Antenna for WiMAX and WLAN Applications . . . 323
S. Malathi, S. Aruna, K. Srinivasa Naik, and B. Bharani
Information Technology in Learning Institutions: An Advantage or A Disadvantage? . . . 333
Jonathan A. Odukoya, O. Omonijo, Sanjay Misra, and Ravin Ahuja

The Role of ICTs in Sex Education: The Need for a SexEd App . . . 343
Victoria Adebayo, Olaperi Yeside Sowunmi, Sanjay Misra, Ravin Ahuja, Robertas Damaševičius, and Jonathan Oluranti

Developing a Multi-modal Listing Service for Real Estate Agency Practice in Nigeria . . . 352
Adewole Adewumi, Chukwuemeka Iroham, Daniel Audu, Sanjay Misra, and Ravin Ahuja

Soft Computing Approach on Minimisation of Indirect Losses in Power Generation for Efficiency Enhancing . . . 361
Subodh Panda, Rabindrakumar Mishra, Balaram Das, Somya kant kar, and Premansu Rath

A Machine Learning Prediction of Automatic Text Based Assessment for Open and Distance Learning: A Review . . . 369
Guembe Blessing, Ambrose Azeta, Sanjay Misra, Felix Chigozie, and Ravin Ahuja

Author Index . . . 381
Towards the Speed Enhancement of Association Rule Mining Algorithm for Intrusion Detection System Sarbani Dasgupta1(B) and Banani Saha2 1 Department of MCA, Techno International Newtown, Block-DG, Action Area 1 New town,
Kolkata 700156, India [email protected] 2 Computer Science and Engineering Department, University of Calcutta, JD Block, Sector III, Salt Lake City, Kolkata 700098, India
Abstract. An intrusion detection system is a device or a software application that monitors network traffic data for suspicious activity and alerts the system administrator about any malicious activity or network policy violation that has occurred. Among the several techniques proposed for anomaly detection in network audit data, data mining techniques are used for efficient analysis of the data to detect abnormalities caused by specific types of attacks. Association rule mining, an unsupervised data mining technique, has been applied to the analysis of network audit data for detecting anomalies. Due to the rapid increase of Internet-based services, cyber security has become a challenging problem. In this paper, a framework based on the association rule mining algorithm is proposed for detecting suspicious activity in network traffic data. Further, in order to increase the processing speed for large network traffic data, the big data processing tool Apache Spark has been used. Among the several association rule mining algorithms, the FP-growth algorithm has been used to generate attack rules that detect malicious attacks in network audit data. For the experiment, the Kyoto dataset, which is freely available online, has been used. Keywords: Association rule mining · Apache spark · Intrusion detection systems (IDS) · Network based intrusion detection systems (NIDS)
1 Introduction An intrusion detection system (IDS) is a network security system that scans computer systems and analyses network traffic to detect possible attacks formed within the organisation as well as intrusions or attacks generated outside the organisation [1]. The IDS checks the activities within a network and alerts security administrators of suspicious activity [2]. The IDS consists of two components, namely a management console and sensors. The management console mainly manages the host machine and provides intrusion reports. Sensors monitor the host machine as well as the network to detect intrusions.
The IDS maintains a database of signature patterns of previously detected attacks, formally known as the audit log. In misuse detection, if the sensors detect any malicious activity, they check the audit log; if a match is found, the activity is reported to the management console. Further, the sensor can take action against the malicious attack, depending on the way it is configured. In anomaly based detection, unknown or novel attacks can be detected by sensors and reported to the management console. Data mining techniques, which can find useful information in large volumes of data, can be used to examine the audit logs. By scrutinizing the audit logs, significant data can be extracted to create better detection models. Association rule mining, an unsupervised data mining method, is used to analyse large amounts of network traffic data to find malicious patterns. This can provide valuable information about the patterns, which helps in effective anomaly detection. In [3], a system has been successfully developed to detect network intrusions using the association rule mining technique. The advantage of association rule mining for the implementation of a NIDS, as stated in [4], is the generation of attack rules that detect attacks in network audit data using anomaly detection while maintaining a low false positive rate, i.e. a low number of intrusions that are wrongly classified as normal [3]. However, with the increase in the volume of network traffic data or audit logs, a sequential implementation of the association rule mining technique will consume a lot of processing time. To overcome this difficulty of the sequential association rule mining algorithm, a different kind of implementation of the technique has been proposed. In this paper, analytical big data technologies [12] have been used for increasing the efficiency of the association rule mining algorithm. The rest of the paper is organised as follows: Sect. 2 provides an overall view of intrusion detection systems. Section 3 gives an elaborate description of data mining and its various techniques, particularly association rule mining, which has been applied to network data analysis to develop intrusion detection systems. In Sect. 4, our proposed approach is discussed in detail and experimental results are shown. The analysis of the results and the discussion based on the results obtained are presented in Sect. 5. The final section concludes the article.
2 Overview of Intrusion Detection System An intrusion detection system (IDS) is a security system that checks computer systems and network traffic. It examines the traffic to detect possible attacks from outside an organisation. It can also detect system misuse or attacks that occur within an organisation [5]. Intrusion detection systems are basically classified into two types: host based intrusion detection systems (HIDS) and network based intrusion detection systems (NIDS). A HIDS runs on individual computers or devices on the network and examines the user's processes and activity on the local machine to detect intrusions. A NIDS is located at various tactical points within the network to scan traffic to and from all devices on the network. It performs an analysis of passing traffic on the entire subnet and matches the traffic passed on the subnets against the network audit logs, a library of known attacks. When an attack is detected, or unusual behaviour is noticed, the administrator is warned about the abnormal situation [5].
Intrusion detection approaches are divided into two categories: misuse detection and anomaly detection [5]. An integrated intrusion detection system (IDS) should include at least one of the two approaches. Misuse detection is based on intrusion characteristics that are reported by experts. Once an intrusion characteristic is detected, the IDS affirms that an intrusion has happened and alerts the system administrator. This approach has the merits of high accuracy and a low false alarm rate, but it is impossible for this approach to find intruders who use new intrusion techniques that have never been reported before, because the IDS cannot obtain the intrusion characteristics before the intrusion succeeds. Anomaly detection considers that an intrusion can be identified based on some deviation from the normal users' patterns, described by some statistical characteristics of the system. When a deviation is observed, the IDS judges whether the deviation is produced by intruders and determines whether an alarm should be raised. This approach can detect intrusion techniques that have never been used before. However, because distinguishing anomalous from normal behaviour by statistical characteristics is not accurate enough, the false alarm rate of this approach is high. An audit log may contain rich and useful information that can be used to build a better network intrusion detection model.
3 Related Work Data mining is the process of discovering interesting information from huge amounts of data using artificial intelligence, machine learning, and statistical techniques. Another popular term used for data mining is knowledge discovery from data, or KDD [18]. Several techniques have been proposed for data mining, namely classification, clustering, regression, and association rule analysis. These techniques are used for detecting intrusions in network audit data [4]. Association rule mining, an unsupervised data mining technique [10], is generally used to find interesting association rules among the data items of a large transactional database. The support of an itemset is defined as the number of transactions that contain that itemset. A frequent itemset is an itemset whose support is greater than or equal to the minimum support. Confidence is a metric that measures the reliability of the association rules generated. Association rules are represented as X → Y, where X and Y are two disjoint itemsets, X ∩ Y = ∅. The main purpose of association rule mining is to find correlations between several attributes of a database. Several algorithms have been proposed for association analysis, namely the Apriori, FP-growth and Eclat algorithms. For analysing network audit data to generate an intrusion detection model, association rule mining algorithms can be used [11, 13]. Their purpose is the generation of interesting rules from network audit data, and these rules can be used to detect unknown intrusions in the form of anomalies in network audit data. Wang and Guo proposed a non-iterative improved Apriori algorithm [12] to discover IDS alerts. They used the intersection of two distinct rows of the DARPA 99 dataset to detect recurring patterns. If similar patterns are repeated across various operations, especially the intersection operation, the pattern is considered interesting and used as an intrusion alert.
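To make the support and confidence measures concrete, the short Python sketch below computes both for a toy set of transactions. The transactions and the example rule are purely illustrative and are not taken from the paper; support is counted over transactions, matching the definition above, while confidence is the usual ratio.

# Toy transactions (illustrative only, not from the paper's dataset)
transactions = [
    {"tcp", "http", "normal"},
    {"tcp", "http", "attack"},
    {"udp", "dns", "normal"},
    {"tcp", "http", "attack"},
]

def support(itemset):
    # Number of transactions that contain every item of the itemset
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions)

def confidence(antecedent, consequent):
    # Reliability of the rule antecedent -> consequent
    return support(set(antecedent) | set(consequent)) / support(antecedent)

print(support({"tcp", "http"}))                 # 3
print(confidence({"tcp", "http"}, {"attack"}))  # 0.666...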
Flora S. Tsai [4] used the KDD dataset to illustrate a network intrusion detection system. In that paper, the association rule mining technique is used to produce several interesting rules from the KDD dataset. Kamini Nalavade and B. B. Meshram in their paper [3] have proposed an intrusion detection system based on the association rule mining technique, which detects attack rules in the KDD dataset with a low false positive rate. To find the association rules, Ming-Yang Su et al. [19] proposed an incremental fuzzy association rule mining algorithm for network intrusion detection systems. In that paper, a linked list of linked lists is used to accumulate all feasible candidate itemsets and the support counts of the candidate sets in main memory. The obtained information about the network traffic data is updated regularly, and the updated information is used by the incremental algorithm to identify large and interesting itemsets. These itemsets are utilised for the generation of association rules for the purpose of the NIDS. All the above mentioned approaches follow the Apriori algorithm or modified versions of it. The main disadvantage of the Apriori algorithm is that the computational cost increases due to large storage requirements as well as increased processing time. To overcome the disadvantages of the Apriori algorithm, the FP-growth algorithm, which follows a divide and conquer approach, has been proposed. The processing speed of this algorithm is much faster than that of the Apriori algorithm, and storage space is utilised better by the FP-growth algorithm. However, as the size of the dataset increases, a sequential implementation of the algorithm will consume a lot of processing time.
4 Proposed Methodology In the earlier approaches, a sequential implementation of the association rule mining algorithm is used to build an intrusion detection system. However, with the vast amount of data growing over time, i.e. when analysing large scale network audit logs, a sequential algorithm will consume a lot of processing time. Hence, parallel association rule mining has been performed for analysing network audit data to design an intrusion detection model. The parallelism is achieved by using analytical big data technology. In this section, a detailed description of the experimental approach, the dataset used, and the technology used for the experiment is given. Finally, the experimental results are shown. 4.1 Dataset The dataset used for the experiment is the Kyoto dataset [9]. This dataset consists of real network traffic data. The initial version of the dataset consists of network traffic data collected from November 2006 to August 2009; the next version consists of network traffic data from November 2006 to December 2015. This dataset consists of 14 statistical features derived from the KDD Cup '99 [30] dataset as well as 10 additional features that can be used for the analysis and evaluation of a NIDS. As stated in [17], this dataset consists of network traffic data captured from honeypots, dark net sensors, a web crawler and an email server. In this paper, a detailed analysis of the honeypot and dark net data is provided. As given in [17], there were 50,033,015 normal sessions, 43,043,225 attack sessions and 425,719 sessions related to unknown attacks
observed in the traffic data collected to and from the network honeypots. For the experiment, the dataset was downloaded from [6]. 4.2 Technology Used for Experimental Purpose As stated earlier, big data technology is used for the experiment. There are two types of analytical big data technologies. One of them performs processing in batch mode and uses the concept of Google MapReduce programming [6]. MapReduce is a cluster computing framework used for the analysis of large amounts of data. In [16], an association rule mining algorithm using the MapReduce technology of Hadoop has been proposed. But the main problem with this framework is that it does not support data reuse across multiple computations. Several machine learning and data mining algorithms require iterative and interactive processing. For this purpose, Apache Spark was developed. It can perform faster parallel computation by using in-memory primitives, and it has been designed primarily for iterative machine learning algorithms and interactive data mining algorithms. The Resilient Distributed Dataset (RDD) is the fundamental data structure of Spark [7]. It is fault tolerant and can provide in-memory storage, which is essential for interactive distributed computing. Apache Spark provides four major libraries for machine learning and data mining: Spark SQL, Spark Streaming, MLlib and GraphX. For the experiment, the machine learning library (MLlib) component of Spark is used. Among the data types and data mining algorithms supported, the spark.ml package provides a parallel implementation of the FP-growth algorithm, which is a popular association rule mining algorithm after Apriori. 4.3 Experimental Approach In order to increase the processing speed, the parallel implementation of the FP-growth association rule mining algorithm is performed using Apache Spark [7]. At first, the training dataset is loaded into the Hadoop Distributed File System (HDFS). In the next step, the itemsets are generated according to the specified minimum support count. Then the rules are generated from the itemsets that satisfy the minimum confidence value. The main steps performed are given below.
Input: Kyoto training dataset; minimum support count minsupp; number of partitions Numpartitions; minimum confidence Minconfidence
Output: Attack type
Procedure:
Step 1: Load the dataset into HDFS.
Step 2: Trim the dataset using Map(); distribute the dataset among the Numpartitions partitions.
Step 3: Generate the frequent itemsets that satisfy minsupp in each partition.
Step 4: Generate the association rules that satisfy Minconfidence in each partition.
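As a rough illustration of these steps, a minimal PySpark sketch is given below. The HDFS path, the comma-separated session layout, and the use of the DataFrame-based spark.ml FP-growth API (which requires a newer Spark release than the Spark 1.6 reported in the experiments) are assumptions made purely for illustration; the thresholds follow the values stated in the next paragraph (minimum support 0.1, minimum confidence 0.9, 4 partitions).

from pyspark.sql import SparkSession
from pyspark.sql.functions import split, array_distinct, col
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("KyotoFPGrowth").getOrCreate()

# Steps 1-2: load the trimmed Kyoto training data from HDFS (path is a placeholder)
# and turn each session line into a set of items; duplicates are dropped because
# FP-growth expects distinct items per transaction.
raw = spark.read.text("hdfs:///user/ids/kyoto_train.txt")
sessions = raw.select(array_distinct(split(col("value"), ",")).alias("items")).repartition(4)

# Steps 3-4: mine frequent itemsets and association (attack) rules in parallel.
fp = FPGrowth(itemsCol="items", minSupport=0.1, minConfidence=0.9, numPartitions=4)
model = fp.fit(sessions)

model.freqItemsets.show(10, truncate=False)      # itemsets meeting the minimum support
model.associationRules.show(10, truncate=False)  # rules meeting the minimum confidence

spark.stop()

In the RDD-based MLlib API that ships with Spark 1.6, the corresponding call would be pyspark.mllib.fpm.FPGrowth.train(transactions, minSupport=0.1, numPartitions=4), which exposes the frequent itemsets only.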
For the experiment, minsupp is set to 0.1, Minconfidence is set to 0.9, and the number of partitions (Numpartitions) is 4.

4.4 Experimental Result

For the sequential implementation of the proposed approach, the datasets are stored and manipulated using MySQL, with Java (OpenJDK 1.8.0) used for the front end; the operating system is Ubuntu 16.04. For the parallel algorithm, one driver node and four worker nodes on the Apache Spark platform, each with an Intel Core i3 2.00 GHz CPU (x4) and 8 GB RAM, are used. The operating system is Ubuntu 16.04 LTS with Hadoop version 2.6.0, Spark version 1.6.0 and OpenJDK 1.8.0. The execution times for the parallel and the sequential algorithms are shown in Table 1.

Table 1. Time required for execution of sequential and parallel algorithm

Size of dataset (MB) | Sequential execution time (s) | Parallel execution time using Spark (s)
23 | 0.21 | 0.21
55 | 0.34 | 0.32
76 | 0.4 | 0.39
110 | 0.4 | 0.42
128 | 0.51 | 0.51
156 | 0.8 | 0.65
200 | 0.9 | 0.65
250 | 0.95 | 0.62
300 | 1 | 0.6
350 | 1.2 | 0.5
400 | 1.5 | 0.5
450 | 1.5 | 0.49
500 | 2 | 0.49
550 | 2.1 | 0.42
650 | 3 | 0.39
708 | 3.5 | 0.33
5 Analysis of Result and Discussion

Basically, there are three categories of measures for evaluating the performance of any algorithm: measurement, analytical modelling and simulation [14, 15]. For analytical modelling, a mathematical model of the system is constructed; sometimes analytical modelling is combined with a simulation model of the system. For a parallel algorithm implemented with Apache Spark, not only the size of the dataset but also the number of nodes is an important factor. A performance metric is a measure of a system's performance. It focuses on measuring a certain aspect of the system and allows comparison of various types of systems [8]. The performance of the parallel algorithm has been evaluated on the basis of three metrics: speedup [9], size up [9], and scale up [9]. The speedup is defined as the ratio of the runtime of the best sequential algorithm for solving a problem to the time taken by the parallel algorithm to solve the same problem on m processors:

Speedup = T1 / Tm,    (1)

where T1 is the processing time required by a single processor and Tm is the processing time of the parallel algorithm with m processors. The size up metric is defined as

Sizeup = T1n / T1.    (2)

In a size up measurement, the problem size is scaled so as to keep the parallel execution time constant as the number of processors increases. Since more than one process is used in parallel processing, in general more work will be done; size up indicates the ratio of the work increase. Scale up, or scalability, is a measure of a parallel algorithm's capacity to increase speedup in proportion to the number of processors. The scale up factor is defined as the capability of an n times larger system to perform larger jobs in the same run time as a single system. Scale up can be expressed as

Scaleup(data, n) = T1 / Tnn,    (3)

where T1 is the time required for processing the data on a single core machine and Tnn is the time required for processing n times the data on a machine with n cores. To evaluate the speedup, scale up and size up factors, the experimental analysis is performed on 1 to 4 cores, with dataset sizes ranging from 128 MB to 708 MB. The results are depicted in Figs. 1, 2 and 3 respectively.
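As a quick worked illustration of Eq. (1), the short Python snippet below computes the speedup T1 / Tm for a few dataset sizes, taking the sequential times from Table 1 as T1 and the Spark times on the four-worker cluster as Tm:

# Sequential and Spark execution times in seconds, taken from Table 1
timings = {128: (0.51, 0.51), 300: (1.0, 0.6), 500: (2.0, 0.49), 708: (3.5, 0.33)}

for size_mb, (t_seq, t_par) in sorted(timings.items()):
    speedup = t_seq / t_par          # Eq. (1): T1 / Tm
    print(f"{size_mb} MB: speedup = {speedup:.2f}")

The ratios grow from about 1 at 128 MB to over 10 at 708 MB, which is the trend plotted in Fig. 1.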
Fig. 1. Speed up factor (speedup plotted against the number of cores, 1 to 4)

Fig. 2. Scale up factor

Fig. 3. Size up factor (size up plotted against the size of the dataset, 128 MB to 708 MB)
6 Conclusion With the rapid growth of the Internet, network security has become a very vital issue. Data mining techniques, particularly the association rule mining technique, can be used for detecting new attacks on the network based on information about previous attacks. However, a sequential implementation of the association rule mining algorithm is time consuming. In order to increase the processing speed, a parallel implementation of the algorithm has been proposed using Apache Spark. Spark is an open-source processing engine that combines batch, streaming and interactive analytics on all the data in one platform via in-memory capabilities. For experimental purposes, the Kyoto dataset is used. It has been shown that the parallel implementation of the association rule mining algorithm is less time consuming than the sequential one. The results are shown graphically.
References
1. NIST-Guide to Intrusion Detection and Prevention Systems, February 2007. http://csrc.nist.gov/publications/nistpubs/800-94/SP800-94.pdf. Accessed 05 June 2010
2. Tsai, F.S., Chan, C.K. (eds.): Cyber Security. Pearson Education, Singapore (2006)
3. Nalavade, K., Meshram, B.B.: Mining association rules to evade network intrusion in network audit data. Int. J. Adv. Comput. Res. 4(2), issue 15 (2014). ISSN (print): 2249-7277, ISSN (online): 2277-7970
4. Tsai, F.S.: Network intrusion detection using association rules. Int. J. Recent Trends Eng. 2(2), 202 (2009)
5. Wikipedia. https://en.wikipedia.org/w/index.php?title=Intrusion_detection_system&oldid=839714524
6. Traffic Data from Kyoto University's Honeypots. www.takakura.com/Kyoto_data/
7. Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauly, M., Franklin, M.J., Shenker, S., Stoica, I.: Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. University of California, Berkeley (2012)
8. Alexandrov, V.: Parallel Scalable Algorithms-Performance Parameters. www.bsc.es
9. Sun, X.-H., Gustafson, J.L.: Toward a better parallel performance metric. Parallel Comput. 17, 1093–1109 (1991)
10. Han, J., Kamber, M.: Data Mining Concepts and Techniques, 3rd edn. Morgan Kaufmann (2006)
11. Agarwal, S., Agarwal, J.: Survey on anomaly detection using data mining techniques. Procedia Comput. Sci. 60, 708–713 (2015)
12. Wang, T., Guo, F.: Associating IDS alerts by an improved apriori algorithm. In: Third International Symposium on Intelligent Information Technology and Security Informatics, pp. 478–482. IEEE (2010). 978-0-7695-4020-7/10
13. Ma, Y.: The intrusion detection system based on fuzzy association rules mining. In: IEEE Conferences (2010)
14. Jain, R.: The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modelling. Wiley, New York (1991)
15. Brewer, E.: Aspects of a high-performance parallel-architecture simulator. Master's thesis, Massachusetts Institute of Technology (1991)
16. Alexey, B.: Performance Evaluation in Parallel Systems
17. Song, J., Takakura, H., Okabe, Y.: Cooperation of intelligent honeypots to detect unknown malicious codes. In: WOMBAT Workshop on Information Security Threat Data Exchange (WISTDE 2008). The IEEE CS Press, Amsterdam, 21–22 April 2008
18. SIGKDD - KDD Cup. KDD Cup 1999: Computer network intrusion detection. [Internet]. www.kdd.org. Accessed 13 Feb 2018
19. Su, M.-Y., Chang, K.-C., Wei, H.-F., Lin, C.-Y.: A real-time network intrusion detection system based on incremental mining approach, pp. 179–184. IEEE (2008). 1-4244-2415-3/08
Image Retrieval Using Bat Optimization and Image Entropy Shashwati Mishra(B) and Mrutyunjaya Panda Department of Computer Science and Applications, Utkal University, Vani Vihar, Bhubaneswar, Odisha, India [email protected]
Abstract. Technological advancements have increased the size of image databases and their applications in different fields. This has also increased the importance of image processing and its subfields like image classification, image segmentation, image retrieval, image enhancement, image compression, image restoration, etc., and is the reason behind the growing number of research works on image retrieval and the development of new retrieval techniques. Text-based retrieval techniques were gradually replaced by content based image retrieval techniques, which are primarily based on retrieving images using primitive features like colour, shape, texture, etc. The proposed method considers image entropy and the Bat algorithm for image retrieval. Entropy helps to find the degree of randomness in the images, which can be analysed to obtain their similarity. The Bat algorithm is applied to obtain optimal values based on which the most similar images are retrieved. For the experimental analysis of the proposed technique, both medical and non-medical images are considered. The results obtained prove the effectiveness of the proposed approach in the retrieval of different categories of images. Keywords: Bat algorithm · Entropy · Euclidean distance · Content based image retrieval · Text based retrieval
1 Introduction Image retrieval is a popular research area concerned with extracting the desired images for a query from a huge collection of unordered image databases. The increased use of digital images has also increased the demand for image retrieval in several fields like medical science, remote sensing image analysis, geographical image analysis, crime investigation, human identification and many more. Concepts from pattern analysis, artificial intelligence, machine learning and image processing are used to retrieve the images from the database according to the input query. In text based image retrieval, the annotation process takes a lot of time, which makes retrieval infeasible for large databases. The way images should be annotated depends on the topic of search. The search text or the input query varies from person to person and depends on the interpretation of an individual to a particular image
or object in the image. This process of annotating an image with a description, keyword or caption is also expensive and requires more human effort. To solve these issues, automatic annotation techniques were developed. Initially, grayscale images were used for Content Based Image Retrieval (CBIR). With the increased use of colour images, techniques were developed to retrieve images using colour information [1]. As the name explains, CBIR techniques extract images based on content similarity instead of searching for a matching annotation. Searching for an image based on information extracted from the query image increases the accuracy of retrieval. The search process can be refined using relevance feedback techniques. Feature extraction and feature matching are the two important steps of CBIR techniques. The extracted features should be invariant to translation, scaling, rotation, illumination changes, etc. A good feature extraction technique should select distinct features in such a way that inter-class images show a higher degree of dissimilarity than intra-class images [1]. The global approach and the local approach are the two methods used for feature extraction. The global approach considers the whole image for feature extraction, whereas the local approach tries to extract information from local image regions [1]. To extract local features, images are partitioned into small regions and features are extracted from each region separately. Colour, shape, texture and spatial layout are the low level features that are used for image retrieval in the CBIR approach [2]. The semantic gap between high level image concepts and low level image features is the main problem of CBIR techniques [2, 3]. Section 2 describes the various works done on image retrieval in detail. Section 3 explains the proposed methodology and discusses the entropy concept as well as the Bat algorithm used in this method of image retrieval; some of the uses of entropy and the Bat algorithm in various fields are also given in Sect. 3. Section 4 contains the experimental analysis, followed by the conclusion in Sect. 5.
2 Literature Review M. Yousuf et al. used visual words fusion of SIFT (Scale Invariant Feature Transform) and LIOP (Local Intensity Order Pattern) descriptors for Content Based Image Retrieval [4]. A retraining method was also proposed for obtaining better convolutional representations to retrieve images based on the image contents [5]. A. Nazir et al. [6] developed a technique of CBIR using colour and texture information. Colour histogram is used for finding colour information whereas DWT (Discrete Wavelet Transform) and edge histogram descriptor are used to obtain texture information. The weighted average of triangular histograms of visual words was calculated for getting a better representation of an image which will help in image retrieval and image annotation [7]. Multi-objective whale optimization algorithm was also used for CBIR [8]. Colour descriptor and discrete wavelet transform based approach were also suggested for image retrieval based on image content [9]. Texture information and Self-Organising Map based technique was also proposed for retrieving medical images [10]. Researchers have used different types of entropy functions like fuzzy entropy, Renyi entropy, Tsallis entropy, Kapur’s entropy etc. in their research. P. Roy and S. Adhikari
developed a fuzzy clustering technique using the concept of entropy. The technique was applied to gray-level document images and a binarization method was proposed for separating foreground or texts from the background. Fuzzy logic based decision system is used to obtain a more realistic separation [11]. S. Pare et al. [12] applied the Bat algorithm and Renyi entropy to obtain optimal threshold values for segmenting coloured satellite images. Fuzzy entropy and the Bat algorithm based method were also proposed for microscopic image thresholding [13]. V. Rajinikanth et al. [14] applied Tsallis entropy based on the Cuckoo search algorithm for multilevel thresholding of brain MRI images. Leukocyte images were also segmented using Shannon entropy [15]. A Firefly algorithm assisted approach of tumor extraction from brain MRI images was also proposed using fuzzy entropy concept and DRLS (Distance Regularized Level Set) [16]. T. Jayabarathi et al. performed a detailed analysis of the Bat algorithm, its variations and use in the field of engineering [17]. Bat algorithm is also used for numerical optimization [18], copper price estimation [19] and to solve various optimization problems [20, 21]. Bat algorithm was also applied on colour images for segmentation using various types of entropy functions [22]. S. C. Satapathy et al. [23] applied chaotic Bat algorithm and Otsu method for multilevel image thresholding. Chaotic Bat algorithm is also used to forecast the complex motion of floating platforms [24]. The concept of the Bat algorithm was applied for motion planning of non-holonomic wheeled robots [25], automatic clustering of grayscale images [26]. D. Gupta et al. [27] used binary Bat algorithm to classify white blood cells.
3 Proposed Methodology A graphical representation of the steps involved in the proposed method of image retrieval is given in Fig. 1. Initially, the entropy of each image in the database and of the query image is calculated. The differences in the degree of randomness between the query image and the images in the database are then computed; to obtain these variations in information, the Euclidean distance formula is used. On these differences, the Bat algorithm is applied to obtain the optimal value which can be used to retrieve the most similar images. The images for which the variation from the query image lies below this optimal value are considered similar to the query image. The outputs are displayed by rearranging the images in ascending order of their differences.

Fig. 1. Proposed methodology

3.1 Entropy In an image, entropy can be used to measure the information content, which is helpful for the analysis and comparison of different images. Entropy is a measure of randomness or degree of uncertainty in a system, and this randomness in image features can be used for different image processing operations like image segmentation and image retrieval. Entropy is a statistical measurement that helps to characterize image texture. Entropy can be defined as

E = −Σ p (log2 p),    (1)

where p denotes the probability of each intensity value (the normalized image histogram in the usual formulation).
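A minimal NumPy sketch of the entropy and entropy-difference computation is given below. It assumes a single global entropy value per grayscale image computed from the grey-level histogram; the paper does not state whether the entropy is computed globally or per region, so this is an illustrative reading rather than the authors' exact implementation.

import numpy as np

def image_entropy(img, levels=256):
    # Shannon entropy of a grayscale image, Eq. (1): E = -sum(p * log2(p))
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 is taken as 0)
    return -np.sum(p * np.log2(p))

def entropy_differences(query_img, database_imgs):
    # Euclidean distance between the query entropy and each database image entropy;
    # for scalar entropies this reduces to the absolute difference.
    e_query = image_entropy(query_img)
    return np.array([abs(image_entropy(im) - e_query) for im in database_imgs])

Images whose difference falls below the threshold found by the Bat optimization step (Sect. 3.2) would then be ranked in ascending order of this difference.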
3.2 Bat Algorithm Bat algorithm is a popular nature inspired algorithm developed by X. S. Yang in 2010 and inspired by the echolocation behaviour of bats [28]. It is a widely used metaheuristic for solving various optimization problems. The way a bat moves in darkness and differentiates between different types of insects is an interesting phenomenon [29]. To avoid obstacles during movement and to detect prey, a bat uses a type of sonar known as echolocation. A bat emits a sound pulse whose characteristics depend on the hunting technique and can also vary from species to species. After emitting the signal, the bat observes its echo, which is bounced back from the objects in its way [29]. Yang assumed that a bat moves from its position x_i with velocity v_i and frequency f_min. λ is the wavelength and A_0 is the loudness parameter used for searching the prey or food of the bat. The pulse emission rate r and the wavelength (or frequency) are adjusted according to the distance and location of the target. The rate of pulse emission lies between 0 and 1, both inclusive, where 0 symbolizes no emission and 1 means maximum emission. The frequency, velocity and position of a bat can be calculated using Eq. (2) to Eq. (4):

f_i = f_min + (f_max − f_min) β   (2)
v_i^t = v_i^(t−1) + (x_i^t − x_*) f_i   (3)
x_i^t = x_i^(t−1) + v_i^t   (4)

where β ∈ [0, 1] is a random vector drawn from a uniform distribution, x_* is the current global best solution or position, x_i^t is the new position at time t, v_i^t is the new velocity at time t, and f_i is the frequency. f_min and f_max represent the minimum frequency and maximum frequency respectively [29]. Since the product λ_i f_i is the velocity increment, either one can be used for adjusting the change in velocity while keeping the other unchanged. This decision depends on the nature of the problem.
The initial frequency of a bat is chosen randomly between f_min and f_max. For local search, after selecting the current best solution, a new solution is generated for each bat locally as:

x_new = x_old + ε A^t   (5)

where ε ∈ [−1, 1] is a random number and A^t is the average loudness of all the bats at time step t [29]. Loudness and pulse emission rate are updated in each iteration while searching for the prey. A bat stops its emission when it finds its prey, at which point the loudness becomes zero. When a bat moves towards its prey, the loudness gradually decreases whereas the pulse emission rate increases. So,

A_i^(t+1) = α A_i^t and r_i^(t+1) = r_i^0 [1 − exp(−γ t)]

where α and γ are constants. For 0 < α < 1 and γ > 0, A_i^t → 0 and r_i^t → r_i^0 when t → ∞ [29]. Using the ideas from the movement of bats, this algorithm tries to find an optimal solution and has a wide variety of applications.
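The following is a hedged Python sketch of the core update rules in Eqs. (2)–(5); the population size, parameter defaults and the generic objective function are illustrative assumptions rather than the settings used in this paper.

    import numpy as np

    def bat_algorithm(objective, dim, n_bats=20, n_iter=100,
                      f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
        """Minimal Bat algorithm sketch following Eqs. (2)-(5); assumed defaults."""
        rng = np.random.default_rng(0)
        x = rng.random((n_bats, dim))          # positions
        v = np.zeros((n_bats, dim))            # velocities
        A = np.ones(n_bats)                    # loudness A_i
        r0 = rng.random(n_bats)                # initial pulse rates r_i^0
        r = r0.copy()
        fitness = np.array([objective(xi) for xi in x])
        best = x[fitness.argmin()].copy()

        for t in range(1, n_iter + 1):
            for i in range(n_bats):
                beta = rng.random(dim)
                f_i = f_min + (f_max - f_min) * beta          # Eq. (2)
                v[i] = v[i] + (x[i] - best) * f_i             # Eq. (3)
                x_new = x[i] + v[i]                           # Eq. (4)
                if rng.random() > r[i]:                       # local search near the best
                    x_new = best + 0.01 * rng.uniform(-1, 1, dim) * A.mean()  # Eq. (5)
                f_new = objective(x_new)
                if f_new <= fitness[i] and rng.random() < A[i]:
                    x[i], fitness[i] = x_new, f_new
                    A[i] *= alpha                             # loudness decreases
                    r[i] = r0[i] * (1 - np.exp(-gamma * t))   # pulse rate increases
                if f_new <= fitness.min():
                    best = x_new.copy()
        return best, objective(best)

    # Example usage on a toy objective (sphere function), purely illustrative:
    # best_x, best_val = bat_algorithm(lambda z: np.sum(z**2), dim=5)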
4 Experimental Observations and Discussions For experimental analysis, a medical image set containing a specific category of images having different shapes and sizes (brain medical images) is considered first.

Table 1. Query image, contour plot of the image, histogram plot of the image, CPU time measured in seconds, optimal value obtained using the Bat algorithm

Query image (contour plot and histogram plot shown in the original table) | CPU time (in seconds) | Optimal value
Query 1 | 32.7914 | 1.4362
Query 2 | 33.6650 | 1.5484
Query 3 | 11.2477 | 1.9469
Query 4 | 10.7173 | 3.1949
Fig. 2. Query image and corresponding retrieved images
Table 2. Accuracy calculation

Experiment no. | Total no. of relevant images retrieved | Total number of images retrieved | Accuracy in percentage
1 | 5 | 6 | 83.33
2 | 9 | 9 | 100
3 | 8 | 8 | 100
4 | 7 | 8 | 87.5
Average accuracy in percentage: 92.7075
Then the technique is applied to medical images of different types (consisting of brain, lung, oral cavity, stomach and thyroid cancer images) to extract images of a specific category as per the input query (here brain cancer images). These images are collected from the Cancer Imaging Archive and publicly available image repositories. Besides this, a set of images from the Caltech 101 image set, namely airplane, butterfly, cup and face, is taken into consideration for image retrieval. Retrieval results obtained for butterfly and airplane images are shown in Fig. 2. The proposed methodology is applied to these medical as well as nonmedical images for verifying its efficiency and effectiveness. Table 1 shows the query image, the contour plot and the histogram plot of the image. Besides this, the table also contains the execution time of the program, which is measured in seconds and varies according to the size of the image database. The optimal value obtained for each category of
images using the Bat algorithm is also given in the table. This optimal value is used for retrieving the most similar images. Based on this retrieval result, accuracy is calculated for each query image as shown in Table 2. The query image and the corresponding images retrieved from the collection of images are given in Fig. 2. The figure contains the retrieval results of both medical and nonmedical images. In the case of the first brain image, the image database contains images of the brain, lungs, thyroid, stomach and oral cavity, and the intention is to retrieve only the brain images. In the second case, only brain images are considered and the brain images of a specific shape are retrieved. In both cases, only one retrieved image was found to be different from the query image. To verify the effectiveness of this method, nonmedical images are also taken into consideration. An image database was created by collecting images from the Caltech 101 image dataset. Butterfly and airplane images were retrieved from this image database using the proposed technique. In both cases, the retrieved results are found to be very satisfactory.
5 Conclusion A Bat based image retrieval technique using the concept of entropy is proposed in this paper. The Bat algorithm is simple and easy to implement. It gives an optimal solution in less time, which can also be observed from the experimental results, so it can be applied to large databases and complex problems as well. The algorithm can converge quickly by switching from the exploration to the exploitation stage. Automatic control and auto zooming to the region of interest can be achieved by controlling the loudness and pulse emission rate values. Since entropy is a measure of the information content and the degree of randomness in an image, it can be used to analyse similarity among images. This idea is used in the proposed technique of image retrieval. It has been observed that this technique works well for different types of images and the accuracy of retrieval varies between 80% and 100% with an average accuracy of 92.7075%. Observing the robustness of this approach, it can be concluded that the entropy and Bat based image retrieval is an effective and efficient method of retrieval. The technique can also be applied to digit images and to document retrieval to observe its efficiency in processing natural languages.
References 1. Singh, C., Walia, E., Kaur, K.P.: Color texture description with novel local binary patterns for effective image retrieval. Pattern Recogn. 76, 50–68 (2018) 2. Liu, G.H., Yang, J.Y.: Content-based image retrieval using color difference histogram. Pattern Recogn. 46(1), 188–198 (2013) 3. Wang, X.Y., Li, Y.W., Yang, H.Y., Chen, J.W.: An image retrieval scheme with relevance feedback using feature reconstruction and SVM reclassification. Neurocomputing 127, 214– 230 (2014) 4. Yousuf, M., Mehmood, Z., Habib, H.A., Mahmood, T., Saba, T., Rehman, A., Rashid, M.: A novel technique based on visual words fusion analysis of sparse features for effective content-based image retrieval. Math. Probl. Eng. (2018) 5. Tzelepi, M., Tefas, A.: Deep convolutional learning for content based image retrieval. Neurocomputing 275, 2467–2478 (2018)
6. Nazir, A., Ashraf, R., Hamdani, T., Ali, N.: Content based image retrieval system by using HSV color histogram, discrete wavelet transform and edge histogram descriptor. In: 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–6. IEEE, March 2018 7. Mehmood, Z., Mahmood, T., Javid, M.A.: Content-based image retrieval and semantic automatic image annotation based on the weighted average of triangular histograms using support vector machine. Appl. Intell. 48(1), 166–181 (2017) 8. Aziz, M.A.E., Ewees, A.A., Hassanien, A.E.: Multi-objective whale optimization algorithm for content-based image retrieval. Multimed. Tools Appl. 77(19), 26135–26172 (2018) 9. Ashraf, R., Ahmed, M., Jabbar, S., Khalid, S., Ahmad, A., Din, S., Jeon, G.: Content based image retrieval by using color descriptor and discrete wavelet transform. J. Med. Syst. 42(3), 44 (2018) 10. Mishra, S., Panda, M.: Medical image retrieval using self-organising map on texture features. Future Comput. Inform. J. 3(2), 359–370 (2018) 11. Roy, P., Adhikari, S.: An entropy-based binarization method to separate foreground from background in document image processing. IUP J. Telecommun. 10(2), 34–47 (2018) 12. Pare, S., Bhandari, A.K., Kumar, A., Singh, G.K.: Rényi’s entropy and Bat algorithm based color image multilevel thresholding. In: Machine Intelligence and Signal Analysis, pp. 71–84. Springer, Singapore (2019) 13. Roy, H., Dhar, S., Choudhury, P., Biswas, A., Chatterjee, A.: Microscopic image thresholding using restricted equivalence function based fuzzy entropy minimization and Bat Algorithm. In: 2018 2nd International Conference on Electronics, Materials Engineering & Nano-Technology (IEMENTech), pp. 1–6. IEEE, May 2018 14. Rajinikanth, V., Fernandes, S. L., Bhushan, B., Sunder, N.R.: Segmentation and analysis of brain tumor using Tsallis entropy and regularised level set. In: Proceedings of 2nd International Conference on Micro-Electronics, Electromagnetics and Telecommunications, pp. 313–321. Springer, Singapore (2018) 15. Raja, N.S.M., Arunmozhi, S., Lin, H., Dey, N., Rajinikanth, V.: A study on segmentation of leukocyte image with Shannon’s entropy. In: Histopathological Image Analysis in Medical Decision Making, pp. 1–27. IGI Global (2019) 16. Roopini, I.T., Vasanthi, M., Rajinikanth, V., Rekha, M., Sangeetha, M.: Segmentation of tumor from brain MRI using fuzzy entropy and distance regularised level set. In: Computational Signal Processing and Analysis, pp. 297–304. Springer, Singapore (2018) 17. Jayabarathi, T., Raghunathan, T., Gandomi, A.H.: The Bat algorithm, variants and some practical engineering applications: a review. In: Nature-Inspired Algorithms and Applied Optimization, pp. 313–330. Springer, Cham (2018) 18. Cai, X., Wang, H., Cui, Z., Cai, J., Xue, Yu., Wang, L.: Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybernet. 9(2), 199–215 (2017) 19. Dehghani, H., Bogdanovic, D.: Copper price estimation using Bat algorithm. Resour. Policy 55, 55–61 (2018) 20. Yuvaraj, T., Devabalaji, K.R., Ravi, K.: Optimal allocation of DG in the radial distribution network using bat optimization algorithm. In: Garg, A., Bhoi, A.K., Sanjeevikumar, P., Kamani, K.K. (eds.) Advances in Power Systems and Energy Management. LNEE, vol. 436, pp. 563–569. Springer, Singapore (2018) 21. Bekda¸s, G., Nigdeli, S.M., Yang, X.S.: A novel Bat algorithm based optimum tuning of mass dampers for improving the seismic safety of structures. Eng. Struct. 
159, 89–98 (2018) 22. Mishra, S., Panda, M.: Bat algorithm for multilevel colour image segmentation using entropybased thresholding. Arab. J. Sci. Eng. 43(12), 7285–7314 (2018) 23. Satapathy, S.C., Sri Madhava Raja, N., Rajinikanth, V., Ashour, A.S., Dey, N.: Multi-level image thresholding using Otsu and chaotic Bat algorithm. Neural Comput. Appl. 29(12), 1285–1307 (2016)
24. Hong, W.C., Li, M.W., Geng, J., Zhang, Y.: Novel chaotic Bat algorithm for forecasting complex motion of floating platforms. Appl. Math. Model. 72, 425–443 (2019) 25. Roy, A.G., Rakshit, P.: Motion planning of non-holonomic wheeled robots using modified Bat algorithm. In: Nature-Inspired Algorithms for Big Data Frameworks, pp. 94–123. IGI Global (2019) 26. Dey, A., Bhattacharyya, S., Dey, S., Platos, J., Snasel, V.: Quantum-inspired bat optimization algorithm for automatic clustering of grayscale images. In: Recent Trends in Signal and Image Processing, pp. 89–101. Springer, Singapore (2019) 27. Gupta, D., Arora, J., Agrawal, U., Khanna, A., de Albuquerque, V.H.C.: Optimized Binary Bat algorithm for classification of white blood cells. Measurement 143, 180–190 (2019) 28. Yang, X.S.: Bat algorithm: literature review and applications. Int. J. Bio-Inspired Computation 5(3), 141–149 (2013). (arXiv preprint arXiv:1308.3900) 29. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010), pp. 65–74. Springer, Heidelberg (2010)
Logistic Regression on Hadoop Using PySpark Krishna Kumar Mahto and C. Ranichandra(B) Vellore Institute of Technology, Vellore, Tamil Nadu, India [email protected]
Abstract. Training a Machine Learning (ML) model on bigger datasets is a difficult task to accomplish, especially when a high-end configuration is not accessible. A relatively good configuration may also not always produce quick outcomes and, depending on the dataset size, the time taken could be anything between seconds and several hours. More often, the tasks we are interested in involve big datasets and complex models. The purpose of our work was to see how effective Hadoop can be in terms of increasing the efficiency of working with Machine Learning for a given problem. Out of many models to choose from, Logistic Regression was chosen, which is relatively simple to implement. Three Logistic Regression models were implemented and trained on the MNIST Handwritten Digits dataset. The first one was implemented in Python using NumPy without any ML libraries. The second implementation used the LogisticRegression class that comes with the Scikit-learn Python package, and the third implementation was done using PySpark MLlib. Towards the end of the paper, we present the observations and results obtained from the execution of each. Keywords: Hadoop Distributed File System (HDFS) · PySpark · Big data · Machine Learning (ML)
1 Introduction The Hadoop Distributed File System (HDFS) was developed in order to enable storage of large amounts of data on a cluster of commodity computer machines. Along with this, Hadoop also comes with a distributed computing paradigm called MapReduce. This allows moving away from traditional single node processing to a parallelized environment, allowing large amounts of data to be analysed faster than before [1]. Resilient Distributed Datasets (RDDs) [2] provide Spark with abilities that MapReduce is deficient in. RDDs are immutable data structures fundamental to the computation power provided by Spark. Fault-tolerance is the self-recovery property of Spark RDDs. As the RDDs are immutable, the original RDDs are never changed, and each operation returns a new RDD. All transformations on RDDs are made into a lineage of operations and stored in the form of Directed Acyclic Graphs (DAGs). When a failure occurs, Spark refers to the DAG in order to perform the task again. The next important property of RDDs is Lazy Evaluation. Spark follows a declarative paradigm. In Spark, all transformations to be performed are declared and stored to form a pipeline. DAGs store all the
transformations, which are executed only when an action is called. This is called Lazy Evaluation [3]. Thirdly, frequently used data are cached for increased access speed. Also, all the computations by Spark are done in the main memory, unlike MapReduce which does disk-oriented processing. Last but not least, Spark does partitioning of RDDs. Each partition is then assigned to a task. The amount of time that a system takes to finish training a Machine Learning model for a real-world problem is much higher than what a typical (non-Machine Learning) task takes, e.g., opening a word document. Precise modelling of real-world problems requires a large number of observations, and each observation often has many attributes. Machine Learning is no more restricted to simple prediction tasks, and attempts are being made to model solutions to as many problems as possible. This has led to modelling more complex functions [4] that fit the observations and, at the same time, generalise well so as to be consistent with unseen examples. This demands more data and, consequently, more computational time. Neural Networks, modelled over the human neuron, are apt examples of this trend. Conceptually, a neural network contains several neurons grouped into layers. Neurons of one layer are connected to the neurons of the next layer, and each connection has a weight associated with it. Modern neural networks can be very deep, having a large number of trainable weights. More training time is required for learning more weights, so neural network training complexity is often quite high [5]. Hadoop and Spark were introduced to address similar requirements: to store and operate on large amounts of data. Logistic regression aims at learning a separating hyperplane (also called Decision Surface or Decision Boundary) between data points of the two classes in a binary classification setting. It is much simpler and less costly to train a Logistic Regression model as compared to a neural network [6]. So, Logistic Regression was selected for this study. Huge amounts of data and a large number of features make it practically impossible to achieve something substantial with a regular PC environment. A distributed storage and computational environment is required, and Hadoop is very widely used for this. As already mentioned, Spark has been used as the computation engine for this project and HDFS for storage. Python was the choice of language, and the PySpark API was used for Spark. The MNIST dataset has 42,000 training examples and each example has 784 features. The implementation of Logistic Regression involves the implementation of a cost function and a function that optimizes the cost function.
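As a small, hedged illustration of the lazy evaluation and caching described above (not code from this paper), the following PySpark snippet declares transformations that are only executed when an action is called; the HDFS file path is a placeholder assumption.

    from pyspark import SparkContext

    sc = SparkContext(appName="lazy-evaluation-demo")

    # Transformations: only recorded in the DAG lineage, nothing is computed yet
    lines = sc.textFile("hdfs:///data/sample.csv")       # placeholder path
    numbers = lines.map(lambda row: float(row.split(",")[0]))
    positives = numbers.filter(lambda x: x > 0).cache()  # cache a frequently used RDD

    # Actions: trigger execution of the whole lineage
    total = positives.sum()
    count = positives.count()   # served from cache, lineage not recomputed from disk
    print(total, count)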
2 Logistic Regression Logistic regression is a classification algorithm used for binary classification [7]. However, extensions to Logistic Regression that can perform multiclass classification have also been introduced, such as the Maximum Entropy model and One-vs-rest. The Maximum Entropy model uses the same principle as Logistic Regression. Its hypothesis function for a total of C classes is written as shown in Eq. (1). Equation (1) represents the confidence value for a data point x belonging to one of the classes c = 1, 2, …, C [8]:
h_{c,θ}(x) = exp(θ_c^T x) / Σ_{k=1}^{C} exp(θ_k^T x)   (1)

The hypothesis function is the softmax function. Softmax functions can be considered as extensions of the sigmoid function and are often used in the multiclass classification setup [9]. A binary Logistic Regression model hypothesis is formulated as [10]:

h_θ(x) = g(θ^T x), where g(z) = 1 / (1 + e^(−z))   (2)

g is called the sigmoid function. The sigmoid function g(z) is greater than or equal to 0.5 when z ≥ 0. In a binary classification setting, its value is used for classifying each data point into one of the classes. Its value is often interpreted as a confidence score for the classification done.
2.1 Computation of Cost Function and Its Gradient Considering the total number of training examples to be m, the number of features n and the regularization parameter λ, the cost function corresponding to the hypothesis of logistic regression (Eq. 2) is given by Eq. (3) [10]:

J(θ) = −(1/m) Σ_{i=1}^{m} [ y^(i) log h_θ(x^(i)) + (1 − y^(i)) log(1 − h_θ(x^(i))) ] + (λ/2m) Σ_{j=1}^{n} θ_j^2   (3)

Computing the value of this function on a single machine may take some time depending on the size of the dataset. Moreover, it involves logarithm computation, which in itself is a complex task for a computer. The computation of the cost function can be parallelized by splitting the dataset [11] and assigning the splits to different processors in a cluster. Once the parallel processors have computed costs on their respective splits, they can be aggregated to calculate the total cost. The partial derivative of the cost function w.r.t. one of the parameters is:

∂J(θ)/∂θ_j = (1/m) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i)) x_j^(i) + (λ/m) θ_j, for j ≥ 1   (4)
Similar to cost, gradient computation can also be parallelised the same way.
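To make the parallelization idea concrete, here is a hedged PySpark sketch that computes the regularized cost of Eq. (3) by splitting the dataset across partitions; the RDD of (features, label) pairs and the helper names are assumptions made for illustration.

    import numpy as np

    def regularized_cost(data_rdd, theta, lam):
        """Cost of Eq. (3) computed in parallel over RDD partitions (illustrative sketch).
        data_rdd holds (x, y) pairs with x a NumPy feature vector and y in {0, 1}."""
        m = data_rdd.count()

        def partition_cost(rows):
            # Sum the per-example cost terms of Eq. (3) on one data split
            s = 0.0
            for x, y in rows:
                h = 1.0 / (1.0 + np.exp(-np.dot(theta, x)))   # sigmoid hypothesis
                s += y * np.log(h) + (1 - y) * np.log(1 - h)
            yield s

        data_term = data_rdd.mapPartitions(partition_cost).sum()
        reg_term = (lam / (2 * m)) * np.sum(theta[1:] ** 2)    # bias term not regularized
        return -(1.0 / m) * data_term + reg_term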
2.2 Optimization of the Cost Function The function for computing the cost (Sect. 2.1) may or may not be required explicitly, depending on how we choose to optimize the cost function. If we use a sophisticated function from one of the standard libraries, then the cost function might have to be passed to it as a parameter. However, a manual implementation of the gradient descent algorithm can be done as follows [12], since there is a simple formula for the partial derivative term ∂J(Θ)/∂Θ_j that can be easily coded:
1. Start with some randomly initialised values for the Θ vector.
2. Repeat for a given number of steps or until a minimum is reached {
   i. Find temp_j := Θ_j − α ∂J(Θ)/∂Θ_j for each j.
   ii. Update Θ_j := temp_j
}
The gradient descent algorithm makes changes to the parameter values based on the gradient of the cost function at a given point. Depending on the current parameter values, they may either be incremented or decremented so that the parameters gradually approach an optimal set of values. α is a hyperparameter (the learning rate) that controls how much a parameter value is changed in each step. A suitable choice for the value of α is crucial, since if it is too low, it can cause the model to converge very slowly or not at all within a reasonable number of steps, and a similar failure to converge is possible if it is chosen too high [13]. It is crucial to note that in step 2(i), the temp_j values for all j (i.e., for each parameter) are computed prior to the Θ_j update. Through Eq. (3), we can see that gradient descent requires going through each training example before updating the parameter values. This has to be done for every iteration until convergence is reached. Clearly, gradient descent may take quite long before finally learning the optimum parameter values. Stochastic Gradient Descent (SGD) [14] may be used instead of regular gradient descent, but SGD still needs to iterate through the entire batch. It has also been observed that, in general, the smaller the size of the batch, the more iterations are required for convergence.
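Below is a hedged NumPy sketch of the vanilla gradient descent loop described above for regularized logistic regression (Eqs. (3) and (4)); the function names and default hyperparameters are illustrative choices, not the exact code used in the paper.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost_and_gradient(theta, X, y, lam):
        """Regularized cost (Eq. 3) and its gradient (Eq. 4); X is (m, n), y is (m,)."""
        m = X.shape[0]
        h = sigmoid(X @ theta)
        reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
        cost = -(1.0 / m) * np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) + reg
        grad = (1.0 / m) * (X.T @ (h - y))
        grad[1:] += (lam / m) * theta[1:]           # bias term is not regularized
        return cost, grad

    def gradient_descent(X, y, alpha=0.1, lam=1.0, n_steps=500):
        theta = np.zeros(X.shape[1])                # initial values for the theta vector
        for _ in range(n_steps):
            _, grad = cost_and_gradient(theta, X, y, lam)
            theta = theta - alpha * grad            # simultaneous update of all theta_j
        return theta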
3 Implementation All the implementations were done on Linux Ubuntu 16.04 LTS operating system that runs on a 7th generation Intel Core i5 processor clocked at 2.6 GHz. The main memory was 8 GB of RAM. At a high level, a Logistic Regression classification model can be trained and evaluated with the following algorithm:
1. classifier = LogisticRegression()
2. Load dataset
3. classifier.fit(X_train, y_train)
4. # Predicting the test set result
   y_pred = classifier.predict(X_test)
5. # Confusion matrix
   a. from sklearn.metrics import confusion_matrix
   b. cm = confusion_matrix(y_test, y_pred)
6. print(cm)
The classifier.fit() method call is responsible for running an optimization algorithm, updating the parameters and returning a trained model at the end. However, to implement a classifier on Hadoop using PySpark, the dataset has to be loaded onto HDFS first, and then the above algorithm can follow. In such a case, the second step (Load dataset) would involve loading data from HDFS. Using the Python computation library NumPy, the entire logistic regression algorithm, including gradient descent, can be implemented from scratch. Since the dataset is a multi-class dataset, a One-vs-Rest [15] implementation of logistic regression was done. The One-vs-Rest implementation trains one classifier for each class; the classifier with the highest confidence value is then taken for making the final prediction. Scikit-learn is one of the most famous and widely used Machine Learning libraries. Scikit-learn comes with a package of built-in classes that implement different Machine Learning algorithms. Using them, the flow of the actual implementation becomes almost the same as the algorithm mentioned above. Scikit-learn is memory intensive, and very large computations require a large main memory to work with. So, if the main memory is low for a given task, we may end up with a memory error (MemoryError in Python). Spark has a Python API, PySpark. This API allows developers to avail the benefits of Spark through the Python interface. In order to implement a logistic regression model, the MLlib library of PySpark has been used for this project. This is one of the default packages that come with Spark and therefore works well with RDDs.
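A hedged sketch of the HDFS + PySpark MLlib variant is shown below; the HDFS path, the CSV layout (label followed by pixel values) and the train/test split are assumptions made for illustration rather than details taken from the paper.

    from pyspark import SparkContext
    from pyspark.mllib.classification import LogisticRegressionWithLBFGS
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext(appName="mnist-logreg")

    def parse_line(line):
        # Assumed layout: first column is the digit label, the rest are pixel values
        values = [float(v) for v in line.split(",")]
        return LabeledPoint(values[0], values[1:])

    data = sc.textFile("hdfs:///user/data/mnist_train.csv").map(parse_line)  # placeholder path
    train, test = data.randomSplit([0.8, 0.2], seed=42)

    # Multiclass logistic regression provided by MLlib
    model = LogisticRegressionWithLBFGS.train(train, iterations=100, numClasses=10)

    # Accuracy on the held-out split
    pred_and_label = test.map(lambda p: (float(model.predict(p.features)), p.label))
    accuracy = pred_and_label.filter(lambda t: t[0] == t[1]).count() / float(test.count())
    print("Test accuracy:", accuracy)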
4 Results The results were gathered by running each implementation three times and are shown in the tables below. Table 1 shows the data loading time as the dataset is read into the program. The HDFS read time is higher because it has more steps to go through before finally accessing a block of data. The three entities of an HDFS read are the Client, the Namenode and the Datanode. The Namenode provides the metadata to the client, which first has to issue a request. This is followed by reading from the data nodes, which store the actual data required. Table 2 shows the training time comparison between the Spark implementation of Logistic Regression and the non-Spark implementations. The data loaded is used to fit a model that is expected to classify the digits. The significant difference in training times between the different implementations is apparent. While the vanilla implementation (implementation done from scratch using Python's NumPy library) failed to converge each time with a memory error, Scikit-learn took about 30 min each time. However, Spark did it in seconds, which is staggeringly small when compared with the other numbers.
Table 1. Data loading time

Without HDFS | With HDFS
2.95 s | 18.84 s
2.97 s | 18.93 s
4.11 s | 19.11 s

Table 2. Training time

Vanilla implementation | Scikit-learn | HDFS + PySpark
Convergence failed (memory error) | 28 min | 26 s
Convergence failed (memory error) | 31 min | 28 s
Convergence failed (memory error) | 29 min | 31 s
4.1 Comparing Test Performance The vanilla implementation could not finish training (Table 2), so the comparison was made only between the Scikit-learn and HDFS + PySpark implementations of Logistic Regression. A test set of 10000 examples was used for testing. The performance metric that was chosen is the Confusion Matrix. A confusion matrix is a concise representation of the correct and the incorrect classifications done by a model. This matrix can further be referred to for the calculation of other important performance metrics, such as Precision and Recall, which may be relevant for a given problem. A confusion matrix is generally used for binary classification problems. For this project, it was used to get just enough idea about how well each implementation performed; however, other metrics can also be computed from it. Handwritten Digit Classification is a multiclass problem, but it is easy to extend the application of confusion matrices to multiclass classification problems as well, where we can still calculate Precision and Recall for the given model. Figures 1 and 2 show the confusion matrices of the Sklearn implementation and the HDFS+PySpark implementation, respectively:
Fig. 1. Confusion matrix for Sklearn implementation
Metric | Sklearn | HDFS+PySpark
Accuracy | 92.89% | 93.72%
Fig. 2. Confusion matrix for HDFS+PySpark implementation
These results show that, along with a significant advantage in time, the HDFS+PySpark implementation also managed to train a better model than Scikit-learn.
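As a hedged illustration of how the confusion matrix can be turned into the other metrics mentioned above, the following snippet computes per-class precision and recall for a multiclass problem; the labels used here are placeholders, not results from the paper.

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # y_test, y_pred would be the true and predicted digit labels; toy placeholders:
    y_test = np.array([0, 1, 2, 2, 1, 0])
    y_pred = np.array([0, 2, 2, 2, 1, 0])

    cm = confusion_matrix(y_test, y_pred)   # rows: true labels, columns: predictions

    # Per-class precision and recall derived directly from the confusion matrix
    true_pos = np.diag(cm)
    precision = true_pos / cm.sum(axis=0)   # column sums: predicted counts per class
    recall = true_pos / cm.sum(axis=1)      # row sums: actual counts per class
    print("Precision per class:", precision)
    print("Recall per class:", recall)
    print("Overall accuracy:", true_pos.sum() / cm.sum())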
5 Conclusion The training time of the HDFS+PySpark implementation was more than 65 times lower than the training time taken by the Sklearn implementation, while the vanilla implementation failed with a memory error each time. The implementation was done on a single machine and a single-node Hadoop cluster; with more nodes added, the speed of convergence is expected to increase even further. Although the classification accuracy on the test set was found to be better for the Spark-based implementation, Scikit-learn was quite close. The results point towards further research on the usage of Spark for simple tasks along with large scale projects. A more general conclusion can be drawn by testing this setting on other Machine Learning algorithms as well; still, algorithms using gradient descent for optimization are expected to give similar outcomes.
References 1. Shvachko, K., Kuang, H., Radia, S., Chansler, R.: The Hadoop distributed file system. In: 2010 IEEE 26th Symposium on Mass storage systems and technologies (MSST), pp. 1–10. IEEE, May 2010 2. Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauley, M., Franklin, M.J., Shenker, S., Stoica, I.: Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In: Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation, p. 2. USENIX Association, April 2012 3. Boehm, M., Dusenberry, M.W., Eriksson, D., Evfimievski, A.V., Manshadi, F.M., Pansare, N., Pansare, N., Reinwald, B., Reiss, F.R., Sen, P., Surve, A.C., Tatikonda, S.: SystemML: declarative machine learning on spark. Proc. VLDB Endow. 9(13), 1425–1436 (2016) 4. Kearns, M.J.: The Computational Complexity of Machine Learning. MIT Press, Cambridge (1990) 5. Bianchini, M., Scarselli, F.: On the complexity of neural network classifiers: a comparison between shallow and deep architectures. IEEE Trans. Neural Netw. Learn. Syst. 25(8), 1553– 1565 (2014)
6. Dreiseitl, S., Ohno-Machado, L.: Logistic regression and artificial neural network classification models: a methodology review. J. Biomed. Inform. 35(5–6), 352–359 (2002) 7. Harrell, F.E.: Binary logistic regression. In: Regression Modelling Strategies, pp. 219–274. Springer, Cham (2015) 8. Mount, J.: The equivalence of logistic regression and maximum entropy models (2011). http:// www.win-vector.com/dfiles/LogisticRegressionMaxEnt.Pdf 9. Wolfe, J., Jin, X., Bahr, T., Holzer, N.: Application of softmax regression and its validation for spectral-based land cover mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 42, 455 (2017) 10. Ng, A.: CS229 Lecture notes. CS229 Lect. Notes 1(1), 1–3 (2000) 11. Tenne, Y., Goh, C.K. (eds.): Computational Intelligence in Expensive Optimization Problems, vol. 2. Springer, Heidelberg (2010) 12. Ruder, S.: An overview of gradient descent optimization algorithms. arXiv preprint arXiv: 1609.04747 (2016) 13. Toal, D.J.J., Bressloff, N.W., Keane, A.J.: Kriging hyperparameter tuning strategies. AIAA J. 46(5), 1240–1252 (2008) 14. Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Proceedings of COMPSTAT 2010, pp. 177–186. Physica-Verlag HD (2010) 15. Rifkin, R., Klautau, A.: In defence of one-vs-all classification. J. Mach. Learn. Res. 5(Jan), 101–141 (2004)
Analysis of Pre-processing Techniques for Odia Character Recognition Mamatarani Das(B) and Mrutyunjaya Panda Department of Computer Science and Application, Utkal University, Bhubaneswar, Odisha, India [email protected]
Abstract. OCR is the recognition of characters from digitized documents, and OCR systems try to eliminate human interaction with the computer; nowadays, for better results and effectiveness, OCR systems are thoroughly involved in our daily life. The OCR component has become an important component of document scanners and is used in numerous fields such as postal address processing, script recognition, cheque processing in banking, passport authentication etc. Research in this field has been underway for more than half a century and the results have been astonishing, with effective recognition rates for printed characters exceeding 95%, though not yet matching human eye accuracy, and with important efficiency improvements for handwritten cursive character recognition, where recognition levels have surpassed 90%. The primary job of pre-processing the recorded input images is to increase the identification rate and decrease the complexity. The better the pre-processing of a character document image, the better the set of features that can be obtained, which results in better classification as well as recognition results. This paper describes a full pre-processing method that informs various researchers in this area of research. Keywords: Odia character · Pre-processing · Segmentation · Handwritten · Printed document
1 Introduction 1.1 Optical Character Recognition (OCR) Advancements in pattern recognition have increased lately and significantly affect other major fields of computer science like Optical Character Recognition, Natural Language Processing, Document Classification, Data Mining, Biometric Authentication, Word Processing, helping blind people to read text, automatic manipulation of documents and many more, and they are also computationally more demanding. The field of pattern recognition is multidisciplinary and forms the basis for other areas, such as image processing, machine vision and artificial intelligence. Therefore, without image processing with machine learning, OCR cannot be implemented. A model of an OCR scheme goes through several stages including picture or image acquisition, pre-processing, extraction
of features, classification and post-processing, as shown in Fig. 1. The name pre-processing signifies that the OCR model should take some steps to remove impurities prior to feature extraction, since the result of pre-processing controls the results of the next stages of a character recognition model. The pre-processing stage's primary goal is to perform normalization of the digitized documents after the image acquisition process and to remove such irregularities, which would otherwise complicate the classification process and lower the recognition rate.
Fig. 1. Steps of a character recognition system
The rest of the paper is organized as follows: Sect. 2 presents Odia character recognition; the factors affecting the quality of character recognition and the significance and different types of pre-processing are described in Sect. 3; the analysis of different pre-processing techniques is described in Sect. 4; and Sect. 5 is the concluding section.
2 Odia Character Recognition Language enables our ideas to be expressed. India is a multi-lingual nation with multiple scripts, and Odia is one of the official scripts. The Odia language is spoken by the people of Odisha and its neighbouring regions. The Odia alphanumeric set consists of a total of 46 letters and 10 digits, which are given in Fig. 2a and 2b. A number of researchers have done their work on this language [1–4], but the performance of OCR models for handwritten characters is lower compared to printed characters. Although there is no line at the top of characters as in Devanagari, complexities arise due to the rounded form of the letters and digits of the Odia character set and also due to the presence of a large number of compound letters. Odia character recognition is categorized into online and offline groups, but the steps of Odia character recognition in both groups are the same and are shown in Fig. 3.
3 Factors Affecting the Quality of Odia Character Recognition and Significance of Pre-processing Pre-processing methods are required for images containing text as well as graphics in colour, gray or binary format. There are many factors affecting the accuracy of OCR-recognized text. These include: the type of the scanner and document, the scanned document resolution, the font used in the text of the document, the intricacy of the language and many more.
Fig. 2. a. Handwritten Odia vowels and consonants, b. Handwritten Odia numerals
Fig. 3. Basic OCR model with input and output
Due to these types of complexities in the image acquisition step, pre-processing of the digitized image document is highly recommended to achieve a higher recognition rate by extracting prominent features from it, because it improves the quality of the image through noise reduction, detection and correction of skew, segmentation of lines, words, characters etc.
3.1 Types of Pre-processing Techniques The different types of pre-processing techniques described in this paper are:
1. RGB to Gray Scaling
2. Binarization
3. Inversion
4. Skeletonization
5. Segmentation
3.1.1 RGB to Gray Scaling In the image acquisition phase of OCR, the character image document may be captured by a good quality scanner at 300/600 dpi. The captured image may be a gray scale or coloured image. The processing of colour images is computationally expensive as the image comprises a 3-dimensional matrix of Red, Green and Blue pixels, which makes it difficult for the system to extract the text from the document. A scanned document from a scanner is the input to an OCR system. If the document is an RGB image, it must be gray scaled for the next pre-processing step of OCR. This technique is crucial for binarization: only gray shades remain in the image after gray scaling, and binarization of this image gives better results. The simplest method to obtain a gray scale image from an RGB image is the average method. The average of the Red, Green and Blue pixel values provides
the gray scale pixel value ranging from 0 to 255 from the coloured image. But the problem is that a somewhat blackish image is produced, as the same weightage is given to R, G and B, as given in Eq. (1):

g(i, j) = (1/3) Red(i, j) + (1/3) Green(i, j) + (1/3) Blue(i, j)   (1)

where g(i, j) is the pixel value in the gray image, Red(i, j) is the Red pixel value, Green(i, j) is the Green pixel value and Blue(i, j) is the Blue pixel value. As the Red, Green and Blue colours are of different wavelengths, and Green is the colour that is most soothing to the eyes, the luminosity method is used, which gives different weightages to these colours as given in Eq. (2):

g(i, j) = 0.3 * Red(i, j) + 0.59 * Green(i, j) + 0.11 * Blue(i, j)   (2)
Software used for image editing gives different weightages to the RGB colours, such as 0.22, 0.72 and 0.07.
3.1.2 Binarization Binarization converts a gray scale image into a black and white image. The pixel values of the gray scale image are compared with a threshold value (constant). Given an input pixel f(i, j) and threshold Th, the output pixel g(i, j) is calculated as follows:

g(i, j) = 1, if f(i, j) ≥ Th; 0, if f(i, j) < Th   (3)

The binarization method is the best way to convert a gray scale image of a text document into black foreground (text) over a large white background. Thresholding methods are of two types: (1) global thresholding methods and (2) local thresholding methods. In local thresholding, the threshold value of every pixel is set by the pixel information in a region, whereas in global thresholding a unique threshold value is set for the entire image. The histogram shape method, clustering method, entropy method, object attribute method, and spatial and local methods are popular thresholding methods. In the histogram shape method, the thresholding of the histogram is used for binarizing the image, whereas the clustering technique tries to group similar patterns together.
3.1.3 Inversion The transformation of white pixels into black and black pixels into white is known as inversion. After applying binarization to an image, the image consists of only black pixels in the foreground and white pixels in the background. White pixels have the binary value 1 and dark pixels have the binary value 0, so the number of pixels having value 1 is more than the number of pixels having value 0. If we convert ones to zeros and zeros to ones, fewer calculations are needed in correlation. Hence the text in the image is inverted to white on a black background. We get the inverted image by subtracting the binary image from a unit matrix of the same size.
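A minimal NumPy sketch of the three steps above (luminosity gray scaling of Eq. (2), global thresholding of Eq. (3) and inversion) is shown below; the fixed threshold of 128 is an illustrative assumption, since in practice the threshold is usually chosen by a method such as Otsu's.

    import numpy as np

    def to_gray(rgb):
        """Luminosity gray scaling, Eq. (2): rgb is an (H, W, 3) array."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return 0.3 * r + 0.59 * g + 0.11 * b

    def binarize(gray, th=128):
        """Global thresholding, Eq. (3): 1 where the pixel is >= Th, 0 otherwise."""
        return (gray >= th).astype(np.uint8)

    def invert(binary):
        """Inversion: subtract the binary image from a matrix of ones."""
        return 1 - binary

    # Example on a random image; a real document image would be loaded from disk
    rgb = np.random.randint(0, 256, size=(64, 64, 3))
    binary = binarize(to_gray(rgb))
    inverted = invert(binary)    # text becomes white (1) on a black (0) background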
3.1.4 Skeletonization Skeletonization is the transformation of any shape in a character document image into its thin version, i.e. the subset of the image where all the pixels are equidistant from the image boundaries, in such a way that the originality of the image as well as its connectivity is preserved. A lot of irregularities are removed by the skeletonization method. A single-pixel-wide shape is obtained and memory consumption is lower, which reduces the amount of data processed and the processing time. Extraction of critical characteristics such as endpoints, junction points and the relationships between these components plays a great role in feature extraction as well as classification and recognition.
3.1.5 Line, Word and Character Segmentation Different text line segmentation techniques used in OCR systems are: projection profile based techniques, Hough transform based techniques and thinning based techniques [5]. In the horizontal projection method, the document image is decomposed piecewise into vertical strips. Using a partial horizontal projection on each stripe, the positions of the respective piece-wise separating lines are acquired. The Hough transform is used in many fields of image processing and pattern recognition; in the field of character document analysis, text lines can be separated based on Hough peaks. A thinning algorithm is employed on the document to separate borderlines.
3.1.6 Skew Detection and Correction At the time of image acquisition, the input image may be skewed or tilted to some extent. Skew detection algorithms calculate the skew angle so that it can be removed, because the presence of skew affects the extraction of attributes from the image.
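To illustrate the projection profile based line segmentation mentioned in Sect. 3.1.5, here is a hedged NumPy sketch that finds text line boundaries from the horizontal projection of a binarized page; treating rows with zero foreground pixels as gaps is a simplifying assumption.

    import numpy as np

    def segment_lines(binary_page):
        """Return (start_row, end_row) pairs of text lines from a binary image
        where foreground (text) pixels are 1. Minimal projection-profile sketch."""
        profile = binary_page.sum(axis=1)      # horizontal projection: ink per row
        in_line, lines, start = False, [], 0
        for row, ink in enumerate(profile):
            if ink > 0 and not in_line:        # a text line starts
                in_line, start = True, row
            elif ink == 0 and in_line:         # a gap row ends the current line
                in_line = False
                lines.append((start, row - 1))
        if in_line:                            # page ends inside a line
            lines.append((start, len(profile) - 1))
        return lines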
4 Analysis of Different Pre-processing Techniques Many researchers have contributed to the field of Odia character recognition, which is a challenging research area in the field of pattern recognition. Research is still going on to obtain better pre-processing, feature extraction and classification techniques so that the recognition result improves. In Table 1, several pre-processing techniques for Odia character recognition are analysed together with the reported classification accuracy. Apart from these, a number of works from researchers have contributed to the Odia language, but in most of them the authors have not mentioned in detail the pre-processing steps they carried out for their work.
Table 1. Analysis of different pre-processing techniques for Odia character recognition

Author | Type of input | Pre-processing techniques | Recognition accuracy in %
[1] | Printed alphabet | Binarization, skew correction by Baird algorithm | 96.0
[6] | Printed numeral | Binarization, skeletonization by chain coding, noise removal, segmentation | 96.1
[7] | Printed conjunct characters | Binarization | 95.9
[4] | Handwritten alphabet | Thresholding, noise reduction, segmentation | –
[8] | Handwritten alphabet | Normalization, segmentation | 94.6
[9] | Handwritten alphabet | Thresholding, noise reduction, segmentation | 87.6
[10] | Handwritten alphabet | Binarization, noise removal, slant/skew correction, normalization, thinning | 94.0
[11] | Handwritten numeral | Segmentation by connected component method, normalization | 95.0
[3] | Handwritten numeral | Image cropping and resizing, binarization, thinning | 85.3
[12] | Handwritten numeral | Normalization, mean filtering to obtain gray scale image | 98.0
[13] | Handwritten numeral | Otsu's global binarization | 98.5
[14] | Handwritten numeral | Binarization, morphological operation, thinning, dilation | 92.0
[15] | Handwritten numeral | Digitization, normalization, segmentation | 93.2
[16] | Handwritten numeral | Data acquisition, binarization, normalization | 98.4
5 Conclusions A number of researchers have reported the challenges of OCR for the Odia language: the circular shape of the characters, the difficulty in finding a suitable algorithm that is applicable to all types of characters, and the need for a robust classification algorithm. This invites researchers to dedicate more work to the pre-processing stage, as the pre-processing output directly enhances the classification result.
References 1. Mohanty, S.: A novel approach for bilingual (English - Oriya) script identification and recognition in a printed document. Int. J. Image Process. 4(2), 175–191 (2010) 2. Chaudhuri, B.B., Pal, U., Mitra, M.: Automatic recognition of printed Oriya script. In: Proceedings of the International Conference Document Analysis and Recognition, ICDAR, vol. 2001 January, no. February, pp. 795–799 (2001) 3. Sarangi, P.K., Ahemad, P.: Recognition of handwritten Odia numerals using artificial intelligence techniques. Int. J. Comput. Sci. Appl. 2(02), 41–48 (2013) 4. Basa, D., Meher, S.: Handwritten Odia character recognition, no. July 2015, pp. 5–8 (2011) 5. Senapati, D., Mishra, M., Padhi, D., Rout, S.: Text line segmentation on Odia printed documents. Int. J. Adv. Res. Comput. Sci. 2(6), 396–399 (2011) 6. Mohapatra, R.K., Majhi, B., Jena, S.K.: Printed Odia digit recognition using finite automaton. Smart Innov. Syst. Technol. 43, 643–650 (2016) 7. Nayak, M., Nayak, A.K.: Odia-Conjunct character recognition using evolutionary algorithm. Asian J. Appl. Sci. 3(04), 789–798 (2015) 8. Pal, U., Wakabayashi, T., Kimura, F.: A system for off-line Oriya handwritten character recognition using curvature feature. In: Proceedings - 10th International Conference Information Technology, ICIT 2007, pp. 227–229 (2007) 9. Rushiraj, I., Kundu, S., Ray, B.: Handwritten character recognition of Odia script. In: International Conference Signal Processing, Communication, Power and Embedded System, SCOPES 2016 - Proceedings, pp. 764–767 (2017) 10. Padhi, D.: A novel hybrid approach for Odiya handwritten character recognition system. IJARCSSE 2(5), 150–157 (2012) 11. Mitra, C., Pujari, A.K.: Directional decomposition for Odia character recognition, pp. 270– 278. Springer, Cham (2013) 12. Majhi, B., Satpathy, J., Rout, M.: Efficient recognition of Odiya numerals using low complexity neural classifier. In: Proceedings - 2011 International Conference Energy, Automation and Signal, ICEAS 2011, pp. 140–143 (2011) 13. Dash, K.S., Puhan, N.B., Panda, G.: A hybrid feature and discriminant classifier for high accuracy handwritten Odia numeral recognition. In: IEEE TENSYMP 2014 - 2014 IEEE Region 10 Symposium, pp. 531–535 (2014) 14. Mishra, T.K., Majhi, B., Panda, S.: A comparative analysis of image transformations for handwritten Odia numeral recognition. In: Proceedings of 2013 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2013, pp. 790–793 (2013) 15. Mahato, M.K., Kumari, A., Panigrahi, S.: A system for Oriya handwritten numeral recognition for Indian postal automation. In: IJASTRE, pp. 1–15 (2014) 16. Pal, U., Wakabayashi, T., Sharma, N., Kimura, F.: Handwritten numeral recognition of six popular Indian scripts. In: Proceedings International Conference Document Analysis Recognition, ICDAR, vol. 2, pp. 749–753 (2007)
Cluster-Based Under-Sampling Using Farthest Neighbour Technique for Imbalanced Datasets G. Rekha1(B) and Amit Kumar Tyagi2(B) 1 Department of Computer Science and Engineering, Koneru Lakshmaiah Education
Foundation, Hyderabad, India [email protected] 2 School of Computing Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai 600127, Tamilnadu, India [email protected]
Abstract. In the domain of data mining, learning from datasets with an imbalanced class distribution is a challenging problem for conventional classifiers. Class imbalance exists when the number of samples of one class is much smaller than that of the other classes. In real-world classification problems, data samples often have an unequal class distribution; this is referred to as the class imbalance problem. Many solutions have been proposed in the literature to improve classifier performance. However, recent works claim that an imbalanced dataset is not a problem in itself: the degradation of classifier performance is also linked with many factors like small sample size, sample overlapping, class disjuncts and many more. In this work, we propose cluster-based under-sampling based on farthest neighbours. The majority class samples whose average distance to all minority class samples in the cluster is farthest are selected. The experimental results show that our cluster-based under-sampling approach outperforms the existing techniques from previous studies. Keywords: Classification · Clustering · Class disjunct · Imbalance problems · Majority samples · Minority samples
1 Introduction In the current big data era, data mining and machine learning play a vital role in effective decision making. Among these, classification is one of the important techniques most widely used for various applications from healthcare to business decisions, such as bankruptcy prediction [1], cancer prediction [2], churn prediction [3], face detection [4], fraud detection [5], and software fault prediction [6]. In general, the performance of a classifier is associated with the data distribution: an equal or balanced distribution of data will increase the performance of the classifier. However, most real-world data is usually skewed in nature, i.e., the number of samples from one class is larger than that of the other class. For example, in binary classification, if one class has 1000 samples
(majority samples/negative samples) and the other class has 100 samples (minority samples/positive samples), this will lead to bias when a classification algorithm is trained on it. The skewed distribution of data is not a problem in itself; apart from it, small sample size, small disjuncts, sample overlapping, etc., also degrade the performance of the classifier. To address the problem of class imbalance, the machine learning community has relied upon three types of techniques in general: data pre-processing techniques, algorithm-level techniques and ensemble learning techniques.
• Data pre-processing: In these techniques, the skewed data is balanced prior to training a classifier. These techniques are simple and easy to implement. The most popular data pre-processing technique is sampling, which consists of oversampling and under-sampling techniques. In oversampling, synthetic data samples are generated for the minority class to balance the distribution; some of the oversampling techniques are Random OverSampling (ROS) and the Synthetic Minority Oversampling Technique (SMOTE). In under-sampling, majority samples are discarded to balance the distribution; some of the techniques are Random Under Sampling (RUS) and Tomek Links.
• Algorithm-level: In these techniques, existing algorithms are modified by applying or adjusting weights or including a loss function. The modification is implemented within the algorithm itself during the learning phase.
• Ensemble or Hybrid level: In these techniques, a combination of data-level and algorithm-level techniques is used to provide solutions for class imbalance problems. Multiple classifiers are modelled at the same time and a final model is created to generate better accuracy.
The organization of this paper is as follows. Section 2 discusses the related work on the imbalanced classification problem. Section 3 presents the methodology of the proposed work. Experiments and results are discussed in Sect. 4. Section 5 presents concluding remarks.
2 Related Work As specified in the introduction, the class imbalance problem is crucial and an effective solution is much in demand, since traditional classification algorithms are not designed to be trained on imbalanced datasets. This basically leads to a series of problems: overfitting of the majority/negative classes and underfitting of the positive/minority classes. Apart from the imbalanced nature, other problems like small subsamples, overlapping of samples, small disjuncts and noise occur in the dataset. To handle class imbalance problems, broadly three kinds of methods have been proposed in the literature: a. data-level methods, b. algorithm-level methods and c. ensemble methods. In data-level methods, data resampling is performed to balance the distribution of data before training a classifier. In algorithm-level methods, the traditional classification algorithms are modified to handle imbalanced data, either by adjusting costs or weights. The third kind is ensemble methods, where multiple classifiers are trained and a majority voting method is used to select the best classifier.
As suggested in the literature, rebalancing the datasets at the data level is simple and effective to avoid bias in classification [27]. It is a pre-processing technique used to balance the data before training a classifier. The common sampling methods used to balance a skewed distribution are oversampling and under-sampling techniques. In oversampling, the minority/positive samples are resampled to generate synthetic data to meet the size of the majority class, whereas in under-sampling, majority samples are discarded to meet the size of the minority class. The major drawback of the former is the duplication of generated data, and of the latter the loss of important information. To overcome the drawback of oversampling, the Synthetic Minority Oversampling Technique (SMOTE) [9] has been proposed in the literature. It is one of the most popular and efficient techniques. SMOTE generates synthetic data from the neighbours of each minority class sample using Euclidean distance. However, its major drawback is that the synthetic samples may overlap with the surrounding majority samples. To address this particular weakness, many extended versions of SMOTE have been proposed by the research community, for example Borderline SMOTE [10], MSMOTE [11], etc. On the other hand, the under-sampling technique may discard important or representative samples from the datasets. Kubat et al. [12] adopted one-sided selection to under-sample the majority class by removing noisy, boundary and redundant samples. Estabrooks et al. [13] proposed over-sampling and under-sampling techniques with different sampling rates to generate many sub-classifiers and finally integrated them; the results showed better performance compared to ensemble methods. At the algorithm level, class imbalance is addressed by directly modifying the classification algorithm or using different misclassification costs. These methods depend on the classifier to enhance classifier performance. Wu and Chang [14] proposed a method called Kernel-Boundary Alignment (KBA). KBA is based on the Radial Basis Function (RBF) to compute the distance between all data points and also for the class distribution. In [15], the authors proposed a Confusion Matrix based Kernel LOGistic Regression (CM-KLOGR) for handling class imbalanced datasets. CM-KLOGR applies a weighted harmonic mean to measure the performance metrics from the cost matrix. In recent years, deep learning has become a popular research topic for feature representation. Khan et al. [16] proposed a Cost-Sensitive (CoSen) deep neural network to learn feature representations focused on image datasets. Dong et al. [17] proposed incremental minority class discrimination using a multi-label classification problem in deep learning to address the class imbalance problem. Ensemble techniques are popular in handling class imbalance problems. They work by learning multiple base classifiers and then adopting an ensemble technique to improve the performance of the classifier. The most popular and frequently used ensemble approaches are bagging (bootstrap aggregating) [18], stacking [19] and boosting [20] techniques. Several researchers devised novel approaches that combine either oversampling or under-sampling techniques into the ensemble framework. The variations of boosting include the SMOTEBoost [21], RUSBoost [22] and DataBoost-IM [23] algorithms. The SMOTEBoost algorithm is an integration of the SMOTE technique with the AdaBoost algorithm and RUSBoost is an integration of the RUS technique with the AdaBoost algorithm.
The DataBoost-IM algorithm combines AdaBoost with Gaussian distribution to generate synthetic samples. Rekha et al. [28] proposed a noise filtering approach to
remove noise samples from the dataset, and compared the performance of boosting and bagging techniques with and without noise filtering. The bagging approach is similar to boosting, but bagging trains several sub-classifiers on bootstrap samples and combines their predictions by majority voting. Variants of bagging include UnderBagging (UB) [24], SMOTEBagging [25] and many more: UnderBagging adopts under-sampling to discard majority samples, and SMOTEBagging integrates SMOTE with the bagging framework. Galar et al. [26] provided a systematic review of ensemble techniques for the class imbalance problem and proposed a taxonomy of boosting- and bagging-based approaches.

Much of the past research has focused on under-sampling the majority class, oversampling the minority class, or tuning parameters at the algorithm level. However, it is important to recognize the majority samples that do not overlap with the minority samples. To avoid the loss of important information while under-sampling, it is better to pick majority samples based on their position relative to the minority samples. This section has discussed the related work on handling skewed data distributions; the next section presents the proposed cluster-based methodology for addressing the class imbalance problem.
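To make the data-level rebalancing ideas above concrete, the following sketch shows how oversampling with SMOTE and random under-sampling could be applied using the imbalanced-learn package. This is an illustrative example only; the package, the synthetic dataset and the parameter values are not part of the works reviewed here.

```python
# Illustrative only: rebalancing a skewed dataset with imbalanced-learn.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic two-class data with a roughly 9:1 imbalance ratio.
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.9, 0.1], random_state=42)
print("original:", Counter(y))

# Oversampling: SMOTE synthesises new minority samples from nearest neighbours.
X_os, y_os = SMOTE(random_state=42).fit_resample(X, y)
print("after SMOTE:", Counter(y_os))

# Under-sampling: randomly discard majority samples down to the minority size.
X_us, y_us = RandomUnderSampler(random_state=42).fit_resample(X, y)
print("after random under-sampling:", Counter(y_us))
```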
3 Proposed Methodology

In the proposed method, majority samples are selected according to how far they lie from the minority samples, while the minority class is kept as a whole. In class imbalance problems, the majority (negative) samples overwhelm the minority class, and it is important to identify the majority samples that bias the classifier, since they can degrade classification accuracy. Over-sampling techniques crowd the minority region with synthetic data and may cause overlap between samples, while under-sampling discards majority samples until they match the size of the minority class. Some under-sampling techniques adopt clustering and instance selection methods; the major concern of under-sampling is how to reduce the majority samples in an effective way.

In our proposed work, the majority class samples whose average distance to all minority class samples in their cluster is farthest are retained. In Fig. 1, the data points are represented as clusters; within each cluster, we calculate the distance between every majority data point and all the minority class samples, and the majority samples with the farthest average distance are picked. The Euclidean distance is used to compute the distance between data points.

Figure 2 shows the proposed model. The entire dataset is first grouped using the k-means clustering approach, with the value of k varied from 3 to 5 to find the best partition of the data into clusters. Once clustering is done, the majority samples are selected as described above; a minimal sketch of this selection step is given after Fig. 2 below. Next, the selected majority samples are combined with the minority samples to train the classifier. Finally, the classifier is evaluated on the test data to check the accuracy of the model. Having described the proposed methodology in detail, the next section presents the experimental and simulation results.
Fig. 1. Representation of majority and minority samples
Fig. 2. The proposed model
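The following is a minimal sketch of the selection step described in this section, assuming k-means clustering and Euclidean distances. The fraction of majority samples retained per cluster (keep_ratio) and the function name are illustrative choices, not values taken from the paper.

```python
# Sketch of the proposed cluster-based under-sampling (illustrative, not the
# authors' code): within each k-means cluster, keep the majority samples whose
# average Euclidean distance to the minority samples is largest.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def farthest_majority_undersample(X, y, minority_label=1, k=3,
                                  keep_ratio=0.5, random_state=42):
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=random_state).fit_predict(X)
    keep_idx = list(np.where(y == minority_label)[0])   # keep all minority samples
    for c in range(k):
        maj = np.where((labels == c) & (y != minority_label))[0]
        mino = np.where((labels == c) & (y == minority_label))[0]
        if len(maj) == 0:
            continue
        if len(mino) == 0:
            keep_idx.extend(maj)                         # no minority here, keep as-is
            continue
        # Average distance of each majority sample to all minority samples in the cluster.
        avg_dist = cdist(X[maj], X[mino]).mean(axis=1)
        n_keep = max(1, int(keep_ratio * len(maj)))
        keep_idx.extend(maj[np.argsort(-avg_dist)[:n_keep]])  # farthest first
    keep_idx = np.array(sorted(keep_idx))
    return X[keep_idx], y[keep_idx]
```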
4 Experimental and Simulation Results

In this section, we report experiments on 10 datasets from the KEEL repository1. The whole experiment is tested and verified using 10-fold cross-validation. Table 1 shows the datasets used in the experiment. All experiments are run using the decision tree algorithm (C4.5). 1 https://sci2s.ugr.es/keel/imbalanced.php.
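A hedged sketch of this evaluation protocol is given below; scikit-learn's entropy-based decision tree is used as a stand-in for C4.5, and the function name and parameters are illustrative.

```python
# Illustrative 10-fold cross-validation of a C4.5-style decision tree.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def cross_validate(X, y, n_splits=10, random_state=42):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        # Any resampling (e.g. the proposed under-sampling) would be applied
        # to the training fold only, before fitting the tree.
        clf = DecisionTreeClassifier(criterion="entropy", random_state=random_state)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))
```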
Table 1. Datasets with their characteristics

Datasets    | Size | # attr | % IR
Ecoli       | 336  | 7      | 3.36
Glass       | 214  | 9      | 6.38
Haberman    | 306  | 3      | 2.78
Iris        | 150  | 4      | 2
New-thyroid | 215  | 5      | 5.14
Pima        | 768  | 8      | 1.87
Satimage    | 6435 | 36     | 9.28
Shuttle     | 1829 | 9      | 13.87
Vehicle     | 846  | 18     | 3.25
Wisconsin   | 683  | 9      | 1.86
Ionosphere  | 351  | 34     | 1.79
The evaluation criteria for imbalance problems are considered based on the confusion matrix. The different formulas for evaluation metrics are provided in Eqs. 1 to 5:

Precision = TP / (TP + FP)    (1)
Recall (Sensitivity) = TP / (TP + FN)    (2)
Specificity = TN / (TN + FP)    (3)
F-Measure = (2 × Precision × Recall) / (Precision + Recall)    (4)
G-Mean = sqrt( (TP / (TP + FN)) × (TN / (TN + FP)) )    (5)
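The metrics of Eqs. (1)–(5) can be computed directly from the confusion matrix; the following illustrative sketch assumes label 1 is the minority (positive) class.

```python
# Computing the evaluation metrics of Eqs. (1)-(5) from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

def imbalance_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    g_mean = np.sqrt(recall * specificity)
    return {"precision": precision, "recall": recall,
            "specificity": specificity,
            "f_measure": f_measure, "g_mean": g_mean}
```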
In this paper, the performance of the proposed method is investigated using two evaluation metrics, F-Measure and G-Mean.

4.1 Results

We applied the proposed method to the 10 datasets and used F-Measure and G-Mean as the performance metrics. In the experiments, we first trained the model on the original data without any sampling technique; additionally, we applied random oversampling, random under-sampling and SMOTE to the data. These four settings are used as baselines against which our proposed model is compared. The experiments are carried out with the decision tree algorithm (C4.5) under 10-fold cross-validation. The experimental results are presented in Tables 2 and 3.
Table 2. F-Measure performance results

Data set    | Original dataset | Random oversampling | Random under-sampling | SMOTE  | Proposed method
Ecoli       | 0.9234 | 0.9197 | 0.9191 | 0.9232 | 0.9269
Glass       | 0.9456 | 0.9717 | 0.9911 | 0.9832 | 0.9912
Haberman    | 0.6795 | 0.6805 | 0.6800 | 0.6824 | 0.6838
Iris        | 0.9812 | 0.9832 | 0.9842 | 0.9812 | 0.9811
New-thyroid | 0.9234 | 0.9197 | 0.9191 | 0.9232 | 0.9269
Pima        | 0.8245 | 0.8241 | 0.8225 | 0.8238 | 0.8248
Satimage    | 0.7618 | 0.7611 | 0.7610 | 0.7613 | 0.7625
Shuttle     | 0.8677 | 0.8695 | 0.8698 | 0.8684 | 0.8688
Vehicle     | 0.9732 | 0.9736 | 0.9702 | 0.9835 | 0.9809
Wisconsin   | 0.9132 | 0.9182 | 0.9210 | 0.9213 | 0.9234
Ionosphere  | 0.8386 | 0.8420 | 0.8111 | 0.8372 | 0.8511
The graphical representation of the F-Measure and G-Mean performance is presented in Figs. 3 and 4. Among the 10 datasets, the proposed model achieved the best performance on 8 datasets. This section has presented the experimental and simulation results of the proposed model; the next section concludes the work and outlines some future enhancements.
Fig. 3. F-Measure performance
Table 3. G-Mean performance results

Data set    | Original dataset | Random oversampling | Random under-sampling | SMOTE  | Proposed method
Ecoli       | 0.9332 | 0.9197 | 0.9196 | 0.9232 | 0.9269
Glass       | 0.9145 | 0.9771 | 0.9819 | 0.9723 | 0.9819
Haberman    | 0.6982 | 0.6805 | 0.6800 | 0.6823 | 0.6838
Iris        | 0.9718 | 0.9820 | 0.9842 | 0.9821 | 0.9811
New-thyroid | 0.9342 | 0.9291 | 0.9395 | 0.9132 | 0.9299
Pima        | 0.8342 | 0.8453 | 0.8532 | 0.8538 | 0.8548
Satimage    | 0.7698 | 0.7691 | 0.7690 | 0.7693 | 0.7725
Shuttle     | 0.8167 | 0.8169 | 0.8169 | 0.8168 | 0.8288
Vehicle     | 0.9834 | 0.9836 | 0.9820 | 0.9835 | 0.9849
Wisconsin   | 0.9232 | 0.9282 | 0.9220 | 0.9223 | 0.9234
Ionosphere  | 0.8386 | 0.8421 | 0.8432 | 0.8372 | 0.8589
Fig. 4. G-Mean performance
5 Conclusion

In real-world classification problems, imbalanced data occur in many application domains and have received considerable attention from the research community. The degradation of classifier performance is also linked to factors such as small sample size, sample overlapping, class disjuncts and many more. In this work, we proposed
cluster-based under-sampling based on farthest neighbors. The majority class samples whose average distance to all minority class samples in their cluster is farthest are selected. The experimental results indicate that the proposed work outperforms the other sampling methods. In future work, we plan to extend the experiments to ensemble classification algorithms and to apply the approach to real-time datasets.

Acknowledgment. This research is funded by Anumit Academy's Research and Innovation Network (AARIN), India. The authors would like to thank AARIN, India, a research network, for supporting the project through its financial assistance.
References 1. Lin, W.-Y., Hu, Y.-H., Tsai, C.-F.: Machine learning in financial crisis prediction: a survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(4), 421–436 (2012) 2. Kourou, K., Exarchos, T.P., Exarchos, K.P., Karamouzis, M.V., Fotiadis, D.I.: Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotech. J. 13, 8–17 (2015) 3. Mahajan, V., Misra, R., Mahajan, R.: Review of data mining techniques for churn prediction in telecom. J. Inf. Organ. Sci. 39(2), 183–197 (2015) 4. Zafeiriou, S., Zhang, C., Zhang, Z.: A survey on face detection in the wild: past, present and future. Comput. Vis. Image Underst. 138, 1–24 (2015) 5. West, J., Bhattacharya, M.: Intelligent financial fraud detection: a comprehensive review. Comput. Secur. 57, 47–66 (2016) 6. Malhotra, R.: A systematic review of machine learning techniques for software fault prediction. Appl. Soft Comput. 27, 504–518 (2015) 7. Kubat, M., Matwin, S.: Addressing the curse of imbalanced training sets: one-sided selection. In: ICML, pp. 179–186 (1997) 8. Estabrooks, A., Japkowicz, N.: A mixture-of-experts framework for learning from imbalanced data sets. In: Hoffmann, F., Hand, D.J., Adams, N., Fisher, D., Guimaraes, G. (eds.) IDA 2001. LNCS, vol. 2189, pp. 34–43. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-448 16-0_4 9. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002) 10. Han, H., Wang, W.Y., Mao, B.H.: Borderline-SMOTE: a new over-sampling method in imbalanced data sets learning. In: International Conference on Intelligent Computing, pp. 878–887. Springer, Heidelberg, August 2005 11. Bunkhumpornpat, C., Sinapiromsaran, K., Lursinsap, C.: Safe-level-smote: safe-levelsynthetic minority over-sampling technique for handling the class imbalanced problem. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 475–482. Springer, Heidelberg, April 2009 12. Kubat, M., Matwin, S.: Addressing the curse of imbalanced training sets: one-sided selection. In: ICML, vol. 97, pp. 179–186, July 1997 13. Estabrooks, A., Jo, T., Japkowicz, N.: A multiple resampling method for learning from imbalanced data sets. Comput. Intell. 20(1), 18–36 (2004) 14. Wu, G., Chang, E.Y.: KBA: kernel boundary alignment considering imbalanced data distribution. IEEE Trans. Knowl. Data Eng. 17(6), 786–795 (2005) 15. Ohsaki, M., Wang, P., Matsuda, K., Katagiri, S., Watanabe, H., Ralescu, A.: Confusion-matrixbased kernel logistic regression for imbalanced data classification. IEEE Trans. Knowl. Data Eng. 29(9), 1806–1819 (2017)
16. Khan, S.H., Hayat, M., Bennamoun, M., Sohel, F.A., Togneri, R.: Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Trans. Neural Networks Learn. Syst. 29(8), 3573–3587 (2017) 17. Dong, Q., Gong, S., Zhu, X.: Imbalanced deep learning by minority class incremental rectification. IEEE Trans. Pattern Anal. Mach. Intell. 41(6), 1367–1381 (2018) 18. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996) 19. Ren, Y., Zhang, L., Suganthan, P.N.: Ensemble classification and regression-recent developments, applications and future directions. IEEE Comput. Intell. Mag. 11(1), 41–53 (2016) 20. Martínez-Muñoz, G., Suárez, A.: Using boosting to prune bagging ensembles. Pattern Recogn. Lett. 28(1), 156–165 (2007) 21. Li, Z.X., Zhao, L.D.: A SVM classifier for imbalanced datasets based on SMOTEBoost. Syst. Eng. 26(5), 116–119 (2008) 22. Seiffert, C., Khoshgoftaar, T.M., Van Hulse, J., Napolitano, A.: RUSBoost: a hybrid approach to alleviating class imbalance. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 40(1), 185–197 (2009) 23. Guo, H., Viktor, H.L.: Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach. ACM SIGKDD Explor. Newsl. 6(1), 30–39 (2004) 24. Hakim, L., Sartono, B., Saefuddin, A.: Bagging based ensemble classification method on imbalance datasets. Int. J. Comput. Sci. Netw. 6, 7 (2017) 25. Yongqing, Z., Min, Z., Danling, Z., Gang, M., Daichuan, M.: Improved SMOTEBagging and its application in imbalanced data classification. In: IEEE Conference Anthology, pp. 1–5. IEEE, January 2013 26. Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., Herrera, F.: A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(4), 463–484 (2011) 27. Rekha, G., Tyagi, A.K., Krishna Reddy, V.: A wide scale classification of class imbalance problem and its solutions: a systematic literature review. J. Comput. Sci. 15, 886–929 (2019) 28. Rekha, G., Tyagi, A.K., Krishna Reddy, V.: Solving class imbalance problem using bagging, boosting techniques, with and without using noise filtering method. Int. J. Hybrid Intell. Syst. 15, 67–76 (2019)
Vehicle Detection and Classification: A Review V. Keerthi Kiran1 , Priyadarsan Parida1(B) , and Sonali Dash2 1 GIET University, Gunupur 765022, India
[email protected], [email protected] 2 Raghu Institution of Technology, Visakhapatnam 531162, India
Abstract. Smart traffic and information systems require the collection of traffic data from appropriate sensors for the regulation of traffic. In this regard, surveillance cameras have been installed for monitoring and control of traffic in the last few years, and several studies have applied video surveillance technologies and image processing techniques to traffic management. Video processing of traffic data obtained through surveillance cameras is one example of an application for advance warning or data extraction for real-time analysis of vehicles. This paper presents a detailed review of vehicle detection and classification techniques, discusses different approaches to detecting vehicles in bad weather conditions, and describes the datasets used for evaluating the proposed techniques in various studies. Keywords: Intelligent traffic management · Sensors · Traffic surveillance · Image processing · Vehicle detection · Vehicle classification
1 Introduction

Over the past few years, traffic control has become a serious issue for society. A variety of problems, ranging from traffic congestion and a lack of vehicle parking to pollution, trouble road users. The field has seen major advances in the recent era; however, the detection and classification of vehicles remains a demanding concern. The scope in this area is huge because of the variety of challenging features that vehicles possess, ranging from edges, colors, shadows and corners to textures. Due to progress in hardware and reduced manufacturing expenses, the number of surveillance devices has risen in the past few years, and high-resolution video cameras are used in these systems. As a result, the large number of video sources generates a surprising volume of information that needs to be analysed and understood, but the amount of information is too high for human operators to examine. Therefore, researchers increasingly take advantage of technologies such as Intelligent Transportation Systems [1, 2]. An important task of a surveillance system is the detection of different vehicle types, and the classification of vehicles is a main phase in traffic management software. Prior information about the model and vehicle type is required because it allows queries such as "in which direction did the vehicle pass, and at what time?". Therefore, feature extraction and classification of vehicles cover a vast scope of traffic management applications [3, 4]. Example images from a surveillance system are shown in Fig. 1.
Fig. 1. Example images from a surveillance system
Yu Wang et al. (2019) [5] developed a system for the detection and classification of moving vehicles termed Improved Spatio-Temporal Sample Consensus. First, the moving vehicles are identified using the spatio-temporal sample consensus algorithm, which limits the interference of brightness variation and vehicle shadows; then, by means of feature fusion techniques, the objects are classified according to area, face, number plate and vehicle symmetry features. Chia-Chi Tsai et al. (2018) [6] proposed an optimized convolutional neural network architecture based on deep learning algorithms for a vehicle detection and classification system aimed at intelligent transportation applications. PVANET is selected as the base network and improved by fine-tuning to obtain better accuracy. It uses eight concatenated ReLU convolution layers and eight inception layers as the base network, and a hypernet architecture combines different levels of features, making it easier to obtain the desired bounding boxes for the Region Proposal Network layer. In 2018, Velazquez-Pupo et al. [7] presented a model based on vision analysis with a fixed camera for monitoring traffic and for vehicle detection that includes occlusion handling, counting, tracking and classification. Although SVM was the best classifier overall, they reported that the OC-SVM with an RBF kernel delivered the best results, with high performance and F-measures of 98.190% and 99.051% for mid-size vehicles. In the same year, Murugan and Vijaykumar [8] developed an Adaptive Neuro-Fuzzy Inference System classifier for the classification of moving vehicles on roads. It includes six main phases: pre-processing, feature extraction, detection, structural matching, tracking, and classification of vehicles. Background subtraction and the Otsu threshold algorithm are used for vehicle detection, and the characteristics of the detected vehicles are obtained with the log-Gabor filter and the Harris corner detector, which are then used to classify the vehicles. Ahmad Arinaldi et al. (2018) [9] presented a traffic video analysis system based on computer vision techniques. The core of the system is the detection and classification of vehicles, for which they developed two models: the first is a MoG + SVM system, and the second is based on Faster RCNN, a recently popular deep learning architecture for object detection in images. They reported that Faster RCNN outperforms MoG in the detection of vehicles that are static, overlapping or seen at night, and that Faster RCNN outperforms SVM for classifying vehicle types based on appearance.
In 2017, Audebert et al. [10] presented a segment-before-detect approach using deep learning techniques, in which segmentation followed by detection and classification of multiple wheeled-vehicle variants is tested on high-resolution remote sensing pictures. A process for the detection and classification of vehicles based on a virtual detection zone was suggested by Seenouvong et al. (2016) [11]; it comprises foreground extraction, detection, feature extraction and classification. A Gaussian Mixture Model (GMM) is used for the detection of vehicles, some operations are performed to obtain the foreground objects, and classification is done using a k-nearest neighbour classifier. In 2015, Dong et al. [12] recommended a semi-supervised convolutional neural network technique for vehicle classification based on the front view of the vehicle; however, the features learned by the CNN are too biased to work well on raster images. In the same year, Banu et al. [13] recommended the Histogram of Oriented Gradients feature extraction technique and morphological operations for a better detection rate. We organize the rest of the paper as follows: Sect. 2 delivers a detailed review of the vehicle detection approaches available in the literature, Sect. 3 discusses various vehicle classification techniques, Sect. 4 presents the available databases, and Sect. 5 concludes the review.
2 Methods Used for Vehicle Detection

In video processing, the initial stage is image localization or vehicle detection. Vehicle detection includes motion estimation, tracking and behavior analysis, which are the basis for further processing and determine the classification success rate [14]. There are two approaches to vehicle detection: one is appearance based and the other is motion based [15]. Parameters such as the texture, color, and shape of a vehicle are considered in the appearance-based approach, whereas the motion characteristics are used to differentiate the vehicles from the static background scene in the motion-based approach.

2.1 Motion-Based Features

Motion detection is a significant task in computer vision. In traffic scenes, only the moving vehicles are of interest, and in motion detection the foreground objects that are in motion are set apart from the still background of an image. To differentiate the moving traffic from the stationary background, motion cues are utilized, and the methods can be divided into the temporal frame differencing approach [16], which considers the past two or three successive frames, the background subtraction approach [17], which constructs a background model from the frame history, and the optical flow approach [18], which uses the instantaneous pixel speed on the image surface.

2.1.1 Frame Differencing

In the temporal frame differencing technique, the difference in pixel values is calculated between two consecutive frames, and by applying a threshold value the moving foreground regions
are found out. The detection rate is improved by using three consecutive frames: two inter-frame differences are computed, binarized and combined with a bitwise AND operation to obtain the moving target region [16].

2.1.2 Background Subtraction

Background subtraction is the most studied and most widely used approach for motion detection. The foreground objects are extracted from the difference in pixel values between the current image and a background image [17]. The background image is built using a background averaging model, in which a sequence of images is averaged [19]. However, the background varies in actual traffic scenes; hence, this type of strategy is not well suited to live traffic scenes.

2.1.3 Optical Flow

In this approach, the rapid change of the instantaneous pixel values on the image surface corresponds to objects moving in three-dimensional space. The primary concept is to use temporal and gradient information to match pixels between image frames. In [20], the problem of merged vehicle blobs is resolved with a dense optical flow approach, and in [18] optical flow through 3-D wireframes is used for vehicle segmentation. At the cost of additional computational time, the iterative nature of optical flow calculations provides accurate sub-pixel motion vectors. Optical flow techniques are quite acceptable for vehicle detection because they are susceptible to occlusion problems only to a smaller extent.

2.2 Appearance-Based Features

The visual appearance of an object can be described in terms of color, texture and shape. Methods based on these features usually use prior data for modeling: the derived two-dimensional image features are compared with real-world three-dimensional features through feature extraction. Unlike motion-based approaches, appearance-based approaches can also detect stationary objects [21].

2.2.1 Part-Based Model

In this approach, the objects are divided into several smaller parts and modeled in part-based detection models. Using the spatial relationships between these parts has proved to be a very widespread method for vehicle detection. To improve the vehicle detection rate and resolve the occlusion problem, the vehicle in the image is divided into front, side, and rear parts [14], and a trained deformable part model is used for robust vehicle detection [26].

2.2.2 Feature-Based Methods

The vehicle's visual appearance is encoded by representative feature descriptors. Characteristics such as local symmetry edge operators have been used in
car detection, but these are vulnerable to differences in size and illuminance; therefore, an edge-based histogram with more spatial invariance is used [22]. These simple characteristics can be developed into more general and robust features which directly detect and classify vehicles. The vehicle detection literature includes the Scale-Invariant Feature Transform [23], the Histogram of Oriented Gradients [24] and Haar-like features [25].

2.3 Neural Networks

Vehicle detection with neural networks involves several major stages: loading the data set, designing the convolutional neural network, configuring training options, training the object detector (for example with Faster R-CNN), and evaluating the trained detector. These steps are discussed as follows.

2.3.1 Regions with Convolutional Neural Network Features (R-CNN)

Two basic concepts, region proposals and CNNs, are combined in the R-CNN method. Region proposals are made to locate and segment objects following a bottom-up approach. When labeled training data are inadequate, supervised pre-training followed by a domain-specific fine-tuning process offers substantial improvement. Hence the method is named R-CNN, as it combines region proposals with CNNs [27].

2.3.2 Faster Regions with Convolutional Neural Network Features (Faster R-CNN)

A newer and popular approach is to use deep convolutional neural networks that learn discriminative features directly from the input images for a specified task in a supervised manner. The deep convolutional neural network uses many layers of convolution filter sets that learn a hierarchical representation of the input image data: lower-level convolutional layers learn to detect simple features such as lines and textures, while higher-level convolutional layers learn features that are combinations of the lower-level features [9].

Due to poor lighting and weather conditions and background noise, traditional image-based detection methods for traffic scenes have trouble obtaining good images. Some images captured in different weather conditions are shown in Fig. 2. A technique for precisely segmenting and tracking vehicles was proposed by Nastaran Yaghoobi Ershadi et al. (2018) [28]. The Hough transform is used for extracting road outlines and lanes after removing the perspective with Modified Inverse Perspective Mapping; a GMM is then used to separate the moving objects, and a chromaticity-based operation is applied to resolve the shadow effects of vehicles. An adaptive threshold termed the triangle threshold method was proposed by Mohamed A. El-Khoreby et al. (2017) [29] for the background subtraction algorithm. The process comprises four phases: background modeling, difference histogram, thresholding and post-processing. An approximate median filter is used for background modelling, and the triangle threshold is applied to the histogram of the difference between the background model and the current frame. Eventually, to increase the detection efficiency, some morphological operations are performed. Xuerui Dai et al. (2016) [30] adopted
Fig. 2. Left images: images in fog. Right images: images at night.
Viola and Jones's sliding-window approach and Aggregated Channel Features for vehicle detection. After extracting the image features, an AdaBoost classifier is trained as the strong classifier, with decision trees as weak learners.
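As a rough illustration of the motion-based detection ideas of Sect. 2.1, the following OpenCV sketch implements three-frame differencing; the threshold value and kernel size are illustrative choices, not values taken from any of the reviewed papers.

```python
# Minimal three-frame differencing for moving-vehicle detection (illustrative).
import cv2

def three_frame_difference(prev_frame, curr_frame, next_frame, thresh=25):
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    g3 = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Two inter-frame differences, binarised and combined with a bitwise AND.
    d12 = cv2.threshold(cv2.absdiff(g1, g2), thresh, 255, cv2.THRESH_BINARY)[1]
    d23 = cv2.threshold(cv2.absdiff(g2, g3), thresh, 255, cv2.THRESH_BINARY)[1]
    motion = cv2.bitwise_and(d12, d23)
    # Morphological closing to merge fragmented vehicle blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)
```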
3 Methods Used for Vehicle Classification

The focus of a vehicle classification system is to categorize vehicles into different classes such as car, van, truck, bus, etc. In conjunction with distinct classification approaches, a range of geometry-, texture- and appearance-based feature extraction methods have been developed.

3.1 Geometry-Based Approaches

In 2000 and 2002, Gupte et al. [31, 32] used a fixed camera for vehicle classification, concentrating on highway images. The length and height of rectangular patches enclosing vehicle blobs are used as features. However, they confined the work to classifying vehicles into two classes (cars versus non-cars, and trucks versus non-trucks) on the basis of height and length. Because the classification was based only on region-of-interest features, a fine-grained classification of vehicles may not be possible.

3.2 Appearance-Based Approaches

In this approach, features based on edges, gradients and corners are used to classify vehicles. Buch et al. (2009) [33] suggested appearance-based features such as 3D-HOG [34], which identifies vehicle models using 3D models and performs model-based matching for vehicle classification. Morris and Trivedi used simple vehicle blob features [35] after transformation through Fisher's Linear Discriminant Analysis; a weighted k-nearest neighbour classifier is utilized to classify eight different types of vehicles: Bike, Sedan, Van, Pickup, SUV, Truck, Merged, and Semi.

3.3 Approaches Based on Texture

Texture is one of the major classes of discriminative image features, and texture-based features have been used in several works in the field of computer vision [36]. Zhang et al. [37]
used texture-based descriptors called Multi-Block Local Binary Patterns and an AdaBoost classifier based on multi-branch regression trees. The fundamental Local Binary Pattern creates a binary string for each pixel by considering a window of 3 × 3 neighbouring pixels (a minimal sketch of LBP feature extraction is given after Table 1 below).

3.4 Mixed Approaches

To describe vehicle types, Xiaoxu Ma and Grimson [38] used implicit and explicit edge shape models together with the Scale-Invariant Feature Transform, and a two-class Bayesian decision rule is used for classification. In order to distinguish between categories of vehicles of different sizes, geometrical features such as area, width, aspect ratio and rectangularity are considered [39]. In addition, vehicles within a specific size range are characterized by shape-invariant image features [40] and statistical texture parameters such as variance, mean, skewness, and pixel entropy of the key vehicle blobs. Classification is performed at two levels using the k-nearest neighbour classifier (k-NN): at the initial stage a k-NN estimates the size, and at the next stage the vehicle type is predicted. Furthermore, an adaptive k-NN is utilized to classify a vehicle as small or large, and then as car or motorcycle (if small) or as bus or truck (if large). Indeed, they found that the use of such geometric features can cause confusion when distinguishing a bus from a truck because of similar heights, widths or lengths. A comparison of different approaches for vehicle classification and their success rates is shown in Table 1.

Table 1. Comparison of different approaches for vehicle classification
Reference                         | Approach                                             | Classification success rate
Yu Wang et al. 2019 [5]           | Improved spatio-temporal sample consensus algorithm  | 97.8%
Chia-Chi Tsai et al. 2018 [6]     | Optimized Faster R-CNN                               | 90%
Fukai Zhang et al. 2018 [41]      | DP-SSD                                               | 77.94%
Velazquez-Pupo et al. 2018 [7]    | OC-SVM                                               | 99.051%
Murugan and Vijaykumar 2018 [8]   | Adaptive neuro fuzzy inference system classifier     | 92.56%
Ahmad Arinaldi et al. 2018 [9]    | MoG + SVM, Faster RCNN                               | 54.5%, 67.2%
Nicolas Audebert et al. 2017 [10] | Convolutional Neural Network                         | 67% & 80%
Seenouvong et al. 2016 [11]       | K-Nearest Neighbor Classifier                        | 98.53%
Dong et al. 2015 [12]             | Semi-supervised convolutional neural network         | 96.1%
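As referenced in Sect. 3.3, the following sketch illustrates a basic texture descriptor for vehicle classification: a uniform LBP histogram computed with scikit-image. It is a simplification of the multi-block LBP features used in [37], and the parameter values are illustrative.

```python
# Illustrative texture descriptor: a uniform LBP histogram per image patch.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2            # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist                    # feature vector for any classifier, e.g. AdaBoost
```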
4 Database

In [42], the vehicle database was collected from the Vision-Based Intelligent Environment project and is available in [43]; it comprises a single camera per lane, and the proposed system identifies only one vehicle per frame, although the system can easily be extended to identify multiple vehicles per frame. Two distinct vehicle data sets are used in [27]: the first dataset consists of 350 pictures [44] and the second of 1000 pictures from the publicly available vehicle data set [45]. Each picture in these datasets contains one or two labeled vehicle samples. The R-CNN and Faster R-CNN deep learning techniques have been used to train the vehicle detector on these sample vehicle datasets. In [7], the efficiency of the suggested method is verified on traffic videos recorded in Guadalajara, Mexico; it is also tested on the GRAM Road-Traffic Monitoring dataset [46] and on videos recorded on Britain's M6 motorway [47]. Example images from the dataset are presented in Fig. 3. In order to reduce the computation time, all videos are down-sampled to 420 × 240 pixels at 25 fps.
Fig. 3. Example images from the GRAM-RTM dataset
In [48], a vehicle dataset named the BIT-Vehicle Dataset, consisting of 9,850 vehicle images, was established to test the suggested technique. In the whole dataset, the proportion of night-time images is about 10%. Figure 4 shows some examples of images taken at distinct moments and locations from two cameras; the images show changes in lighting conditions, viewpoint, vehicle surface color, and scale. Due to the size of the vehicles and the capture delay, the bottom or top portions of certain vehicles are not included in the dataset. As Fig. 4 shows, one or two vehicles appear in each picture, and each is annotated beforehand. The dataset separates all vehicles into six types: Sedan, SUV, Bus, Microbus, Truck and Minivan. The DETRAC data set is used in [28]; it consists of 10 h of videos captured at 24 different locations in China. The videos are recorded at 25 frames per second, in different lighting conditions, with day traffic, occlusion and intersections, at a resolution of 960 × 540 pixels. The dataset includes vehicle types such as bus, car and van, as shown in Fig. 5.
Fig. 4. Example of BIT-Vehicle Dataset
Fig. 5. Example of vehicle images in DETRAC dataset
5 Conclusion

In this paper, a detailed overview of the literature on video-based traffic monitoring and classification systems using computer vision methods is presented. The purpose of this study is to support researchers with respect to vehicle detection, vehicle classification and the availability of vehicle data sets. The most prevalent issues in this field are the biased composition of the datasets and the existence of distinct vehicle types with the same size and shape, which makes it more difficult to categorize them.
References 1. Intelligent Transportation Systems Joint Program Office. United States Department of Transportation. Accessed 10 Nov 2016 2. Aljawarneh, S.A., Vangipuram, R., Puligadda, V.K., Vinjamuri, J.: G-SPAMINE: an approach to discover temporal association patterns and trends in internet of things. Future Gener. Comput. Syst. 74, 430–443 (2017)
3. Huang, C.-L., Liao, W.-C.: A vision-based vehicle identification system. In: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, vol. 4, pp. 364–367 (2004) 4. Kanhere, N.K.: Vision-based detection tracking and classification of vehicles using stable features with automatic camera calibration, p. 105 (2008) 5. Wang, Y., Ban, X., Wang, H., Wu, D., Wang, H., Yang, S., Liu, S., Lai, J.: Detection and classification of moving vehicle from video using multiple spatio-temporal features, recent advances in video coding and security. IEEE Access 7, 80287–80299 (2019) 6. Tsai, C.C., Tseng, C.K., Tang, H.C., Guo, J.I.: Vehicle detection and classification based on deep neural network for intelligent transportation applications. In: APSIPA Annual Summit and Conference 2018. IEEE (2018) 7. Velazquez-Pupo, R., Sierra-Romero, A., Torres-Roman, D., Shkvarko, Y.V., Santiago-Paz, J., Gómez-Gutiérrez, D., Robles-Valdez, D., Hermosillo-Reynoso, F., Romero-Delgado, M.: Vehicle detection with occlusion handling, tracking, and OC-SVM classification: a high performance vision-based system. Sensors 18, 374 (2018) 8. Murugan, V., Vijaykumar, V.R.: Automatic moving vehicle detection and classification based on artificial neural fuzzy inference system. Wirel. Pers. Commun. 100, 745–766 (2018) 9. Arinaldi, A., Pradana, J.A., Gurusinga, A.A.: Detection and classification of vehicles for traffic video analytics. In: INNS Conference on Big Data and Deep Learning, Procedia Computer Science, vol. 144, pp. 259–268 (2018) 10. Audebert, N., Le Saux, B., Lefèvre, S.: Segment-before-detect: vehicle detection and classification through semantic segmentation of aerial images. Remote Sens. 9, 368 (2017) 11. Seenouvong, N., Watchareeruetai, U., Nuthong, C.: Vehicle detection and classification system based on virtual detection zone. In: International Joint Conference on Computer Science and Software Engineering (JCSSE) (2016) 12. Dong, Z., Wu, Y., Pei, M., Jia, Y.: Vehicle type classification using a semisupervised convolutional neural network. IEEE Trans. Intell. Transp. Syst. 16(4), 2247–2256 (2015) 13. Banu, S., Vasuki, P.: Video based vehicle detection using morphological operation and hog feature extraction. ARPN J. Eng. Appl. Sci. 10(4), 1866–1871 (2015) 14. Sivaraman, S., Trivedi, M.M.: Looking at vehicles on the road: a survey of vision-based vehicle detection, tracking, and behavior analysis. IEEE Trans. Intell. Transp. Syst. 14(4), 1773–1795 (2013) 15. Tian, B., Morris, B.T., Tang, M., Liu, Y., Yao, Y., Gou, C., Shen, D., Tang, S.: Hierarchical and networked vehicle surveillance in ITS: a survey. IEEE Trans. Intell. Transp. Syst. 16(2), 557–580 (2015) 16. Li, Q.L., He, J.F.: Vehicles detection based on three frame difference method and cross-entropy threshold method. Comput. Eng. 37(4), 172–174 (2011) 17. Gupte, S., Masoud, O., Martin, R.F.K., Papanikolopoulos, N.P.: Detection and classification of vehicles. IEEE Trans. Intell. Transp. Syst. 3(1), 37–47 (2002) 18. Ottlik, A., Nagel, H.-H.: Initialization of model-based vehicle tracking in video sequences of inner-city intersections. Int. J. Comput. Vis. 80(2), 211–225 (2008) 19. Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Detecting moving objects, ghosts, and shadows in video streams. IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1337–1342 (2003) 20. Huang, C.L., Liao, W.-C.: A vision-based vehicle identification system. In: Proceedings of International Conference on Pattern Recognition, vol. 4, pp. 364–367 (2004) 21. 
Chandran, R.K., Raman, N.: A review on video-based techniques for vehicle detection, tracking and behavior understanding. Int. J. Adv. Comput. Electron. Eng. 02(05), 07 (2017) 22. Gao, T., Liu, Z.G., Gao, W.C., Zhang, J.: Moving vehicle tracking based on SIFT active particle choosing. In: Advances in Neuro-Information Processing, pp. 695–702 (2009)
23. Yousef, K.M.A., Al-Tabanjah, M., Hudaib, E., Ikrai, M.: SIFT based automatic number plate recognition. In: Proceedings of IEEE 6th International Conference on Information and Communication Systems (ICICS), pp. 124–129 (2015) 24. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886–893 (2005) 25. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, p. I-511 (2001) 26. Lin, L., Wu, T., Porway, J., Xu, Z.: A stochastic graph grammar for compositional object representation and recognition. Pattern Recogn. 42(7), 1297–1307 (2009) 27. Yilmaz, A.A., Güzel, M.S., Skerbeyli, I., Bostanci, E.: A vehicle detection approach using deep learning methodologies. In: International Conference on Theoretical and Applied Computer Science and Engineering (2018) 28. Ershadi, N.Y., Menéndez, J.M., Jiménez, D.: Robust vehicle detection in different weather conditions: using MIPM. PLoS ONE 13(3), e0191355 (2018) 29. El-Khoreby, M.A., Abu-Bakar, S.A.R.: Vehicle detection and counting for complex weather conditions. In: IEEE International Conference on Signal and Image Processing Applications, September 2017 30. Dai, X., Yuan, X., Zhang, J., Zhang, L.: Improving the performance of vehicle detection system in bad weathers. In: IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), October 2016 31. Gupte, S., Masoud, O., Martin, R.F.K., Papanikolopoulos, N.P.: Detection and classification of vehicles. IEEE Trans. Intell. Transport. Syst. 3(1), 37–47 (2002) 32. Gupte, S., Masoud, O., Papanikolopoulos, N.P.: Vision-based vehicle classification. In: Proceedings of the IEEE 2000 Conference on Intelligent Transportation Systems, pp. 46–51 (2000) 33. Buch, N., Orwell, J., Velastin, S.A.: 3D extended histogram of oriented gradients (3DHOG) for classification of road users in urban scenes. In: Proceedings of the British Machine Vision Conference, London, UK, 7–10 September 2009, pp. 1–11 (2009) 34. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886–893 (2005) 35. Morris, B.T., Trivedi, M.M.: Learning, modeling, and classification of vehicle track patterns from live video. IEEE Trans. Intell. Transp. Syst. 9(3), 425–437 (2008) 36. Mammeri, A., Zhou, D., Boukerche, A., Almulla, M.: An efficient animal detection system for smart cars using cascaded classifiers. In: Proceedings of the IEEE International Conference on Communications (ICC 2014), pp. 1854–1859 (2014) 37. Zhang, L., Li, S.Z., Yuan, X., Xiang, S.: Real-time object classification in video surveillance based on appearance learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007) 38. Ma, X., Grimson, W.E.L.: Edge-based rich representation for vehicle classification. In: Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV 2005), vol. 2, pp. 1185–1192 (2005) 39. Mithun, N.C., Rashid, N.U., Rahman, S.M.M.: Detection and classification of vehicles from video using multiple time-spatial images. IEEE Trans. Intell. Transp. Syst. 13(3), 1215–1225 (2012) 40. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. 
Pearsons, Singapore (2002) 41. Zhang, F., Li, C., Yang, F.: Vehicle detection in urban traffic surveillance images based on convolutional neural networks with feature concatenation. Sensors 19, 594 (2019)
42. Hsieh, J.W., Chen, L.C., Chen, D.Y.: Symmetrical SURF and its applications to vehicle detection and vehicle make and model recognition. IEEE Trans. Intell. Transp. Syst. 15(1), 6–20 (2014) 43. http://vbie.eic.nctu.edu.tw/en/introduction 44. Vehicle Detection Data Set, Matlab Official Web Site. https://www.mathworks.com/ 45. Standford Vehicle Data Set (2018). http://ai.stanford.edu/~jkrause/cars/car_dataset.Html 46. GRAM Road-Traffic Monitoring. http://agamenon.tsc.uah.es/Personales/rlopez/data/rtm/ 47. M6 Motorway Traffic—Youtube. https://www.youtube.com/watch?v=PNCJQkvALVc 48. Roecker, M.N., Costa, Y.M.G., Almeida, J.L.R., Matsushita, G.H.G.: Automatic vehicle type classification with convolutional neural networks. In: International Conference on Systems, Signals and Image Processing (IWSSIP) (2018)
Methods for Automatic Gait Recognition: A Review P. Sankara Rao2 , Gupteswar Sahu1 , and Priyadarsan Parida2(B) 1 Department of Electronics and Communication Engineering, Raghu Engineering College,
Visakhapatnam 531162, India [email protected] 2 Department of Electronics and Communication Engineering, GIET University, Gunupur 765022, Odisha, India [email protected], [email protected]
Abstract. With the advancement of technology and the demanding requirements of security-related applications, automatic human identification at a far distance has generated a considerable amount of interest in the research community. Gait recognition has the important advantage of identifying people from a long distance with minimal cooperation from the subject. In recent years, gait recognition has emerged as one of the most notable biometric technologies for the identification of individuals by the manner in which they walk. One of the most commonly faced problems in gait recognition is the presence of various intra-class variations, such as clothing and carrying conditions, which drastically reduce the recognition accuracy. The main intent of this article is to present an extensive review of existing contemporary gait recognition methods. Keywords: Gait recognition · Silhouette extraction · Feature extraction · Classification · Covariate factors · CASIA Gait Dataset
1 Introduction

The necessity of identifying people arises from the increasing number of theft and terrorist threats, and the avoidance of such events has become a challenging task for police and security forces around the world. The primary challenge in identifying and catching culprits lies in collecting and processing the data without the subject's cooperation. Recently, many advanced biometric systems have been developed for human authentication using biometric traits such as the face, iris, and fingerprint. Compared with these techniques, gait recognition has emerged as one of the most unobtrusive and non-intrusive biometric techniques. Gait recognition is an effective human identification method that uses an individual's walking style to recognize a person [1]. The foremost advantage of gait recognition techniques is the capability to collect data from a subject at a far distance with minimal support from the subject. However, in practical applications like security and surveillance, gait recognition still faces challenging problems due to the existence of covariate factors such as the view angle of the camera, clothing, carrying
condition, walking surfaces, walking speed, etc. The existence of these covariate factors affects the recognition performance of gait analysis [2]. In general, gait recognition systems are classified into two main groups: i) model-based methods and ii) model-free methods. The model-based methods are robust to occlusion and noise but require a higher computational expense [3–5], whereas model-free methods are easy to implement and require a lower computational expense [7–10]. In the literature, many researchers have applied one of the above-mentioned methods for human identification. The main aim of this review paper is to provide an inclusive review of recent techniques developed for human identification through gait recognition.

The structure of this review paper is outlined as follows. The important steps of a gait recognition system are described in Sect. 2. Brief reviews of the model-based methods are discussed in Sect. 3, and Sect. 4 presents the schemes based on model-free methods used in gait recognition systems. Section 5 summarizes the challenges related to gait recognition performance. Finally, the available human gait databases and the conclusions are covered in Sects. 6 and 7, respectively.
2 Important Steps in a Gait Recognition System The important steps used in a general gait recognition system are depicted in Fig. 1.
Fig. 1. A general block diagram of a gait recognition system: input image → feature extraction → feature selection → classification → identification, supported by a database
2.1 Feature Extraction

The extraction and selection of pertinent features are considered to be the most vital part of any biometric recognition system. There are several characteristics of gait that might be considered as imperative recognition features. The relevant features extracted from the gait images are used to compare the similarity between gait sequences.
Hence, the selection of the most appropriate features is of utmost importance to achieve maximum recognition accuracy. The features used in gait recognition are categorized into two classes: (i) static features and (ii) dynamic features. The static features reflect body parameters such as height and build, while the dynamic features reflect the joint-angle trajectories of the main limbs [6]. Direct extraction of features from segmented video sequences is a tedious task for classification. The most popular methods used for feature extraction in gait recognition are principal component analysis (PCA), linear discriminant analysis (LDA), and their combinations [11, 12].

In the literature, several studies used low-level information for gait recognition. One of them is silhouette extraction. In this process, the moving object is identified using an efficient background subtraction technique; the applied technique must be able to handle illumination variations, long-term scene changes and repetitive motions in the clutter [46]. In the background subtraction process, each frame is subtracted from a background model of the particular image sequence; if a frame pixel value differs from the corresponding background pixel value, the pixel is marked as part of the silhouette region [8].

2.2 Feature Selection

The selection of pertinent features is considered to be a vital step in a gait recognition system. In this step, the most suitable feature subset is selected to achieve maximum classification accuracy. Proper selection of a technique can discover the appropriate features and remove redundant and inappropriate gait features. The important approaches for feature selection are [13]: (i) filter-based, (ii) wrapper-based, and (iii) embedded-based approaches.

2.3 Classification

In the classification process, the input video frames are compared with the sequences stored in the database. To classify the data for gait recognition, diverse classifiers are available, ranging from the traditional nearest neighbour to the latest deep neural networks, including k-Nearest Neighbour (k-NN), Hidden Markov Models (HMM) and Support Vector Machines (SVM) [14–18].
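As an illustration of a typical feature transformation and classification chain (PCA followed by LDA and a nearest-neighbour classifier), a minimal scikit-learn sketch is given below. The number of components and neighbours are illustrative choices, and the specific combination varies across the reviewed methods [11, 12].

```python
# Illustrative gait-feature pipeline: PCA -> LDA -> nearest-neighbour classifier.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def build_gait_classifier(n_components=50, n_neighbors=1):
    # Input X is expected to be an (n_sequences, n_features) matrix of flattened
    # gait features (e.g. flattened GEIs); y holds the subject identities.
    return make_pipeline(PCA(n_components=n_components),
                         LinearDiscriminantAnalysis(),
                         KNeighborsClassifier(n_neighbors=n_neighbors))

# Usage (illustrative): clf = build_gait_classifier(); clf.fit(X_train, y_train)
```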
3 Model-Based Methods

In gait recognition, the model-based method analyzes the human body structure. This approach uses a model-fitting technique to extract kinematic parameters such as
joint trajectories, the center of mass and the positions of joint centers. The model-based methods are robust to scale and view variation. Bobick et al. [19] suggested a gait recognition method based on static and activity-specific constraints from body motion, and Tanawongsuwan et al. [20] considered trajectories of joint angles for gait analysis. Stride and cadence characteristics are extracted from the walking style of a subject by BenAbdelkader et al. [21]. Dockstader et al. [22] proposed a three-dimensional model for the extraction of different joint angles; Zhang et al. [23] proposed the depiction and matching of articulated shapes with a Bayesian graphical model; and, capturing information from silhouettes, Lu et al. [24] implemented a full-body layered deformable model (LDM). In [25], the authors proposed an automated human gait recognition method using neural networks. Tafazzoli et al. [26] proposed a human gait recognition method based on the movements of articulated parts of the body. To characterize the dynamic gait part, Zeng et al. [27] exploited the joint angles of the lower-limb side silhouette, while Bouchrika et al. [28] analysed the angular motion of the hip and knee during the gait cycle of the subject. Yeoh et al. [29] proposed a new technique to extract five joint angular trajectories, and spatio-temporal and kinematic features are combined by Deng et al. [30]. Khamsemanan et al. [31] suggested how to exploit posture-based features, and Kim et al. [32] proposed a human body model using multiple depth cameras to find body joint angles for gait analysis (Table 1).

Table 1. Summary of the model-based approach with different features and classifiers
Year | Reference                  | Features                                | Classification
2001 | Bobick et al. [19]         | Stride, length and width                | Nearest-neighbour
2001 | Tanawongsuwan et al. [20]  | Trajectories of joint angle             | Nearest-neighbour
2002 | Benabdelkader et al. [21]  | Cadence and stride                      | Bayesian
2003 | Dockstader et al. [22]     | Different joint angles                  | Nearest-neighbour
2004 | Zhang et al. [23]          | Non-rigid 2D body contour model         | Chain-model
2007 | Lu et al. [24]             | Full-body LDM                           | DTW
2008 | Yoo et al. [25]            | Two-dimensional model                   | Neural networks
2010 | Tafazzoli et al. [26]      | Anatomy method                          | Nearest-neighbour
2014 | Zeng et al. [27]           | Joint angles of lower limb              | Radial basis function neural network
2016 | Bouchrika et al. [28]      | Angular motion of the knee and hip      | Nearest-neighbour
2017 | Yeoh et al. [29]           | 5-combined angular trajectories         | Support vector machine
2017 | Deng et al. [30]           | Trajectories of joint-angles and width  | Nearest-neighbour
2018 | Khamsemanan et al. [31]    | Postures                                | Nearest-neighbour
2018 | Kim W. et al. [32]         | Body joints                             | Nearest-neighbour

*LDM: layered deformable model, DTW: dynamic time warping
4 Model-Free Methods

In a model-free gait recognition method, the features of the gait are extracted from the walking sequences of a subject. In this approach, the shape of the silhouettes is mostly considered for the analysis of gait information.

4.1 Gait Energy Image (GEI)

In the model-free approach, a widely used gait representation is the GEI [47], also called the average silhouette. The GEI is generated by averaging all binary silhouette gait images, taken at the same view angle, over one full gait cycle. It can be expressed as:

G(m, n) = (1/N) · Σ_{k=1..N} Z_k(m, n)
where N denotes the number of silhouette frames in a complete gait sequence, k is the frame number, Zk(m, n) is the binary image at frame k, and (m, n) represents the coordinates of the pixels in a particular frame. The main advantage of the GEI comes from its mathematical simplicity and low computational cost; hence, the GEI has been widely used as a pertinent feature to differentiate individual gait patterns. Other variants of gait images, such as the Gait Gaussian Image (GGI), Gait Entropy Image (GEnI), Flow Histogram Energy Image (FHEI), and Gradient Histogram Gaussian Image (GHGI), have been proposed to improve the overall recognition accuracy [48–51].

Han et al. [33] suggested the gait energy image (GEI) technique for individual gait recognition, and Tao et al. [34] extracted a set of discriminative features from GEI templates by using Gabor filters. Zhang et al. [35] proposed a low-resolution gait recognition technique, Lai et al. [36] presented a matrix-based sparse bilinear discriminant analysis, and Guan et al. [37] proposed an effective ensemble classifier method. Rida et al. [38] performed feature selection through a statistical dependency process and used several projection methods to design robust model-free gait recognition. Wang et al. [39] combined GEI features with Gabor wavelet features using two-dimensional principal component analysis. Maryam Babaee et al. [40] suggested gait recognition from an incomplete gait cycle using a fully convolutional neural network (FCNN). Chi Xu et al. [41] provided a baseline and performance estimation for cross-age gait identification and age-group classification using free-form deformation (FFD). Zhang et al. [42] suggested a joint CNN-based gait biometric method for gender and age prediction (Table 2).
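A minimal sketch of the GEI computation defined above is given below, assuming the silhouettes have already been size-normalised and centre-aligned.

```python
# Computing a Gait Energy Image (GEI) as the average of aligned binary
# silhouettes over one gait cycle (illustrative).
import numpy as np

def gait_energy_image(silhouettes):
    """silhouettes: array of shape (N, H, W) with binary values in {0, 1},
    all frames size-normalised and centre-aligned beforehand."""
    silhouettes = np.asarray(silhouettes, dtype=np.float64)
    return silhouettes.mean(axis=0)   # G(m, n) = (1/N) * sum_k Z_k(m, n)
```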
5 Gait Recognition Challenges

In real-world applications, the design of an efficient gait recognition algorithm has been a challenging task for researchers because it is largely affected by external parameters such as noise, changing illumination conditions, occlusions and covariate conditions. Amongst the different challenges, covariate conditions are the most challenging issues
62
P. Sankara Rao et al.
Table 2. Summary of the GEI-based model-free approach, consisting of different types of features, transformation methods, and classification techniques.
Year | References | Features | Transformation method | Classification method
2006 | Han et al. [33] | Gait energy image | PCA and LDA | Nearest-neighbour
2007 | Tao et al. [34] | Gait energy image + Gabor filters | GTDA and LDA | Nearest-neighbour
2010 | Zhang et al. [35] | Gait energy image | SR and MTP | Nearest-neighbour
2014 | Lai et al. [36] | Gait energy image | SBDA | Nearest-neighbour
2015 | Guan et al. [37] | Gait energy image | 2DPCA and 2DLDA | Nearest-neighbour
2016 | Rida et al. [38] | Gait energy image | SD + GLPP | Nearest-neighbour
2017 | Wang et al. [39] | Gait energy image + Gabor wavelets | 2DPCA | Support vector machine
2018 | Babaee et al. [40] | Gait energy image | – | Convolutional neural networks
2018 | Xu C. et al. [41] | Gait energy image | FFD | Support vector machine
2019 | Zhang Y. et al. [42] | Gait energy image | – | Joint convolutional neural networks
*GTDA: general tensor discriminant analysis, SR: super-resolution, MTP: multi-linear tensor-based learning without tuning parameters, SBDA: sparse bilinear discriminant analysis, SD: statistical dependency, GLPP: globality-locality preserving projections.
in this area. They significantly change the individual's appearance, for example through wearing different types of clothes. In addition to this, the camera view angle also severely affects the overall gait recognition performance. The aforementioned issues have gained considerable attention from researchers. Several proposed methods related to gait recognition have been tested without explicitly considering the appropriate gait features, which may affect the recognition accuracy of the gait recognition system. The primary design challenge of any gait recognition system is to select the most significant and optimal gait features that are invariant to covariate conditions, so as to improve the recognition efficiency of the system [43].
6 Existing Gait Databases
Many datasets, such as the USF dataset, the OU-ISIR gait dataset, the CMU Motion of Body (MoBo) database, the CASIA gait dataset, the TUM-IITKGP dataset and the AVA Multi-View dataset (AVAMVG), are publicly available [44]. Out of these databases, the CASIA gait dataset [45] is widely used for the evaluation of gait recognition algorithms.
7 Conclusion
Human authentication through gait analysis is an emerging field of biometric technology, with a wide range of real-time applications. It is of keen interest in modern medicine, security agencies, military organizations, the sports industry, bionic prosthetics and social analytics. Although significant research work has been carried out in this area, there are still a few issues to be addressed. Gait recognition performance is often affected by various intra-class variations. In this review paper, we presented recent approaches used in gait recognition and highlighted the model-based and model-free approaches.
References 1. Hu, M., Wang, Y., Zhang, Z., Zhang, D., Little, J.J.: Incremental learning for video-based gait recognition with LBP flow. IEEE Trans. Cybern. 43(1), 77–89 (2013) 2. Yu. S., Tan D., Tan T.: A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. In: IEEE International Conference on Pattern Recognition, vol. 4, pp. 441–444 (2006) 3. Murat, E.: Human identification using gait. Turk J. Elec. Eng. 14(2), 267–291 (2006) 4. Yam, C.Y., Nixon, M.S., Carter, J.N.: Automated person recognition by walking and running via model-based approaches. Pattern Recogn. 37(5), 1057–1072 (2004) 5. Shakhnarovich G., Lee L., Darrell T.: Integrated face and gait recognition from multiple views. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Hawaii, USA, pp. I439–I446 (2001) 6. Wang, L., Ning, H., Tan, T., Hu, W.: Fusion of static and dynamic body biometrics for gait recognition. IEEE Trans. Circ. Syst. Video Technol. 14(2), 149–158 (2004) 7. Han, J., Bhanu, B.: Statistical feature fusion for gait-based human recognition. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington DC, USA, pp. II842–II847 (2004) 8. Wang, L., Tan, T., Ning, H., Hu, W.: Silhouette analysis-based gait recognition for human identification. IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1505–1518 (2003) 9. Yu, S., Wang, L., Hu, W., Tan, T.: Gait analysis for human identification in the frequency domain. In: Proceedings 3rd International Conference on Image and Graphics, Hong Kong, China, pp. 282–285 (2004) 10. Kochhar, A., Gupta, D., Hanmandlu, M., Vasikarla, S.: Silhouette based gait recognition based on the area features using both model-free and model-based approaches. In: Proceedings of IEEE International Conference on Technologies for Homeland Security (HST) (2013) 11. Cheng, Q., Fu, B., Chen, H.: Gait recognition based on PCA and LDA. In: Proceedings of the Second Symposium International Computer Science and Computational Technology, ISCSCT ‘09, Huangshan, P. R. China, pp. 26–28, 124–127 (2009) 12. Yaacob, N.I., Tahir, N.M.: Feature selection for gait recognition. In: Proceedings of the IEEE Symposium on Humanities, Science and Engineering Research, pp. 379–383 (2002) 13. Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artif. Intell. 97(1–2), 273–324 (1997) 14. Alex, K., Sutskever, I., Geoffrey, E.H.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012) 15. Aravind, S., Amit, R., Rama, R.: A hidden Markov model-based framework for recognition of humans from gait sequences. In: Proceedings of the 2003 IEEE International Conference on Image Processing, vol. 3, p. II-93-6 (2003)
16. Cheng, M.H., Ho, M.F., Huang, C.L.: Gait analysis for human identification through manifold learning and HMM. Pattern Recogn. 41, 2541–2553 (2008) 17. Chen, C., Liang, J., Zhao, H., Hu, H., Tian, J.: Factorial HMM and parallel HMM for gait recognition. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 39, 114–123 (2009) 18. Zongyi, L., Sarkar, S.: Improved gait recognition by gait dynamics normalization. IEEE Trans. Pattern Anal. Mach. Intell. 28, 863–876 (2006) 19. Bobick, A.E., Johnson, A. Y.: Gait recognition using static, activity-specific parameters. IEEE Comput. Vis. Pattern Recogn., I-423 (2001) 20. Tanawongsuwan, R., Bobick, A.: Gait recognition from time-normalized joint angle trajectories in the walking plane. IEEE Comput. Vis. Pattern Recogn. 2, II-726 (2001) 21. BenAbdelkader, C., Cutler, R., Davis, L.: Stride and cadence as a biometric in automatic person identification and verification. IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, pp. 372–377 (2002) 22. Dockstader, S.L., Berg, M.J., Tekalp, A.M.: Stochastic kinematic modeling and feature extraction for gait analysis. IEEE Trans. Image Process. 12(8), 962–976 (2003) 23. Zhang, J., Collins, R., Liu, Y.: Representation and matching of articulated shapes. IEEE Comput. Vis. Pattern Recogn. 2, II-342 (2004) 24. Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N.: A full-body layered deformable model for automatic model-based gait recognition. EURASIP J. Adv. Sig. Process. 2008(1), 1–13 (2008) 25. Yoo, J.H., Hwang, D., Moon, K.Y., et al.: Automated human recognition by gait using a neural network. In: Workshops on Image Processing Theory, Tools and Applications, Sousse, Tunisia, 2008, pp. 1–6 (2008) 26. Tafazzoli, F., Safabakhsh, R.: Model-based human gait recognition using leg and arm movements. Eng. Appl. Artif. Intell. 23(8), 1237–1246 (2010) 27. Zeng, W., Wang, C., Li, Y.: Model-based human gait recognition via deterministic learning. Cogn. Comput. 6(2), 218–229 (2014) 28. Bouchrika, I., Carter, J.N., Nixon, M.S.: Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras. Mult. Tools Appl. 75(2), 1201–1221 (2016) 29. Yeoh, T.W., Daolio, F., Aguirre, H.E., et al.: On the effectiveness of feature selection methods for gait classification under different covariate factors. Appl. Soft Comput. 61, 42–57 (2017) 30. Deng, M., Wang, C., Cheng, F., et al.: Fusion of spatial-temporal and kinematic features for gait recognition with deterministic learning. Pattern Recogn. 67, 186–200 (2017) 31. Khamsemanan, N., Nattee, C., Jianwattanapaisarn, N.: Human identification from freestyle walks using posture-based gait feature. IEEE Trans. Inf. Forensics Sec. 13(1), 119–128 (2018) 32. Kim, W., Kim, Y.: Human body model using multiple depth camera for gait analysis. IEEE Trans. SNPD (2018) 33. Han, J., Bhanu, B.: Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell. 28(2), 316–322 (2006) 34. Tao, D., Li, X., Wu, X., et al.: General tensor discriminant analysis and Gabor features for gait recognition. IEEE Trans. Pattern Anal. Mach. Intell. 29(10), 1700–1715 (2007) 35. Zhang, J., Pu, J., Chen, C., et al.: Low-resolution gait recognition. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 40(4), 986–996 (2010) 36. Lai, Z., Xu, Y., Jin, Z., et al.: Human gait recognition via sparse discriminant projection learning. IEEE Trans. Circuits Syst. Video Technol. 24(10), 1651–1662 (2014) 37. 
Guan, Y., Li, C.T., Roli, F.: On reducing the effect of covariate factors in gait recognition: a classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 37(7), 1521–1528 (2015) 38. Rida, I., Boubchir, L., AlMaadeed, N., et al.: Robust model-free gait recognition by statistical dependency feature selection and globality-locality preserving projections. In: 2016 International Conference on Telecommunications and Signal Processing, Vienna, Austria, June 2016, pp. 652–655 (2016)
39. Wang, X., Wang, J., Yan, K.: Gait recognition based on Gabor wavelets and (2d) 2pca. Multimed. Tools Appl. 2017, 1–17 (2017) 40. Babaee, M., Li, L., Rigoll, G.: Gait recognition from incomplete gait cycle. IEEE Trans. ICIP (2018) 41. Xu, C., Makihara, Y., Yagi, Y., et al.: Gait-based age progression/regression: a baseline and performance evaluation by age group classification and cross-age gait identification. Mach. Vis. Appl. 30, 629–644 (2019) 42. Zhang, Y., Huang, Y., Wang, L., Yu, S.: A comprehensive study on gait biometrics using a joint CNN-based method. Pattern Recogn. 93, 228–236 (2019) 43. Hossain, M.A., Makihara, Y., Wang, J., Yagi, Y.: Clothing-invariant gait identification using part based clothing categorization and adaptive weight control. Pattern Recogn. 43(6), 2281– 2291 (2010) 44. Wang J., She M., Nahavandi S., Kouzani A.: A review of vision-based gait recognition methods for human identification. In: IEEE Transaction on Digital Image Computing: Techniques and Applications (2010) 45. www.cbsr.ia.ac.cn/english/Gait%20Databases.asp 46. Shaikh, H.S., Saeed, K., Chaki, N.: Moving Object Detection Using Background Subtraction. Springer Briefs in Computer Science. Springer, Cham (2014) 47. Wattanapanich, C., Wei, H.: Investigation of new gait representations for improving gait recognition. Int. Sch. Sci. Res. Innov. 11(12), 1272–1277 (2017) 48. Bashir, K., Tao, X., Shaogang, G.: Gait recognition using Gait Entropy Image. In: 3rd International Conference on Crime Detection and Prevention, ICDP 2009, pp. 1–6 (2009) 49. Arora, P., Srivastava, S.: Gait recognition using gait Gaussian image. In: 2nd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 791–794 (2015) 50. Yang, Y., Tu, D., Li, G.: Gait recognition using flow histogram energy image. In: 22nd International Conference on Pattern Recognition (ICPR), pp. 444–449 (2014) 51. Arora, P., Srivastava, S., Arora, K., Bareja, S.: Improved gait recognition using gradient histogram Gaussian image. Procedia Comput. Sci. 58, 408–413 (2015)
Comparative Performance Exploration and Prediction of Fibrosis, Malign Lymph, Metastases, Normal Lymphogram Using Machine Learning Method Subrato Bharati1(B) , Md. Robiul Alam Robel3 , Mohammad Atikur Rahman1 , Prajoy Podder2 , and Niketa Gandhi4 1 Department of EEE, Ranada Prasad Shaha University, Narayanganj 1400, Bangladesh
[email protected], [email protected] 2 Institute of ICT, Bangladesh University of Engineering and Technology,
Dhaka 1000, Bangladesh [email protected] 3 Department of CSE, Cumilla University, Cumilla, Bangladesh [email protected] 4 University of Mumbai, Maharashtra, India [email protected]
Abstract. The main focus of this paper is to predict clusters of patient records such as Fibrosis, Malign Lymph, Metastases and Normal Lymphogram obtained via different classification techniques, and to compare the performance of the classification algorithms based on parameters such as precision, recall, F1 value (F-measure), classification accuracy (CA) and area under the curve (AUC). The Lymphographic dataset consists of 18 predictor attributes. Experimental results show that the neural network provides the highest classification accuracy, precision and F1 value among the five classification algorithms considered, the others being KNN, SVM (support vector machine) learner, random forest learner and AdaBoost. This permits the classifier to precisely perform multi-class medical data classification. Confusion matrices have also been described for the above-mentioned classifiers using the lymphographic dataset. This paper also visualizes the ranking of the Lymphogram dataset using scoring methods and the analysis of scatter, box and distribution plots. Keywords: Data mining · Neural network · KNN · SVM (Support vector machine) learner · Random forest learner · Ada boost
1 Introduction
Lymphography is normally a medical imaging technique. In this technique, a radiocontrast agent is injected. After the injection, an X-ray picture is taken for the purpose of visualizing the configuration of the lymphatic system. The lymphatic system comprises
lymph nodes, lymph ducts, lymphatic tissues, lymph capillaries and lymph vessels. Lymphangiography can be defined as the same method used only to visualize the lymph vessels [1]. The lymphatic system is a vital part of the immune system, removing tissue fluid from the interstitial spaces. Salt, glucose, calcium, potassium and other minerals are contained in tissue fluid. This fluid is useful for monitoring the glucose levels of diabetic patients. It not only absorbs but also transports fat-soluble vitamins such as vitamins A, D and E, and it transports leukocytes and antigen-presenting cells [2, 10]. Various medical imaging systems have been adopted for the exploration of the lymphatics as well as the position of the lymph glands. The present state of the lymph nodes, which contain WBCs fighting against infection and various diseases, together with the data obtained from the lymphographic method, can determine the classification of the investigated diagnosis. The study of the lymph nodes is essential in the diagnosis and prognosis of cancer as well as in finding the treatment for cancer-affected patients [3, 11]. Data mining [4] refers to the procedure of discovering and differentiating relevant, important and critical information from large databases. Machine learning [5, 6] enables a system to automatically learn to recognize complex patterns and to make intelligent decisions based on the available data [7, 8, 12, 15, 16]. Several classification algorithms, namely KNN, support vector machine, random forest learner, neural network and AdaBoost, have been executed in order to evaluate the lymphography dataset. The overall accuracy and Kappa value have been calculated. The Orange data mining tool has been used for this purpose. Several researchers have worked with this dataset. Karabulut et al. [9] adopted multilayer perceptron (MLP), naïve Bayes and J48 decision tree classification algorithms with several datasets, including the lymph disease dataset, for the purpose of feature selection. Baati et al. [13] proposed a naïve Bayes style possibilistic classifier (NBSPC) in order to produce a diagnosis from the subjective and categorical medical evidence contained in the dataset. The difference between this paper and other research papers that address the same topic is that a comparative performance study of several classifier schemes is carried out using the Python-based Orange data mining package, and confusion matrices with producer accuracy and user accuracy are also illustrated, which produces better-organized results than other research works.
2 Flow Chart of the System
The data mining framework is observed to comprise two different segments, i.e. classification and clustering. The clustering framework attempts to discover groups in the patient records and then to identify the attribute clusters that contribute to the identification of a patient record with a target class. Feature selection followed by classification assists in assigning a suitable target class on behalf of the clinical outcomes. The proposed scheme, as shown in Fig. 1, creates a framework for data mining to identify classes and clusters of a particular patient record. Figure 2 shows the main workflow diagram in the environment of Orange, which is developed in Python.
Fig. 1. Proposed experimental method and design for classification and ranking
Fig. 2. Work flow graph in the environment of orange tools
3 Description of the Dataset
The original lymphography dataset is collected from the UCI machine learning repository [14], Institute of Oncology, Ljubljana, Yugoslavia. It is mainly a classification dataset. Normal find, Fibrosis, Malign lymph and Metastases are the values of the "Class" attribute. Normal, Arched, Deformed and Displaced are the four values of the "Lymphatics" attribute. There are also other attributes such as changes in node, defect in node, changes in lym, etc.
4 Result Analysis
4.1 Evaluation and Prediction Results
Table 1 illustrates the overall performance of the classifiers adopted on the UCI lymphographic dataset. From Table 1, it can be seen that the value of AUC (area under the curve) is 0.926 for the neural network. The highest precision and recall values are 0.883 and 0.878 respectively, achieved by the neural network, whereas the lowest precision and recall values are 0.809 and 0.818. The highest classification accuracy is 0.878. Tables 2, 3, 4, 5 and 6 illustrate the confusion matrix with producer accuracy and user accuracy for random forest, SVM learner, AdaBoost, neural network and KNN, respectively. The number of folds is 10. The overall accuracy is 88.514% for the neural network, which shows that the neural network gives better results compared to the other classifiers. Figure 3 illustrates the ranking of the Lymphogram dataset using scoring methods.
Table 1. Performance evaluation of different classification algorithms
Method | AUC | CA | F1 | Precision | Recall
KNN | 0.886 | 0.818 | 0.809 | 0.809 | 0.818
SVM learner | 0.930 | 0.858 | 0.858 | 0.859 | 0.858
Random forest learner | 0.911 | 0.831 | 0.821 | 0.824 | 0.831
Neural network | 0.926 | 0.878 | 0.880 | 0.883 | 0.878
Ada boost | 0.843 | 0.824 | 0.824 | 0.826 | 0.824
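The figures in Table 1 were produced with the Orange data mining tool. For readers without Orange, a roughly equivalent 10-fold cross-validated comparison can be sketched with scikit-learn as below; this is an assumption for illustration only — the classifier hyper-parameters are defaults, the stand-in data are random, and the resulting numbers will not exactly reproduce Table 1.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

def evaluate(models, X, y, folds=10):
    """Report 10-fold cross-validated CA, F1, precision and recall per model."""
    for name, model in models.items():
        pred = cross_val_predict(model, X, y, cv=folds)
        print(name,
              "CA=%.3f" % accuracy_score(y, pred),
              "F1=%.3f" % f1_score(y, pred, average="weighted"),
              "Precision=%.3f" % precision_score(y, pred, average="weighted", zero_division=0),
              "Recall=%.3f" % recall_score(y, pred, average="weighted"))

models = {
    "KNN": KNeighborsClassifier(),
    "SVM learner": SVC(),
    "Random forest learner": RandomForestClassifier(),
    "Neural network": MLPClassifier(max_iter=2000),
    "Ada boost": AdaBoostClassifier(),
}

# X, y would hold the 18 predictor attributes and the 4-valued class of the
# UCI Lymphography data; random stand-in values are used here.
rng = np.random.default_rng(0)
X, y = rng.random((148, 18)), rng.integers(0, 4, 148)
evaluate(models, X, y)
```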
Table 2. Confusion matrix with producer accuracy and user accuracy for random forest when the number of folds is 10
 | Fibrosis | Malign Lymph | Metastases | Normal | Classification overall | Producer accuracy (precision)
Fibrosis | 1 | 3 | 0 | 0 | 4 | 25%
Malign Lymph | 0 | 45 | 16 | 0 | 61 | 73.77%
Metastases | 0 | 7 | 74 | 0 | 81 | 91.358%
Normal | 0 | 1 | 1 | 0 | 2 | 0%
Truth overall | 1 | 56 | 91 | 0 | 148 |
User accuracy (recall) | 100% | 80.357% | 81.319% | No data |
Overall Accuracy: 81.081%, Kappa: 0.627
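The producer accuracy (per-class precision), user accuracy (per-class recall), overall accuracy and kappa reported in Tables 2–6 can be recomputed directly from the confusion-matrix counts. The helper below is an illustrative sketch (not part of the paper's Orange workflow); it is shown with the random forest counts of Table 2.

```python
import numpy as np

def confusion_summary(cm):
    """Producer/user accuracy, overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i][j] holds the number of samples classified as class i whose true class is j
    (rows = classification, columns = truth, as in Tables 2-6).
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    row_sums, col_sums = cm.sum(axis=1), cm.sum(axis=0)   # classification / truth overall
    diag = np.diag(cm).copy()
    producer = np.divide(diag, row_sums, out=np.zeros_like(diag), where=row_sums > 0)
    user = np.divide(diag, col_sums, out=np.zeros_like(diag), where=col_sums > 0)
    overall = diag.sum() / total
    chance = (row_sums * col_sums).sum() / total ** 2      # expected agreement by chance
    kappa = (overall - chance) / (1.0 - chance)
    return producer, user, overall, kappa

# Random forest counts from Table 2 (classes: Fibrosis, Malign Lymph, Metastases, Normal)
cm_rf = [[1, 3, 0, 0],
         [0, 45, 16, 0],
         [0, 7, 74, 0],
         [0, 1, 1, 0]]
producer, user, overall, kappa = confusion_summary(cm_rf)
print(overall, kappa)   # approximately 0.811 and 0.627, as reported in Table 2
```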
Table 3. Confusion matrix with producer accuracy and user accuracy for SVM learner when the number of folds is 10
 | Fibrosis | Malign Lymph | Metastases | Normal | Classification overall | Producer accuracy (precision)
Fibrosis | 3 | 1 | 0 | 0 | 4 | 75%
Malign Lymph | 0 | 51 | 10 | 0 | 61 | 83.60%
Metastases | 0 | 10 | 71 | 0 | 81 | 87.65%
Normal | 0 | 0 | 0 | 2 | 2 | 100%
Truth overall | 3 | 62 | 81 | 2 | 148 |
User accuracy (recall) | 100% | 82.258% | 87.65% | 100% |
Overall Accuracy: 85.811%, Kappa: 0.731
Table 4. Confusion matrix with producer accuracy and user accuracy for AdaBoost when the number of folds is 10
 | Fibrosis | Malign Lymph | Metastases | Normal | Classification overall | Producer accuracy (precision)
Fibrosis | 1 | 2 | 0 | 1 | 4 | 25%
Malign Lymph | 1 | 51 | 9 | 0 | 61 | 83.60%
Metastases | 0 | 9 | 70 | 2 | 81 | 86.42%
Normal | 0 | 0 | 2 | 0 | 2 | 0%
Truth overall | 2 | 62 | 81 | 3 | 148 |
User accuracy (recall) | 50% | 82.258% | 86.42% | 0% |
Overall Accuracy: 82.432%, Kappa: 0.667
4.2 Classification Tree
Figure 4 shows a classification tree. It can provide a number of rules leading to the accurate classification of the lymphogram. This tree is built through a procedure known as binary recursive partitioning: the classification tree is grown by iteratively splitting the data into partitions and then splitting each partition further along its branches.
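A minimal sketch of this binary recursive partitioning idea is given below using scikit-learn's decision tree as a stand-in (the paper itself uses Orange's tree viewer, and the attribute names and data here are hypothetical).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical stand-in for the 18 lymphographic predictor attributes and 4 classes.
rng = np.random.default_rng(1)
X, y = rng.random((148, 18)), rng.integers(0, 4, 148)

# The tree is grown by repeatedly choosing the attribute/threshold split that best
# separates the classes, then splitting each resulting partition again until a
# stopping criterion (here, a maximum depth) is reached.
tree = DecisionTreeClassifier(criterion="gini", max_depth=4).fit(X, y)
print(export_text(tree, feature_names=[f"attr_{i}" for i in range(18)]))
```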
Fig. 3. Ranking of Lymphogram dataset using scoring methods such as Info gain, Gain ratio, Gini, Anova, FCBF and so on

Table 5. Confusion matrix with producer accuracy and user accuracy for neural network when the number of folds is 10
 | Fibrosis | Malign Lymph | Metastases | Normal | Classification overall | Producer accuracy (precision)
Fibrosis | 4 | 0 | 0 | 0 | 4 | 100%
Malign Lymph | 0 | 53 | 8 | 0 | 61 | 86.88%
Metastases | 0 | 7 | 73 | 1 | 81 | 90.12%
Normal | 0 | 0 | 1 | 1 | 2 | 50%
Truth overall | 4 | 60 | 82 | 2 | 148 |
User accuracy (recall) | 100% | 88.333% | 89.02% | 50% |
Overall Accuracy: 88.514%, Kappa: 0.783
Table 6. Confusion matrix with producer and user accuracy for KNN when the number of folds is 10
 | Fibrosis | Malign Lymph | Metastases | Normal | Classification overall | Producer accuracy (precision)
Fibrosis | 2 | 2 | 0 | 0 | 4 | 50%
Malign Lymph | 0 | 46 | 15 | 0 | 61 | 75.41%
Metastases | 0 | 8 | 73 | 0 | 81 | 90.12%
Normal | 0 | 1 | 1 | 0 | 2 | 0%
Truth overall | 2 | 57 | 89 | 0 | 148 |
User accuracy (recall) | 100% | 80.702% | 82.02% | No data |
Overall Accuracy: 81.757%, Kappa: 0.644

Fig. 4. Lymphogram dataset classification tree viewer
4.3 Distribution Plot
The distribution plot represents the value distribution of the attributes, which may be continuous or discrete. The distributions can be grouped by class when the data contain a class variable (Fig. 5).
Fig. 5. Distribution plot for (a) KNN (b) SVM (c) Random forest (d) Neural Network (e) AdaBoost grouped by ‘Neural Network’
4.4 Scatter Plot for Lymphogram
Figure 6 represents the scatter plot for the Lymphogram dataset considering the Fibrosis, Malign lymph and Metastases characteristics. It also focuses on the class density and regression line.
Fig. 6. Scatter Plot (a) Neural Network (normal) vs. Neural Network (Fibrosis), (b) Neural Network (normal) vs. Malign Lymph (c) Neural Network (normal) vs. Metastases
4.5 Box Plot Analysis
A box plot is a useful way of demonstrating the data distribution with the help of its quartiles. The lines extending from the boxes are known as whiskers. The yellow vertical line represents the median. The thin blue line indicates the standard deviation. The blue shaded region shows the values between the first and the third quartile. Figure 7 represents the box plot using the neural network, considering the Fibrosis, Malign lymph, Metastases and normal characteristics.
Fig. 7. Box plot for (a) Neural Network (Fibrosis) (b) Neural Network (Malign Lymph) (c) Neural Network (Metastases) (d) Neural Network (normal)
5 Conclusion
Lymphogram classes have been predicted and evaluated for different classifiers: neural network, SVM learner, KNN, random forest learner and AdaBoost. To present the results of these classifiers, the Python-based Orange data mining package has been used. This paper generally discusses the performance of the classifier algorithms according to the ranking process, classification tree, distribution plot, scatter plot and box plot. Performance evaluation of the different classification algorithms under ten-fold cross-validation, along with the confusion matrices, has also been calculated and explained. Evaluation of the confusion matrices shows that the neural network offers the highest accuracy, which is 88.514%. The second highest accuracy is 85.811%, which is provided by the SVM learner. This paper also employs ranking of the Lymphogram dataset using scoring methods and the analysis of scatter, box and distribution plots.
References 1. Lee, B.-B., Rockson, S.G., Bergan, J. (eds.): Lymphedema: A Concise Compendium of Theory and Practice. Springer, Cham (2018) 2. Alonso-burgos, A., Urbano, J., Cabrera Gonzalez, J., Pérez-de-la-Fuente, T., García tutor, E., Franco-Lopez, A.: MR-lymphography: technique, indications and results. Br. J. Surg. 101(Suppl. 1), 8 (2014). https://doi.org/10.1055/s-0034-1374002 3. Fuchs, W.A., Davidson, J.W., Fischer, H.W.: Lymphography in Cancer. Springer, Heidelberg (2012) 4. Mondal, M.R.H., Bharati, S., Podder, P., Podder, P.: Data analytics for novel coronavirus disease. Inform. Med. Unlocked 20, 100374 (2020) 5. Jacob, S.G., Geetha Ramani, R., Nancy, P.: Discovery of knowledge patterns in lymphographic clinical data through data mining methods and techniques. In: Meghanathan, N., Nagamalai, D., Chaki, N. (eds.) Advances in Computing and Information Technology. Advances in Intelligent Systems and Computing, vol. 178. Springer, Heidelberg (2013). https://doi.org/10.1007/ 978-3-642-31600-5_13 6. Arora, R., Suman: Comparative analysis of classification algorithms on different datasets using WEKA. Int. J. Comput. Appl. 54(13), 21–25 (2012) 7. Bharati, S., Podder, P., Mondal, R., Mahmood, A., Raihan-Al-Masud, M.: Comparative performance analysis of different classification algorithm for the purpose of prediction of lung cancer. In: Abraham, A., Cherukuri, A.K., Melin, P., Gandhi, N. (eds.) ISDA 2018 2018. AISC, vol. 941, pp. 447–457. Springer, Cham (2020). https://doi.org/10.1007/978-3-03016660-1_44 8. Bharati, S., Rahman, M.A., Podder, P.: Breast cancer prediction applying different classification algorithm with comparative analysis using WEKA. In: 2018 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT), Dhaka, Bangladesh, pp. 581-584 (2018). https://doi.org/10.1109/ceeict.2018.8628084 9. Karabulut, E.M., Ibrikci, T.: Analysis of cardiotocogram data for fetal distress determination by decision tree based adaptive boosting approach. J. Comput. Commun. 2, 32–37 (2014) 10. Kotsiantis, S.B.: Supervised machine learning: a review of classification techniques. Informatica 31, 249–268 (2007) 11. Han, J., Kamber, M.: Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, Burlington (2000)
12. Raihan-Al-Masud, M., Mondal, M.R.H.: Data-driven diagnosis of spinal abnormalities using feature selection and machine learning algorithms. PLOS ONE 15(2), e0228422 (2020). https://doi.org/10.1371/journal.pone.0228422 13. Baati, K., Hamdani, T.M., Alimi, A.M.: Diagnosis of lymphatic diseases using a naive Bayes style possibilistic classifier. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, pp. 4539–4542 (2013). https://doi.org/10.1109/smc.2013.772 14. Kononenko, I., Cestnik, B.: UCI Machine Learning Repository. Institute of Oncology, Ljubljana, Yugoslavia (1988). https://archive.ics.uci.edu/ml/datasets/Lymphography 15. Bharati, S., Podder, P., Paul, P.K.: Lung cancer recognition and prediction according to random forest ensemble and RUSBoost algorithm using LIDC data. Int. J. Hybrid Intell. Syst. 15(2), 91–100 (2019). https://doi.org/10.3233/HIS-190263 16. Bharati, S., Podder, P., Mondal, M.R.H.: Hybrid deep learning for detecting lung diseases from X-ray images. Inform. Med. Unlocked 20, 100391 (2020)
Decision Forest Classifier with Flower Search Optimization Algorithm for Efficient Detection of BHP Flooding Attacks in Optical Burst Switching Network Mrutyunjaya Panda1 , Niketa Gandhi2 , and Ajith Abraham3(B) 1 Utkal University, Bhubaneswar, Odisha, India
[email protected] 2 University of Mumbai, Mumbai, Maharashtra, India
[email protected] 3 Machine Intelligence Research Labs (MIR Labs), Auburn, WA, USA
[email protected]
Abstract. This research is focused on the efficient classification of BHP flooding attacks in the optical switching network environment. The burst switching network is the backbone of the future generation of optical networks. The burst header packet flooding attack poses a key security challenge that may have a negative impact on resource utilization performance and, in some cases, may create issues like denial of service (DoS). A possible solution to this is to develop efficient classification techniques with optimized features from the network data, so that misbehaving edge nodes may be detected at an early stage and remedial action may be taken as a countermeasure to protect the network. This research investigates efficient feature selection using a novel flower pollination optimization algorithm (FPA) and then the implementation of a Decision Forest algorithm by Penalizing Attributes (ForestPA) classifier for the detection of flooding attacks. A comparison of the proposed approach with other existing approaches in terms of various performance metrics, such as accuracy, precision, recall, sensitivity, specificity and informedness, is presented to understand its suitability. Keywords: Accuracy · Decision forest · Flooding attack · Flower pollination · Optical switching network (OBSN)
1 Introduction
While the traditional network uses cable as the transmission medium, in an optical network (OptNet) data are transmitted from transmitter to receiver using light through an optical fiber medium. As the number of internet users increases exponentially nowadays, with increasing penetration of fiber-to-the-home expected over the next decade, there is a huge demand for bandwidth-intensive services like video on demand (VoD), peer-to-peer
(P2P) and grid computing applications, which drives attention to the deployment of wavelength division multiplexing networks in wide-area network scenarios. Conventional optical circuit switching networks (OCSN) face limitations on bandwidth utilization and hence are not suitable for intensive internet traffic. At the same time, the lack of efficient realization of optical buffers, together with the immature realization of high-speed optical switches, is a problem in optical packet switching networks (OPSN). Further, in OCSN, data are not handled dynamically, and in OPSN, switching is not efficient. These limitations of conventional networks attract many researchers to explore the usefulness of optical burst switching networks (OBSN) as a potential candidate for future optical network solutions [1–5]. OBSN, on the contrary, performs bandwidth allocation dynamically and uses it efficiently by creating a separate control packet for each data burst. It is worth noting here that OBSN features bandwidth and speed as two performance measures for comparison with its traditional counterparts. OBSN, being reliable, offers not only a good quality of service (QoS) but also security, and the fact that it uses huge bandwidth at a lower error rate makes it very promising for applications in optical network scenarios. An optical burst is a data packet of variable length with control and payload as its two components. While the control packet contains the packet header information, the actual data to be transmitted in the network reside in the payload. The OBSN comprises edge and core nodes, namely ingress, egress and core nodes. Core nodes are the intermediate nodes used basically to avoid buffering and to process the optical data burst (DB) through a control data packet, i.e. the burst header packet (BHP), which carries the relevant information. The user datagram protocol (UDP) packets collected from the clients are assembled into a data burst at the ingress edge node, and then the burst is routed over a buffer-less core network. The data are then passed through the path established by the BHP from source to destination, where there is always a chance of intrusions (i.e. attacks) by possible intruders due to the lack of security in the optical network. Misbehaving nodes, designated as attackers, are those that send BHPs at a higher rate with no corresponding DB. The possible attack types include BHP flooding attacks, DoS attacks, spoofing, land attacks and replay attacks. A replay attack, also known as a playback attack [6], is a form of network attack that occurs when an intruder eavesdrops on a secured optical network, intercepts a transmission, and then delays or resends it at will. This attack does not require any specialization by the attacker; simply resending the captured data serves the purpose. The land attack is a Layer 4 denial-of-service (DoS) [7] attack in which packets are sent with the same IP address and the same port number for both the source and destination configuration settings. This attack may cause the victim machine to lock up or become unstable. Present networking systems are not vulnerable to this type of attack. A spoofing attack, sometimes referred to as a man-in-the-middle attack [8], is one where the intruder presents itself as a genuine and safe sender; it is generally found in emails, phone calls or spoofing of a computer IP address, etc. This attack has the capability to reroute network traffic to malicious websites and then steal the information.
This way, this attack can infect the computer systems of an organization
by stealing data, which incurs a heavy loss to the organization and may also spoil its reputation. The BHP flooding attack [9] is one that is intended to make the destination node totally unavailable for data transmission. It is observed that, with the channel reservation policy of OBSN, once a BHP arrives at a router, the router changes the state of the optical channel from the free to the busy state. If no WDM channels are found, the router discards the incoming data burst. Here, the attacker takes this chance to attack the system by injecting a huge number of target-specific malicious BHPs with a long offset time added, so that the destination nodes start reserving new channels for each malicious BHP injected by the intruder. As a result, the system will not accept a valid BHP when it arrives. This creates a denial of service (DoS) for the actual operation. The OBSN architecture can be seen as a collection of ingress edge nodes, core nodes and egress edge nodes. The edge nodes and core nodes are connected via WDM links. At the core nodes, the data bursts containing multiple packets are all-optically switched [10]. Once a data burst is constructed at the source ingress edge node, an offset time before its transmission a burst header packet (BHP) is transmitted for switch configuration, and the resources are reserved along the light path. The BHP is responsible for forwarding the corresponding burst through a path from source to destination nodes by carrying information related to the source and destination address, offset time, burst length, etc. It is observed from the literature that very few works on detecting and classifying BHP flooding attacks in OBSN have been conducted, and many of them are based on human experience and mere statistical analysis [11]. Data mining plays an important role in classifying such attacks efficiently. So, the motivation lies in: (1) detecting malicious edge nodes efficiently, which, to the best of our knowledge, currently needs domain experts, and (2) classifying BHP flooding attacks in the most accurate manner. Even though some articles apply data mining algorithms to cater to these needs, there is still a lot of scope for improvement in this emerging area of research. The main objectives of this research are: (1) to develop an efficient optimization technique as a pre-processing step for better feature selection from the OBSN dataset, (2) to build a robust, stable classification model that can classify BHP attacks in the most accurate way, and (3) to compare the proposed approach for its effectiveness with other related approaches, including some deep learning methods. The rest of the paper is organized as follows: Sect. 2 highlights the survey of related work on BHP flooding attacks in the OBS network. The proposed methodology for feature selection and classification of BHP flooding attacks is explained, along with a brief discussion of the dataset used in this study, in Sect. 3. Experimental work is discussed in Sect. 4. The obtained results, with discussion and comparison with others' work on various performance measures, are presented in Sect. 5. Finally, Sect. 6 concludes the work.
2 Related Work
A few works in the literature discuss the detection- and prevention-related issues arising from BHP flooding attacks in OBSN [12–15]. Sliti et al. [14] proposed an optical-layer flow filtering approach to prevent the BHP flooding attack at an early stage. Some researchers [15, 16] used Chi-square and correlation-based feature subset selection methods combined with a decision tree classifier (C4.5) in order to accurately classify OBSN flooding attacks. Musumeci et al. [17] provided an in-depth analysis of the suitability of advanced machine learning algorithms for making the optical network not only smart but also more agile and adaptive, with reference to several types of research available in the literature and new potential future scope. Hasan et al. [18] proposed a deep learning framework (DCNN) in order to automatically detect the edge nodes in OBSN and point out the usefulness of such a method in comparison to traditional approaches, for example in better feature selection and OBS network attack classification. Rajab et al. [19] used decision-tree-based learning rules to detect the BHP flooding attack in OBSN and concluded that their approach provides a classification accuracy of 93% while classifying the BHP flooding attack as falling into either the behaving or the misbehaving class. Kaur and Singh [20] used a genetic-algorithm-based optimization method for the detection and prevention of DDoS flooding attacks, analyzed its impact on OBSN with various performance metrics like energy consumption, packet delivery and network error, and then presented a comparison of their approach without using any optimization technique. Alshboul [21] proposed a RIPPER rule-induction-based classifier for the classification of the BHP flooding attack in OBSN with 98% accuracy, and notes the effectiveness of the rule-based classifier in comparison to probabilistic classifiers such as naive Bayes and Bayes net, with 69% and 89% accuracy on the same dataset. Some researchers [22, 23] applied supervised machine learning methodology in order to categorize network packets and their flows within the Internet. A few [24] also applied machine learning techniques within OBSN by classifying data burst losses, calculating the number of bursts between failures using Expectation-Maximization (EM) clustering and HMM (Hidden Markov Model) classification algorithms. Lévesque et al. [25] adopted a Bayesian-network-based graphical probabilistic routing model in order to reduce the burst loss ratio in comparison to other available fixed methods, without using any wavelength converters at the OBSN core switch.
3 Materials and Methods
3.1 Datasets
An Optical Burst Switching (OBS) Network dataset containing details of the Burst Header Packet (BHP) flooding attack is considered in this research and can be freely obtained from the UCI machine learning repository [26]. This text dataset consists of 21 attributes and 4 class variables with 1075 instances. The security threats generally found in OBSN
are categorized as: (a) denial-of-service (DoS) attacks, which disrupt the service and provide a low quality of service; (b) spoofing attacks, where unauthorized access to a system is attempted with a false identity; (c) eavesdropping, which is similar to traffic analysis; (d) traffic analysis, which describes the unethical extraction of information between source and destination; and (e) control burst duplication during its travel from the ingress node to the egress node via intermediate core nodes.
3.2 Methodologies Used
3.2.1 Data Pre-processing
We propose to use flower-search-based optimization (i.e. the flower pollination algorithm) as a pre-processing step in order to find the most suitable features for BHP flooding attack classification in OBSN.
3.2.1.1 Flower Pollination Algorithm Method. The flower pollination algorithm (FPA) [27] is motivated by the nature of flower pollination, with reproduction as its main objective. There are two ways in which flower pollination occurs: (a) biotic pollination, where pollinators such as birds, flies or bees transfer the pollen, and (b) abiotic pollination, with no pollinators, where pollination occurs through wind or moving water. In FPA, the process of pollination can be either self-pollination or cross-pollination [28]. In the first case, no pollinator is available but the transfer of pollen takes place either within the same flower or from one flower to another of the same plant; in this way, it provides a local solution. In contrast, cross-pollination presents a global solution, where the transfer of pollen takes place from the flower of one plant to a flower of another plant. Local pollination is viewed as abiotic and self-pollination [29]. Biotic cross-pollination is suitable for solving a global optimization problem with a long-distance pollination process, where the pollinators behave as a Lévy flight obeying Lévy-distributed steps. This process guarantees pollination and reproduction of the fittest solution. Hence, FPA may be used for both the local search and the global search process to address any optimization problem at hand. These steps solve a single-objective optimization problem. The single-objective FPA may be extended to a multi-objective FPA by simply using a weighted sum to integrate the multiple objectives into a composite single objective. The effectiveness of the FPA lies in its exploration, avoiding local optima in a huge search space, and in its exploitation, converging faster to an optimal solution by ensuring consistent selection of similar flower species. The advantages of FPA in comparison to other existing bio-inspired optimization algorithms are not only its simplicity and flexibility but also, more importantly, its easy implementation, as it needs very few tuning parameters compared to the others.
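To make the two pollination rules concrete, the following is a minimal single-objective FPA sketch. It is an assumption for illustration — the switch probability, population size and the continuous toy objective are made up, and the multi-objective, feature-selection wrapper actually used in this work is not shown.

```python
import numpy as np
from math import gamma, sin, pi

def levy(size, beta=1.5, rng=None):
    """Levy-flight step lengths via Mantegna's algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def flower_pollination(objective, dim, n_flowers=20, iters=200, p=0.8, seed=0):
    """Minimise `objective`: global (biotic, Levy-flight) pollination towards the
    current best with probability p, local (abiotic) pollination otherwise."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (n_flowers, dim))
    fit = np.array([objective(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p:                       # global pollination
                cand = pop[i] + levy(dim, rng=rng) * (best - pop[i])
            else:                                      # local pollination
                j, k = rng.integers(0, n_flowers, 2)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            f = objective(cand)
            if f < fit[i]:                             # greedy replacement
                pop[i], fit[i] = cand, f
        best = pop[fit.argmin()].copy()
    return best, fit.min()

best_x, best_f = flower_pollination(lambda x: float(np.sum(x ** 2)), dim=5)
```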
3.2.2 Classification by ForestPA: Decision Forest Algorithm by Penalizing Attributes
The decision forest algorithm by penalizing attributes (ForestPA) constructs each new tree by penalizing the attributes used in the previously built trees of the forest [30]. In ForestPA, attributes tested at a lower level receive a higher penalty and a lower weight than attributes tested at a relatively higher level. This is because, in ForestPA, it is considered that attribute tests at a lower level can generate more logic rules than tests at a higher level. To obtain a diverse set of logic rules, attributes tested at a lower level need to be avoided more strictly than the others. To achieve strong diversity, the weight for an attribute is selected at random from the range of weights allocated to the attribute's level, so that attributes at the same level can have different weights. During model construction, ForestPA imposes weights on those attributes that are present in the most recent tree; for the attributes that are not present, the weights are retained, so that switching among similar trees can be avoided. Retaining these weights indefinitely may, however, have a harmful effect on the construction of subsequent trees. To tackle this situation, ForestPA adopts the strategy of increasing the weight values gradually [31].
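The level-dependent penalty can be pictured with the toy weight-update rule below; this is only an illustrative assumption — the actual weight ranges and the incremental weight-recovery schedule of ForestPA [30] differ in detail.

```python
import random

def weight_range_for_level(level, max_level):
    """Attributes tested nearer the root (lower level) of the latest tree get a lower
    weight range, i.e. a higher penalty; deeper levels are penalised less."""
    low = level / (max_level + 1.0)
    high = (level + 1.0) / (max_level + 1.0)
    return low, high

def update_weights(weights, latest_tree_levels, max_level, recovery=0.1):
    """Draw a random weight from the level's range for attributes used in the most
    recent tree; gradually raise the weights of unused attributes so that the forest
    does not keep switching among near-identical trees."""
    for attr in weights:
        if attr in latest_tree_levels:
            low, high = weight_range_for_level(latest_tree_levels[attr], max_level)
            weights[attr] = random.uniform(low, high)
        else:
            weights[attr] = min(1.0, weights[attr] + recovery)
    return weights

weights = {"a1": 1.0, "a2": 1.0, "a3": 1.0}
weights = update_weights(weights, {"a1": 0, "a3": 2}, max_level=3)
```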
4 Experiments
The experimental framework of this proposed work is illustrated in Fig. 1 below.
Fig. 1. The proposed data mining framework for flood attacks in OBS network (network simulation runs → raw data → data pre-processing using the flower pollination method → pre-processed data → attack classification by the ForestPA classifier → decision-making process)
The purpose of the above experimental framework is to perform feature selection and classification of ingress nodes in OBSN, so that the risk of BHP flooding attacks can be reduced to a great extent. As can be seen from Fig. 1, raw data are collected from different simulation runs, with attributes characterizing the ingress nodes and the behaviour of the OBS network. Data pre-processing is performed on the raw data as a feature selection step in order to remove the redundant features that might be present across multiple network simulation runs. The multi-objective flower pollination algorithm is used here to select the best possible features, so that a stable classification result may be obtained. The processed data obtained after the multi-objective flower pollination algorithm are then passed to an efficient decision forest classifier, ForestPA. Separate training and testing datasets, of 70% and 30% respectively, are chosen to test the performance of the ForestPA classifier on the processed dataset. In this paper, the following performance quality measures are used to evaluate the effectiveness of the proposed approach:
(a) Classification Accuracy = (TP + TN)/(TP + FN + FP + TN)
(b) Sensitivity or Recall = TP/(TP + FN)
(c) Specificity = TN/(TN + FP)
(d) Precision = TP/(TP + FP)
(e) F1 Score = (2 × (Precision × Recall))/(Precision + Recall)
(f) Informedness = Sensitivity + Specificity − 1
Here, TN = true negatives, TP = true positives, FN = false negatives, FP = false positives, as can be understood from the confusion matrix. With the above proposed framework, the main distinguishing features of the classification model are as follows:
• Pareto-optimal multi-objective optimization using the flower pollination algorithm generates the best optimized features.
• Classification by a highly accurate classifier can improve the quality of the decision-making process.
• The results, in terms of simple rules, help the network administrator in understanding the countermeasures required to protect the OBSN from BHP flooding attacks.
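A direct transcription of measures (a)–(f) into code, for readers who want to recompute them from raw confusion-matrix counts, is given below (an illustrative helper, not taken from the paper's implementation; the example counts are hypothetical):

```python
def quality_measures(tp, tn, fp, fn):
    """Evaluation measures (a)-(f) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    informedness = recall + specificity - 1
    return {"accuracy": accuracy, "sensitivity": recall, "specificity": specificity,
            "precision": precision, "f1": f1, "informedness": informedness}

print(quality_measures(tp=90, tn=50, fp=5, fn=3))
```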
5 Results and Discussion
All experiments are conducted using a PC with a processor running at 2.60 GHz, 8 GB RAM and a 1 TB HDD, in a Java-based data mining tool running under Windows 10. The results obtained after conducting the experiments are illustrated in Table 1. Specificity is more important than sensitivity, and sensitivity and specificity are often inversely related. High sensitivity indicates that few false negatives occur, whereas high specificity means that there are very few false positive results. This strongly supports the effectiveness and novelty of the approach. In Table 2 and Table 3, comparisons with the existing related research are outlined.

Table 1. Results using proposed method
Performance measures/algorithm | FPA − MO + ForestPA | FPA − MO + J48 | FPA − MO + DCNN with learning rate = 0.7
Features reduced from the original 21 to | 3 | 4 | 3
Accuracy (%) | 100 | 99.81 | 73.12
Precision | 1 | 0.998 | 0.739
Recall | 1 | 0.998 | 0.731
Sensitivity | 1 | 0.998 | 0.731
Specificity | 0.999 | 0.999 | 0.809
F1-score | 1 | 0.997 | 0.727
Informedness | 1 | 0.998 | 0.54
Time taken to build the model (seconds) | 0.33 | 0.08 | 0.48

Table 2. Comparison (Part-1) using correlation-based feature selection
Algorithm/measures | % Accuracy | Precision | Recall | F-measure
Decision Tree J48 [32] | 100 | 1 | 1 | 1
Logistic Regression [32] | 86.57 | 0.862 | 0.866 | 0.859
MLP [32] | 89.35 | 0.892 | 0.894 | 0.892
Naive Bayes [32] | 82.41 | 0.796 | 0.824 | 0.783
RT [32] | 100 | 1 | 1 | 1
RepTree [32] | 93.98 | 0.941 | 0.94 | 0.94
Ours (FPA − MO + ForestPA) | 100 | 1 | 1 | 1
Table 3. Comparison (Part-2): with Hasan et al. [18]
Algorithm/measures | Accuracy | Sensitivity | Precision | Specificity | F1 value | Informedness
Naive Bayes | 0.79 | 0.69 | 0.69 | 0.84 | 0.69 | 0.53
SVM | 0.88 | 0.81 | 0.81 | 0.91 | 0.81 | 0.72
KNN | 0.93 | 0.9 | 0.9 | 0.95 | 0.9 | 0.85
DCNN | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98
Ours (FPA − MO + ForestPA) | 1 | 1 | 1 | 0.999 | 1 | 1
From Tables 2 and 3, it is quite evident that the proposed approach is not only the most accurate classifier, with 100% classification accuracy in predicting BHP flooding attacks in comparison to the others, but also fast (taking only 0.33 s to build the model), with almost no false positives (specificity = 0.999) and no false negatives (sensitivity = 1). The J48 decision tree also performs equally well, next to ours. It can also be seen from Table 2 that the proposed feature selection algorithm produces good results in comparison to correlation-based feature selection in most of the cases, and results equally good to those of J48 and RT.
6 Conclusion and Future Scope
With the advances in optical switching technology for recent optical networks, OBSN is a new technology that is gaining the attention of many researchers nowadays and has not yet been explored much for its effective implementation and deployment. Knowing
that any network's performance and quality of service largely depend on countering threats successfully. In OBSN, BHP flooding attacks severely affect the performance of the optical network, which calls for the development of new classification strategies to detect and prevent the attacks at an early stage. In order to address these issues, the multi-objective flower pollination algorithm is used as an efficient feature selection method to select the best features for the classification of BHP flooding attacks by removing the redundant ones from the dataset. ForestPA, a new classification algorithm with the powerful feature of penalizing attributes, presents 100% classification accuracy. It is observed that our proposed approach outperforms all the other data mining techniques compared in Table 2 and Table 3 in terms of accuracy, precision, sensitivity and specificity. Our proposed approach is also fast, taking only 0.33 s to build the model. We conclude that the proposed approach is well suited for detecting BHP flooding attacks, which is essential for the development and deployment of OBSN in the near future.
References 1. Ramesh, P.G.V., Nair, P.: A multilayered approach for load balancing in optical burst switching network. Optik 124, 2602–2607 (2013) 2. Qiao, C., Yoo, M.: Optical burst switching (OBS) – a new paradigm for an optical internet. J. High Speed Netw. 8(1), 69–84 (1999) 3. Mukherjee, B.: WDM optical communication networks: progress and challenges. IEEE J. Sel. Areas Commun. 18(10), 1810–1824 (2000) 4. Garg, A.K., Kaler, R.S.: Performance analysis of optical burst switching highspeed network architecture. IJCSNS Int. J. Comput. Sci. Netw. Secur. 7(4), 292–301 (2007) 5. Ozturk, O., Karasen, E., Akar, N.: Performance evaluation of slotted optical burst switching systems with quality of service differentiation. J. Light Wave Technol. 27(14), 2621–2633 (2009) 6. Jesudoss, A., Subramaniam, N.P.: A survey on authentication attacks and countermeasures in a distributed environment. Indian J. Comput. Sci. Eng. (IJCSE) 5(2), 71–77 (2014) 7. Sreenath, N., Muthuraj, K., Sivasubramanian, P.: Secure optical internet: attack detection and prevention mechanism. In: IEEE, pp. 1009–1012 (2012) 8. Jindal, K., Dalal, S., Sharma, K.K.: Analyzing spoofing attacks in wireless networks. In: 2014 Fourth International Conference on Advanced Computing & Communication Technologies, Rohtak, India, February 2014, pp. 398–402 (2014) 9. Muthuraj, K., Sreenath, N.: Secure optical internet: an attack on OBS node in a TCP over OBS network. Int. J. Emerg. Trends Technol. Comput. Sci. (IJETTCS) 1(4), 75–80 (2012) 10. Sliti, M., Boudriga, N.: BHP flooding vulnerability and countermeasure. Photon Netw. Commun. 29(2), 198–213 (2015) 11. Alshboul, R.: Flood attacks control in optical burst networks by inducing rules using data mining. Int. J. Comput. Sci. Netw. Secur. 18(2), 160–167 (2018) 12. Moore, A.W., Zuev, D.: Internet traffic classification using Bayesian analysis techniques. ACM SIGMETRICS Perform. Eval. Rev. 33(1), 50–60 (2005) 13. Bowes, D., Hall, T., Petri´c, J.: Softw. Qual. J. 26, 525. Springer (2018). https://doi.org/10. 1007/s11219-016-9353-3 14. Sliti, M., Hamdi, M., Boudriga, N.: A novel optical firewall architecture for burst switched networks. In: Proceedings of 12th International Conference on Transparent Optical Networks (ICTON), pp. 1–5 (2010)
Review and Implementation of 1-Bit Adder in CMOS and Hybrid Structures
Bhaskara Rao Doddi1(B) and V. Leela Rani2
1 Department of Electronics and Communication Engineering, GIET University, Gunupur 765022, Odisha, India
[email protected]
2 Department of Electronics and Communication Engineering, GVP College of Engineering (Autonomous), Visakhapatnam 530048, India
[email protected]
Abstract. Different adder structures implemented in CMOS and hybrid logic styles are reviewed in this paper. The XOR/XNOR cell, which is the key element in full adder design, is also reviewed. Hybrid adders offer low delay and low area occupancy because of their reduced transistor count, and lower power-delay product (PDP) values can also be achieved with hybrid structures. Adder structures are implemented in this paper by designing XOR and XNOR cells. Critical path estimations are made, and the number of transistors in the critical path helps in estimating the critical path delay. A delay comparison for several adders is also reviewed. Keywords: Hybrid adder · CMOS · Full swing · Complemented carry
1 Introduction
The adder is an important module in any processing unit, and various adder structures have been proposed in the literature. Adders can be designed in the CMOS logic style, which has the advantages of low static power and a low power-delay product (PDP). In recent years hybrid logic structures have become popular because they use fewer transistors than the CMOS logic style. Based on the output voltage level, digital circuits can be classified into full-swing and non-full-swing categories. The adder in [3] has better power consumption but cannot meet the delay constraint of some other designs, which may affect the energy-delay product (EDP). The hybrid full adder in [4] has low average power consumption but uses twenty-four transistors. The adder in [5] achieves a low PDP by using a data selector as one of the cells in the design. A power-efficient design was proposed by optimizing the switching activity [6], and the number of logic levels was reduced to achieve a higher operating frequency [7]. A hybrid design that minimizes the trade-off between operating voltage and frequency was proposed in [8], and a similar trade-off was addressed in [9]. A design with a low transistor count, but suffering from incomplete output voltage levels, is proposed in [10].
2 Review of Full Adders
This section reviews some of the CMOS and hybrid full adder structures. A full adder design in CMOS logic, extendable up to N bits, is also implemented.

2.1 Standard CMOS Full Adder [1]
The standard CMOS full adder is designed with 28 transistors. CMOS logic can achieve a full output voltage swing with low static power. Four demarcation lines are used in the adder design: one of them has a load of 14 transistors, which is the maximum of all, the second has 10 transistors, and the third and fourth have two transistors each. The Sum section of the full adder has eight paths to the output, whereas the Carry section has six paths to the output.

2.2 Hybrid Full Adder [2]
Table 1 shows the sum and carry outputs for all possible input combinations of the full adder. As there are three inputs, there are eight possible input combinations.

Table 1. Truth table of full adder
A B Cin | S Cout
0 0 0   | 0 0
0 0 1   | 1 0
0 1 0   | 1 0
0 1 1   | 0 1
1 0 0   | 1 0
1 0 1   | 0 1
1 1 0   | 0 1
1 1 1   | 1 1
2.2.1 Design for Sum
From the truth table in Table 1 it can be observed that if A XOR B = '1' then S = Cin', whereas if A XNOR B = '1' then S = Cin. These two conditions cover all the input combinations, so the hardware required to compute the sum output consists of XOR, XNOR and NOT gates. Using two NMOS transistors and one NOT gate, 'Cin' can be passed to generate the 'sum' output. As an NMOS transistor cannot pass a strong logic 1, a transmission gate, which consists of both NMOS and PMOS transistors, can be used; this structure generates strong values of both logic 0 and logic 1.
2.2.2 Design for Carry
From Table 1 it can be observed that if A XOR B = '1' then Cout = Cin, and if A XNOR B = '1' then Cout = A = B. These two conditions cover all the input combinations, so the hardware required to compute the carry output consists of XOR and XNOR gates. Two NMOS transistors are required to pass 'Cin' and either of the primary inputs A/B to generate the 'carry' output. As an NMOS transistor cannot pass a strong logic 1, a transmission gate, which consists of both NMOS and PMOS transistors, can be used; this structure generates strong values of both logic 0 and logic 1.

2.2.3 Design for Full Adder
From the above discussion, the hardware required to generate the 'sum' and 'carry' outputs consists of XOR, XNOR, NOT and transmission gates. Hence XOR/XNOR cells need to be designed along with the other hardware.

2.2.3.1 XOR/XNOR Cell
The XOR section of the cell takes five transistors, and its critical path has the delay of one PMOS and one NMOS transistor. The XNOR section of the cell also consists of five transistors, and its critical path has the delay of two PMOS transistors. The total number of transistors required to build the XOR/XNOR cell, including the NOT gate, is twelve. In the design of the full adder, a transmission gate and a NOT gate are also used along with the XOR/XNOR cells, so the total number of transistors required to design the full adder is 26. Some input combinations generate the output with less noise, while a few input combinations are more prone to noise.
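As a quick sanity check of the selection logic described above, the following short Python sketch (illustrative only; the function and variable names are not from the paper) evaluates the multiplexer-style definitions of sum and carry against the full-adder truth table of Table 1.

```python
from itertools import product

def hybrid_sum_carry(a, b, cin):
    """Sum/carry via the XOR/XNOR selection rule of Sects. 2.2.1 and 2.2.2."""
    if a ^ b:                     # A xor B = 1: S = Cin', Cout = Cin
        return 1 - cin, cin
    return cin, a                 # A xnor B = 1: S = Cin, Cout = A (= B)

for a, b, cin in product((0, 1), repeat=3):
    s_ref = a ^ b ^ cin
    cout_ref = (a & b) | (cin & (a ^ b))
    assert hybrid_sum_carry(a, b, cin) == (s_ref, cout_ref)
print("selection logic matches Table 1 for all eight input combinations")
```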
3 Implementation of Full Adder
3.1 XNOR Cell
The Boolean expressions for the outputs X and Y of the XNOR cell are shown below (where ' denotes logical complement):
X = (A · B)'
Y = ((A · B)' · (A + B))' = A xnor B
The XNOR cell is designed by proper placement of NMOS and PMOS transistors: PMOS transistors are placed in the pull-up structure and NMOS transistors in the pull-down structure to produce strong logic values. Figure 1 shows the implementation of the XNOR cell. For the input combinations '00' and '11' the XNOR gate should give logic 1 as its output, whereas for the '01' and '10' input combinations it should give logic 0. Depending on the input combination, the NMOS/PMOS transistors turn ON/OFF to give the respective logic values. Table 2 shows the critical path estimations and activated paths of the XNOR cell for the possible input combinations, from which the outputs X and Y can be observed. These X and Y are treated as the half carry and half sum of an adder circuit.
Fig. 1. Schematic of XNOR.
The activated paths for the different inputs can be identified, and the number of transistors in the critical path for every possible input combination is shown in Table 2; this number helps in estimating the critical path delay. From the critical path estimations below, it can be observed that for the '01', '10' and '11' input combinations the activated paths may lead to the critical path, as all three paths contain the same number of transistors. Hence, the critical path delay can be estimated for this design by identifying the critical path.

Table 2. Delay estimation and activated path for XNOR cell
A B | X | Activated path | Y | Delay of the path
0 0 | 1 | P4-P5          | 1 | 2 PMOS
0 1 | 1 | P1-N3-N5       | 0 | 1 PMOS and 2 NMOS
1 0 | 1 | P2-N3-N4       | 0 | 1 PMOS and 2 NMOS
1 1 | 0 | N1-N2-P3       | 1 | 1 PMOS and 2 NMOS
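As a small cross-check of the X and Y columns of Table 2, the expressions X = (A·B)' and Y = ((A·B)'·(A+B))' from Sect. 3.1 can be evaluated directly (an illustrative Python snippet, not part of the paper):

```python
for a in (0, 1):
    for b in (0, 1):
        x = 1 - (a & b)            # X = (A·B)'
        y = 1 - (x & (a | b))      # Y = ((A·B)'·(A+B))' = A xnor B
        print(f"A={a} B={b} -> X={x} Y={y}")   # reproduces the X and Y columns of Table 2
```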
3.2 4-T Cell
Figure 2 shows the schematic of the propagate function. The propagate part feeds the logic requirements of SUM and the complemented carry. The hardware required for this is a two-input NOR gate whose inputs are the complemented version of the carry-in and Y (i.e. Y = A xnor B).
Fig. 2. Schematic of propagate
P = (Cin' + Y)' = (Cin' + (A xnor B))' = Cin · (A xnor B)' = Cin · (A xor B)
Table 3 shows the critical path estimations and activated paths of the 4-T cell for the possible input combinations, from which the output P can be observed. The activated paths for the different inputs can be identified, and the number of transistors in the critical path for every input combination is listed in Table 3; this number helps in estimating the critical path delay. It can be observed that for the '00' input combination the activated path P9-P10 consists of two PMOS transistors. Hence, this is the critical path, and the critical path delay can be estimated for this design.

Table 3. Delay estimation and activated path for 4-T cell
CinBar Y | Output (P) | Activated path | Delay of the path
0 0      | 1          | P9-P10         | 2 PMOS
1 0      | 0          | N9             | 1 NMOS
0 1      | 0          | N10            | 1 NMOS
1 1      | 0          | N9, N10        | 1 NMOS
3.3 6-T Cell (Carry)
Figure 3 shows the schematic of the 6-T cell that generates the carry output. The logic requirement is a NOR gate with one input being the complemented version of X and the other being P, and the Boolean expression for carry generation is:
Cout' = (P + X')' = ((Cin' + Y)' + A·B)' = (A·B + Cin·(A xnor B)')' = (A·B + Cin·(A xor B))'
Fig. 3. Schematic of 6-T Cell (Carry)
Table 4. Delay estimation and activated path for 6-T cell (Carry)
X P | Output (Carrybar) | Activated path | Delay of the path
1 0 | 1                 | N13-P12-P11    | 1 NMOS and 2 PMOS
0 0 | 0                 | P13-N12        | 1 PMOS and 1 NMOS
1 1 | 0                 | N11            | 1 NMOS
0 1 | 0                 | P13-N12, N11   | 1 PMOS and 2 NMOS
Table 4 shows the critical path estimations and activated paths of the 6-T carry cell for the possible input combinations. The activated paths for the different inputs can be identified, and the number of transistors in the critical path for every input combination is listed in Table 4; this number helps in estimating the critical path delay. It can be observed that for the '10' input combination the activated path N13-P12-P11 consists of two PMOS transistors and one NMOS transistor. Hence, this is the critical path, and the critical path delay can be estimated for this design.

3.4 6-T Cell (Sum)
Figure 4 shows the schematic of the 6-T cell that generates the sum output. Functionally the sum is the XNOR of the three inputs A, B and Cin, and the Boolean expression for its generation is:
SUM = (P + Y·Cin')' = ((Cin' + Y)' + Y·Cin')' = (Cin·Y' + Y·Cin')' = (Y xor Cin)' = Y xnor Cin = (A xnor B) xnor Cin
Fig. 4. Schematic of 6-T Cell (Sum)
Table 5 shows the critical path estimations and activated paths of the 6-T sum cell for the possible input combinations. The activated paths for the different inputs can be identified, and the number of transistors in the critical path for every input combination is listed in Table 5; this number helps in estimating the critical path delay. It can be observed that the activated path P6-P7 consists of two PMOS transistors; hence, this is the critical path, and the critical path delay can be estimated for this design.

Table 5. Delay estimation and activated path for 6-T cell (Sum)
P Cin' Y | Output (Sum) | Activated path | Delay of the path
X 1 1    | 0            | N7-N8          | 2 NMOS
1 X X    | 0            | N6             | 1 NMOS
0 0 X    | 1            | P6-P7          | 2 PMOS
0 X 0    | 1            | P6-P8          | 2 PMOS
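To double-check the Boolean derivations of Sects. 3.1–3.4, the following Python sketch (illustrative only, not from the original paper) builds X, Y, P, the complemented carry and the sum from NOR/NOT primitives for all eight input combinations and compares them with the standard full-adder equations.

```python
from itertools import product

def NOT(v): return 1 - v
def NOR(p, q): return NOT(p | q)

for a, b, cin in product((0, 1), repeat=3):
    x = NOT(a & b)                    # X = (A·B)'            (XNOR cell)
    y = NOT(x & (a | b))              # Y = ((A·B)'·(A+B))' = A xnor B
    p = NOR(NOT(cin), y)              # P = (Cin' + Y)'       (4-T cell)
    cout_bar = NOR(NOT(x), p)         # Cout' = (X' + P)'     (6-T carry cell)
    s = NOR(p, y & NOT(cin))          # SUM = (P + Y·Cin')'   (6-T sum cell)

    assert s == a ^ b ^ cin                              # full-adder sum
    assert NOT(cout_bar) == (a & b) | (cin & (a ^ b))    # full-adder carry
print("the NOR-cell decomposition reproduces the full-adder truth table")
```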
3.5 Full Adder
Figure 5 shows the schematic of the full adder that generates the sum and carry outputs.
Fig. 5. Schematic of full adder
Table 6 shows the critical path estimations and activated paths of the full adder for the possible input combinations. The activated paths for the different inputs can be identified, and the number of transistors in the critical path for every input combination is listed in Table 6; this number helps in estimating the critical path delay. It can be observed that the activated path for the input combination '011' consists of three NMOS and three PMOS transistors. Hence, this is the critical path, and the critical path delay can be estimated for this design.

Table 6. Delay estimation and activated path for Sum and Carry
ABCin   | Sum | Carry' | Path (Sum)          | Path (Carry)
0 (000) | 0   | 1      | P4-P5-N7-N8         | P4-P5-N10-P11-P12
1 (001) | 1   | 1      | P4-P5-N10-P6-P7     | P4-P5-N10-P11-P12
2 (010) | 1   | 1      | P1-N3-N5-P6-P8      | N9-P11-P12
3 (011) | 0   | 0      | P1-N3-N5-P9-P10-N6  | P1-N3-N5-P9-P10-N11
4 (100) | 1   | 1      | P2-N3-N4-P6-P8      | N9-P11-P12
5 (101) | 0   | 0      | P2-N3-N4-P9-P10-N6  | P2-N3-N4-P9-P10-N11
6 (110) | 0   | 0      | N1-N2-P3-N7-N8      | N1-N2-P13-N12
7 (111) | 1   | 0      | N1-N2-P3-N10-P6-P7  | N1-N2-P13-N12
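The critical-path statement above can also be read mechanically from the path strings in Table 6. The following Python sketch (illustrative; the path data is copied from Table 6) counts the NMOS and PMOS devices in each activated sum path and reports the longest one; note that the inputs 011, 101 and 111 all tie at six devices, with 011 being the case highlighted in the text.

```python
import re

# Activated sum paths from Table 6 (input combination ABCin -> transistor chain).
sum_paths = {
    "000": "P4-P5-N7-N8",        "001": "P4-P5-N10-P6-P7",
    "010": "P1-N3-N5-P6-P8",     "011": "P1-N3-N5-P9-P10-N6",
    "100": "P2-N3-N4-P6-P8",     "101": "P2-N3-N4-P9-P10-N6",
    "110": "N1-N2-P3-N7-N8",     "111": "N1-N2-P3-N10-P6-P7",
}

def device_count(path):
    """Return (#NMOS, #PMOS) for a path string such as 'P1-N3-N5-P9-P10-N6'."""
    devices = re.findall(r"[NP]\d+", path)
    n = sum(d.startswith("N") for d in devices)
    return n, len(devices) - n

worst = max(sum_paths, key=lambda k: sum(device_count(sum_paths[k])))
n, p = device_count(sum_paths[worst])
print(f"longest sum path: input {worst} -> {sum_paths[worst]} ({n} NMOS, {p} PMOS)")
```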
4 Performance Analysis
Table 7 indicates that HFA-20T, HFA-22T and HFA-NB-26T have the lowest delay, with only three transistors in the critical path [2]. HFA-19T, HFA-17T and HFA-B-26T have a worse delay than the above designs, with four transistors in the critical path [2]. N. Weste et al. [1] proposed a CMOS design with 28 transistors and a delay of five MOS transistors. The implemented design takes 26 transistors with a delay of six MOS transistors.

Table 7. Transistor count comparison for several adders
Reference/Technique | Logic style | Transistor count | Delay
N. Weste et al. [1] | CMOS        | 28               | 2 NMOS + 3 PMOS
HFA-B-26T [2]       | Hybrid      | 26               | XOR/XNOR + TG + NOT
Proposed design     | CMOS        | 26               | 3 NMOS + 3 PMOS
HFA-NB-26T [2]      | Hybrid      | 26               | XOR/XNOR + TG
HFA-19T [2]         | Hybrid      | 19               | XOR + NOT + TG
HFA-20T [2]         | Hybrid      | 20               | XOR/XNOR + TG
HFA-17T [2]         | Hybrid      | 17               | XOR + NOT + TG
HFA-22T [2]         | Hybrid      | 22               | XOR/XNOR + TG
5 Conclusion
Different adder structures implemented in CMOS and hybrid logic styles have been reviewed. Hybrid adders offer low delay and low area occupancy because of their reduced transistor count; their disadvantage is that complete logic levels cannot be achieved. The CMOS logic style has the advantages of low static power and full logic swing. Adder structures are implemented in this paper by designing XOR and XNOR cells. Critical path estimations are made by identifying the activated paths, and the number of transistors in the critical path helps in estimating the critical path delay. The transistor count and delay comparison for several adders was also reviewed.
References 1. Weste, N., Eshraghian, K.: Principles of CMOS VLSI Design. Addison-Wesley, New York (1985) 2. Naseri, H., Timarchi, S.: Low-Power and fast full adder by exploring new XOR and XNOR gates. IEEE Trans. VLSI Syst. 26(8), 1418–1493 (2018) 3. Bhattacharyya, P., Kundu, B., Ghosh, S., Kumar, V., Dandapat, A.: Performance analysis of a low-power high-speed hybrid 1-bit full adder circuit. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 23(10), 2001–2008 (2014) 4. Vesterbacka, M.: A 14-transistor CMOS full adder with full swing nodes. In: 1999 IEEE Workshop on Signal Processing Systems SiPS 99 Design and Implementation, pp. 713–722, October 1999 5. Aguirre-Hernandez, M., Linares-Aranda, M.: CMOS full-adders for energy-efficient arithmetic applications. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 19(4), 718–721 (2011) 6. Hassoune, I., Flandre, D., O’Connor, I., Legat, J.D.: ULPFA: a new efficient design of a power-aware full adder. IEEE Trans. Circ. Syst. Integr. Reg. Pap. 57(8), 2066–2074 (2010) 7. Chang, C.-H., Gu, J., Zhang, M.: A review of 0.18-µm full adder performances for tree structured arithmetic circuits. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 13(6), 686– 695 (2005) 8. Kumar, P., Sharma, R.K.: Low voltage high performance hybrid full adder. Eng. Sci. Technol. Int. J. 19(1), 559–565 (2016). https://doi.org/10.1016/j.jestch.2015.10.001 9. Wairya, S., Nagaria, R.K., Tiwari, S.: New design methodologies for high-speed low-voltage 1 bit CMOS Full Adder circuits. Int. J. Comput. Technol. Appl. 2(2), 190–198 (2011) 10. Chowdhury, S.R., Banerjee, A., Roy, A., Saha, H.: A high speed 8 transistor full adder design using novel 3 transistor XOR gates. Int. J. Electron. Circ. Syst. 2(4), 217–223 (2008)
Design and Analysis of LOL-P Textile Antenna Y. E. Vasanth Kumar1,2(B) , K. P. Vinay2(B) , and M. Meena Kumari2(B) 1 Department of ECE, GIET University, Gunupur, Odisha, India
[email protected] 2 Department of ECE, Raghu Engineering College(A), Visakhapatnam 531162, AP, India
[email protected], [email protected]
Abstract. "Textile antenna" connotes a wearable antenna intended to be integrated into clothing for applications that include mobile computing, tracking and navigation, and health monitoring. The use of wearable textile materials for microstrip antennas has grown rapidly with the ongoing miniaturization of wireless devices. At present there are several applications in which such antennas are used to monitor biometric data of the human body continuously. Importantly, textile antennas can be unobtrusively woven into clothing without affecting fashion, comfort or washability. In this paper a textile antenna is designed over the 3.1 GHz–10.6 GHz range, called the UWB band, which is a burgeoning technology for short-range communication and can be operated at any of the desired frequencies for various applications. Simulations are carried out and evaluated using CST Microwave Studio. The evaluated results show that the proposed design provides good return loss over a bandwidth of 2–16 GHz with a VSWR of less than 2 over the entire bandwidth, providing efficient transmission covering the whole ultra-wideband range of frequencies, which can be utilized for wireless health monitoring. Keywords: Microstrip antenna · Textile antennas · UWB
1 Introduction
Communication plays a crucial role in our day-to-day life, and the major element required for any type of communication is the antenna [1–3]. In the absence of an antenna, communication is not possible. In the early days antennas were generally non-planar, such as Yagi-Uda and dish antennas, but with advances in technology device sizes are decreasing, so non-planar antennas cannot be embedded into such devices. To solve this problem, planar antennas came into existence, which led to the rapid adoption of microstrip patch antennas [2–5]. Microstrip patch antennas [1, 6] cover the broad frequency range from 100 MHz to 100 GHz and have many applications and advantages over traditional antennas; integrating them into Wireless Body Area Networks [7] has led to next-generation health monitoring systems. UWB has been regarded by IEEE as one of the proficient candidates for WBAN applications. Therefore, today
many researchers have paid great attention to UWB-WBAN communications, of which modeling the UWB body channel, implementing UWB on-body antennas and evaluating on-body UWB communication system performance are the most studied. This motivated the design of a wearable (textile) antenna that can be used in a WBAN operating in the UWB frequency range. In a microstrip patch antenna, a dielectric is sandwiched between the patch and the ground plane [8]. In health monitoring, LAN and WAN solutions are used where patients are confined to hospital beds, restricting their mobility [9]. This limitation is overcome by a WBAN, where a patient can move around while his or her health is monitored continuously using a body-mounted module. By employing a textile antenna in place of a conventional microstrip patch antenna, the antenna is integrated into the clothing rather than mounted on the body [10]. These textile antennas are wearable antennas which provide mobility to the user along with continuous health monitoring and have a large bandwidth. The paper is organized as follows: Sect. 1 introduces the antennas, Sect. 2 presents the proposed design, Sect. 3 evaluates the simulation results, and Sect. 4 concludes.
2 Proposed Design
In this paper a planar textile antenna with dimensions of 60 × 60 mm2 is proposed using a denim jeans substrate with a dielectric constant εr = 1.78, loss tangent of 0.078 and thickness h = 1 mm. The defected ground structure contains a PEC, which plays a decisive role in attaining the broadband and wideband characteristics that can be utilized for wireless health monitoring.

2.1 Analysis of Circular Textile Antenna
The circular textile antenna is fabricated with three segments consisting of the ground plane, the substrate and the patch. The proposed design is further etched with three rectangular slots, of which two are mirror images and the third lies at the centre of the ground plane. Denim jeans textile is used for the substrate and copper for the patch and ground. The schematic front and rear views of the circular textile antenna are shown in Fig. 1, and the dimensions of these segments are detailed in Table 1.
Fig. 1. Proposed schematic design a) front view b) rear view
The proposed antenna is fabricated on a denim textile substrate with dielectric constant εr = 1.78; Fig. 2 shows the fabricated antenna.
Table 1. Dimensions of the proposed textile antenna
Part          | Parameter | Dimensions (in mm)
Patch antenna | R1        | 6
              | R2        | 14
              | R3        | 10
              | R4        | 9.75
              | R5        | 3
              | R6        | 7.5
              | R7        | 7
Substrate     | L         | 60
              | B         | 60
              | H         | 1
Ground plane  | L         | 48
              | B         | 33
              | K         | 1.79
Fig. 2. Fabricated antenna: a) front view, b) rear view
3 Results and Discussions
All simulations of the textile antenna design are carried out using CST Microwave Studio. To achieve UWB characteristics the design is optimized in several steps and then fabricated.

3.1 Optimization Results
In step 1, a circular patch of radius 15 mm is designed along with a 60 × 60 mm2 ground plane. The radiating element radius was calculated using the following equation:
a = 87.94 / (fr · √εr)
where, with fr taken in GHz, the radius a is obtained in mm.
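As a quick numerical illustration (not from the paper; the helper name is arbitrary), the snippet below evaluates this radius formula for the denim substrate (εr = 1.78) at a few frequencies, assuming fr in GHz and a in mm.

```python
import math

def patch_radius_mm(fr_ghz, eps_r):
    """Circular-patch radius estimate a = 87.94 / (fr * sqrt(eps_r))."""
    return 87.94 / (fr_ghz * math.sqrt(eps_r))

for fr in (3.1, 5.4, 9.16):
    print(f"fr = {fr:5.2f} GHz -> a = {patch_radius_mm(fr, 1.78):6.2f} mm")
```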
The return loss (RL) of the proposed textile antenna, shown in Fig. 3, is the parameter that indicates the power which is not reflected: when the antenna and transmitter impedances do not match, standing waves exist due to the reflected waves. This is the basic design without the DGS structure. A minimum RL of −26.89 dB is obtained at a resonant frequency of 7.67 GHz, and the design is further optimized to improve the results.
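The VSWR values quoted throughout this section follow from the return loss through the standard relations Γ = 10^(S11/20) and VSWR = (1 + |Γ|)/(1 − |Γ|). The short Python sketch below (illustrative, not part of the paper) reproduces, for example, the VSWR of about 1.09 that corresponds to the −26.89 dB return loss of step 1.

```python
def vswr_from_return_loss(s11_db):
    """Convert a return loss S11 (in dB, negative) to VSWR."""
    gamma = 10 ** (s11_db / 20.0)        # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)

for s11 in (-26.89, -36.42, -49.39, -79.2):
    print(f"S11 = {s11:7.2f} dB -> VSWR = {vswr_from_return_loss(s11):.3f}")
```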
Fig. 3. RL optimization step 1
Fig. 4. VSWR optimization step 1
Figure 4 shows that the VSWR of the proposed antenna is below 2 in the operating frequency ranges of 4.1–5.1 GHz, 7–8.59 GHz and 9.7–11.6 GHz. In step 2, a basic DGS is introduced in the ground plane and the dimensions of the ground structure are reduced from 60 × 60 × 0.2 mm3 to 48 × 33 × 0.2 mm3. Figure 5 depicts the RL plot, where a minimum of −36.42 dB is obtained at a resonant frequency of 9.042 GHz.
Fig. 5. RL optimization step 2
Fig. 6. VSWR optimization step 2
Figure 6 depicts the VSWR plot for optimization step 2: over the frequency range of 2–12 GHz the VSWR is less than 2, which is the typical requirement. The radiation pattern for this optimization step can be observed in Fig. 7.
Fig. 7. Radiation pattern for the optimization step 2
In step 3, the defected ground structure is further optimized by etching two rectangular slots of dimensions 10 × 22 mm2 which are mirror images of each other. Figure 8 shows a return loss of −49.39 dB and a VSWR of 1.006; the VSWR plot is depicted in Fig. 9. Due to the extension of the DGS, the return loss improves from −36.42 dB to −49.39 dB at a resonant frequency of 9.056 GHz.
Fig. 8. RL optimization step 3
Fig. 9. VSWR optimization step 3
In step 4, another rectangular slot of dimensions 1.3 × 1.7 mm2 is etched at the centre of the defected ground structure to further improve the results. From Fig. 10 an RL of −49.39 dB is obtained at a resonant frequency of 9.056 GHz, and Fig. 11 shows a VSWR of less than 2 over the entire 2–16 GHz range.
Fig. 10. RL optimization step 4
Fig. 11. VSWR optimization step 4
In step 5, a circular slot of radius 3 mm is taken out of the circular patch and a segment of radius 14.21 mm is removed from the top of the circular patch. From Fig. 12 the RL curve is observed to be below −10 dB over the 2–16 GHz range, with the minimum return loss of −37.93 dB at 9.21 GHz. From Fig. 13 a minimum VSWR of 1.025 is observed.
Fig. 12. RL optimization step 5
Fig. 13. VSWR optimization step 5
In step 6, a circular ring with outer and inner radii of 7.5 mm and 7 mm respectively is removed from the circular patch, obtaining a return loss of −34.32 dB, as depicted in Fig. 14; from Fig. 15 a minimum VSWR of 1.039 is obtained at a resonant frequency of 9.42 GHz.
Fig. 14. RL optimization step 6
Fig. 15. VSWR optimization step 6
In step 7, a circular conducting element with a radius of 6 mm is integrated with the circular patch to obtain an RL of −42.53 dB at a frequency of 5.472 GHz, as depicted in Fig. 16; this band can be widely used for WLAN applications, which lie within the UWB range. From the VSWR graph in Fig. 17 a value of 1.015 can be observed.
Fig. 16. RL optimization step 7
Fig. 17. VSWR optimization step 7
In step 8, another circular ring of radii 10 mm and 9.75 mm is removed from the patch to obtain a better return loss at 5.4 GHz and 8.98 GHz, as depicted in Fig. 18. Figure 19 shows the VSWR, which is less than 2 over the 2–16 GHz range, and Fig. 20 shows the radiation pattern for the eighth optimization step.
Fig. 18. RL optimization step 8
Fig. 19. VSWR optimization step 8
Fig. 20. Radiation pattern of the proposed antenna for optimization step 8
In step 9, two slots of dimensions 1.6 × 1.2 mm2 are implanted on the patch to join the circular ring and the circular patch. This leads to a marked improvement in return loss: −79.2 dB is obtained at a frequency of 9.16 GHz, with an overall bandwidth of 2–16 GHz, as shown in Fig. 21. Figure 22 shows the VSWR plot, with values between 1 and 2 over the 2–16 GHz range; the minimum VSWR of 1.00 is obtained at 9.16 GHz. Figure 23 shows the radiation pattern for optimization step 9.
Fig. 21. RL optimization step 9
Fig. 22. VSWR optimization step 9
Fig. 23. Radiation pattern of the proposed antenna for optimization step 9
3.2 Analysis of Optimized Steps
The results of every optimization step are analyzed in order to improve the design further. The detailed values are given in Table 2, and the obtained design is further simulated using various materials, as seen in Table 3.

3.3 Comparison Results of Various Substrate Materials
The optimized design of the proposed textile antenna has been simulated with various substrates in CST Microwave Studio in order to compare the best possible results.

3.4 Fabrication Results
Figure 24 shows the optimized textile antenna design, which has been fabricated and tested using a vector network analyzer; a minimum return loss of −39.48 dB is obtained at 9.08 GHz, which is very close to the simulated results. The variations occur because the testing of the proposed textile antenna is done in an open environment (Table 4).
Table 2. Analysis of optimization steps in the proposed design
Optimization step | Return loss (in dB) | VSWR  | Gain (in dB) | Directivity (in dBi)
1                 | −26.89              | 1.094 | −2.748       | 7.243
2                 | −36.42              | 1.03  | 1.826        | 4.626
3                 | −49.39              | 1.006 | 2.359        | 5.189
4                 | −49.39              | 1.006 | 2.359        | 5.189
5                 | −37.93              | 1.025 | 2.541        | 5.403
6                 | −34.32              | 1.039 | 2.535        | 5.415
7                 | −42.53              | 1.015 | 2.138        | 5.036
8                 | −40.37              | 1.019 | 1.177        | 4.395
9                 | −79.2               | 1.000 | 1.478        | 4.738
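Table 2 reports both gain and directivity for each step. Assuming the usual relation gain = radiation efficiency × directivity (an assumption about the gain definition, not a statement from the paper), a rough efficiency estimate can be read off each row, as in this illustrative Python snippet.

```python
# Gain (dB) and directivity (dBi) for optimization step 9, taken from Table 2.
step9_gain_db, step9_dir_dbi = 1.478, 4.738

def efficiency_percent(gain_db, directivity_dbi):
    """Radiation efficiency assuming gain = efficiency * directivity (linear scale)."""
    return 100 * 10 ** ((gain_db - directivity_dbi) / 10)

print(f"step 9 efficiency ~ {efficiency_percent(step9_gain_db, step9_dir_dbi):.0f}%")
```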
Table 3. Simulation results of various substrates
Material | Permittivity | Loss tangent | Simulation results (S11)
Flannel  | 1.72         | 0.02         | (S11 plot in the original table)
Silk     | 1.75         | 0.012        | (S11 plot in the original table)
Fig. 24. Fabrication result of the proposed textile antenna
Table 4. Comparison between simulated and measured results
Mode              | Operating frequency (GHz) | S11 (dB) | BW (GHz)
Simulated results | 9.16                      | −79.2    | 2 to 16
Measured results  | 9.08                      | −39.48   | 3.1 to 10.8
4 Conclusions and Future Scope
The fundamental aim is to cover the frequency range from 3.1 to 10.6 GHz with a wearable textile antenna. The performance of each optimized version of the proposed design is simulated and analyzed in terms of RL using the CST Microwave Studio simulation tool. The implemented antenna design achieves an effective return loss of −79.2 dB over a wide frequency range of 2–16 GHz, while the optimized antenna designs have almost similar radiation patterns. The antenna has high efficiency, low power consumption and a high data rate with effective gain. The proposed textile antenna design can be exploited in Body Area Networks for continuous health monitoring applications; besides this, further investigation should be done to study the radiation effects of the antenna on the human body. Future extensions can explore different feeding techniques.
References 1. Vasanth Kumar, Y.E., Srinivas, B., Praveen, V., Kamalesh, T., Mounica, A.: Novel design of thz microstrip patch antenna for radar applications. IJITEE 8 (2019). ISSN 2278–3075 2. Balanis, C.A.: Antenna Theory: Analysis and Design. Wiley, New York (2004) 3. Ojaroudi, M., Ojaroudi, N., Ghadimi, N.: Dual band-notched small monopole antenna with novel coupled inverted U-Ring strip and novel fork-shaped slit for UWB applications. IEEE Antennas Wirel. Propogation Lett. 12(2), 182–185 (2013) 4. Koohestani, M., Zurcher, J.F., Antonio, A., Moreira, A.A., Skrivervik, K.: A novel, lowprofile, vertically-polarized UWB antenna for Wban. IEEE Trans. Antennas Propagations 62(4), 1888–1894 (2014) 5. Klemm, M., Troester, G.: Textile UWB antennas for wireless body area networks. IEEE Trans. Antennas Propagation 54(11), 3192–3197 (2006) 6. Pozar, D.M., Schaubert, D.H.: Microstrip Antenna: The Analysis and Design of Microstrip Antenna and Arrays. IEEE Antennas and Propagation Society, Willey–Default (1995) 7. Sanz-Izequierdo, B., Huang, F., Batchelor, J.C.: Department of Electronics, Dual Band Button Antenna for Wearable Applications; The University Of Kent, Canterbury, Kent (2006) 8. Kwon, K., Choi, J.: Antennas for wireless body area network applications. In: Antennas Propagation (Eucap) Conference, pp. 375–379 (2013) 9. Lim, J.-S., Kim, C.-S., Lee, Y.-T., et al.: A spiral-shaped defected ground structure for coplanar waveguide. IEEE Microwave Wirel. Compon. Lett. 12(9), 330–332 (2002) 10. Boutejdar, A., Nadim, G., Amari, S., et al.: Control of bandstop response of cascaded microstrip low-pass-bandstop filters using arrow head slots in backside metallic ground plane. In: IEEE Antennas and Propagation Society International Symposium, vol. 1b, pp. 574–577 (2005)
Analytical Study of Scalability in Coastal Communication Using Hybridization of Mobile Ad-hoc Network: An Assessment to Coastal Bed of Odisha Sanjaya Kumar Sarangi(B) and Mrutyunjaya Panda Department of Computer Science and Application, Utkal University, Bhubaneswar, India {sanjaya.res.cs,mrutyunjaya.cs}@utkaluniversity.ac.in
Abstract. Mobile ad-hoc networks (MANETs) present a distinct, infrastructure-free network formed by a group of wireless nodes functioning on a self-driven and self-routing principle. Ad-hoc networks, a new class of communication platform, have been experimented with for enabling effective communication during cyclonic disasters and other calamities. Dynamic mobility support is an emerging requirement for different types of rescue operations. Ad-hoc networks can be set up for reliable communication among provisionally assembled user terminals without depending on the cellular communication system. This study therefore examines scalability and geographical routing protocols employed for communication in rescue operations for fishing boats on the coastal bed of Odisha during a cyclonic disaster or in an emergency communication situation. Keywords: MANET · Multicast routing · Multi-hop · Odisha coastal zone
1 Introduction
Odisha, a cyclone-prone state of India, is situated on the east coast of the Bay of Bengal. Cyclone alerting is a major undertaking of the state government through the Early Warning Dissemination System (EWDS), in which coastal sirens go off concurrently from towers at 122 locations during a cyclonic disaster. In the recent past the meteorological department has improved the alert system significantly and can now predict cyclone intensity 6 to 7 days in advance. But a cyclone alert cannot always be spread in the middle of the night, and for fishermen who are already in the deep sea before the alert the siren system may not be effective, as stated by OSDMA, the Odisha State Disaster Management Authority [1]. Various methods of communication have therefore been suggested and experimented with for cyclone preparedness on the coastal bed of Odisha.

1.1 Vulnerability of the Odisha Coastal Zone
The vulnerability of the Odisha coastal area varies along the coastline, which is traced from Balasore to Ganjam. Most severe cyclones have hit the coastal bed of Odisha during
the post-monsoon season. The Meteorological Department says India has faced 39 very severe cyclones in the past 52 years, and nearly 60% of them (23) occurred between October and December on the east coast, mostly in Odisha. Some cyclones originate in the proximity of the Odisha coast but scatter in the deep sea; these are not so hazardous to the Odisha coast.

Table 1. The district-wise coastal length of Odisha
Coastal districts    | Length (in km)
Balasore             | 87.96 km
Bhadrak              | 52.61 km
Kendrapada           | 83.55 km
Jagatsinghpur        | 58.95 km
Puri                 | 136.48 km
Ganjam               | 60.85 km
Total coastal length | 480.40 km
Coast area           | Nearly 24,000 km2
1.2 Marine and Coastal Lines of Odisha
Balasore, Bhadrak, Kendrapara, Jagatsinghpur, Puri and Ganjam are the six marine districts of Odisha, with one third of the coastline facing Puri district (shown in Fig. 1). Nearly 15% of the revenue area of Odisha state is demarcated as coastal area, and 30% of the population resides in the bay-facing districts; nearly 89% of the coastal population is from the bay areas of the marine districts. Odisha has a marine area of about 24,000 km2, of which roughly 65% lies in the 0–50 m depth range, as shown in Table 1. The sea-facing area is wide towards the northern district of Balasore and narrow towards the south [4]. The availability of a large area up to 50 m depth gives rise to rich shrimp fishing grounds and facilitates the operation of non-motorized boats in the near-shore waters.

1.3 Objective
Of the cyclones that make landfall on the east coast of India, 30% hit Odisha, particularly in April–May and September–November. In the last 2 to 3 years, severe cyclones have struck Odisha [5]. The six coastal districts are exposed to the cyclonic zone. The State Government has made arrangements such as the installation of modern communication systems and other improved infrastructure, including concrete houses for the poor in the cyclone areas, to reduce the physical vulnerability of the coastal districts [6]. Natural disasters and severe storms are of great public concern in view of their large-scale damage potential, leading primarily to loss of property and life [2, 3]. Severe storms make landfall with huge damage, so the coastal zone of Odisha is more vulnerable compared to the other eastern states of India.
Fig. 1. Six districts of Odisha state are measured with marine distance and coastal lines of Odisha.
From Digha to Gopalpur on the Odisha coastline (shown in Fig. 2) there is a long sea-facing stretch. Every day a huge number of fishermen move from the different fishing jetties into the deep sea for their regular profession. As they enter the deep sea and stay for 5 to 10 days, they are often unaware of their position and movement at sea. During a disaster an announcement is made not to go out to sea, but many are already in the deep sea. Since a large number of fishing boats move in this belt, it is a research challenge to detect and save those fishing boats in the deep sea during a natural disaster using an automatic warning communication system.
Fig. 2. The yellow circles mark major fishing jetties along the red coastal line of Odisha state.
The total coastal region of the Odisha coast is grouped into a number of clusters according to the physical region (shown in Fig. 3). From each cluster unit a number of fishing boats are added to the multi-level cluster. Through message passing, a message is forwarded to the clusters via the cluster head. The fishing boats are the ad-hoc nodes and are always moving dynamically within the cluster region. The nodes communicate by sending data packets into the multicast region obeying the multi-hop rule. The sender node selects the neighboring node that has the maximum number of neighbors and the minimum distance from the sender, to guarantee that the data packet reaches the multicast region. In this way fishing boats may be monitored and controlled by the proposed methods, which will benefit the fishermen who enter the deep sea every day, especially during a disaster.
Fig. 3. Multilevel clustering and communication link between clusters and base station using MANET in the coastal bed.
A scalable coastal communication system may be implemented for disaster warning in the coastal area, and these requests could be linked so as to automatically look up contacts to order equipment or services. We may therefore produce a scalable node selection and multicast routing method of communication for reaching every node within the clustered region. The Early Warning Broadcasting System involves technologies such as group messaging, satellite-based communication and community radio for inter-operable communication. But owing to the major limitations of such message passing, we propose a scalable communication method using hybridization of MANET, together with related techniques introduced by researchers in the recent past based on the routing protocol, type of infrastructure and communication technique, so that the effectiveness of disaster management may be improved for rescue operations in the deep sea.
2 Literature Survey
Here we survey multipath communication systems [8], exploring the performance issues, pros and cons of multipath routing protocols and resource allocation in mobile ad-hoc networks. Multipath routing protocols [18] offer consistent communication and ensure load balancing to improve the quality of service in MANETs. In addition, multi-hop routing capability, minimal processing and control overhead, dynamic topology maintenance, reliable communication and immediate responses are needed to cope with unreachability and inaccessibility of the disaster centers [9].

2.1 Disaster Response System
A number of communication methods can be arranged to make victims aware by connecting them to disaster response centers, and these techniques can be established in advance to alert the victims and the rescue evacuators involved. In the recent past various emergency response systems have been proposed to manage the outcome of a cyclonic disaster and its consequences [10], addressing timely resource allocation, information management, synchronization of preparedness, communication systems, mitigation and decision support. Rescue operations can be carried out before or after the cyclone hits: before the incidence of the hazard, preparation and awareness through the early warning system constitute cyclone preparedness; after the cyclone, the response involves taking the necessary actions in the situation and recovering from the incident [7, 13, 14].

2.2 Pre-disaster Communication
Preparedness and attentiveness constitute the major step of a pre-disaster cyclonic warning system for executing a set of planned actions. Mobile phones embedded with current technology also provide users with weather-detection capabilities [16]. Wireless and satellite communication play a major role in forwarding alert messages through appropriate sensors fixed in a particular area. Using satellite transceivers and base stations, the cyclone rescue center checks the strength and validity of the message and relays it back to the base station to be carried forward as an emergency. The active mobile users are reached via the base stations in the signal area, which transmit the alert message to the active users, forming a mobile ad-hoc network. In this study we present a scalable communication system for cyclonic disasters, and preparedness techniques are used for alerting along the seashore. Communication among fishing boats is a big challenge in recent climatic hazards.
2.3 Post-disaster Communication
After a disaster, the main communication system enables the active networks to communicate with the affected zones so that trapped victims can be linked and traced by mobile nodes or by active sensor devices set up beforehand [22]. Existing systems recommend various approaches for communication after a disaster. The base stations remain active with maximum coverage area even if a node is not in communication range; communication can be made reachable for mobile nodes in the sea area by providing an actively linked reply message through the base station, via route-reply messages within the existing ad-hoc network.
3 MANET in Rescue Communication, the Most Challenging Trend
Mobile ad-hoc networks are distinct, infrastructure-free networks in which routing is a big challenge for message passing [19]. Without central network administration, each wireless node performs routing operations to maintain network stability, reliable connectivity and data integrity. In the recent past researchers have focused on device management, medium access control, routing techniques, energy constraints and, mostly, security [11, 17]. Three main types of routing protocols are used, classified by their response to movement and communication hazards: reactive, proactive and hybrid routing protocols. Based on multi-hop and multicast communication, the following approaches are discussed in comparison with other traditional communication systems.

3.1 Node-Based Communication Systems
Compared with other communication systems, proactive routing protocols are suited to the pre-disaster situation. A number of routing protocols, such as Time-to-Return, Epidemic and MaxProp, have been tested for adaptable networks, where the authors conclude that Time-to-Return is a reliable routing method for rescue operations [25]. In [23, 24], the authors proposed an on-demand hierarchical multicast distance vector routing protocol to achieve good reliability in locating nodes in hazardous situations. For effective response and recovery, most routing protocols are either proactive, reactive or hybrid in principle. To simplify the detection of emergency attacks, a smart-emergency geographical information system has also been developed to improve security [20].

3.2 Cellular-Based Communication Systems
In recent smart mobile technology, cellular-based applications for disaster situations are independent and include mobile maps to trace out most communication problems [10, 21]; a related method relies on a mobile automated system to decide the order of operations and collect the victims' information [12]. In the recent past researchers have found MANETs to be a well-suited solution for disaster communication. The following modes of cellular communication are in some form used by fishermen during an emergency:
• Mobile Phones (MP)
• Very High Frequency (VHF) radio
• Distress Alert Transmitters (DAT)
• Community Radio (CR)
• Sagarvani Integrated Information Dissemination System (SIIDS)
• Navic Satellite Navigation System (NSNS)
• Television and Radio (TR)
3.2.1 Mobile Phone (MP)
Limitation: In the deep sea, relaying emergency messages from the base station to the coastal area is comparatively ineffective. The communication range is limited to 2–5 nmi, and the service providers' SIM cards cannot cover the roughly 15 nmi range that would be needed. The network strength decreases near the coast, and the service providers plan to increase the signal range in the near future.
3.2.2 Very High Frequency (VHF) Radio
Limitation: Indian fishing vessels use VHF radios without a specific license and do not use marine radios, as restricted by law. Marine radios have specific features that give each vessel a unique Maritime Mobile Service identity, so that when a vessel transmits a distress message, a GPS-generated latitude-longitude coordinate is captured by all vessels in the neighborhood. Since marine radios are more expensive and a license is difficult for fishermen to obtain, VHF radio use is limited and has a range of not more than 15–20 nmi, even when fitted to large built-up fishing vessels.
3.2.3 Distress Alert Transmitters (DAT)
Limitation: Since the device costs Rs 10,000, with a 75% subsidy provided through a government-sponsored scheme, only a small number of DATs have been distributed to fishermen under the sanctioned project. As per the circulars, the Indian Coast Guard issued 1,853 units in 2017, Kerala issued 5,000 units in 2016, Tamilnadu issued 1,800 units in 2010 and Andhrapradesh issued 2,000 units in 2015. The fishermen acknowledged that DATs had been distributed to a few members but without proper training for effective use and maintenance. Fishermen have reported false alarms at many sites, due to which the Coast Guard has stopped responding to the incoming calls.
3.2.4 Community Radio (CR)
Limitation: Community Radio, also known as FM radio, has a range of only 25–50 nmi, depending on the strength of the transmission signal. High towers are prohibited because of resource constraints, so Community Radio cannot replace more reliable modes of communication.
3.2.5 Sagarvani Integrated Information Dissemination System (SIIDS)
Limitation: The Sagarvani SMS system depends fully on mobile GSM or CDMA communication and is only useful if the fishermen are close to the seashore. Fishermen have to register for the SMS service, and only 480 fishermen had signed up for the service before the cyclone; the number has since increased to more than 100,000, as reported by the Odisha State Disaster Management Authority.
3.2.6 Navic Satellite Navigation System (NSNS)
Limitation: Until now the satellite navigation system has not been widely adopted for navigating fishing vessels. ISRO has planned a pilot project to launch it, is in the process of estimating how to commercialize it, and is preparing a team to expand the project.
3.2.7 Television and Radio (TR)
Limitation: Television and radio are common channels for broadcasting alert messages, but they do not guarantee receipt of the warning message because of the limited availability of the devices. Many fishermen carry transistor radios for receiving warning messages, but their range is limited compared to other modes of communication.
4 Proposed Method of Communication
Given the above limitations of cellular communication, we propose a node-based communication system using hybridization of a mobile ad-hoc network, consisting of (4.1) scalable node selection using a multi-hop hybrid routing protocol and (4.2) an optimized geographical multicast routing protocol, in which the cost function of the routing protocol in the multi-hop region is treated as one of the essential network parameters that allow reliable communication between the regions through effective message passing over the multi-clustered network. Wireless multi-hop MANETs (shown in Fig. 4) are networks of nodes linked through multiple clusters and switched along an assigned path. Because of their limited radio range, some devices cannot communicate directly with each other, so this type of network depends on intermediate nodes to forward messages by message passing; the intermediary nodes act as relays and transmission occurs through multiple hops on the way to the final destination.

4.1 Scalable Node Selection Method
1. A scalable node selection method with a set of scalability pair factors is assigned to the multicast tree in the network.
Fig. 4. The Multi-hop MANET communications between multilevel clusters and base station.
2. The scalability pair factor (FSP) states the link stability based on parameters such as the energy level, the node positions and the path cost assigned between them.
3. Successful communication of data packets between the sender and a group of endpoints is inversely proportional to the cost of the assigned path and directly proportional to the least energy level among the communicating nodes.
4. Several parameters are considered to compute the scalability pair factor (FSP): (i) the energy remaining in the nodes, observed by regulating the energy model in the communication process; (ii) mobility, where scalability is formulated from the changes in node locations by deriving the direction of communication and the average velocity.

4.2 Optimized Geographical Multicast Routing Protocol (OGMRP)
For effective routing decisions, the optimized geographical multicast routing protocol uses the geographical location and a structured network of nodes, where the location information is generally obtained either from a GPS system or from the positions suggested by neighboring nodes (shown in Fig. 5). Geographical multicast routing depends on two factors:
1. the position information self-stored by each node, and
2. the corresponding geographical distance between mobile nodes within the clustered network topology.
Fig. 5. Corresponding link within the multi clustered network with lost nodes identification by geographical distance.
5 Methodology
1. Select a multicast tree with a set of scalability pair factors (FSP).
2. The FSP is formulated for scalable pairing from a number of parameters: the energy level, the node positions and the path cost assigned between them.
3. Using the available multicast routes and group memberships, the path is reorganized by the on-demand source, which then sends the multicast packets through the mesh topology. Being an on-demand protocol, it uses a simple way of maintaining group membership.
4. A link request message is broadcast in the selected network, gradually refreshing the membership information and multicast routes, when a multicast sender has data to forward but no route to the multicast group exists. After receiving a link request message, an intermediate node keeps the source address and sequence number to check for duplicates of the received packet in the message buffer.
5. The link request message is broadcast once more by the node when the received packet is not a duplicate and its counter value is greater than zero. The source node id and cost function are also updated in the routing table prior to the communication. A node broadcasts the link reply message when the link request message reaches a node that belongs to the intended multicast group.
6. After the link reply message is received, each node checks whether its own identity matches the next-hop address. Matching nodes then set the forwarding-group flag for the same multicast group and resend the link reply message according to their routing tables, so the link reply message is transmitted to the multicast tree along the shortest path.
7. The cost function is calculated after the forwarding nodes and senders multicast the packets to the receivers. On receiving a data packet, a node checks whether the packet is a duplicate and then forwards it to the next forwarding node through the selected shortest path. Here the node selection is done with the parameters gathered while passing from source to destination.
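The paper does not give a closed-form expression for the scalability pair factor, so the following Python sketch is purely illustrative: it assumes a hypothetical FSP-style score that rewards residual energy and neighbor count and penalizes distance and path cost, and uses it to pick the next hop in the spirit of the selection rule described earlier (the neighbor with the maximum number of neighbors and the minimum distance from the sender). All names and numbers below are assumptions, not values from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Boat:
    boat_id: str
    x: float          # position (km east of a reference point)
    y: float          # position (km north)
    energy: float     # residual energy, normalized to [0, 1]
    neighbors: int    # current neighbor count

def fsp_score(sender, candidate, path_cost):
    """Hypothetical scalability-pair score: higher means a more stable pairing."""
    distance = math.hypot(candidate.x - sender.x, candidate.y - sender.y)
    return candidate.energy * (1 + candidate.neighbors) / (1e-6 + distance + path_cost)

def select_next_hop(sender, candidates, path_costs):
    """Pick the neighbor with the highest score as the next multicast relay."""
    return max(candidates, key=lambda c: fsp_score(sender, c, path_costs[c.boat_id]))

sender = Boat("S", 0.0, 0.0, 0.9, 4)
candidates = [Boat("B1", 2.0, 1.0, 0.8, 6), Boat("B2", 5.0, 4.0, 0.9, 3), Boat("B3", 1.5, 0.5, 0.4, 2)]
costs = {"B1": 1.0, "B2": 2.5, "B3": 1.0}
print("selected next hop:", select_next_hop(sender, candidates, costs).boat_id)
```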
6 Performance Evaluation
Here we document the simulation parameters for the proposed methodology and analyze the expected outcome of scalable node selection with the optimized geographical multicast routing protocol (OGMRP). The effectiveness of the cost function during multicasting is derived from the number of communications, the percentage of forwarding nodes, and the number of data packets sent per data packet received, i.e. the packet transfer ratio (a simple illustration of these metrics is sketched after the parameter list below). We will also carry out multiple experiments to improve the routing performance by varying the following network parameters:
• Energy Model
• Mobility Model
• Network Traffic Load
• Multicast Group Size
• Group Member Size
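The evaluation metrics named above can be computed directly from simulation counts. The sketch below is illustrative only (the sample numbers are assumptions, and the packet transfer ratio is written in the common received-over-sent form).

```python
def packet_delivery_ratio(packets_received, packets_sent):
    """Fraction of multicast data packets that actually reach the receivers."""
    return packets_received / packets_sent if packets_sent else 0.0

def forwarding_node_percentage(forwarding_nodes, total_nodes):
    """Share of nodes that relay multicast traffic (lower means less overhead)."""
    return 100.0 * forwarding_nodes / total_nodes

sent, received = 1200, 1068        # hypothetical counts from one simulation run
forwarders, nodes = 18, 60
print(f"packet delivery ratio: {packet_delivery_ratio(received, sent):.2%}")
print(f"forwarding nodes     : {forwarding_node_percentage(forwarders, nodes):.1f}%")
```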
7 Simulation Analysis
In this simulation study we focus on the procedure for computing the scalability pair factor (FSP) and on the optimized geographical multicast routing protocol (OGMRP) for improving routing decisions within the derived geographical region. Destination nodes are selected randomly over the physical topography using the optimized geographical routing method of communication. Node speeds are selected uniformly between a minimum and a maximum on the route to the destination; after a pause time, the node stops and then the transmission again carries the data towards its destination. A network simulator will be used to simulate the proposed methodology with the parameter settings that give the best performance.
8 Conclusion
We propose a scalability method and an optimized routing protocol for the selection of neighboring nodes within the multicast tree. Using network parameters such as the energy model, mobility model, network traffic load, multicast group size and group member size, the simulation results will show the effectiveness of the proposed method of communication. Detecting and protecting fishing boats in the deep sea, and coastal communication during a cyclonic disaster, are the major issues addressed in this assessment. QoS is also a measured parameter for providing effective and scalable communication within the multicast area. The above analytical study stays within this outline and faces some limitations. Finally, a scalable method of communication remains a pivotal point of attraction in making widespread ad-hoc network positioning a reality for a number of rescue applications.
References 1. State Disaster Management Plan, ODISHA (2016). https://www.osdma.org/plan-and-policy/ state-disaster-management-plan/ 2. People’s Report on Status of Marine Fisher Community in Orissa as on (2003). http://uaaodi sha.org/upload/Fishers%20of%20Orissa%20Coast-OTFWU.pdf
Effect of Environmental and Occupational Exposures on Human Telomere Length and Aging: A Review Jasbir Kaur Chandani, Niketa Gandhi, and Sanjay Deshmukh(B) Department of Life Sciences, University of Mumbai, Santacruz (E), Mumbai 400 098, India [email protected], [email protected], [email protected]
Abstract. Today's hectic and demanding lifestyle leaves little time to focus on personal health, which in the long run is hazardous. Environmental and occupational stresses are found to play a specific role in the aging process. The purpose of this review is to highlight the role of telomeres as a possible biomarker of aging and to identify the potential exposures causing telomere length attrition. Better management of these exposures can further help to increase lifespan. This paper reviews the published data on the effect of various work environment exposures on telomere length attrition and discusses future directions for telomere length related research. Keywords: Aging · Biomarker · Environmental exposure · Occupational exposure · Occupational stress · Stress · Telomere length
1 Introduction
1.1 Environmental and Occupational Exposures
Occupational and environmental exposures have become a topic of major concern in developing countries like India, where exposure levels are likely to be higher because rules and regulations are less strict than in developed countries. The intensity and duration of exposure to numerous substances in the workplace can increase the chances of cancer in the exposed workers. Research has also found that more than 40% of possibly carcinogenic exposures occurred in the workplace [1]. Occupational exposures are further linked to the development of various physiological diseases, such as nervous and cardiovascular disease, and may cause severe developmental issues [2]. Workers face these exposure levels throughout the day, so there is an urgent need for research conducted in the workplace. It has also been reported that, although occupational cancer occurs in only a small fraction of the working population, it places a large number of workers at risk of disease [3]. It was thus emphasized that the identification of occupational hazards should be given prime importance in any cancer prevention program. This explains the
reason for the universal nature of occupational exposure and the absolute need to study its effect on telomere length and aging. Various work environment exposures are found to affect the rate of telomere length attrition. However, it was only late in the 20th century that researchers found that telomere length could be an important tool in aging studies. Telomeres, the nucleoprotein complexes at chromosome ends, shorten with each cell division and with increasing age [3, 4]. Telomerase, a reverse transcriptase enzyme, adds repeats to these shortened telomeres and can reverse this process to some extent [5]. Short telomere length has been linked to increasing age, weight gain, desk-bound habits and smoking [6, 7]. Smoking leads to accelerated telomere shortening [8]: the higher the smoking dose, the shorter the telomere length [8]. One study found that telomere length was lost at the rate of 25.7 to 27.7 base pairs/year, with an additional 5 bp lost per year of daily cigarette smoking [9]. So, smoking one pack of cigarettes daily for 40 years is equivalent to a loss of more than 7 years of one's life [9]. Smoking thus increases oxidative stress, which leads to early shortening of telomere length and may speed the aging process. Obesity is also associated with excessive telomere length attrition: obese women have significantly shorter telomere length than lean women of the same age [9]. The increase in the rate of telomere attrition in an obese individual was equivalent to almost 9 years of one's life. The rate of telomere attrition is thus very helpful in determining one's health and pace of aging. Many studies have tried to relate various work environment exposures with telomere length, but the results are conflicting, with no clear picture of telomere length variation.
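To make the arithmetic behind the smoking-equivalence estimate above explicit, a minimal sketch (Python) is given below; the attrition rate and the 5 bp/year smoking penalty are taken from the study cited above [9], while the variable names and the script itself are purely illustrative.

```python
# Baseline attrition reported in [9]: ~25.7-27.7 bp/year; daily smoking adds ~5 bp/year.
baseline_bp_per_year = 25.7        # lower bound of the reported baseline range
smoking_extra_bp_per_year = 5.0    # additional loss per year of daily smoking
years_smoking = 40                 # e.g. one pack per day for 40 years

extra_loss_bp = smoking_extra_bp_per_year * years_smoking   # 200 bp in total
equivalent_years = extra_loss_bp / baseline_bp_per_year     # ~7.8 "extra" years of attrition

print(f"Extra telomere loss: {extra_loss_bp:.0f} bp, "
      f"equivalent to about {equivalent_years:.1f} years of baseline attrition")
```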
2 What Are Telomeres?
Telomeres are like caps on the ends of our chromosomes that protect the DNA from wear and tear [10]. They are present at both ends of chromosomes and prevent chromosome ends from binding to each other and degrading important cellular information. Telomeres are made of repetitive G-rich nucleotide sequences [11, 12]. The repeating unit in humans is the hexamer 5′-TTAGGG-3′, and telomeres have an average length of 4–15 kilobase pairs [13, 14]. This DNA extends 5′→3′ towards the chromosome end, overhanging the C-rich strand by 100–150 nucleotides [15]. It folds back further as a telomeric loop (T-loop) to save more DNA from degradation [16]. There are a total of 46 chromosomes in humans (Fig. 1). Telomeres are thus specialized structures that protect the ends of chromosomes. They play a vital role in protecting our genetic material from wear and tear. As aging progresses, telomere length is reduced, leading to cell senescence and death. At the organism level, telomere length shows the effect of lifetime stresses, aging and many age-associated diseases.
3 Telomere Length Measurement
Telomere length can be measured in numerous ways. The different methods assess telomere length from different angles, such as:
Fig. 1. Simplified structure of telomere
• their total population in a cell,
• their presence in a single cell,
• their presence in a single chromosome.
To assess the role of telomere length as a possible biomarker of aging, we require a method that can measure telomere length attrition efficiently and accurately, with results sound enough to establish the correlation between telomere length, occupational and environmental stressors, and aging. The different methods used to analyze telomere length are described here in short:
A. Southern blot: This was the first method to measure the average telomere length in a cell population [17]. Genomic DNA is digested with specific restriction enzymes that cut only non-telomeric DNA into multiple fragments, leaving the terminal restriction fragments (TRFs) undisturbed. These fragments are then separated by agarose gel electrophoresis (AGE) and analyzed [17].
B. Q-FISH (quantitative fluorescence in situ hybridization): Fluorescence was used for the first time in this method to measure telomere length [18]. Lansdorp et al. [19] developed Q-FISH for quantification. It makes it possible to compare telomere length between cells, as it measures individual telomere lengths.
C. Flow FISH: It combines FISH (fluorescence in situ hybridization) and flow cytometry [20]. It helps to analyze the telomere length of a cell population and to identify individual subpopulations [21].
D. Single telomere length analysis (STELA): This method is based on the principle of PCR. It allows telomere length to be studied at the level of a single chromosome [22].
E. q-PCR (quantitative polymerase chain reaction): In this method the target DNA is amplified and analysed at the same time [23]. Relative quantification is expressed as the ratio T/S, i.e. the telomeric DNA product (T) divided by the product of a single-copy reference gene (S) [23].
F. O'Callaghan method (a PCR-based method for measurement of absolute telomere length, aTL): This method uses an oligomer standard to obtain absolute values of telomere length [24].
G. Monochrome multiplex qPCR (MMqPCR): In this method the telomeric DNA and a single-copy gene are amplified together, reducing the chance of pipetting error [25].
H. Droplet digital PCR (ddPCR): This is the most recent method for absolute quantification of DNA [26]. The sample is partitioned into droplets of uniform size [27], and the PCR-based method counts the DNA molecules enclosed in the droplets. Advantages of this method are:
• It is very specific, with a detection range of 1–100,000 molecules per reaction [27]
• It is more proficient than qPCR
• It is easier, as no standard curves are involved
• It is very precise
• ddPCR provides absolute quantification, with better results than other gel-based methods
As described above, there are many methods of telomere length measurement. Which method should be used depends on the type and aim of one's study, and each method has its own pros and cons.
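To make the relative q-PCR measure in method E concrete, the sketch below computes a T/S ratio from threshold-cycle (Ct) values using the standard 2^(-ΔΔCt)-style calculation; the function name and the sample Ct values are hypothetical and are given purely for illustration, not taken from any of the cited studies.

```python
def relative_ts_ratio(ct_telomere, ct_single_copy,
                      ref_ct_telomere, ref_ct_single_copy):
    """Relative telomere length (T/S) of a sample versus a reference sample.

    T/S = 2 ** -(dCt_sample - dCt_reference), with dCt = Ct(telomere) - Ct(single-copy gene).
    A lower telomere Ct means more telomeric product, hence a larger T/S ratio.
    """
    d_ct_sample = ct_telomere - ct_single_copy
    d_ct_reference = ref_ct_telomere - ref_ct_single_copy
    return 2 ** -(d_ct_sample - d_ct_reference)

# Hypothetical Ct values: this sample has slightly longer telomeres than the reference.
print(relative_ts_ratio(14.2, 20.1, 15.0, 20.3))  # ~1.5, i.e. longer than the reference
```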
4 Telomere Length: A Biomarker of Aging
Telomere length depicts the biological age of a cell and the cumulative effect of genetic, biological and environmental stressors. These stressors include socioeconomic status, adverse life events, smoking, oxidative stress, physical and sexual child abuse, mental illness, stress, inflammation, cancer, cardiovascular diseases and alcohol and drug use. These factors, along with the end replication problem, have an important role in telomere length shortening [28]. These stressors increase cellular multiplication and aging through an as yet unknown mechanism [29]. Various findings showing a correlation between telomere length shortening and replicative senescence suggest that telomeres act as a "mitotic clock" which determines the cellular life span [30, 31]. Telomere length is used as a prospective indicator in numerous aging-related studies [32].
It has been shown that, because telomere length changes with age, differs from person to person and is highly related to various aging-related diseases, it fulfills most of the requirements of an indicator of aging [15]. Telomere length shortens with increasing age as a normal cellular process [9]. Human fibroblast cells enter senescence after 50 to 70 cell divisions [15]. Telomere length in humans mostly decreases at the rate of 24.8 to 27.7 bp/year [33]. Telomere length has a specific role in the aging process [34, 35]. It also varies with different parameters such as age [4], genes and their nearby environment [36], socioeconomic status [37], physical fitness [37], weight [9, 38] and smoking status [9, 39]. Gender has no major effect on telomere length [22]. Telomere length shortening is related to diseases such as heart disease [40], diabetes [41], cancer [42, 43] and osteoporosis [44]. Telomere length measurement can thus be used as a molecular determinant of overall health. Various lifestyle factors, such as smoking and excessive drinking, are found to decrease telomere length, which can be reversed by a change in these lifestyle factors.
5 Association Between Telomere Length Shortening and Aging During Environmental or Occupational Exposures: Overview
Work environment exposures play a specific role in telomere length shortening and thus in aging [45]. In a study conducted a decade ago, it was found that extended exposure to situations such as caregiving for a very sick child resulted in shorter telomere length, with a difference equivalent to almost 10 years of aging in these caregivers compared with caregivers of healthy children [46]. Three years later the same result was observed in women working full-time, and the number of working years also shortened telomere length significantly [47]. The effect of pollution was also studied in office workers and traffic police officers [48]: traffic police officers were found to have shorter telomere length than office workers [48]. A year later, in a study on coke oven workers, the exposed workers were found to have shorter telomere length than controls [49], and the number of years of harmful exposure impacted telomere length significantly [49]. On the contrary, the first instance of a telomere length increase was reported a year later, in 63 steelworkers, after short-term work environment exposure to air pollution [50]. In a study conducted in the same year on 157 rubber industry workers occupationally exposed to compounds in rubber fumes, shorter telomere length was observed, which was especially linked to exposure to N-nitrosamines [51]. In another study on 608 women aged 35–74, long-term employment was linked to telomere length attrition [52]. Exposure to highly stressful situations was also studied in 50 chronically stressed female caregivers [53], and it was emphasized that occupational stress increases the rate of telomere length shortening and biological aging [53]. Shorter telomere length was also reported in 144 battery manufacturing workers in China with atypical lead levels in blood and urine; the higher the blood lead level, the shorter the telomere length [54]. Occupational exposure accelerated telomere length shortening in 240 car mechanical workshop workers and led to early aging, which was also experimentally confirmed later in the same year. The role of duration of exposure in increased DNA damage and telomere
shortening was also highlighted [55]. Antithetically, short-term exposure to pollutants was found to increase telomere length in 120 truck drivers compared with 120 office workers [55], and it was predicted that longer exposure may shorten telomere length [55]. In the Agricultural Health Study (AHS), conducted on pesticide applicators to assess the effect of lifetime use of 48 pesticides on telomere length, seven pesticides were found to be negatively correlated with telomere length [56]. In a survey conducted on 87 boilermakers, increased inflammation was found to decrease telomere length [56]. Ionizing radiation exposure was also found to affect telomere length; it was studied in 595 cleanup workers of the Chernobyl nuclear power plant. The telomere length of the cleanup workers was found to be slightly higher than that of controls. As a higher incidence of cancer was found in exposed people compared with unexposed ones, the increase in telomere length in the cleanup workers was attributed to a defect in the regulation of the telomerase enzyme [57]. In another agricultural health study, conducted in Iowa and North Carolina on 568 pesticide applicators, increasing lifetime days of use of a few pesticides such as 2,4-D and butylate were strongly associated with shorter telomere length; in contrast, increasing use of alachlor was positively associated with longer telomere length [58]. The correlation between work environment exposure to welding fume particles and telomere length was elucidated in 101 welders and 127 controls in the same year: telomere length in welders was found to be shorter than in controls [59], and the greater the number of working years as a welder, the shorter the telomere length [59]. Telomere length was also assessed in 334 male lead smelters and 60 unexposed males [60]: the higher the blood lead level in the exposed group, the shorter the telomere length [60]. In astronauts, occupational stress can be studied very well, as telomere maintenance is significant in that case; they are jointly exposed to many stressors such as radiation, noise, and mental, nutritional and physical stressors during space travel [61]. It was found that telomere length changed considerably during space flight [61]. Although telomere length returned to almost preflight levels within 6 months of return to Earth, higher numbers of short telomeres were reported [61].
6 Future Directions
Although all these cross-sectional studies provide sufficient evidence linking various work environment exposures to telomere length alteration, very few longitudinal studies have been conducted in this direction. There are also very few studies correlating these exposures with different lifestyle factors such as work stress, sleep, diet, smoking, obesity, socioeconomic status, etc. Many such longitudinal studies are required in the future to establish the precise role of these work environment exposures in telomere length and aging. Also, whether minimizing the exposure dose or level has favorable effects on telomere length and aging needs to be investigated. Another question that needs to be answered is how much reduction in environmental and occupational exposures is required to reverse the effect on telomere length.
In short, we can use telomere length as a biological indicator of aging, and it can help us understand the effect of work environment exposures on aging, which in the future can prevent the occurrence of various aging-related diseases and help to create a healthy work environment. In summary, telomere length was found to both shorten and lengthen with different work environment exposures, but the precise mechanism behind this process is still unknown. Whether excessively long or excessively short telomeres lead to aging still remains a mystery.
References 1. Siemiatycki, J., Richardson, L., Straif, K., Latreille, B., Lakhani, R., Campbell, S., Rousseau, M.C., Boffetta, P.: Listing occupational carcinogens. Environ. Health Perspect. 112(15), 1447–1459 (2004) 2. Newman-Taylor, A.J., Coggon, D.: Attribution of disease. In: Baxter, P.J., Aw, T.C., Cockcroft, A., Durrington, P., Harrington, J.M. (eds.) Hunter’s Diseases of Occupations, 10th edn, pp. 89– 95. Hodder Arnold, London (2010) 3. Doll, R., Peto, R.: The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today. J. Natl. Cancer Inst. 66(6), 1191–1308 (1981) 4. Frenck, R.W., Blackburn Jr., E.H., Shannon, K.M.: The rate of telomere sequence loss in human leukocytes varies with age. Proc. Natl. Acad. Sci. U.S.A. 95(10), 5607–5610 (1998) 5. Chan, S.W., Blackburn, E.H.: New ways not to make ends meet: telomerase, DNA damage proteins and heterochromatin. Oncogene 21(4), 553–563 (2002) 6. Shiels, P.G., Mc Glynn, L.M., Mac Intyre, A., Johnson, P.C., Batty, G.D., Burns, H., Cavanagh, J., Deans, K.A., Ford, I., Mc Connachie, A., Mc Ginty, A., Mc Lean, J.S., Millar, K., Sattar, N., Tannahill, C., Velupillai, Y.N., Packard, C.J.: Accelerated telomere attrition is associated with relative household income, diet and inflammation in the p SoBid cohort. PLoS One 6(7), e22521 (2011) 7. Fyhrquist, F., Saijonmaa, O.: Telomere length and cardiovascular aging. Ann. Med. 44(Suppl. 1), S138–S142 (2012) 8. Song, Z., von Figura, G., Liu, Y., Kraus, J.M., Torrice, C., Dillon, P., Rudolph-Watabe, M., Ju, Z., Kestler, H.A., Sanoff, H., Lenhard Rudolph, K.: Lifestyle impacts on the aging-associated expression of biomarkers of DNA damage and telomere dysfunction in human blood. Aging Cell 9(4), 607–615 (2010) 9. Valdes, A.M., Andrew, T., Gardner, J.P., Kimura, M., Oelsner, E., Cherkas, L.F., Aviv, A., Spector, T.D.: Obesity, cigarette smoking, and telomere length in women. Lancet 366(9486), 662–664 (2005) 10. Puterman, E., Epel, E.: An intricate dance: life experience, multisystem resiliency, and rate of telomere decline throughout the lifespan. Soc. Pers. Psychol. Compass 6(11), 807–825 (2012) 11. Adaikalakoteswari, A., Balasubramanyam, M., Mohan, V.: Telomere shortening occurs in Asian Indian type 2 diabetic patients. Diabet. Med. J. Br. Diabet. Assoc. 22(9), 1151–1156 (2005) 12. Benetos, A., Gardner, J.P., Zureik, M., Labat, C., Xiaobin, L., Adamopoulos, C., Temmar, M., Bean, K.E., Thomas, F., Aviv, A.: Short telomeres are associated with increased carotid atherosclerosis in hypertensive subjects. Hypertension 43(2), 182–185 (2004) 13. Getliffe, K.M., Martin Ruiz, C., Passos, J.F., von Zglinicki, T., Nwokolo, C.U.: Extended lifespan and long telomeres in rectal fibroblasts from late-onset ulcerative colitis patients. Euro. J. Gastroenterol. Hepatol. 18(2), 133–141 (2006)
14. Ornish, D., Lin, J., Daubenmier, J., Weidner, G., Epel, E., Kemp, C., Magbanua, M.J., Marlin, R., Yglecias, L., Carroll, P.R., Blackburn, E.H.: Increased telomerase activity and comprehensive lifestyle changes: a pilot study. Lancet Oncol. 9(11), 1048–1057 (2008) 15. von Zglinicki, T., Martin-Ruiz, C.M.: Telomeres as biomarkers for ageing and age-related diseases. Curr. Mol. Med. 5(2), 197–203 (2005) 16. Sampson, M.J., Winterbone, M.S., Hughes, J.C., Dozio, N., Hughes, D.A.: Monocyte telomere shortening and oxidative DNA damage in type 2 diabetes. Diab. Care 29(2), 283–289 (2006) 17. Southern, E.M.: Detection of specific sequences among DNA fragments separated by gel electrophoresis. J. Mol. Biol. 98(3), 503–517 (1975) 18. Lengauer, C., Riethman, H., Cremer, T.: Painting of human chromosomes with probes generated from hybrid cell lines by PCR with Alu and L1 primers. Hum. Genet. 86(1), 1–6 (1990) 19. Lansdorp, P.M., Verwoerd, N.P., van de Rijke, F.M., Dragowska, V., Little, M.T., Dirks, R.W., Raap, A.K., Tanke, H.J.: Heterogeneity in telomere length of human chromosomes. Hum. Mol. Genet. 5(5), 685–691 (1996) 20. Rufer, N., Dragowska, W., Thornbury, G., Roosnek, E., Lansdorp, P.M.: Telomere length dynamics in human lymphocyte subpopulations measured by flow cytometry. Nat. Biotechnol. 16(8), 743–747 (1998) 21. Baerlocher, G.M., Lansdorp, P.M.: telomere length measurements in leukocyte subsets by automated multicolor flow-FISH. cytometry. Part A. J. Int. Soc. Anal. Cytol. 55(1), 1–6 (2003) 22. Baird, D.M., Rowson, J., Wynford-Thomas, D., Kipling, D.: Extensive allelic variation and ultrashort telomeres in senescent human cells. Nat. Genet. 33(2), 203–207 (2003) 23. Cawthon, R.M.: Telomere measurement by quantitative PCR. Nucleic Acids Res. 30(10), e47 (2002) 24. O’Callaghan, N.J., Fenech, M.: A quantitative PCR method for measuring absolute telomere length. Bio. Proced. Online 13, 3 (2011) 25. Cawthon, R.M.: Telomere length measurement by a novel monochrome multiplex quantitative PCR method. Nucleic Acids Res. 37(3), e21 (2009) 26. Biorad.com (2010). http://www.bio-rad.com/en-in/applications-technologies/droplet-digitalpcr-ddpcr-technology 27. Pinheiro, L.B., Coleman, V.A., Hindson, C.M., Herrmann, J., Hindson, B.J., Bhat, S., Emslie, K.R.: Evaluation of a droplet digital polymerase chain reaction format for DNA copy number quantification. Anal. Chem. 84(2), 1003–1011 (2012) 28. Monaghan, P.: Telomeres and life histories: the long and the short of it. Ann. N.Y. Acad. Sci. 1206, 130–142 (2010) 29. Blackburn, E.H.: Telomeres and telomerase: the means to the end (Nobel lecture). Angew. Chem. (Int. Ed. Engl.) 49(41), 7405–7421 (2010) 30. Allsopp, R.C., Harley, C.B.: Evidence for a critical telomere length in senescent human fibroblasts. Exp. Cell Res. 219(1), 130–136 (1995) 31. Harley, C.B.: Telomere loss: mitotic clock or genetic time bomb? Mutat. Res. 256(2–6), 271–282 (1991) 32. Butler, R.N., Sprott, R., Warner, H., Bland, J., Feuers, R., Forster, M., Fillit, H., Harman, S.M., Hewitt, M., Hyman, M., Johnson, K., Kligman, E., McClearn, G., Nelson, J., Richardson, A., Sonntag, W., Weindruch, R., Wolf, N.: Biomarkers of aging: from primitive organisms to humans. J. Gerontol. Ser. A Biol. Sci. Med. Sci. 59(6), B560–B567 (2004) 33. Olovnikov, A.M.: A theory of Marginotomy. the incomplete copying of template margin in enzymic synthesis of polynucleotide’s and biological significance of the phenomenon. J. Theor. Biol. 41(1), 181–190 (1973)
34. Farzaneh-Far, R., Cawthon, R.M., Na, B., Browner, W.S., Schiller, N.B., Whooley, M.A.: Prognostic value of leukocyte telomere length in patients with stable coronary artery disease: data from the heart and soul study. Arterioscleriosis Thromb. Vasc. Biol. 28(7), 1379–1384 (2008) 35. Yang, Z., Huang, X., Jiang, H., Zhang, Y., Liu, H., Qin, C., Eisner, G.M., Jose, P.A., Rudolph, L., Ju, Z.: Short telomeres and prognosis of hypertension in a Chinese population. Hypertension 53(4), 639–645 (2009) 36. Benetti, R., García-Cao, M., Blasco, M.A.: Telomere length regulates the epigenetic status of mammalian telomeres and subtelomeres. Nat. Genet. 39(2), 243–250 (2007) 37. Cherkas, L.F., Hunkin, J.L., Kato, B.S., Richards, J.B., Gardner, J.P., Surdulescu, G.L., Kimura, M., Lu, X., Spector, T.D., Aviv, A.: The association between physical activity in leisure time and leukocyte telomere length. Arch. Int. Med. 168(2), 154–158 (2008) 38. Nordfjäll, K., Eliasson, M., Stegmayr, B., Melander, O., Nilsson, P., Roos, G.: Telomere length is associated with obesity parameters but with a gender difference. Obesity (Silver Spring, Md) 16(12), 2682–2689 (2008) 39. Nawrot, T.S., Staessen, J.A., Gardner, J.P., Aviv, A.: Telomere length and possible link to X chromosome. Lancet 363(9408), 507–510 (2004) 40. Fitzpatrick, A.L., Kronmal, R.A., Gardner, J.P., Psaty, B.M., Jenny, N.S., Tracy, R.P., Walston, J., Kimura, M., Aviv, A.: Leukocyte telomere length and cardiovascular disease in the cardiovascular health study. Am. J. Epidemiol. 165(1), 14–21 (2007) 41. Svenson, U., Nordfjäll, K., Baird, D., Roger, L., Osterman, P., et al.: Blood cell telomere length is a dynamic feature. PLoS ONE 6(6), e21485 (2011) 42. Wu, X., Amos, C.I., Zhu, Y., Zhao, H., Grossman, B.H., Shay, J.W., Luo, S., Hong, W.K., Spitz, M.R.: Telomere dysfunction: a potential cancer predisposition factor. J. Natl. Cancer Inst. 95(16), 1211–1218 (2003) 43. McGrath, M., Wong, J.Y., Michaud, D., Hunter, D.J., De Vivo, I.: Telomere length, cigarette smoking, and bladder cancer risk in men and women. Cancer Epidemiol. Biomark. Prev. 16(4), 815–819 (2007) 44. Valdes, A.M., Richards, J.B., Gardner, J.P., Swaminathan, R., Kimura, M., Xiaobin, L., Aviv, A., Spector, T.D.: Telomere length in leukocytes correlates with bone mineral density and is shorter in women with osteoporosis. Osteoporos. Int. 18(9), 1203–1210 (2007) 45. Zhang, X., Lin, S., Funk, W.E., Hou, L.: Environmental and occupational exposure to chemicals and telomere length in human studies. Occup. Environ. Med. 70(10), 743–749 (2013) 46. Epel, E.S., Blackburn, E.H., Lin, J., Dhabhar, F.S., Adler, N.E., Morrow, J.D., Cawthon, R.M.: Accelerated telomere shortening in response to life stress. Proc. Natl. Acad. Sci. U.S.A. 101(49), 17312–17315 (2004) 47. Parks, C.G., McCanlies, E.C., Miller, D.B., Cawthon, R.M., DeRoo, L.A., Sandler, D.B.: Telomere length and work schedule characteristics in the NIEHS sister study. Occup. Environ. Med. 64(12), e21 (2007) 48. Hoxha, M., Dioni, L., Bonzini, M., Pesatori, A.C., Fustinoni, S., Cavallo, D., Carugno, M., Albetti, B., Marinelli, B., Schwartz, J., Bertazzi, P.A., Baccarelli, A.: Association between leukocyte telomere shortening and exposure to traffic pollution: a cross-sectional study on traffic officers and indoor office workers. Environ. Health 8(1), 41 (2009) 49. 
Pavanello, S., Pesatori, A.C., Dioni, L., Hoxha, M., Bollati, V., Siwinska, E., Mielzyńska, D., Bolognesi, C., Bertazzi, P.A., Baccarelli, A.: Shorter telomere length in peripheral blood lymphocytes of workers exposed to polycyclic aromatic hydrocarbons. Carcinogenesis 31(2), 216–221 (2010)
50. Dioni, L., Hoxha, M., Nordio, F., Bonzini, M., Tarantini, L., Albetti, B., Savarese, A., Schwartz, J., Bertazzi, P.A., Apostoli, P., Hou, L., Baccarelli, A.: Effects of short-term exposure to inhalable particulate matter on telomere length, telomerase expression, and telomerase methylation in steel workers. Environ. Health Perspect. 119(5), 622–627 (2011) 51. Li, H., Jönsson, B.A., Lindh, C.H., Albin, M., Broberg, K.: N-nitrosamines are associated with shorter telomere length. Scand. J. Work Environ. Health 37(4), 316–324 (2011) 52. Parks, C.G., DeRoo, L.A., Miller, D.B., McCanlies, E.C., Cawthon, R.M., Sandler, D.P.: Employment and work schedule are related to telomere length in women. Occup. Environ. Med. 68(8), 582–589 (2011) 53. Ahola, K., Sirén, I., Kivimäki, M., Ripatti, S., Aromaa, A., Lönnqvist, J., Hovatta, I.: Workrelated exhaustion and telomere length: a population-based study. PLoS ONE 7(7), e40186 (2012) 54. Wu, Y., Liu, Y., Ni, N., Bao, B., Zhang, C., Lu, L.: High lead exposure is associated with telomere length shortening in Chinese battery manufacturing plant workers. Occup. Environ. Med. 69(8), 557–563 (2012) 55. Hou, L., Wang, S., Dou, C., Zhang, X., Yu, Y., Zheng, Y., Avula, U., Hoxha, M., Díaz, A., McCracken, J., Barretta, F., Marinelli, B., Bertazzi, P.A., Schwartz, J., Baccarelli, A.A.: Air pollution exposure and telomere length in highly exposed subjects in Beijing, China: a repeated-measure study. Environ. Int. 48, 71–77 (2012) 56. Hou, L., Andreotti, G., Baccarelli, A.A., Savage, S., Hoppin, J.A., Sandler, D.P., Barker, J., Zhu, Z.Z., Hoxha, M., Dioni, L., Zhang, X., Koutros, S., Freeman, L.E., Alavanja, M.C.: Lifetime pesticide use and telomere shortening among male pesticide applicators in the Agricultural Health Study. Environ. Health Perspect. 121(8), 919–924 (2013) 57. Wong, J.Y., De Vivo, I., Lin, X., Fang, S.C., Christiani, D.C.: The relationship between inflammatory biomarkers and telomere length in an occupational prospective cohort study. PLoS ONE 9(1), e87348 (2014) 58. Reste, J., Zvigule, G., Zvagule, T., Kurjane, N., Eglite, M., Gabruseva, N., Berzina, D., Plonis, J., Miklasevics, E.: Telomere length in Chernobyl accident recovery workers in the late period after the disaster. J. Radiat. Res. 55(6), 1089–1100 (2014) 59. Li, H., Hedmer, M., Wojdacz, T., Hossain, M.B., Lindh, C.H., Tinnerberg, H., Albin, M., Broberg, K.: Oxidative stress, telomere shortening, and DNA methylation in relation to lowto-moderate occupational exposure to welding fumes. Environ. Mol. Mutagen. 56(8), 684–693 (2015) 60. Pawlas, N., Płachetka, A., Kozłowska, A., Mikołajczyk, A., Kasperczyk, A., Dobrakowski, M., Kasperczyk, S.: Telomere length, telomerase expression, and oxidative stress in lead smelters. Toxicol. Ind. Health 32(12), 1–10 (2015) 61. Garrett-Bakelman, F.E., Darshi, M., Green, S.J., Gur, R.C., Lin, L., Macias, B.R., Miles McKenna, M.J., Meydan, C., Mishra, T., Nasrini, J., Piening, B.D., et. al.: The NASA twins study: a multidimensional analysis of a year-long human spaceflight. Science 364(6436), eaau8650 (2019). https://doi.org/10.1126/science.aau8650
A Review on VLSI Implementation in Biomedical Application Nagavarapu Sowmya and Shasanka Sekhar Rout(B) GIET University, Gunupur, India [email protected]
Abstract. In the present era, health care is becoming expensive, with rising costs and an aging population, while on the other hand the government is unable to cover the expenses. Because of this problem, patients undergo trauma and find it difficult to avail themselves of medical services. In this context, wireless communication reduces cost and patient suffering. Such a wireless system can be implemented using very large scale integration (VLSI) concepts applicable to biomedical applications. Using VLSI methods in neurology helps to reduce circuit size and area and to improve speed. In this review, different approaches for VLSI implementation of neural networks are discussed, namely CMOS technology, the design of a medical implant communication system (MICS) receiver for biomedical applications, field programmable gate array (FPGA) implementation of neural networks, neuro-fuzzy systems, and neural network implementation in analog hardware and digital networks. The strengths and drawbacks of these systems and approaches are also included. Keywords: Biomedical · Neural networks · FPGA · MICS · VLSI
1 Introduction
At present, medical care is becoming costly with a growing population, and the public is unable to afford the services because of the financial expenditure involved. New technologies are implemented to ease this burden. Wireless technology in medical services helps to reduce the cost, patient trauma and difficulties faced by patients while visiting the doctor. To maintain low power consumption in the medical field, wireless technology has evolved with new aspects and has started using RF circuitry elements such as RF transceivers, IF mixers etc. to reduce patient discomfort while availing of medical services in day-to-day life. The medical implant communication system (MICS) was established to enable short-range medical communications between implanted medical devices and external equipment [1]. The MICS band is used as a short-range wireless link to diagnose and monitor biomedical signals between low-power implanted medical devices and external equipment. The frequency band from 402 MHz to 405 MHz has been designated for MICS applications by the Federal Communications Commission (FCC) [2, 3]. The MICS band receiver system demands a fully integrated, low-power, small-size and low-cost
RF front end over this short-range, megahertz-wide bandwidth [4]. These implanted medical devices must be magnetically coupled to external programmers, so patients remain in close contact with the external equipment. Neurology is the medical specialty concerned with the diagnosis and treatment of disorders and diseases of the nervous system and brain. Neurologists treat not only stroke patients but also patients with epilepsy, traumatic brain injury, movement disorders and other neurological issues. An artificial neural network (ANN) is defined as a parallel algorithm. Therefore, it is suitable for parallel VLSI implementations, where circuits or architectures are able to perform their operations in a parallel manner [5–7]. Features such as parallelism, simple operations and high computational load are the major motives making VLSI implementation of artificial neural networks more popular day by day. Analog implementations rely on the fact that ANN operations are simple: as the operations are simple, analog cells with few transistors are able to implement them efficiently [8]. The precision of ANN algorithms is limited but flexible with analog computation, and a high number of operations per second (HPS) is obtained through parallelism. Digital implementations allow less parallelism, as digital cells are limited in number; with respect to HPS, digital implementations are therefore less advantageous than analog ones. From the circuit point of view, analog implementations are characteristically limited in precision, whereas digital ones require a trade-off between precision and complexity. The future lies in application-dependent systems, where the application will be integrated on one or a few components for most applications related to sensors and signal processing. This is the reason why we assume that the VLSI implementation of neural networks will gradually become a trend in the present market [9]. The paper is organized as follows. The literature review regarding VLSI implementation in biomedical applications is summarized in Sect. 2. A comparison of previous works with the corresponding results is reported in Sect. 3. Finally, Sect. 4 concludes the paper.
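To make the parallelism argument above concrete, the sketch below (Python/NumPy, our own illustrative code rather than anything from the cited works) shows that a neural network layer is essentially a set of independent multiply-accumulate operations followed by an activation, which is why the computation maps naturally onto parallel analog or digital VLSI hardware.

```python
import numpy as np

def layer_forward(x, weights, bias):
    """One feed-forward layer: every output neuron is an independent
    multiply-accumulate (dot product) plus a nonlinearity, so all neurons
    can be evaluated in parallel by dedicated hardware units."""
    z = weights @ x + bias   # parallel multiply-accumulate operations
    return np.tanh(z)        # elementwise activation, also parallel

# Hypothetical sizes: 4 inputs feeding 3 neurons.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = rng.normal(size=(3, 4))
b = np.zeros(3)
print(layer_forward(x, w, b))
```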
2 Literature Review
Shinde, J. R. et al. [10] suggested an optimal multi-objective optimization process for the VLSI implementation of a feed-forward neural network, to ensure that it can be area-, power- and speed-efficient simultaneously. A step-by-step optimal multi-objective method is used so that area, power and speed improve simultaneously while high precision and dynamic range are maintained in the hardware design of the neural network. There are some design issues during implementation with respect to data representation, analog versus digital neural networks and the multiplier unit. A floating-point arithmetic scheme, a digital neural network and an array multiplier have been used to achieve the multi-objective optimization for the ANN, with some drawbacks. VLSI implementation of a digital FIR filter with simultaneous multi-objective optimization of area, power and speed has been achieved without disturbing the functionality of the circuits and the filter module. Designs are made using the Synopsys design compiler in 90 nm and 45 nm process technology [8, 11]. Omondi, A. R. et al. [12] focused on efficient execution of ANN computations on massively parallel computing systems implemented on FPGAs. The computer was built using a standard array of bit-serial processors and was implemented using FPGA
circuits. The scalability of the approach has been improved in terms of size and clock speed. Issues were faced with different VLSI generations, but the FPGA implementation of neural networks has been enhanced with current technology to meet current expectations. FPGAs give flexibility in programmable systems. In [5, 13], a hardware implementation of neural networks using FPGA chips is discussed, with low cost, lower error, optimal results and increased efficiency. Stochastic algorithms have shown the best results in terms of low-power, area-efficient hardware implementation of neural networks. A quasi-synchronous implementation has been used to examine the reduction in energy consumption and the improvement in performance [9]. Arbib, M. A. [14] proposed ideas on computation in the brain and how computation should be organized on future computer systems. The work mainly concentrates on artificial intelligence (AI), neuroscience, artificial neural networks and control theory. The main ideas behind this work are cooperative computation, schemas, coordinated control programs, action-oriented perception, different neural levels and perception-oriented action, where the schema is the central concept. It is relevant for its distribution of computation across various levels of description, its uniting of neural networks and other techniques, its emphasis on the relationship between action and perception, and its specific approach to natural language processing. It also gives an idea of artificial intelligence, focusing on current applications which use ANNs and AI. A fuzzy controller technique is implemented to improve data interpretation. Draghici, S. [15] described the analog hardware implementation of neural networks. Various classification criteria for general neural network implementations are discussed, and the nomenclature produced by these criteria is depicted. Analog neural networks have always been attractive to researchers and companies owing to their short delays. Analog chips executing neural networks have been described for different practical applications such as nuclear physics, intelligent control, tracking and target recognition. VLSI-friendly algorithms are used for design and coding purposes. Kakkar, V. [16] presented a comparison between analog and digital neural networks, showing which type of implementation suits which sort of application. The work is restricted to neural network operations for pattern recognition and emphasizes layered feed-forward neural networks. Digital neural networks are appropriate for classification problems. They are far better than the analog type on some points: robustness, drift, noise, logic description, ease of loading digital weights, no feedback required, and adaptability to new processes and architectures. It is then observed that, for low power consumption, an analog neural network is better for classification issues. Morgan, N. et al. [17] reported that parallelism is exploited as a property of ANN algorithms in signal processing and pattern recognition. A set of ANN-specific library cells and a computer-aided design (CAD) interface are used for digital ANN design. A new measure of silicon productivity for ANNs is defined, and existing and planned circuit implementations of common ANN algorithms are evaluated, showing the relative effectiveness of silicon usage for particular applications. The results imply that digital implementations are more flexible and effective than analog ones. Lotric, U. et al.
[18] studied the high degree of internal parallelism of neural networks for custom chip design. This work concentrates on the actual digital design of a hardware
neural network with the help of FPGA technology. It also aims to propose a resource-, speed- and power-efficient feed-forward neural network with on-chip learning capability. Neural network processing includes a great deal of multiplication, and it must be done in parallel to achieve a better design. Fixed-point arithmetic is used because of the limited resources on the chip. The hardware neural network is built with the support of an iterative logarithmic multiplier, which uses several levels of correction circuits to compute a product to the desired precision. Finally, the neural network becomes an adaptive and power-efficient model. Madhumitha, G. B. et al. [19] reported that the ANN is the best option for modelling the complex computation, adaptation and learning of biological systems. An ANN can be developed by means of analog and digital implementations. The idea of this paper is to develop an ANN in the analog domain with low power and low area, similar to the biological arrangement, and to overcome weight accuracy, device mismatch and precision problems. A back-propagation algorithm has been used for this neural network architecture. CMOS 180 nm technology is used to implement the circuits which perform the arithmetic operations and to implement the neural network. Signal compression is done with the help of neural networks. Here, both analog and digital signals are applicable to the VLSI implementation of neural networks. Suganya, A. [20] proposed a knowledge-based neural network model with a hyperbolic tangent function via the hashing trick. The VHDL programming language is used for coding, and the design is simulated in Xilinx 12.1. It reduces the number of multiplications, area, power, delay, cost and computation in the VLSI implementation of the ANN. Schuman, C. D. et al. [21] reported an overview of previous work in neuromorphic computing. A variety of neuron, synapse and network models has previously been used in neuromorphic and neural network hardware. It is unclear whether the wide diversity of models will be combined into one model in the future, as each single model has its own weaknesses and strengths. The neuromorphic computing landscape spans everything from feed-forward neural networks to biological neural networks. Neural networks as software models help to identify the miscalculations made by the human brain and can, for example, predict annual export air cargo demand with the help of learning algorithms such as back propagation and supervised learning, confirming that neural networks fit well in the real world [22]. Ranade, R. et al. [23] discussed the design of neurons, extended range loads and digital networks. With the help of the ANN, an energy function for the multiplication operation is designed. ANN-based digital functions for multipliers and adders are useful because of their parallel computation. In [24], the concept of the ANN is used to design a binary digital multiplier, and the advantages of ANNs, such as synchronous operation, parallelism and fast information processing, are exploited by the multiplier. Kung, S. Y. et al. [25] highlighted the importance of neural networks for multimedia functionalities, which include effective representations of audio information, classification and detection techniques, fusion of multimodal signals, and multimodal conversion and synchronization. Adaptive neural network technology, which gives a uniform solution to a wide spectrum of multimedia applications, is also discussed. It is clear that space limitation issues arise while working with these networks. Multimedia can expand
its position and power by taking care of the integration of content, incorporation with the human ear and integration with other media systems. Awodele, O. et al. [26] found that neural networks have evolved over the years and have made a tremendous contribution to the enhancement of various fields. Their purpose is to scrutinize neural networks and their developing applications in engineering, mostly in controls. The needs of neural networks, their training and the important algorithms used to design them are discussed. Issues such as scalability, testing, verification and the integration of neural networks into modern environments are of most concern at present. It is recommended that intelligent systems be tested and verified as is done for humans. Hayati, M. et al. [27] presented a simulation of free convection heat transfer from an isothermal horizontal elliptic tube based on an ANN, where the experimental work was time-consuming and costly. Different parameters have been used as inputs and output: tube axis ratio, wall spacing and Rayleigh number as inputs, and the average Nusselt number as output. Here, the multi-layer feed-forward network technique is used to exploit the potential of artificial intelligence concepts for the calculation of the free convection heat transfer coefficient. A neural network is built to estimate the average Nusselt number efficiently. Al-Allaf, O. N. A. [28] discusses face detection, one of the most relevant applications of biometrics, pattern recognition and image processing, using ANNs. Different structural designs, concepts, databases of test images and performance measures for face recognition have been used. In the future, face detection structures might be based on back-propagation neural networks with numerous hidden layers. Egwoh, A. Y. et al. [29] noted that old algorithmic methods are not appropriate for solving today's problems. Neuro-fuzzy systems are gaining popularity day by day and are able to solve the problems faced in many sectors. The work provides information regarding recent applications in agriculture and other sectors where the neuro-fuzzy approach can be used as per the requirements. Mathematical terms and functions are used in areas such as robotics, engineering and physics, and one of these is fuzzy logic. This technique is gaining influence in all areas, including medical science, engineering and even households. The authors give further insight into the fuzzy logic technique and its applications in today's world. The fuzzy logic method has three main stages: fuzzification, inference and defuzzification. Fuzzy logic has become a helping hand not only in mathematics but also in chemical science, robotics, engineering and medical science. Jothi, M. et al. [30] took as an example one of the serious healthcare issues in the biomedical sciences, diabetes. Among several complications, diabetes can lead to epilepsy, a brain disorder in which a group of brain nerve cells works abnormally. A consistent fuzzy model is used in diabetic epilepsy risk level categorization. In the two-input rule method, heterogeneous and homogeneous fuzzy systems have been inspected, and a single-input rule model (SRIM) fuzzy system is suggested. Both the single- and two-rule methods were tested separately for cerebral blood flow level through an FPGA. The testing and validation were completed beforehand in Matlab, the design was simulated using VHDL and finally synthesized on an FPGA. Quality and performance were evaluated to find the better fuzzy classifier.
Finally, the FPGA results are compared with the Matlab results, which indicates that the two are closely similar in average area and performance for VLSI systems.
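As an illustration of the three fuzzy-logic stages mentioned above (fuzzification, inference, defuzzification), the following sketch implements a toy single-input fuzzy classifier in Python; the membership functions, rule outputs and the "risk" interpretation are entirely hypothetical and are not taken from the fuzzy systems in the cited works.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(value):
    # 1. Fuzzification: map the crisp input (0-100) to membership degrees.
    mu = {
        "low": tri(value, 0, 25, 50),
        "medium": tri(value, 25, 50, 75),
        "high": tri(value, 50, 75, 100),
    }
    # 2. Inference: one rule per fuzzy set, each proposing a crisp risk score.
    rule_out = {"low": 10.0, "medium": 50.0, "high": 90.0}
    # 3. Defuzzification: weighted average (centroid-style) of the rule outputs.
    num = sum(mu[k] * rule_out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(fuzzy_risk(60.0))  # prints 66.0: mostly "medium", partly "high" risk
```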
3 Results and Comparison
Here, different previous works related to neural networks are discussed and compared. The different parameters related to the corresponding works are also verified against the various technologies used in their implementation. The issues and outputs of the individual works are also noted. Table 1 shows the comparison of various VLSI-implementation-based neural network systems. Finally, a complete neural network with low cost, low power and low area using VLSI and wireless technology is targeted as future work.
Table 1. Comparison of various VLSI implementation based neural network systems

| Refs. | Parameters | Implementation type | Issues | Output | Tools/Technology used |
|-------|------------|---------------------|--------|--------|-----------------------|
| [10] | Area, power, speed | Multi-objective method | Data representation, analog vs digital and multiplier unit | Low area, low power and low speed multi-objective optimization ANN | MATLAB, VHDL |
| [12] | Clock speed, time | FPGA | Different generations of VLSI | Increase in scalability | VLSI |
| [14] | Brain calculations, action and perception | Neural schemas, ANN computations | Pattern recognition, different neural levels | Different neural schemas, action and perception w.r.t. natural language processing | Natural language processing |
| [15] | Analog hardware, neural networks | Analog type | Classification problems | Neural network implemented successfully | VLSI |
| [16] | Weight accuracy, drift, noise | Analog vs digital | Classification problems | NA | NA |
| [17] | Silicon productivity | Digital networks | Problems faced in analog type of implementation | Robust and flexible neural networks | Digital VLSI, CAD, ANN specific libraries |
| [18] | Speed, power, chip design | Hardware neural network, fixed point arithmetic | Multiplication circuit taking lot of resources and time | Adaptive and power efficient neural network | FPGA |
| [19] | Area, power, speed, signal compression | Analog type | Classification problems | Effective neural network, also applicable for digital operations | Analog VLSI |
| [20] | Area, power, delay, cost | Hyperbolic tangent function, hashing trick | NA | Weighted neuron net | VHDL, Xilinx 12.1 |
| [21] | Neurons, synapses, network models | Hardware implementation | Study of neuromorphic and neural network hardware | NA | NA |
| [23] | Energy function | ANN parallelism computations | NA | Design of energy function for multiplication | VLSI |
| [25] | Neural networks, multimedia applications | Statistical and parameter estimation techniques | Space limitation issues | Human communication with machines, audio/visual, pattern recognition | Intelligent multimedia processing technology |
| [26] | Neural networks, controls | NA | Scalability, testing and integration issues | NA | VLSI |
| [27] | Convection heat transfer coefficient | Multi-layer feedback network, back propagation | Time consuming and expensive | Average Nusselt number with low error | Horizontal elliptic tube method |
| [28] | Face detection | ANN | Lack of determination of face recognition | Partial face detection system | NA |
| [29] | Fuzzy logic | Neuro-fuzzy approach | NA | NA | Computer and information technology |
| [30] | Fuzzy classifier, CBF, EEG | FPGA, fuzzy method and SRIM fuzzy system | Classification problems | VLSI fuzzy classifier | VHDL, MATLAB |
4 Conclusion
A complete neural network implemented in VLSI with low cost, low power and minimized area is very essential in biomedical systems. A MICS receiver with good RF circuitry is an important building block for monitoring, diagnostic and control purposes in biomedical applications. Although analog implementations have some advantages over digital implementations, digital implementations are flexible, reduce the size of circuits and increase the operating speed of the circuit. The literature review and comparison table will help researchers carry out further research in this domain.
References 1. Iniewski, K.: VLSI Circuits for Biomedical Applications. Artech House, Norwood (2008) 2. Hsu, C.M., Lee, C.M., Yo, T.C., et al.: The low power MICS band biotelemetry architecture and its LNA design for implantable applications. In: Proceedings IEEE Asian Solid-State Circuits Conference, Hangzhou, China, pp. 435–438 (2006) 3. Yuce, M.R., Ng, S.W.P., Myo, N.L., et al.: A MICS band wireless body sensor network. In: Proceedings IEEE Wireless Communications and Networking Conference, Kowloon, China, pp. 2475–2480 (2007) 4. Chang, C.H., Gong, C.S.A., Liou, J.C., et al.: A 260-µW down-conversion demodulator for MICS-band receiver. J. Circuits Syst. Comput. 26(2), 1750027:1–1750027:10 (2017). https:// doi.org/10.1142/s021812661750027x 5. Zaghar, D.R.: Reduction of the error in the hardware neural network. Al-Khwarizmi Eng. J. 3(2), 1–7 (2007)
6. Kumar, K., Thakur, G.S.M.: Advanced applications of neural networks and artificial intelligence. Int. J. Inf. Technol. Comput. Sci. 6, 57–68 (2012). https://doi.org/10.5815/ijitcs.2012. 06.08 7. Chasta, N., Chouhan, S., Kumar, Y.: Analog VLSI implementation of neural network architecture for signal processing. Int. J. VLSI Des. Commun. Syst. (VLSICS) 3(2), 243–259 (2012). https://doi.org/10.5121/vlsic.2012.3220 8. Shinde, J.R., Salankar, S.: VLSI implementation of neural network. Curr. Trends Technol. Sci. 4(3), 515–524 (2015) 9. Ardakani, A., Primeau, L., Onizawa, F.N., Hanyu, T., Gross, W.J.: VLSI implementation of deep neural networks using integral stochastic computing. In: Proceedings of 9th International Symposium Turbo Codes Iterative Information Processing (ISTC), pp. 216–220 (2016) 10. Shinde, J.R., Salankar, S.: Multi-objective optimization for VLSI circuits. In: IEEE International Conference on Computational Intelligence & Communication Networks, Kolkata, India (2014) 11. Shinde, J.R., Salankar, S.: Optimal multi-objective approach for VLSI implementation of digital FIR filters. Int. J. Eng. Res. Technol. (IJERT) 3(2), 2470–2474 (2014) 12. Omondi, A.R., Rajapakse, J.C.: FPGA implementation of neural networks, pp. 3–6. Springer (2006) 13. Sahin, S., Becerikli, Y., Yazici, S.: Neural networks implementation in hardware using FPGAs. LNCS, vol. 4234, p. 1105. Springer, Heidelberg (2006) 14. Arbib, M.A.: The Metaphorical Brain: An Introduction to Cybernetics as Artificial Intelligence and Brain Theory. Wiley, New York (1972) 15. Draghici, S.: Neural networks in analog hardware-design and implementation issues. Int. J. Neural Syst. 10(1), 19–42 (2000) 16. Kakkar, V.: Comparative study on analog and digital neural networks. Int. J. Comput. Sci. Netw. Secur. 9(7), 14–21 (2009) 17. Morgan, N., Asanovic, K., Kingsbury, B., Wawrzynek, J.: Developments in digital VLSI design for artificial neural networks. Technical report TR-90-065 18. Lotric, U., Bulic, P.: Applicability of approximate multipliers in hardware neural networks. Neurocomputing 96, 57–75 (2012). https://doi.org/10.1016/j.neucom.2011.09.039 19. Madhumitha, G.B., Devadiga, V.: Analog VLSI implementation of artificial neural network. Int. J. Innov. Res. Comput. Commun. Eng. 3(5), 72–80 (2015) 20. Suganya, A., Sakubar, S.J.: An priority based weighted neuron net VLSI implementation. In: International Conference on Advanced Communication, Control & Computing, Ramanathapuram, India, pp. 285–289 (2016) 21. Schuman, C.D., Potok, T.E., Patton, R.M., Birdwell, J.D., et al.: A survey of neuromorphic computing and neural networks in hardware. Neural Evol. Comput. 1–88 (2017) 22. Sivanandam, S.N., Sumathi, S., Deepa, S.N.: Introduction to Neural Networks Using Matlab 6.0, pp. 1–26. Tata McGraw-Hill, New Delhi (2006) 23. Ranade, R., Bhandari, S., Chandorkar, A.N.: VLSI implementation of artificial neural network based digital multiplier and adder. In: Proceedings of the IEEE International Conference on VLSI Design, Bangalore, India, pp. 318–319 (1996) 24. Biederman, D.C., Ososanya, E.T.: Design of a neural network-based digital multiplier. In: Proceedings of the Twenty-Ninth South-Eastern Symposium on System Theory, Cookeville, TN, pp. 320–326 (1997) 25. Kung, S.Y., Hwang, J.N.: Neural networks for intelligent multimedia processing. Proc. IEEE 86(6), 1244–1272 (1998) 26. Awodele, O., Jegede, O.: Neural networks and its application in engineering. In: Proceedings of Informing Science & IT Education Conference, pp. 83–95 (2009). 
https://doi.org/10.28945/ 3317
138
N. Sowmya and S. S. Rout
27. Hayati, M., et al.: Application of artificial neural networks for prediction of natural convection heat transfer from a confined horizontal elliptic tube. World Acad. Sci. Eng. Technol. 28, 269–274 (2007) 28. Al-Allaf, O.N.A.: Review of face detection systems based artificial neural networks algorithms. Int. J. Multimed. Appl. 6(1), 1–16 (2014) 29. Egwoh, A.Y., Onibere, E.M., Odion, P.O.: Application of neuro-fuzzy system: a literature review. Int. J. Comput. Sci. Netw. Secur. 18(12), 1–6 (2018) 30. Jothi, M., Balamurugan, N.B., Harikumar, R.: Design and implementation of VLSI fuzzy classifier for biomedical application. Int. J. Innov. Res. Sci. Eng. Technol. 3(3), 2641–2648 (2014)
Comparative Analysis of a Dispersion Compensating Fiber Optic Link Using FBG Based on Different Grating Length and Extinction Ratio for Long Haul Communication Padmini Mishra1(B) , Shasanka Sekhar Rout1 , G. Palai2 , and L. Spandana1 1 GIET University, Gunupur, India
[email protected] 2 GITA, Bhubaneswar, India
Abstract. In the current scenario, optical communication is widely used because of its merits. The main advantages of optical fiber include flexibility, transparency, reliability, cost effectiveness and security. Along with these merits, it also has some demerits, such as dispersion, which may affect system performance. This research analyzes the effects of dispersion and how to overcome them. In this paper, the use of fiber Bragg grating (FBG) in an optical transmission link is discussed. The main advantage of FBG is that it is cost effective and easy to use. To obtain better performance, the transmission system is simulated and its output is analyzed based on parameters such as grating length, MZ extinction ratio, type of pulse generator, erbium doped fiber amplifier (EDFA) length, PRBS bit rate and length of the optical fiber. The variation of system performance is observed by varying the parameters of the system components, and these configurations are compared with the help of eye diagrams and BER values. Keywords: FBG · EDFA · Q-factor · MZ extinction ratio · BER
1 Introduction
The use of optical communication systems dates back to about 1790. During the 1920s, Clarence W. Hansell and John Logie Baird proposed transmitting optical signals through transparent hollow pipes to carry data (images) for television systems. Van Heel (1954) developed a fiber with a transparent cladding on a bare fiber, which reduces the interference between fibers and also reduces outside distortion. In 1960, fibers with glass as cladding were invented, which showed an attenuation of about 1 dB/m. In 1964, Charles K. Kao invented an optical fiber made of pure glass which reduced the signal loss to a great extent. Optical communication uses light as the medium to carry information. A fiber optic communication system basically has a transmitting end, a channel and a receiving end.
The transmitter sends the message signal through the channel to the receiver. In a fiber optic communication system, the optical signal is generated at a transmitter, relayed through a fiber to the receiver end with as little distortion as possible, and the received optical signal is converted back into an electrical signal [1]. The use of the erbium doped fiber amplifier (EDFA) in optical communication systems has improved performance by compensating the signal loss in the fiber optic system. Dispersion is one of the major performance-limiting factors in optical communication; it hampers the performance of optical fiber communication. Dispersion is proportional to the length of the fiber, so when a pulse travels through an optical fiber it becomes broadened due to dispersion. When digital data is transmitted through a fiber optic channel, the signal strength decreases and the pulse duration increases after covering some distance. At a particular point, adjacent pulses overlap, and this effect is known as inter symbol interference (ISI). In fiber optics, ISI manifests as signal dispersion, as shown in Fig. 1.
Fig. 1. Figure showing the effects of ISI on the signal
A fiber Bragg grating consists of a set of parallel plate-like reflectors, separated by a particular period, which reflect light when the Bragg condition for constructive interference, 2d sin θ = mλ, is satisfied [2, 3]. It acts as an in-fiber filter that is used to minimize dispersion. It is cheap and easy to use. The spectral response of a fiber Bragg grating is based on a periodic change of the refractive index in the core of the optical fiber over a length which can vary from 0.3 to 50 mm [4]. The most important property of the fiber Bragg grating used here is that it reflects a particular wavelength of light and transmits all others, as shown in Fig. 2.
Fig. 2. The effects on the signal after passing through the FBG [2]
In this paper, the simulation of the optical transmission system is discussed by comparing parameters such as grating length, MZ extinction ratio, type of pulse generator, EDFA length, bit rate and length of the optical fiber.
2 System Description and Simulation Design
In the designed transmission system, the data is generated by a pseudo-random bit generator and fed to the pulse generator. The produced signal is then optically modulated using the MZ modulator driven by a continuous wave (CW) laser. The optical signal from the optical modulator is transmitted over the optical fiber. At the receiver end, an optical amplifier (EDFA) is used to amplify the optical signal without converting it into the electrical domain. The output of the EDFA is connected to the FBG, which acts as a filter to reduce dispersion [5, 6]. Further, the output of the FBG is connected to a low pass filter to recover the low frequency signal, and the final output signal is visualised using the BER analyser. Finally, the Q-factor and BER are analysed. The simulated optical system is shown in Fig. 3.
Fig. 3. Block diagram of the simulated optical system
Here the simulation is done using OptiSystem software. It gives the ability to change various parameters of the components used in the optical communication system and helps in finding better results, as shown in Fig. 4.
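The figures of merit reported by the BER analyser are the eye-diagram Q-factor and the corresponding bit error rate. As a hedged illustration of the standard Gaussian approximation BER ≈ 0.5·erfc(Q/√2) used in such analysers (not the OptiSystem implementation itself), a minimal sketch follows; the eye statistics are assumed sample values only.

```python
import numpy as np
from scipy.special import erfc

def q_factor(mu1, mu0, sigma1, sigma0):
    """Eye-diagram Q-factor from the means and standard deviations of the 'one' and 'zero' levels."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-noise approximation of the bit error rate."""
    return 0.5 * erfc(q / np.sqrt(2.0))

# Assumed eye statistics, for illustration only
q = q_factor(mu1=1.0, mu0=0.05, sigma1=0.07, sigma0=0.058)
print(f"Q-factor = {q:.3f}, BER = {ber_from_q(q):.3e}")
```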
3 Result and Analysis
The parameters considered and optimised for better system performance are fiber length, EDFA length, grating length and MZ extinction ratio. The types of pulse generator and the recorded readings for fiber length, MZ extinction ratio, grating length and EDFA length are shown in Tables 1, 2, 3 and 4 respectively. The performance of the system is characterised by the Q-factor. Figure 5 shows the variation in Q-factor with increasing distance.
Fig. 4. Figure represents the simulated optical communication system
Table 1. The Q-factor is analyzed by varying the distance of the optical fiber

Distance (km)   Q-factor
50              12.6751
55              12.555
60              11.567
65              10.235
70              9.156
75              8.657
80              7.396
85              6.854

Table 2. The Q-factor is analyzed by varying the MZ extinction ratio

MZ extinction ratio (dB)   Q-factor
5                          6.235
10                         6.538
15                         6.874
20                         6.953
25                         7.143
30                         7.334
35                         7.491
Here, MZ extinction ratio = 30 dB, grating length = 8 mm, EDFA length = 12 m and bit rate = 10 Gbps. From Table 1 it can be seen that with the increase in distance, the Q-factor decreases due to dispersion. From this analysis, the fiber length
Table 3. The Q-factor is analyzed by varying the grating length

Grating length (mm)   Q-factor
2                     6.775
4                     6.807
6                     6.870
8                     6.933
10                    7.041
12                    7.199
14                    7.214
16                    7.289

Table 4. The Q-factor is analyzed by varying the EDFA length

EDFA length (m)   Q-factor
5                 6.534
10                6.912
15                7.073
20                7.134
25                7.175
30                7.261
35                7.369
Fig. 5. Quality factor variations with increasing distance
is taken as 80 km for the remaining analysis. The corresponding eye diagram is shown in Fig. 6, with a distance of 80 km and a Q-factor of 7.396.
Fig. 6. Eye diagram with distance of 80 km and Q-factor of 7.396
Here, distance = 80 km, grating length = 2 mm, EDFA length = 12 m and bit rate = 10 Gbps. From Table 2, it is observed that with the increase in MZ extinction ratio the Q-factor increases. The MZ extinction ratio is the ratio of the maximum intensity to the minimum intensity at the same port [7]. With the increase in the MZ extinction ratio, the Q-factor increases and the BER value improves. From the table it is clear that when the MZ extinction ratio is 30 dB, better system performance can be obtained. Figure 7 shows the variation in Q-factor with the increase in MZ extinction ratio.
Fig. 7. Quality factor variations with the incremental MZ extinction ratio value
The corresponding eye diagram is shown in Fig. 8, with an MZ extinction ratio of 30 dB and a Q-factor of 7.334. Here, distance = 80 km, EDFA length = 15 m, bit rate = 10 Gbps and MZ extinction ratio = 30 dB. From Table 3, it is observed that with the increase in grating length, the Q-factor increases. From the Bragg wavelength formula, λ = 2nl, where λ is the reflected wavelength, n is the refractive index, and l is the grating length [8, 9], the
Fig. 8. Eye diagram with MZ extinction ratio of 30 dB and Q-factor of 7.334
increase in the grating length increases the reflected wavelength. Figure 9 shows the variation in Q-factor with increase in grating length.
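For reference, the Bragg condition is usually written in terms of the grating period Λ and the effective refractive index, λ_B = 2·n_eff·Λ. A minimal numeric check is sketched below; the index and period are assumed illustrative values, not values taken from the simulated link.

```python
def bragg_wavelength(n_eff, period_nm):
    """Standard Bragg condition: reflected wavelength = 2 * effective index * grating period."""
    return 2.0 * n_eff * period_nm

# Assumed values: typical silica-core effective index and a period placing the reflection near 1550 nm
print(bragg_wavelength(n_eff=1.447, period_nm=535.6))  # ~1550 nm
```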
Fig. 9. Quality factor variations with the incremental value of grating length
The corresponding eye diagram is shown in Fig. 10 with a grating length of 12 mm and a Q-factor of 7.199. Here, distance = 80 km, grating length = 10 mm, bit rate = 10 Gbps and MZ extinction ratio = 30 dB. From Table 4, it is observed that with the increase in EDFA length, the Q-factor increases. The active medium in the erbium doped fiber amplifier consists of a 10 m to 30 m length of optical fiber which is lightly doped with erbium (a rare earth metal) [10]. The output signal of the EDFA is directly proportional to the length of this optical fiber in the EDFA. Figure 11 shows the variation in the Q-factor with increase in EDFA length. The corresponding eye diagram is shown in Fig. 12 with an EDFA length of 15 m and a Q-factor of 7.073.
Fig. 10. Eye diagram with grating length of 12 mm and Q-factor of 7.199
Fig. 11. Quality factor variations with the incremental value of EDFA length
Fig. 12. Eye diagram with EDFA length of 15 m and Q-factor of 7.073
4 Conclusion
In this paper, an effective optical transmission system is designed and simulated to measure the system performance. The various parameters, such as grating length, MZ extinction ratio, type of pulse generator, EDFA length, PRBS bit rate and length of the optical fiber, are analysed in terms of Q-factor and bit error rate and compared with each other. This work can be further extended and analysed in terms of laser input power or by considering a hybrid dispersion compensation technique.
References 1. Panda, T.K., Mishra, P., Patra, K.C., Barapanda, N.K.: Investigation and performance analysis of WDM system implementing FBG at different grating length and data rate for long haul optical communication. In: IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI), Chennai (2017). https://doi.org/10.1109/icpcsi.2017. 8392343 2. Dar, A.B., Jha, R.K.: Chromatic dispersion compensation techniques and characterization of fiber Bragg grating for dispersion compensation. Opt. Quant. Electron. 49(3), 1–35 (2017). https://doi.org/10.1007/s11082-017-0944-4 3. Panda, T.K., Mishra, R.K., Shikarwar, S., Ray, P.: Performance analysis and comparison of dispersion compensation using FBG and DCF in DWDM system. Int. J. Res. Appl. Sci. Eng. Technol. 5(12), 1133–1139 (2017) 4. Giles, C.R.: Lightwave applications of fiber Bragg gratings. J. Lightwave Technol. 15(8), 1391–1404 (1997). https://doi.org/10.1109/50.618357 5. Panda, T.K., Sahu, A.N., Sinha, A.: Performance analysis of 50 km long fiber optic link using fiber Bragg grating for dispersion compensation. Int. Res. J. Eng. Technol. 3(3), 95–98 (2016) 6. Dwnie, J.D., Ruffin, A.B., Hurley, J.: Ultra low loss optical fiber enabling purely passive 10 Gbits PON system with 100 km length. Opt. Express 17, 2392–2399 (2009) 7. Panda, T.K., Mishra, R.K., Parakram, K., Sinha, A.: Performance comparison of dispersion compensation in a pre, post and symmetrical arrangement using DCF for long haul optical communication. Int. J. Eng. Technol. 3(7), 14–20 (2016) 8. Chen, G.F.R., Wang, T., Donnelly, C.A., Tan, D.T.H.: Second and third order dispersion generation using nonlinearly chirped silicon waveguide gratings. Opt. Express 21(24), 29223– 29230 (2013). https://doi.org/10.1364/OE.21.029223 9. Erdogan, T.: Fiber grating spectra. J. Lightwave Technol. 15(8), 1277–1294 (1997). https:// doi.org/10.1109/50.618322 10. Diasty, F.E., Heaney, A., Erdogan, T.: Analysis of fiber Bragg gratings by a side-diffraction interference technique. Appl. Opt. 40(6), 890–896 (2001). https://doi.org/10.1364/AO.40. 000890
Unsupervised Learning Method for Mineral Identification from Hyperspectral Data P. Prabhavathy1(B) , B. K. Tripathy1 , and M. Venkatesan2 1 School of Information Technology and Engineering,
Vellore Institute of Technology, Vellore, India {pprabhavthy,bktripathy}@vit.ac.in 2 Department of Computer Science Engineering, National Institute of Technology, Karnataka, Mangalore, India [email protected]
Abstract. Hyperspectral imagery is one of the research areas in the field of remote sensing. Hyperspectral sensors record the reflectance (also called the spectral signature) of an object, material or region across the electromagnetic spectrum. Mineral identification is an important application of hyperspectral remote sensing data. The EO-1 Hyperion dataset used here is unlabeled. Various types of clustering algorithms have been proposed to identify minerals. In this work, principal component analysis is used to reduce the dimensionality of the data by reducing the number of bands. Hard-clustering and soft-clustering algorithms are applied to the given data to classify the minerals into classes. K-means is a hard type of clustering which produces only non-overlapping clusters, whereas PFCM is a soft type of clustering which allows a data point to belong to more than one cluster. Further, the results are compared with a cluster validity index, the Davies Bouldin Index (DBI). Both clustering algorithms are evaluated on the original HSI image and on the reduced bands. Results show that PFCM performs better than K-means for both types of images. Keywords: Hyperspectral imagery · Clustering · K-Means · PCA · Fuzzy c-means · Possibilistic FCM · Davies Bouldin Index
1 Introduction
Over the past three decades, there has been a remarkable increase in remote sensing applications in various fields, e.g. geography, medicine, mineralogy, land cover classification, and in different climatic regions. Hyperspectral data combined with a geographical information system (GIS) provides exceptional spectral and spatial information, which is used in various studies related to the earth's environment. Hyperspectral sensors such as AVIRIS, HYDICE, HyMap and HYPERION are used to collect images of the earth's surface in the form of narrow, continuous and discrete spectral bands. These spectral bands form a complete, continuous spectral pattern of each pixel. Geology is one of the most active research areas of remote sensing applications, which includes identification and mapping, target information detection, resource prediction
and evaluation, and so on. Hyperspectral image classification can be of three types: (i) supervised, (ii) semi-supervised, and (iii) unsupervised classification, based on the availability of ground truth data. In semi-supervised classification [1], some labeled data is used to classify the unlabeled data. In unsupervised classification [2], an image is divided into a number of groups with similar pixel characteristics and then classified into classes, without any knowledge of ground truth data. Learning spectral and spatial features from unlabeled data is a topic of high interest. With supervised classification [3] on standard HSI data sets, more than 95% accuracy has been achieved, whereas the accuracy of unsupervised classification algorithms is still far behind supervised techniques.
A. Spectral Image
A spectral image is defined as an image-format representation of the measured reflectance of a phenomenon, object, region, etc., that is scattered from the earth's surface. Spectral images are three-dimensional matrices that are the product of stacking several two-dimensional (x-axis and y-axis) images together. Each of these images stores the reflectance data of the land-form at a particular wavelength in the visible and infrared regions. These wavelengths cover a contiguous range (400–2500 nm) with a step of a few nanometers. The resulting third dimension represents the spectral data as discrete spectral bands. In order to find minerals using unsupervised classification, we use the clustering concept, which groups similar spectral features into one type of mineral. The main idea behind unsupervised learning is to extract useful information from unlabeled data. Based on how clusters are formed, clustering is divided into two types, i.e. soft clustering and hard clustering. Hard clustering produces clusters in which each data point belongs to only one cluster. However, in soft clustering each data point may belong to more than one cluster, often with a degree of membership associated with each cluster. This paper is organized as follows: Sect. 2 contains previous related work, Sect. 3 provides a brief view of the HSI model and study area, Sect. 4 presents the K-means and fuzzy c-means algorithms, Sect. 5 explains possibilistic fuzzy c-means, Sect. 6 presents the results and discussion, and the last section concludes the paper.
2 Related Work
Mineral mapping identifies different types of minerals and maps them according to their physical and chemical characteristics. It is one of the important applications of high resolution remote sensing hyperspectral data. According to [4, 5], deep learning research has been extensively pushed by Internet companies, such as Google, Baidu, Microsoft, and Facebook, for several image analysis tasks, including image indexing, vehicle detection, segmentation, and object detection. Remote-sensing applications are also being addressed with deep learning (see Table 1 for the acronyms used in this paper). Much research has been presented analyzing mineral spectra in the VNIR or SWIR bands, which gives promising models for mineral identification with acceptable accuracy.
Table 1. List of acronyms

Acronym   Full form
HSI       Hyperspectral Image
EO-1      Earth Observation-1 (satellite)
FLAASH    Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes
PCA       Principal Component Analysis
FCM       Fuzzy C-means
PFCM      Possibilistic Fuzzy C-means
BSQ       Band Sequential
BIL       Band Interleaved by Line
BIP       Band Interleaved by Pixel
SWIR      Shortwave Infrared
VNIR      Visible Near Infrared
In reference to [6, 7], clay minerals are identified by collecting soil samples and using the spectral angle mapper; the abundance of clay minerals is assessed in the range of 0.4 to 2.5 µm. In particular, the 2.0 to 2.5 µm spectral range covers spectral features of hydroxyl-bearing minerals, sulfates, and carbonates common to many geologic units and hydrothermal alteration. Absorption feature parameters like depth, width, area etc. were derived from the spectral profiles of the soil samples. Different types of hydrothermal alteration minerals are mapped in [8] from a gold prospecting perspective. Classification of hyperspectral images is based on both spectral and spatial features. In reference to [9], the proposed work is based on low spatial resolution, because spatial features can change within meters, which affects different aspects of the images such as imperfect imaging and atmospheric scattering while detecting reflectance. Other factors which degrade image quality are sensor noise and secondary illumination effects; improving the spatial resolution also helps to remove these effects in hyperspectral imagery. In [10–12], minerals are identified using a combination of K-means and linear spectral unmixing (LSU), which combines the LSU result with the cluster analysis result. Classes do not always have a crisp boundary or a homogeneous area; because of that, Hongyuan Huo et al. [13] presented land cover classification using type-II fuzzy c-means clustering. They proposed an algorithm that considers spectral uncertainty, called type-II fuzzy c-means. The improved fuzzy c-means gives better results than traditional fuzzy c-means clustering.
3 HSI Model
Hyperspectral remote sensing data are represented as a 3D data cube with spatial and spectral information in the X-Y and Z planes respectively. These data have more than 200 contiguous wavelength bands with bandwidths of about 5–10 nm. Images have been
collected from the EO-1 Hyperion satellite and are used to identify the minerals available in the Nilgiris district of Tamil Nadu. This hyperspectral image has 242 narrow continuous spectral bands covering 0.4 to 2.5 µm at a 10 nm interval with 30 m spatial resolution. Hyperspectral imaging has four components [14]: imaging scene, sensor, image processing and atmospheric correction. The two components imaging scene and sensor are already covered by the existing data available at [15], and the remaining two are explained briefly in the sections below. Image pre-processing is required to provide the necessary information, and atmospheric correction is used to remove unwanted absorption and non-absorption features from the data.
A. Study Area
Figure 1 shows the geological location of the study area, which covers part of the Nilgiri Hills. The hills have an abundance of minerals, of which some common minerals available are [16] quartz, china clay, magnetite, iron ore and hematite.
Fig. 1. Geological location of study area
B. Data Pre-processing
Pre-processing is the process in which all noise and negative impacts on the data are removed, or their effects are minimized. In hyperspectral data, pre-processing removes bad bands, noisy bands and zero bands rather than individual pixels. The bands which do not
have any pixel information in the hyperspectral data are called zero bands. Using ENVI software (used for visualizing geological data), we can identify that certain sets of bands are zero or bad bands. Vertical stripes may occur in regions where the brightness of a pixel varies relative to nearby pixels, making the image unclear and negatively affecting further processing. Using the local destriping algorithm in Eq. (1), this type of stripe can be removed to some extent:

x(i, j, k) = (1/2n) Σ_{j=1}^{n} [x(i−1, j, k) + x(i+1, j, k)]    (1)
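A minimal sketch of the kind of local destriping implied by Eq. (1) is given below: each flagged stripe line is replaced by the average of its neighbouring lines in every band. The array shape and the list of stripe rows are assumptions used only for illustration.

```python
import numpy as np

def destripe_local_average(cube, stripe_rows):
    """Replace each flagged stripe row by the mean of its adjacent rows.

    cube: ndarray of shape (rows, cols, bands)
    stripe_rows: indices of rows affected by vertical striping
    """
    out = cube.astype(float).copy()
    for i in stripe_rows:
        if 0 < i < cube.shape[0] - 1:
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

# Assumed toy cube and stripe locations, for illustration only
cube = np.random.rand(100, 80, 154)
cleaned = destripe_local_average(cube, stripe_rows=[10, 47])
```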
In the EO-1 Hyperion data, all zero bands and bad bands are listed below in Table 2.

Table 2. List of zero bands

S. No.   Zero band                       Reason
1.       1–7                             Zero bands
2.       58–78                           Overlap region
3.       120–132, 165–182, 218–224       Water vapour absorption
4.       184–186, 225–242                Bad bands
C. Atmospheric Correction
Particles and gases in the atmosphere scatter downward and upward radiation, and this scattered radiation is also collected by the sensors, which affects the reflected energy stored in the form of a spectrum. Atmospheric correction is compulsory to remove all these unwanted effects. FLAASH stands for Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes and is able to process wavelengths in the VNIR and SWIR regions up to 3 µm. FLAASH is also able to remove the adjacency effect and cirrus and opaque cloud effects. In [17], a comparison shows that FLAASH gives better performance compared to other algorithms. The FLAASH algorithm is based on Eq. (2):

L = A·ρ / (1 − ρe·S) + B·ρe / (1 − ρe·S) + La    (2)

where ρ is the pixel surface reflectance, ρe is an average surface reflectance for the pixel and a surrounding region, S is the spherical albedo of the atmosphere, La is the radiance backscattered by the atmosphere, and A and B are coefficients that depend on atmospheric and geometric conditions but not on the surface.
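As a hedged illustration of how Eq. (2) is used, the radiance model can be algebraically inverted for the pixel reflectance once the atmospheric quantities are known; the coefficient values below are placeholders, not FLAASH outputs.

```python
def pixel_reflectance(L, La, A, B, S, rho_e):
    """Invert the radiance model of Eq. (2) for the pixel surface reflectance rho."""
    return ((L - La) * (1.0 - rho_e * S) - B * rho_e) / A

# Placeholder atmospheric coefficients, for illustration only
print(pixel_reflectance(L=80.0, La=10.0, A=300.0, B=60.0, S=0.1, rho_e=0.2))
```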
4 Classification Approach
This section focuses on data reduction, which is important in high dimensional spaces, then discusses the K-means and fuzzy c-means clustering algorithms, and shows how possibilistic fuzzy c-means clustering improves on traditional clustering.
A. Dimensionality Reduction
PCA (Principal Component Analysis) is a dimensionality reduction or band selection technique. In hyperspectral data, feature extraction is done by band selection. PCA is based on the eigenvalue decomposition of the covariance matrix. Let us consider a hyperspectral image of size M × N × B. The pixel vector is formed by stacking all bands for a particular pixel: Xi = [x1, x2, x3, ..., xB] (Fig. 2),
Fig. 2. Pixel Vector
where B is the number of bands and M and N are the numbers of rows and columns respectively, i = 1, 2, 3, ..., M1 and M1 = M × N. The mean is calculated by Eq. (3):

m = (1/M1) Σ_{i=1}^{M1} Xi    (3)

The covariance matrix is calculated by Eq. (4):

Cx = (1/M1) Σ_{i=1}^{M1} (Xi − m)(Xi − m)^T    (4)

The eigenvalue decomposition of the covariance matrix is

Cx = A D A^T    (5)

where D = diag(λ1, λ2, λ3, ..., λN) is the diagonal matrix composed of the eigenvalues of the covariance matrix Cx and A is the orthonormal matrix composed of the corresponding N-dimensional eigenvectors ak (k = 1, 2, ..., N) as follows:

A = (a1, a2, ..., aN)    (6)
The linear transformation Yi = A^T Xi (i = 1, 2, ..., M1) gives the PCA pixel vector, and all pixel vectors are mapped using the above process. The new image with reduced bands is obtained by replacing the old pixel vectors with the modified pixel vectors. The original image after pre-processing has 154 bands, from which 22 bands are selected as principal components.
1) Advantages of Principal Component Analysis: A large percentage of the total covariance of the input image can be covered by only the first few principal components. PCA does not require any parameter or any information about how the data was recorded; it is completely non-parametric, i.e. any data set can be used as input to the PCA algorithm.
B. K-means Clustering
K-means is one of the most popular and widely used unsupervised algorithms. It finds clusters by calculating the Euclidean distance between points so as to minimize the distance inside a cluster and maximize the distance between clusters. We divide the whole dataset into two parts, approximately in the ratio 70–30: the model is trained using 70% of the data and the remaining data is used for testing. The hyperspectral data after pre-processing is of size 3400 × 256 × 154. The K-means algorithm is applied on an M × B matrix, where B is the number of bands, M is 3400 × 256, and k (the number of clusters) is 6.
K-Means Clustering
Input: hyperspectral image of size M × B and number of clusters k;
Result: k clusters and mean[k];
1) Initialize the k means with k random vectors from the input matrix X;
2) mean_sq_error = 0;
3) new_mean_sq_error = 0;
4) Do
   a) mean_sq_error = new_mean_sq_error;
   b) For each vector in X do:
      calculate the index i such that the vector has minimum Euclidean distance from mean[i];
      add the vector to cluster[i];
   Endfor
   c) For i = 1 to k do:
      count = 0;
      For each vector in cluster[i] do:
         count = count + 1;
         mean[i] = mean[i] + vector;
      Endfor
      mean[i] = mean[i] / count;
   d) Endfor
5) Recalculate new_mean_sq_error;
6) While |mean_sq_error − new_mean_sq_error| != 0;
C. Fuzzy C-means
K-means assigns each point to exactly one cluster, but real life data generally belong to more than one cluster. Unlike the K-means clustering technique, FCM can assign an observation point to more than one cluster. Observations are classified into classes based on their degree of membership to each cluster. Let us consider hyperspectral data points Xi = [x1, x2, x3, ..., xN], where N is the number of bands, to be partitioned into k clusters. Initially the algorithm assigns a random membership degree to each observation for each cluster, and then repeatedly calculates, for each cluster i, the centroid and the degrees of membership until convergence, i.e. until the change in the membership degrees between two consecutive steps falls below a threshold value.
• First, calculate the centroid for each cluster i as the mean of all observations weighted by their degrees of membership U, using Eq. (7).
FCM(X, U, C) = Σ_{i=1}^{k} Σ_{j=1}^{n} u_ij^m · ||xi − cj||^2 / Σ_{j=1}^{N} u_ij^m    (7)

where U = [u_ij] ∈ Mfc is the fuzzy partition matrix of X and C = {C1, C2, ..., Ck} is the vector of cluster centers.
• Update the degree of membership for all observations of each cluster using Eq. (8):

u_ij = 1 / Σ_{n=1}^{k} ( ||xj − ci|| / ||xj − cn|| )^{2/(m−1)}    (8)

where m is the degree of fuzziness. In this way, FCM assigns a degree of membership of each cluster to each data point.
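A minimal sketch of the band-reduction and hard-clustering steps of Sects. 4A and 4B using scikit-learn is given below. The file name and the availability of a pre-processed cube are assumptions taken from the text (3400 × 256 pixels, 154 usable bands, 22 principal components, k = 6); the FCM/PFCM soft-clustering variants are not shown here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Assumed pre-processed Hyperion cube of shape (3400, 256, 154), stored as a NumPy file
cube = np.load("hyperion_preprocessed.npy")
rows, cols, bands = cube.shape
X = cube.reshape(rows * cols, bands)            # M x B matrix of pixel vectors

# Dimensionality reduction: keep the first 22 principal components (Sect. 4A)
X_pca = PCA(n_components=22).fit_transform(X)

# Hard clustering into k = 6 classes (five minerals + water body, Sect. 4B)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_pca)
label_map = labels.reshape(rows, cols)          # per-pixel cluster map for visualisation
```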
5 Possibilistic Fuzzy C-Means
PFCM [18] is similar to the FCM clustering algorithm, but the FCM constraint that the memberships u_ji of a given data point xi must sum up to 1 is removed in PFCM. Each
data point is associated with each cluster ci through a degree of membership u_ji. Unlike FCM, the membership u_ji of a data point in a cluster is no longer constrained in this way. Instead of a grade of membership, u_ji is treated as a degree of compatibility between xi and ci, and its value does not depend on the other cluster centers.
• The PFCM algorithm [19] is given below:
PFCM(X, U, T, C) = Σ_{i=1}^{k} Σ_{j=1}^{n} (a·u_ij^m + b·t_ij^η) · ||xi − cj||^2 / Σ_{j=1}^{N} u_ij^m + Σ_{i=1}^{k} ni Σ_{j=1}^{n} (1 − t_ij)^η    (9)

where a > 0, b > 0, m > 1, η > 1, ni is a penalty weight for cluster i, and T = [t_ij] is the typicality matrix, updated using Eq. (10):

t_ij = ( 1 + ( b·||xj − ci||^2 / ni )^{1/(η−1)} )^{−1}    (10)

where 1 ≤ i ≤ K and 1 ≤ j ≤ N.
• The degree of compatibility for all observations is calculated using Eq. (11):

ui = Σ_{j=1}^{n} (a·u_ij^m + b·t_ij^η)^m · xj / Σ_{j=1}^{n} (a·u_ij^m + b·t_ij^η)^m    (11)

The values of a and b are constants that define the relative importance of the membership and typicality terms. A higher value of b with a = 1 gives better clusters.
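A compact sketch of one PFCM update pass is given below, following the update rules in the form given by Pal et al. [19] rather than the garbled typesetting above. The parameter values (a, b, m, η, the per-cluster weights) and the toy data are assumptions, and no convergence logic or PSO tuning is included.

```python
import numpy as np

def pfcm_step(X, C, a=1.0, b=2.0, m=2.0, eta=2.0, gamma=None):
    """Given centers C (k x d), compute memberships U, typicalities T and new centers
    for data X (n x d), one PFCM alternating-update pass (Pal et al. [19])."""
    k = C.shape[0]
    d2 = np.maximum(((X[None, :, :] - C[:, None, :]) ** 2).sum(axis=2), 1e-12)  # (k, n) squared distances
    gamma = np.ones(k) if gamma is None else gamma
    # Membership update from relative squared distances, as in the FCM rule of Eq. (8)
    U = 1.0 / ((d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1.0))).sum(axis=1)
    # Typicality update, Eq. (10)
    T = 1.0 / (1.0 + (b * d2 / gamma[:, None]) ** (1.0 / (eta - 1.0)))
    # Center update: weighted mean of the data points
    w = a * U ** m + b * T ** eta                                               # (k, n) weights
    C_new = (w @ X) / w.sum(axis=1, keepdims=True)
    return U, T, C_new

# Assumed toy input: 500 pixel vectors with 22 PCA components, 6 clusters
rng = np.random.default_rng(0)
X = rng.random((500, 22))
C = X[rng.choice(500, size=6, replace=False)]
for _ in range(20):
    U, T, C = pfcm_step(X, C)
```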
6 Experiment and Result
For this study, the data collected from the EO-1 satellite is a hyperspectral image of size 3400 × 256 × 154. It is converted into an M × B matrix such that M is 3400 × 256 and B = 154 is the number of bands. The Nilgiri Hills have an abundance of minerals, of which some common minerals available are [16] quartz, china clay, magnetite, iron ore and hematite. This study includes two parts. In the first part, the PCA result is considered as ground truth. Based on this ground truth, some well-known supervised models were trained: Bayes classification and decision tree classification are used as supervised models. The input image is divided into two parts in a 70–30 ratio, the models are trained, and the accuracy scores achieved by the two classifiers are 92% and 94% respectively.
Fig. 3. Performance of Algorithms based on DBI
Fig. 4. K-means Classification Result
In the second part, K-means and PFCM are applied to the dataset. Figure 4 shows the K-means clustering result. K-means is a hard type of clustering which does not consider outliers; it classifies the minerals into six classes (including the water body). In some areas more than one mineral is present, but K-means does not find overlapping clusters. However, a soft clustering algorithm can classify overlapping clusters as well. PFCM classifies the minerals into six classes and also shows the areas where more than one mineral can be present, as shown in Fig. 5.
A. Davies Bouldin Index (DBI)
The validation of clustering structures is the most difficult part of cluster analysis. It is used to measure the goodness of a clustering structure without respect to external information. DBI is a metric for evaluating clustering algorithms. DBI considers the average case of each cluster by utilizing the mean error of each cluster. It is an internal evaluation scheme, where the validation of how well the clustering has been done is made using quantities and features inherent to the dataset: intra-cluster variance should be minimized and the distance between clusters maximized. A minimum value of DBI indicates good clustering. To validate the clustering algorithms we used DBI, and we also checked the DBI value on the reduced data to compare the performance of both clustering algorithms. PFCM gives a better result than K-means in both cases. The DBI scores achieved by both algorithms are shown in Table 3 below. The graph in Fig. 3 plots each clustering algorithm against its respective DBI score; the lowest DBI score means the most accurate clusters [20]. The mineral classes present in this data are quartz, china clay, hematite, magnetite, iron ore and water body. Hematite (Fe2O3) and magnetite (Fe3O4) contain about 60% iron, so magnetite, hematite and iron ore have similar spectral signatures and a higher possibility of being present in more than one cluster. Similarly, clay minerals contain a water component: china clay (Al2O3(SiO2)2(H2O)2) and the water body have similar features, so these minerals also have a chance of being
Table 3. Comparison of clusters' Davies Bouldin Index

S. No.   Dataset        Method    DBI value
1.       Input Image    K-Means   1.1004035
                        PFCM      0.49211867e−07
2.       Reduced Data   K-Means   0.47175
                        PFCM      0.002187
Fig. 5. PFCM Clustering result
Fig. 6. Spectra profile of Six Minerals Class
present in more than one cluster. Because these types of minerals are present in the region, PFCM performs better than K-means clustering. The spectral profiles of these six classes of minerals are shown in Fig. 6. Each spectral profile is a plot of the image reflectance values against the actual wavelength range of the mineral.
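A hedged sketch of the DBI comparison reported in Table 3 using scikit-learn's built-in Davies–Bouldin score follows; `X_pca` and the two label arrays are assumed to come from clustering runs such as the sketches above.

```python
from sklearn.metrics import davies_bouldin_score

# labels_kmeans / labels_pfcm: flat per-pixel cluster labels (assumed available)
dbi_kmeans = davies_bouldin_score(X_pca, labels_kmeans)
dbi_pfcm = davies_bouldin_score(X_pca, labels_pfcm)
print(f"K-Means DBI: {dbi_kmeans:.4f}, PFCM DBI: {dbi_pfcm:.4f}")  # lower is better
```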
7 Conclusion and Future Work
This work applies unsupervised classification algorithms to classify minerals from a hyperspectral image. Reduction of the HSI dimensionality is done using the PCA algorithm as a band selection technique; it reduces the data by selecting 22 bands out of 154. The selection is based on eigenvalues: bands with high eigenvalues are considered high-information bands. These bands are used as a reference and compared against some supervised classifiers; algorithms like Bayesian classification and decision tree classification give good accuracy results. We also compare two clustering techniques, K-means and PFCM, whose validity is checked using a cluster validity index, the DBI value. PFCM performs better than K-means in terms of the DBI value. Many applications and techniques are possible in each of these areas, and many techniques could considerably improve the performance of the existing model, but the field is still relatively
young. Labeled data could also be used with the existing models to check their performance. Deep learning can further improve performance; however, extracting features from a deep network requires labeled data, so making deep networks work with limited training data is another challenging task.
References 1. Sawant, S.S., Prabukumar, M.: Semi-supervised techniques based hyper-spectral image classification: a survey. In: 2017 Innovations in Power and Advanced Computing Technologies (i-PACT), pp. 1–8, April 2017 2. Mou, L., Ghamisi, P., Zhu, X.X.: Unsupervised spectral-spatial feature learning via deep residual Conv-Deconv network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 56(1), 391–406 (2018) 3. Zhao, Y., Yuan, Y., Wang, Q.: Fast spectral clustering for unsupervised hyperspectral image classification. Remote Sens. 11(4) (2019). http://www.mdpi.com/ 2072-4292/11/4/399 4. Zhu, X.X., Tuia, D., Mou, L., Xia, G., Zhang, L., Xu, F., Fraundorfer, F.: Deep learning in remote sensing: a comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 5(4), 8–36 (2017) 5. Cao, L., Wang, C., Li, J.: Vehicle detection from highway satellite images via transfer learning. Inf. Sci. 366, 177–187 (2016). http://www.sciencedirect.com/science/article/pii/S00200255 16000062 6. Janaki Rama Suresh, G., Kandrika, S., Sivasamy, R.: Hyperspectral analysis of clay minerals. ISPRS – Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XL-8, 443–446 (2014) 7. Satpathy, R.: Spectral analysis of hyperion data for mapping the spatial variation of in a part of Latehar Gumla district, Jharkhand. J. Geograph. Inf. Syst. 2, 210–214 (2010) 8. Carrino, T.A., Crsta, A.P., Toledo, C.L.B., Silva, A.M.: Hyperspectral remote sensing applied to mineral exploration in southern Peru: a multiple data integration approach in the Chapi Chiara gold prospect. Int. J. Appl. Earth Obs. Geoinf. 64, 287–300 (2018). http://www.scienc edirect.com/science/article/pii/S0303243417301071 9. Villa, A., Chanussot, J., Benediktsson, J., Jutten, C., Dambreville, R.: Unsupervised methods for the classification of hyperspectral images with low spatial resolution. Pattern Recogn. 46(6), 1556–1568 (2013). http://www.sciencedirect.com/ science/article/pii/S0031320312004967 10. Ishidoshiro, N., Yamaguchi, Y., Noda, S., Asano, Y., Kondo, T., Kawakami, Y., Mitsuishi, M., Nakamura, H.: Geological mapping by combining spectral unmixing and cluster analysis for hyperspectral data. ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLI-B8, 431–435 (2016) 11. Clark, R., Boardman, J., Mustard, J., Kruse, F., Ong, C., Pieters, C., Swayze, G.: Mineral mapping and applications of imaging spectroscopy. In: 2006 IEEE International Symposium on Geoscience and Remote Sensing, pp. 1986–1989, July 2006 12. Tangestani, M.: Spectral angle mapping and linear spectral unmixing of the ASTER data for alteration mapping at Sarduiyeh area, SE Kerman, Iran, May 2019 13. Huo, H., Guo, J., Li, Z.-L.: Hyperspectral image classification for land cover based on an improved interval type-II fuzzy C-means approach. Sensors 18(2) (2018). http://www.mdpi. com/1424-8220/18/2/363 14. Li, N., Huang, X., Zhao, H., Qiu, X., Geng, R., Jia, X., Wang, D.: Multiparameter optimization for mineral mapping using hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 11(4), 1348–1357 (2018) 15. https://earthexplorer.usgs.gov/
16. Vigneshkumar, M., Yarakkula, K.: Nontronite mineral identification in Nilgiri Hills of Tamil Nadu using hyperspectral remote sensing. IOP Conf. Ser.: Mater. Sci. Eng. 263, 032001 (2017). https://doi.org/10.1088/1757-899X/263/3/032001 17. Rani, N., Mandla, V.R., Singh, T.: Evaluation of atmospheric corrections on hyperspectral data with special reference to mineral mapping. Geosci. Front. 8(4), 797 – 808 (2017). Special Issue: Deep Seated Magmas and Their Mantle Roots. http://www.sciencedirect.com/science/ article/pii/S1674987116300603 18. Simhachalam, B., Ganesan, G.: Possibilistic fuzzy c-means clustering on medical diagnostic systems. In: 2014 International Conference on Contemporary Computing and Informatics (IC3I), pp. 1125–1129, November 2014 19. Pal, N.R., Pal, K., Keller, J.M., Bezdek, J.C.: A possibilistic fuzzy c-means clustering algorithm. IEEE Trans. Fuzzy Syst. 13(4), 517–530 (2005) 20. Siddique, M.A., Bente Arif, R., Mahmudur Rahman Khan, M., Ashrafi, Z.: Implementation of fuzzy c-means and possibilistic c-means clustering algorithms, cluster tendency analysis and cluster validation, September 2018
Short Term Load Forecasting Using Empirical Mode Decomposition (EMD), Particle Swarm Optimization (PSO) and Adaptive Network-Based Fuzzy Inference Systems (ANFIS) Saroj Kumar Panda1 , Papia Ray1 , and Debani Prasad Mishra2(B) 1 VSSUT, Burla 768018, India
[email protected], [email protected] 2 IIIT, Bhubaneswar, Bhubaneswar 751003, India [email protected]
Abstract. Accurate renewable energy generation and power demand forecasting tools play crucial roles in achieving efficient and stable operation of power plant systems. Forecasting tools constitute a fundamental part of the energy management system functions. A hybrid approach for short term load forecasting (STLF) in a power plant is proposed in this study. The proposed approach integrates empirical mode decomposition (EMD), particle swarm optimization (PSO) and adaptive network-based fuzzy inference systems (ANFIS). It first uses EMD to decompose the complicated load data series into a set of several intrinsic mode functions (IMFs) and a residue, and the PSO algorithm is then used to optimize an ANFIS model for each IMF component and the residue. The final short-term electric load forecast value is obtained by summing the prediction results from each component model. The performance of the proposed model is examined on a load demand dataset of a case-study Xingtai power plant and is compared with four other commonly used forecasting techniques using the same dataset. The results demonstrate that the proposed approach yields superior performance for short-term forecasting of power plant load demand compared to the other techniques. Keywords: Short term load forecasting (STLF) · Empirical mode decomposition (EMD) · Particle swarm optimization (PSO) · Adaptive network-based fuzzy inference systems (ANFIS) · Intrinsic mode functions (IMFs)
1 Introduction
Load forecasting is essentially defined as the science or art of predicting the future load on a given system for a specified period of time ahead [1]. Short-term
load forecasting is a key issue for reliable and economical operation of power systems. Many operational decisions, for example dispatch scheduling, reliability analysis, security assessment, automatic generation control, load shedding and maintenance scheduling for generators, depend on short-term load forecasting [2, 3]. Power plants are a rapidly growing means of integrating distributed generation systems into the electric power system. They are expected to constitute a critical part of future smart energy supply systems [4]. Power plant load forecasting plays a key role in improving the management and utilization of the conventional and renewable energy mix within the power plant. It also improves the economics of energy exchange with other power plants and the utility grid. Various STLF techniques have been proposed in recent decades. Conventional and early forecasting approaches include exponential smoothing [5], regression [6], autoregressive moving average, Kalman filter [7, 8], and time-series methods [1]. Various artificial intelligence (AI) based methods, for example artificial neural networks [12], pattern recognition, expert systems [11], radial basis functions, fuzzy time-series [10], and fuzzy neural networks [10], have also been proposed. References [12, 13] demonstrate the effectiveness of combining the wavelet transform and the extreme learning machine for short-term load forecast modeling. Various studies have likewise shown the suitability of neuro-fuzzy systems for load forecasting. The use of evolutionary optimizations, for example the genetic algorithm (GA) and particle swarm optimization (PSO), and other hybrid forecast modeling methods has also been demonstrated. Another useful technique, known as empirical mode decomposition (EMD), has recently been applied in many studies in the field of forecasting. EMD is an adaptive nonlinear decomposition technique used for analyzing nonlinear and non-stationary signals. It can decompose any complicated time-series signal into a finite number of constituent signals called IMFs, which are easier to model and forecast. This paper deals with the development of a new hybrid EMD-PSO-ANFIS based short-term load forecasting model (hereafter called PHA) using historical load pattern representations as primary inputs. The PHA realizes a flexible and adaptive modeling of the load pattern for a case-study power plant environment. In the first stage, EMD is applied to decompose the raw target load data series into a finite set of intrinsic mode functions and a residue in order to enhance load forecasting accuracy. The purpose of applying EMD to the load demand curve is to split the curve into component signals that are easier to model accurately and therefore have more predictable behavior. In the second stage, the characteristics of the individual component signals are modeled and forecast separately using ANFIS. The ANFIS structure for each component is optimized using a particle swarm optimization algorithm. The forecast outputs obtained from all components are summed to yield the aggregate load forecast. The original time-series signal is the sum of the IMFs and the residue.
Hence the IMFs and the residue are sufficient to describe the original signal, with the additional advantage that the component signals can be more reasonably and accurately modeled and predicted compared to the original signal, eventually leading to an improved aggregate forecasting efficiency. The superior performance of the PHA was assessed and validated by comparison with other load forecasting techniques, including the PSO-ANFIS technique without applying
EMD preprocessing, ANN and ARIMA. The proposed procedure is intended to benefit from the relatively simple implementation and efficiency of the PSO algorithm for optimizing the complex ANFIS structures used to model the IMFs. The load demand data of the Xingtai power plant system are used to implement the models in this study. Power demand in the power plant primarily originates from industrial plants. The system additionally supplies power to a wide range of conventional loads, for example office, lighting, cooling, data center, and other general-purpose loads. A case study has been carried out for the Xingtai power plant, China, to forecast Xingtai's 24 h STLF for one day. The data set used for this study consists of hourly load and weather condition data over the period 10 June 2006 to 30 June 2006. The following section presents in detail the proposed hybrid load forecasting framework. The data is divided into three data sets: the training data set (11 days, from 10 June to 20 June), the validation data set (9 days, from 21 June to 29 June) and the testing data set (1 day, 30 June).
2 Proposed Work
The PHA developed in this study combines EMD, PSO, and ANFIS. The overall conceptual structure of the PHA is shown in Fig. 1. The forecast modeling procedure is carried out in three main steps. EMD is used in the first stage to decompose the raw load demand time-series signal into its components. In the second stage, an ANFIS structure optimized by the PSO algorithm is developed to model the relationship between each IMF (including the residue) and the predictor set. Each model is used to predict the value of the individual IMF at the next hour (one-step forecast). In the third step, the aggregate load forecast value for the next hour is determined by adding the forecast values of each component model.
The Proposed Hybrid Modeling Approach. The proposed hybrid method combines the EMD preprocessing technique, which decomposes the original time-series load data, with the particle swarm optimization algorithm, which tunes the parameters of an ANFIS model for each component. An initial ANFIS model is developed for each resulting signal component (IMFs including the residue). The adjustable parameters of the ANFIS structure are initially randomly initialized. These parameters are then optimized using the PSO algorithm during the training process. The training process is conducted to minimize a fitness function defined by the RMSE of the residuals produced by the ANFIS model. The detailed step-by-step description of the proposed hybrid modeling approach is listed below; a schematic code sketch follows the list.
Step 1 – Decompose the raw data series using EMD
Step 2 – Initialize the FIS structure
Step 3 – Generate the initial swarm
Step 4 – Update the velocity and position of the particles
Step 5 – Assign parameters to ANFIS
Step 6 – Evaluate the cost function
Step 7 – Check convergence
Step 8 – Extract the model
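A schematic sketch of the overall PHA forecasting loop (decompose the load series, fit one forecaster per component, sum the one-step forecasts) is given below. A gradient-boosting regressor is used purely as a stand-in for the PSO-tuned ANFIS component models, and the lag-feature construction is an assumption made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def lag_matrix(series, n_lags=24):
    """Build (X, y) pairs where each target is predicted from the previous n_lags values."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

def pha_forecast_next(components, n_lags=24):
    """Forecast the next hour by modelling each IMF/residue separately and summing (Steps 1-8)."""
    total = 0.0
    for comp in components:                              # list of IMF arrays plus the residue
        X, y = lag_matrix(comp, n_lags)
        model = GradientBoostingRegressor().fit(X, y)    # stand-in for a PSO-tuned ANFIS model
        total += model.predict(comp[-n_lags:].reshape(1, -1))[0]
    return total

# Assumed: `components` is produced by an EMD routine such as the sketch in Sect. 3
```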
3 IMF Formation in the EMD Decomposition
A. Empirical Mode Decomposition to Obtain Intrinsic Mode Functions (Fig. 1)
The EMD technique decomposes a composite signal into a modest number of IMFs, each of which must satisfy two conditions: (1) the number of local extrema is equal to, or differs by at most one from, the number of zero crossings, and (2) the local mean is effectively zero.
Step-1. Calculate the local maxima and minima of the signal y(t) (y0 = signal to be decomposed).
Step-2. Interpolate between the maxima to get the upper envelope (eu) and between the minima to get the lower envelope (el).
Step-3. Calculate the mean of the upper and lower envelopes, i.e. m = (eu + el) / 2.
Step-4. Extract the IMF candidate, i.e. I_{i+1} = m − y(t).
Step-5. Is I_{i+1} an IMF? If yes, store I_{i+1}, calculate the residue r_{i+1} = y0 − Σ_{k=1}^{i+1} I_k, set i = i + 1, and replace y(t) by r_i in Step-2. If no, consider I_{i+1} as the input to Step-2.
Fig. 1. Flow chart of EMD to form IMF
Fig. 2. Structure of ANFIS (layers: Input → Input MF → Rules → Output MF → Output)
Step-6. If the value of the residue ri exceeds the threshold error tolerance value, then repeat Steps 1–6 to obtain the next IMF and a new residue.
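A compact, self-contained sketch of the sifting procedure in Steps 1–6 is given below. It uses cubic-spline envelopes and a simple energy-ratio stopping rule, which is an assumption (the paper does not specify its exact stopping criterion), and the sign convention follows the usual h = y − m form.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_one_imf(y, max_iter=50, tol=0.2):
    """Extract a single IMF from y by repeated envelope-mean subtraction."""
    h = y.copy()
    t = np.arange(len(y))
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break                                   # too few extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope e_u
        lower = CubicSpline(minima, h[minima])(t)   # lower envelope e_l
        mean_env = 0.5 * (upper + lower)            # m = (e_u + e_l) / 2
        h_new = h - mean_env                        # candidate IMF
        if np.sum(mean_env ** 2) / np.sum(h ** 2 + 1e-12) < tol:
            return h_new
        h = h_new
    return h

def emd(y, n_imfs=9):
    """Decompose y into n_imfs IMFs plus a residue (ten components in total)."""
    imfs, residue = [], y.astype(float).copy()
    for _ in range(n_imfs):
        imf = sift_one_imf(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs + [residue]
```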
4 Working of ANFIS
An adaptive network is a multilayer feed-forward network structure consisting of nodes and directional links, where the overall input-output behavior of the system is determined by the values of a set of parameters through which the nodes are connected. ANFIS refers to a class of adaptive networks based on the concepts of fuzzy set theory, in which knowledge is encoded using a collection of explicit linguistic rules to implement nonlinear systems. ANFIS systems are a hybrid platform which benefits from the self-learning capability of neural networks to adaptively adjust the parameters of the rules that build the fuzzy system. It has been found that ANFIS can smoothly approximate any real continuous function to any degree of accuracy, and ANFIS networks are therefore considered universal approximators; the structure is shown in Fig. 2.
Fig. 3. Comparison between actual load and forecasted load
Fig. 4. Decomposition of load
Fig. 5. Comparison between error and epochs
Fig. 6. Comparison between in1 and out1
Table 1. Epoch and error

Epochs   Error
1        0.1
5        0.1
10       0.1
15       0.1
20       0.1
25       0.1
30       0.1
5 Results and Discussion
The proposed hybrid approach (PHA), illustrated in Figs. 3, 4, 5 and 6 and Tables 1 and 2, was applied to the load data series of the case-study power
Table 2. Comparison between actual load (MW) and forecasted load (MW)

Actual load (MW)   Forecasted load (MW)
890                900
850                700
750                950
990                1000
1500               1500
1450               1450
1600               1600
1250               1250
1500               1500
1180               1200
plant system. The hourly average load demand dataset corresponding to the period from 10 June 2006 to 30 June 2006 was used to develop the EMD-PSO-ANFIS model for short-term load forecasting and to evaluate its performance. The data is divided into three data sets: the training data set (11 days, from 10 June to 20 June), the validation data set (9 days, from 21 June to 29 June) and the testing data set (1 day, 30 June).
6 Conclusion
This paper integrated EMD, PSO, and ANFIS to develop a short-term load forecasting model for a power plant system. The study considered hourly average load demand data of the case-study power plant system for about three consecutive weeks to develop and test the proposed approach. The load signal is decomposed into ten IMFs, including the residue, using the EMD preprocessing technique. An ANFIS model optimized through the PSO algorithm is developed to forecast each component signal or IMF. The aggregate forecast is made by obtaining the forecasts of the component IMFs and summing them. The proposed hybrid approach was found to achieve better modeling of the load pattern and to accurately capture the fluctuations within the data compared to persistence, ARIMA, ANN, and PSO-ANFIS based approaches. Experimental results demonstrated that the proposed EMD-PSO-ANFIS based integrated approach gave substantial improvements over the forecasting results obtained from the other methods.
Appendix
See Table 3.

Table 3. PSO parameters

Parameter               Value
Swarm size              25
C1                      1
C2                      2
WMAX                    1
WMIN                    0.4
Number of generations   20
References 1. Elattar, E.E., Goulermas, J., Wu, Q.H.: Electric load forecasting based on locally weighted support vector regression. IEEE Trans. Syst. Man Cybern.-Part C: Appl. Rev. 40, 438–447 (2010) 2. Islam, B.U.I.: Comparison of conventional and modern load forecasting techniques based on artificial intelligence and expert systems. Int. J. Comput. Sci. Issues 8, 504–513 (2011) 3. Amjady, N.: Short-term hourly load forecasting using time series modeling with peak load estimation capability. IEEE Trans. Power Syst. 16, 798–805 (2001) 4. Amjady, N., Keynia, F., Zareipour, H.: Short-term load forecast of microgrids by a new bilevel prediction strategy. IEEE Trans. Smart Grid 1, 286–294 (2010) 5. Christiaanse, W.R.: Short term load forecasting using general exponential smoothing. IEEE Trans. Power Apparatus Syst. 90, 900–911 (1971) 6. Papalexopoulos, A.D., Hesterberg, T.C.: A regression based approach to short term system load forecasting. In: Proceedings of PICA Conference, no. 3, pp. 414–423 (1989) 7. Irisarri, G.D., Widergren, S.E., Yehsakul, P.D.: On-line load forecasting for energy control center application. IEEE Trans. Power Apparatus Syst. 101, 71–78 (1982) 8. Taylor, J.W., de Menezes, L.M., McSharry, P.E.: A comparison of univariate methods for forecasting electricity demand up to a day ahead. Int. J. Forecast. 22, 1–16 (2006) 9. Hippert, H.S., Pedreira, C.E., Souza, R.C.: Neural networks for short-term load forecasting: a review and evaluation. IEEE Trans. Power Syst. 16, 44–55 (2001) 10. Ray, P., Mishra, D.: Signal processing technique based fault location of a distribution line. In: 2nd IEEE International Conference on Recent Trends in Information Systems (ReTIS), pp. 440–445, July 2015 11. Panda, S.K., Ray, P., Mishra, D.: Effectiveness of PSO on short-term load forecasting. In: 1st Springer International Conference on Application of Robotics in Industry Using Advanced Mechanisms (ARIAM), pp. 122–129, August 2019 12. Ray, P., Mishra, D.: Artificial intelligence based fault location in a distribution system. In: 13th International Conference on Information Technology (ICIT), pp. 18–23, December 2014 13. Ray, P., Panda, S.K., Mishra, D.: Short-term load forecasting using genetic algorithm. In: 4th Springer International Conference on Computational Intelligence in Data Mining (ICCIDM), pp. 863–872, December 2017
Energy Conservation Perspective for Recharging Cell Phone Battery Utilizing Speech Through Piezoelectric System Ashish Tiwary(B) , Yashraj, Amar Kumar, and Mandeep Biruly GIET University, Gunupur, India [email protected], [email protected], [email protected], [email protected]
Abstract. This paper demonstrates energy harvesting through a piezoelectric material that helps in charging a mobile phone battery using speech sound vibrations. Fundamentally, mechanical vibrational energy is transformed into electrical energy. Following this line of thought, the central element of the proposed arrangement is a PZT substrate which absorbs the vibrational energy produced by the speech of a human being during conversation on a cell phone. In this approach, zinc oxide filaments are placed between electrodes, and these electrodes are coated with a layer of the PZT substrate. The motion of the air molecules, accelerated according to the sound intensity during speech, produces vibrational energy when it strikes the piezoelectric substrate. As per the analysis, this results in bending of the zinc oxide nanowires. The ZnO responds to the regions of the longitudinal wave where the particles are closest together (compression) and furthest apart (rarefaction). A nanogenerator is introduced between the zinc oxide electrodes. The transformation from pneumatic (acoustic) energy to electrical energy takes place when the longitudinal waves reach the nanogenerator, which converts the pneumatic energy into electric potential. Thereafter, the battery is charged with the converted electrical energy with ease. The nanogenerator provides continuous power by harvesting energy from the environment. This process is implemented using MEMS technology, as the current scenario demands miniaturization to save energy. Keywords: Speech sound · Nano generator · ZnO nanowires · SAW · PZT substrate · Amplifier
1 Introduction The elevated consumption of energy in portable electronic devices and the drive to produce renewable energy for mankind have created and inspired new dimensions in the field of harvesting new energy [1]. The present study focuses on the production of new sources of energy by the use of piezoelectric materials. Piezoelectric materials can be used as a catalyst to transform ambient pneumatic
energy into electric potential energy, which will be utilized and stored to energize devices by charging the batteries of cell phones through speech sound. This is dealt with through the transformation of energy, where the longitudinal waves change their character into electrical energy by following the postulates of the piezoelectric effect [2]. This work will indeed be an easy way of addressing the longevity issues of cell phone batteries. In this world of modernization, cell phones have a diversified usage, starting from interactions with the near and dear ones of the family, communication in the world of business, and seeking advice and suggestions in emergency needs. Moreover, use of mobile apps and net surfing can lead a battery to discharge abruptly and become less functional. Hence, in this project we have established an uninterrupted charging technology which practically helps the battery to overcome the constraint of quick drain with ease, by means of the speech sound of a human being [3]. The current framework comprises an amplifier to retrieve the audible waves generated from noisy speakers and additional sources. The transformation of electrical energy from sound energy is carried out by a receiver. Microphones find usage in a wide variety of devices, for example telecommunication devices, hearing devices, and also in many ultrasonic sensing or knock-sensing devices [4–6]. Another methodology which is used presently is to utilize a piezoelectric plate, which revolves around the vital concept of the piezoelectric effect, for estimating external force, acceleration and stress.
2 Existing Background Empowering piezoelectric innovations has been quite apt owing to their numerous advantages. Taking account of the dynamics of flexible structures, piezoelectric materials possess high energy densities combined with flexibility characteristics, ranging from 103–113 N/m2, higher in comparison to other materials [7, 8]. Moreover, piezoelectric cells and sensors are considered widely robust devices possessing a high natural frequency, and they also demonstrate linear behaviour over an extensive range of amplitude and frequency. The channelization of signals through a microphone audio speaker is shown in Fig. 1 below.
Fig. 1. Existing circuit diagram of a microphone audio speaker
3 Basic Theory of Concept In the present day, energy is a long-standing need of the world, for which various methods of energy generation have been developed and implemented. The current framework of study involves
the zinc oxide filaments interposed between two electrodes. The piezoelectric substrate is placed upon the interposed zinc oxide terminals [9]. When the produced mechanical (audible) waves hit the piezoelectric substrate, resonance occurs, which bends the zinc oxide wires as well. A nano generator is placed in between the interposed ZnO electrodes. The compressions and rarefactions are subjected directly to the nano generator; therefore the nano generator derives the electrical form of energy from the mechanical stress. This in turn charges the cell battery. A nano generator is a new sort of device which can convert mechanical energy and also thermal energy into electrical energy. The three fundamental sorts of nano generators are the piezoelectric nano generator, the triboelectric nano generator, and the pyroelectric nano generator. The piezoelectric and triboelectric nano generators are both energy harvesting devices that transform external mechanical energy into electricity and act as the power generation unit; a fabricated device is shown in Fig. 2.
Fig. 2. Fabricated Nano generator image
The sonic waves, which are fundamentally mechanical waves, are created by the human voice either while talking or shouting and comprise shocks and vibrations. In the compressed portion of the sound wave, nature mirrors the process where the air particles are packed and hence arranged near one another [10–12]. In the rarefied portion of the sound (longitudinal) wave, the air molecules are free from their mutual connections with one another. 1. To make use of sound energy, which is a lesser utilized form of energy when compared with other forms such as solar energy, wind energy, thermal energy, and sunlight. 2. To make use of the undesirable noise produced in traffic jams, airports, construction sites and industries, and transform it into a more suitable form of energy. 3. Charging of the cell phone is done by raising the intensity of the speech sound. 4. Predominantly, it becomes more convenient for the users to charge their mobile batteries at any point of dire need.
4 Structural Modelling of System 4.1 Nano Generator A nano generator is a type of technology that converts mechanical energy, as produced by small-scale physical change, into electricity. A nano generator has three typical approaches: piezoelectric, triboelectric, and pyroelectric nano generators. Both the piezoelectric and triboelectric nano generators can convert mechanical energy into electricity. The piezoelectric nano generator in Fig. 3 has proven to be an energy harvesting device capable of converting external kinetic energy into electrical energy through the action of a nano-structured piezoelectric material [13].
Fig. 3. Piezoelectric Nano generator
The maximum voltage produced in the nanowire can be calculated by the following expression, given in Eq. (1):

$$ V_{\max} = \pm \frac{3}{4(\kappa_0 + \kappa)} \left[ e_{33} - 2(1+\nu)e_{15} - 2\nu e_{31} \right] \frac{a^3}{l^3}\, \nu_{\max} \qquad (1) $$
where κ0 is the vacuum permittivity, κ is the dielectric permittivity, e33, e15 and e31 are the piezoelectric coefficients, ν is the Poisson's ratio, a is the nanowire radius, l is the nanowire length, and νmax is the maximum deflection of the nanowire's tip. The nano generator comprises ZnO nanowires, a kind of PZT-fired (ceramic) material. Whenever these interposed ZnO nanowires are bent flexibly, an electrical current is delivered. The piezoelectric element can produce the desired robust output by making use of the essential idea of the piezoelectric effect.
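To make the scaling in Eq. (1) concrete, the short sketch below simply evaluates the expression numerically. Every parameter value is an illustrative placeholder of roughly the right order of magnitude for a bent ZnO nanowire; they are assumptions introduced for demonstration, not values measured or reported in this work.

```python
# Illustrative evaluation of Eq. (1); every input below is an assumed
# placeholder, not a value measured or reported in this paper.
def v_max(kappa0, kappa, e33, e15, e31, poisson, radius, length, deflection):
    """Maximum piezoelectric potential of a bent nanowire, per Eq. (1)."""
    prefactor = 3.0 / (4.0 * (kappa0 + kappa))
    coupling = e33 - 2.0 * (1.0 + poisson) * e15 - 2.0 * poisson * e31
    geometry = (radius / length) ** 3
    return prefactor * coupling * geometry * deflection

v = v_max(kappa0=8.85e-12, kappa=7.9e-11,   # permittivities (F/m), assumed
          e33=1.22, e15=-0.45, e31=-0.51,   # piezoelectric constants (C/m^2), assumed
          poisson=0.35,                     # Poisson's ratio, assumed
          radius=25e-9, length=600e-9,      # wire geometry (m), assumed
          deflection=145e-9)                # tip deflection (m), assumed
print(f"V_max = ±{abs(v):.2f} V")
```

With these placeholder numbers the expression comes out at roughly a quarter of a volt; the point of the sketch is only to show how strongly the (a/l)^3 geometry term and the tip deflection control the output.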
The width of an individual ZnO nanowire is not even as much as that of a human hair. The nano generator comprises tons of ZnO nanowires successfully interposed between two terminals. The nano generator is an exceptionally adaptable device, extremely compact in size, and can convert even the slightest mechanical vibrations into useful electrical power [14]. It produces 4 watts/m3 of power as each nanowire performs the mechanical-to-electrical conversion. The width (diameter) of an individual nanowire is computed to be in the range of 100 to 300 nm. The length of an individual nanowire is specified as roughly a hundred microns (one micron = 1000 nm). To place this in context, note that the length of the wire (not the width) is about the same as the width of two human hairs. The cathode has a zigzag pattern on its surface (Fig. 4).
Fig. 4. (a) - top perspective of developed nanowires (b) - side perspective of developed nanowires
When a small body weight is applied to the nano generator, each nanowire bends and produces an electric charge. The terminal then captures that charge and carries it through the rest of the nano generator circuit. The entire nano generator may have several terminals capturing power from a large number of nanowires. 4.2 Piezoelectric Substrate The piezoelectric substrate utilized in this design is made of certain solid materials that respond to mechanical stress. The fundamental surface acoustic wave device comprises a piezoelectric substrate, an input interdigitated transducer (IDT) on one side of the surface of the substrate, as shown in Fig. 5, and a second, output interdigitated transducer on the opposite side of the substrate [15]. These are placed over the two sandwiched Zn electrodes.
Fig. 5. SAW device consisting of a piezoelectric substrate
4.3 ZnO Electrodes In this system, the ZnO strands are interposed between two electrodes. The piezoelectric substrate is placed upon the interposed zinc oxide electrodes. When the generated mechanical (sound) waves are incident upon these electrodes, they are effectively transferred to the ZnO nanowires [16]. 1. With this framework, the cell phone battery can now be charged by making use of speech sound. 2. The unwanted noises of the environment can likewise be converted into useful electrical power in order to charge the cell phone. 3. This device can be used in public places and also at public events to harvest energy for charging the cell phone battery. The sound produced by industries, resulting from the working of various machines, can likewise be used to charge the cell phone battery. 4. The framework is on the costlier side when compared with other energy harvesting systems. 5. This framework cannot be used in very quiet places and in places where the intensity of sound is low. 6. The efficiency of the framework is low, and hence much improvement still remains to be done in this framework.
5 Methodology and Design Through the flow diagram in Fig. 6, it is clearly shown that the human voice is taken as the input in the form of sonic waves subjected to the piezoelectric substrate loaded with ZnO nanowires. The output signal carries erroneous variations (noise), which are brought back to a pure signal by the use of a noise filter. Further, amplification is carried out to a suitable level to meet the signal strength required for charging the cell phone battery.
Fig. 6. Flow diagram of cell phone battery charge
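The chain shown in Fig. 6 can also be pictured as a small numerical sketch. The sampling rate, transduction constant, filter window, amplifier gain and load resistance used below are arbitrary assumptions introduced only to illustrate the order of the stages (voice, piezoelectric transduction, noise filtering, amplification, delivery to the charging stage); they are not parameters of the actual system.

```python
import math
import random

# Toy walk-through of the Fig. 6 chain: speech -> piezoelectric transduction ->
# noise filter -> amplifier -> delivered energy. Every constant is assumed.
FS = 8000            # sampling rate (Hz), assumed
PIEZO_GAIN = 2e-3    # volts per unit of sound pressure, assumed
AMP_GAIN = 40.0      # amplifier voltage gain, assumed
LOAD_OHMS = 1000.0   # load resistance of the charging stage, assumed

def speech_like(n):
    """Crude stand-in for one second of voiced speech plus background noise."""
    return [math.sin(2 * math.pi * 200 * t / FS) + 0.2 * random.gauss(0, 1)
            for t in range(n)]

def moving_average(x, k=8):
    """Very simple noise filter."""
    return [sum(x[max(0, i - k + 1):i + 1]) / (i - max(0, i - k + 1) + 1)
            for i in range(len(x))]

sound = speech_like(FS)                         # input: speech as sonic waves
v_piezo = [PIEZO_GAIN * s for s in sound]       # piezoelectric substrate output
v_clean = moving_average(v_piezo)               # noise filter
v_out = [AMP_GAIN * v for v in v_clean]         # amplification
energy = sum(v * v / LOAD_OHMS for v in v_out) / FS   # joules delivered in 1 s
print(f"energy delivered to the charging stage in 1 s: {energy:.2e} J")
```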
6 Analysis and Outcomes List 1 is constructed between the discrete noise sources and the measured level in decibels (dB), and the curves obtained are shown in Fig. 7(a) and (b) [16].
Fig. 7. (a) Chart drawn between voltage created and change in power perceived. (b) Chart drawn between power produced and intensity change.
List 1. Noise source vs. measured dB level

Noise source                                       | dB level
Hushed hearable ambient sound in library premises  | 0
Normal breathing, threshold of good hearing        | 20
Soft whisper                                       | 30
Average townhall, rainfall                         | 50
Ordinary conversation                              | 60
Busy street                                        | 70
Power lawn mower                                   | 80
Electric drill                                     | 95
Car horn, orchestra                                | 100
Squeaky toy held close to ear                      | 110
Jet engine at 30 m                                 | 150
List 2 is constructed between the change in decibel scale, the perceived loudness gain factor, the voltage (field) factor, and the acoustic power (sound intensity) factor; the corresponding charts, drawn between the voltage created and the perceived power change and between the power produced and the intensity change, are shown in Fig. 7.

List 2. Decibel change vs. loudness, voltage and power factors

Change in decibel scale | Perceived loudness gain factor | Voltage factor, field (V) | Acoustic power, sound intensity (W)
+20 | 4.000 | 10.000 | 100.000
+10 | 2.000 | 3.162  | 10.000
+6  | 5.162 | 2.000  | 4.000
+3  | 1.232 | 1.414  | 2.000
0   | 1.000 | 1.000  | 1.000
−3  | 0.812 | 0.707  | 0.500
−6  | 0.660 | 0.500  | 0.250
−10 | 0.500 | 0.312  | 0.100
−20 | 0.250 | 0.100  | 0.010
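The voltage and power columns of List 2 follow directly from the standard decibel definitions, and the perceived-loudness column tracks the common rule of thumb that loudness roughly doubles for every +10 dB. The short sketch below (not part of the original analysis) reproduces that pattern:

```python
# Standard decibel relations behind List 2. The loudness rule (x2 per +10 dB)
# is an approximation commonly used in acoustics, not an exact law.
def db_factors(delta_db):
    voltage = 10 ** (delta_db / 20)    # field quantity (e.g. voltage)
    power = 10 ** (delta_db / 10)      # energy/power quantity
    loudness = 2 ** (delta_db / 10)    # approximate perceived loudness
    return loudness, voltage, power

for db in (20, 10, 6, 3, 0, -3, -6, -10, -20):
    loudness, voltage, power = db_factors(db)
    print(f"{db:+4d} dB: loudness x{loudness:6.3f}, "
          f"voltage x{voltage:7.3f}, power x{power:8.3f}")
```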
7 Conclusions The design of the proposed energy harvesting system for cell phones has been presented in this paper. The working principle of this framework is the piezoelectric effect, by which mechanical vibrations, forces and stress can be converted into a usable form of electric power. The source sound waves, which are a kind of mechanical wave, are incident on the nano generator used in this framework, and through the nano generator they can be converted into electric current. The design presented here will be very viable in providing an alternative means of power supply for the specified devices during an emergency. Further, the methodology introduced in this work can be extended to numerous other application areas where comparable energy harvesting is possible. This procedure is implemented using MEMS technology, as the present scenario demands miniaturization technology to save energy.
References 1. Priya, S., Myers, R.D.: Piezoelectric energy harvester. United States patent (2010) 2. Paradiso, J.A., Starner, T.: Energy scavenging for mobile and wireless electronics. IEEE Pervasive Comput. 4(1), 18–27 (2005) 3. Johar, J., Cesar, S., Jason, F., Yuxin, P., Hereman, N., Adhish, T.: Jakobovski: free spoken digit dataset: v1.0.8, August 2018. https://doi.org/10.5281/zenodo.1342401 4. Cady, W.G.: Piezo-electric terminology. In: Proceedings of the Institute of Radio Engineers, pp. 2136–2142. IEEE (2006). https://doi.org/10.1109/jrproc.1930.221968 5. Dresselhaus, M.S., Bhushan, B.: Springer Handbook of Nanotechnology, 1st edn. Springer, Heidelberg (2004) 6. Kim Hyun, B., Nguyen, Q., Kwon, J.W.: Paper-based ZnO nanogenerator using contact electrification and piezoelectric effects. J. Microelectromech. Syst. 24(3), 1–3 (2015) 7. Zhao, M.-H., Wang, Z.-L., Mao, X.S.: Piezoelectric characterization of individual zinc oxide nano belt probed by piezo response force microscope. Nano Lett. 4(4), 587–590 (2004)
8. Billinghurst, M., Starner, T.: Wearable devices: new ways to manage information. IEEE Comput. 32(1), 57–64 (1999) 9. Urban, E.C.: Wearable computing: an overview of progress. In: 3rd International Symposium on Wearable Computers, pp. 141. IEEE Computer Society, Washington, DC, USA (1999) 10. Meindl, J.D.: Low power microelectronics: retrospect and prospect. In: Proceedings of the IEEE, pp. 619–635. IEEE, USA (1995) 11. Blomgren, G.E.: Current status of lithium ion and lithium polymer secondary batteries. In: 15th Annual Battery Conference on Applications and Advances, pp. 97–100. IEEE, USA (2000) 12. Hahn, R., Reichl, H.: Batteries and power supplies for wearable and ubiquitous computing. In: 3rd International Symposium on Wearable Computers, pp. 168–169. IEEE, San Francisco (1999) 13. Allen, J.J., Smits, A.J.: Energy harvesting eel. J. Fluids Struct. 15(3–4), 629–640 (2001) 14. Kymissis, J., Kendall, C., Paradiso, J., Gershenfeld, N.: Parasitic power harvesting in shoes. In: 2nd IEEE International Conference on Wearable Computers, pp. 52–55. IEEE, USA (1997) 15. Wang, Z.L., Song, J.: Piezoelectric nano generators based on zinc oxide nanowire arrays. Science 312(5771), 242–246 (2006) 16. Jansen, A., Leeuwen, S., Stevels, A.: Design of a fuelcell powered radio, a feasibility study into alternative power sources for portable products. In: IEEE International Symposium on Electronics and the Environment. IEEE, USA (2000) 17. Starner, T.: Human powered wearable computing. IBM Syst. J. 35(3–4), 618–629 (1996)
A Progressive Method Based Approach to Understand Sleep Disorders in the Adult Healthy Population Vanita Ramrakhiyani, Niketa Gandhi, and Sanjay Deshmukh(B) Department of Life Sciences, University of Mumbai, Santacruz (E), Mumbai 400 098, India [email protected], [email protected], [email protected]
Abstract. Today's fast-paced lifestyle has led globally to an increase in sleep deprivation. Although the recommended amount of sleep in a 24 h period is 7–8 h, recent data show that almost 50% of Indians are sleeping 6 h or less, and other countries are also reporting a decline in sleep duration. Neurocognitive function decreases in a dose-dependent manner with chronic sleep deprivation, which can impair productivity at work and in daily functioning. Various brain functions involving the frontal lobe can be assessed objectively by conducting a neuropsychology test battery. Research and data in the Indian population are lacking. Hence, this project evaluated neurocognitive functions in a real-life sleep deprivation setting. The current study aimed to assess the effect of chronic sleep deprivation in the general public on various neuropsychology functions, mainly involving the prefrontal brain lobe. Volunteers were asked to wear an Actiwatch and/or to fill in a sleep diary for seven consecutive days. The neuropsychology test battery utilized included the Psychomotor Vigilance Task, Forward Digit Span, Iowa Gambling Task, Tower of London, Wisconsin Card Sorting Test, Stroop, and Rey Auditory Verbal Learning Test. Results show that chronic sleep deprivation has the most significant effect on the younger generation as compared to older adults. There was no significant effect on the elderly population. Future large cohort studies are underway to substantiate the findings of this study. Keywords: Chronic sleep deprivation · Excessive daytime sleepiness · Neuropsychology test battery
1 Introduction Sleep, as defined in the medical dictionary, "is a period of rest for the body and mind, during which volition and consciousness are in partial or complete abeyance and the bodily functions partially suspended". Sleep plays an essential role in maintaining optimal weight and better cognitive control, as well as reducing the risk of depression. Chronic sleep deprivation is common in modern society [1]. Working hours have increased drastically, thereby lowering the hours of sleep. Although the recommended amount of sleep in a 24 h period is 7–8 h, recent data show that almost 50% of Indians are sleeping
6 h or less, and other countries are also reporting a decline in sleep duration. Neurocognitive function decreases in a dose-dependent manner with chronic sleep deprivation, which can impair productivity at work and in daily functioning. Besides, there is a burden of sleep disorders, with insomnia and sleep apnea being the two most prominent conditions [2–13]. Sleep plays an important role in reducing cardiovascular risks and depression, maintaining optimal weight, and preserving optimal cognition. Sleep deprivation affects hand-eye coordination and response times. Sleep increases a person's ability to think correctly and reduces stress. The progressive approach to assess sleep deprivation should be an objective tool rather than the subjective questionnaires which have been utilized traditionally. The objective way to assess the effect of chronic sleep deprivation on various brain functions such as attention, planning, and memory is to conduct an actual neuropsychology test battery. Research and data in the Indian population are lacking. There is a need to evaluate neurocognitive functions in a real-life sleep deprivation setting. The current study aims to approach the chronic sleep deprivation effects in an objective way, through a neuropsychology test battery assessing mainly the prefrontal brain lobe functions. The outcome of the study will determine the incidence of chronic sleep deprivation and its daytime neurocognitive effects, which people generally ignore. The long-term goal of the study is to provide appropriate non-medication interventions to the sleep-deprived population.
2 Process A general population was recruited, through awareness camps for sleep disorders across the society, to participate in this study. Each participant was asked to fill in a sleep quiz and was administered the neuropsychology test battery. Demographic details were taken into account. 2.1 Approaching Healthy Adults The study was conducted on a healthy adult population of Thane, Mumbai and Alibaug. The study population was divided into different groups as per age, gender and education level. The age groups were 16–30, 31–50 and 50 above. Education level was noted as school educated and college educated. 2.2 Subject Inclusion and Exclusion Criteria The volunteer/participant should be a healthy adult, aged 16 or above. The volunteer should be willing to fill in the sleep diary and/or wear the Actiwatch, and thereby willing to devote at least 1 h of his/her time to perform the neuropsychology test battery. Volunteers with major psychological disorders, medical disorders (cancer, HIV, tuberculosis), or active seizures were excluded. Also, pregnant women were not eligible.
2.3 Meeting Subject Each participant was given a thorough explanation of the sleep quiz as well as all the neuropsychology tests, repeated many times if needed for proper understanding.
3 Screening Questionnaire The Epworth Sleepiness Scale (ESS) and a Sleep Quiz were filled in by all the participants, for determining daytime sleepiness and screening for sleep disorders/problems. 3.1 The Epworth Sleepiness Scale The ESS is a questionnaire that can be self-administered and has been proven to provide a valid measurement of sleep propensity [14]. The scale depends on recalling the chances of dozing off in particular situations. 3.2 Sleep Quiz It is a simple yes/no questionnaire including questions pertaining to various sleep disorders.
4 Screening Tools Screening tools include the Actiwatch and a sleep diary to obtain the average number of hours of a participant's sleep. The Actiwatch is more objective, while the sleep diary relies on a person's subjective report. 4.1 Actigraphy and Sleep Diary The volunteers were either given an Actiwatch for a minimum of one week, to study their sleep patterns in an actual situation, or a sleep diary to be filled in for a minimum of one week. Actiwatch scoring was done with the help of the Actiware software. Sleep diary scoring was done manually to calculate the average sleep hours in a week.
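As a minimal sketch of the scoring step (the diary values below are invented; the one-week window and the 6-hour cut-off used to form the two groups are taken from the case study in Sect. 8):

```python
# Average one week of diary-reported sleep durations and assign the volunteer
# to the sleep-deprived (< 6 h/night) or not-sleep-deprived group.
diary_hours = [5.5, 6.0, 5.0, 7.5, 5.0, 6.5, 5.5]   # invented nightly entries

average_sleep = sum(diary_hours) / len(diary_hours)
group = "sleep deprived" if average_sleep < 6.0 else "not sleep deprived"
print(f"average sleep = {average_sleep:.2f} h/night -> {group}")
```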
5 Administration of Neuropsychological Test Battery The neuropsychology test battery was conducted on all the participants. The battery tests were chosen based on earlier studies [15, 16] on sleep deprivation and considering the effects on the prefrontal lobe. The test battery includes 7 tests which are described in NIMHANS specifications [17] as follows:
5.1 Psychomotor Vigilance Test (PVT) PVT assesses sustained attention by measuring reaction time. The stimulus utilized is a visual one. The variables include omission and commission errors, reflecting fluctuations in endogenous cognitive conditions. The study used PEBL software version 0.13 for conducting PVT. The participant was seated comfortably; the level of a computer screen and subject head was kept parallel. The participants were asked to respond, as soon as possible, to a red circle appearing in the center of the screen, by pressing the “Space Bar” button. Each response was accompanied by displaying reaction time in milliseconds (ms). The variables used were Average Reaction time (ART), Number of lapses (RT > 500 ms), Number of too fast responses (Errors of Commission) and Number of sleep responses (RT > 30 s). The test duration was 10 min. 5.2 Forward Digit Span Test This is a standard digit span task. It had a visual presentation of number strings. The participant was asked to type the list of digits exactly in displayed order. Starting with a 4 digit number, the length increased on correct recall and decreased on incorrect recall. The average memory span was the only variable utilized here. The study used PEBL software version 0.13 for conducting the digit span test. The test duration was an average 7 min. 5.3 IOWA Gambling Task Four decks were given; the participant must draw cards from whatever deck they choose. On each selection, a reward and/penalty is given. When 100 cards have been selected, the task is complete. Two decks were net positive, two were net negative. The variable used in this case is the difference in the number of good decks selected from the bad decks. The average duration of this test was 5 min. 5.4 Wisconsin Card Sorting Task (WCST) It assesses executive function mainly involving concept formation, mental flexibility and abstract reasoning. The test consists of 128 cards. Stimuli: Color (Red, Green Yellow, Blue), form (triangle, star, cross, circle) and Shapes (Circle, Triangle, Cross and Star). The volunteer was instructed to match each successive card from the pack to one of the four stimulus cards. The participant has to guess the concept based on the computer’s feedback and continue selecting or matching cards. After the participant places 10 consequent cards correctly, the tester changes the concept without the subject’s knowledge. The subject’s capacity to perceive a change in the concept when the next sorting principle is introduced is a measure of the set-shifting ability. Cards placed according to the sorting principle are correct responses. Perservative response is one where a card is placed according to the previous principle. Errors that do not match the previous sorting principle in operation are non-preservative errors. The variables utilized are a number of trials to complete the first category, a number of Perservative errors and a number of categories finished. The test duration is 10 min.
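The PVT variables described in Sect. 5.1 can be derived directly from the recorded reaction times. In the sketch below the reaction times are invented; the 500 ms lapse threshold and the 30 s sleep-response threshold follow the definitions above, while the 100 ms cut-off used to flag too-fast (commission) responses is an assumption, since the exact criterion is not spelled out here.

```python
# Derive the PVT summary variables of Sect. 5.1 from a list of reaction times.
# Times are in milliseconds; the sample data are invented.
reaction_times_ms = [245, 310, 520, 180, 95, 430, 31000, 275, 610, 350]

lapses = sum(1 for rt in reaction_times_ms if rt > 500)              # RT > 500 ms
sleep_responses = sum(1 for rt in reaction_times_ms if rt > 30_000)  # RT > 30 s
commission_errors = sum(1 for rt in reaction_times_ms if rt < 100)   # assumed cut-off
valid = [rt for rt in reaction_times_ms if 100 <= rt <= 30_000]
average_rt = sum(valid) / len(valid)

print(f"ART = {average_rt:.0f} ms, lapses = {lapses}, "
      f"commission errors = {commission_errors}, sleep responses = {sleep_responses}")
```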
5.5 Tower of London Test (TOL) Planning is tested using TOL. The goal is to place a disks pile from their original configuration to that displayed on the top of the screen. There were a total of 12 problems. The variable here was the total number of problems solved with minimum moves. The test duration is 10 min. 5.6 Stroop Test It assesses Response Inhibition. The color names “Blue”, “Green”, “Red” and “Yellow” are printed in capital letters on a paper. The color of the print does not always match with the color designated by the word. The words were arranged in 16*11 figure. The subject was asked to read the text in the first trial and colors in the second trial. The difference in timings is taken as a stroop effect score. 5.7 Auditory Verbal Learning Test It consists of words designating familiar objects like vehicles, tools, animals and body parts. There are 2 lists A and B, with 15 different words in each list. The learning score forms by the correct recall of the number of words of List A and the total number of words recalled over five trials. The memory score was formed by the immediate recall, delayed recall trial and the recognition trial. Omissions and Commissions form the errors. The other score is the Long Term Percent Retention. The present study utilized the PEBL software [18, 20] to administer the above test battery on all participants.
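For the last two tests, a simplified scoring sketch is given below: the Stroop effect follows the timing-difference definition of Sect. 5.6, and the AVLT learning score is reduced here to the total number of List A words recalled over the five trials, which is a simplification of the full NIMHANS scoring conventions. All numbers are invented.

```python
# Simplified Stroop (Sect. 5.6) and AVLT learning (Sect. 5.7) scores.
word_reading_time_s = 48.0    # trial 1: reading the printed color words
color_naming_time_s = 83.5    # trial 2: naming the ink colors
stroop_effect_s = color_naming_time_s - word_reading_time_s   # difference in timings

avlt_trial_recalls = [6, 8, 10, 12, 13]     # correct List A words, trials 1 to 5
learning_score = sum(avlt_trial_recalls)    # total words recalled over five trials

print(f"Stroop effect = {stroop_effect_s:.1f} s, AVLT learning score = {learning_score}")
```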
6 Nocturnal Polysomnogram Nocturnal polysomnography, also called a sleep study, was conducted utilizing the Philips Respironics Alice 5 system. It is laboratory-based equipment. It involves recording the following parameters:
Electroencephalogram (EEG): brain activity
Electrooculogram (EOG): eye activity
Electrocardiogram/heart rate (ECG): cardiac activity
Chin electromyogram (EMG): sleep staging
Limb electromyogram: limb and muscle movement
Respiratory effort: thorax and abdomen movement
Airflow and nasal cannula: airflow movement
Pulse oximetry: oxygen desaturation and pulse
The variables utilized were sleep efficiency, Apnea-Hypopnea Index, nadir oxygen saturation and periodic limb movement index. The sleep study was done for a few volunteers to study their sleep patterns.
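Two of the polysomnogram variables named above have standard clinical definitions that are easy to compute; the sketch below uses those standard formulas with invented inputs, since individual recordings are not reported here.

```python
# Standard definitions: AHI = (apneas + hypopneas) / hours of sleep;
# sleep efficiency = total sleep time / time in bed * 100. Inputs are invented.
apneas, hypopneas = 12, 38
total_sleep_time_h = 6.5
time_in_bed_h = 7.8

ahi = (apneas + hypopneas) / total_sleep_time_h
sleep_efficiency = 100.0 * total_sleep_time_h / time_in_bed_h
print(f"AHI = {ahi:.1f} events/h, sleep efficiency = {sleep_efficiency:.1f} %")
```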
7 Productivity Scale The productivity measuring tools available to date are subjective in nature, based on questionnaires. Such questionnaire-based scales cannot be applied to a target population drawn from varied fields; for example, the productivity scale of a corporate officer cannot be compared with that of a professor. We did not find any appropriate objective productivity scale to be utilized in this study. However, we have shortlisted a productivity scale, viz. the Endicott Work Productivity Scale, which can be utilized for sleep deprivation studies in the future. The Endicott Work Productivity Scale (EWPS) is a sensitive measure of work productivity based on a self-report questionnaire. Behaviors or attitudes likely to reduce productivity are counted towards the total score. This scale is an easy-to-use, brief and sensitive measure for assessing the effects of various disorders on work performance and the efficacy of different therapeutic interventions.
8 A Case Study on the Indian Population A recent study evaluated the effects of chronic sleep deprivation among the Indian population by assessing neurocognitive functioning [11]. 365 volunteers were recruited for the study. The population was divided into three age groups: 16–30, 31–50 and above fifty. The detailed demographics are represented in Table 1. Every cohort volunteer was categorised into two groups, sleep deprived and not sleep deprived. The sleep-deprived group had average hours of sleep of less than 6 h/night over a week, whereas the not-sleep-deprived group had more than 6 h of sleep/night over a week, as ascertained from the Actiwatch and/or sleep diary.
Table 1. Demographics

Age group                      | 16–30  | 31–50 | >50  | Total
Population size (n)            | 120    | 144   | 101  | 365
Mean age                       | 22.8   | 40.5  | 55.6 | 39.6
Mean BMI                       | 21.79  | 26.33 | 26.8 | 24.9
Males                          | 56     | 70    | 74   | 200
Females                        | 64     | 74    | 27   | 165
Education - School             | 1      | 9     | 17   | 27
Education - College            | 119    | 135   | 84   | 338
Average sleep hours < 6        | 38     | 69    | 30   | 137
Average sleep hours > 6        | 82     | 75    | 71   | 228
Incidence of sleep deprivation | 31.66% | 47.91 | 29.7 | 37.5
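The incidence row of Table 1 is simply the ratio of sleep-deprived volunteers (average sleep < 6 h) to the cohort size; a quick check using the counts from Table 1:

```python
# Recompute the incidence of sleep deprivation in Table 1 from its raw counts.
cohorts = {"16-30": (38, 120), "31-50": (69, 144), ">50": (30, 101)}

for name, (deprived, n) in cohorts.items():
    print(f"{name}: {100 * deprived / n:.2f} %")
total_deprived = sum(d for d, _ in cohorts.values())
total_n = sum(n for _, n in cohorts.values())
print(f"overall: {100 * total_deprived / total_n:.2f} %")
```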
The incidence of sleep deprivation was highest in the 31–50 cohort, at 47.91%. The incidence rate of 31.66% in the 16–30 cohort is also exceptionally high, which indicates
chronic sleep deprivation amongst Indian youth, presumably because of changing lifestyle. In the 16–30 cohort, the average hours of sleep in the sleep-deprived group is 5.1, whereas in the other group it is seven hours on average. The subjective sleepiness scores are observed to be low across all age groups, which clearly depicts the state of denial of the Indian population regarding sleep deprivation and also a lack of awareness concerning the importance of sleep and its disorders. In an earlier study by J. C. Suri and group on the prevalence of sleep disorders in the Delhi population [12], a considerable proportion of the population under study was observed to be sleep deprived (sleep time < 8 h per day), and 29.3% of them slept for fewer than seven hours. About a quarter of the population (26%) that slept for fewer than eight hours per day had perceptible symptoms of excessive daytime sleepiness (EDS). Many factors which may be responsible for sleep deprivation include conditions like poor sleep hygiene, improper sleeping surroundings, sickness, work (shift work and frequent traveling), other sleep disorders (sleep apnea, PLMS and snoring), medications, personal choice, parenting of babies, etc. The neuropsychology test battery domains and variables are tabulated in Table 2.
Table 2. Neuropsychology test battery and its variables

Domain | Test | Variables
Sustained attention | Psychomotor Vigilance Task | Lapses, Commission Errors and Average Reaction Time
Memory span | Forward Digit Span Test | Average forward digit span score
Decision making | Iowa Gambling Task | Difference of good and bad deck selection
Executive function | Wisconsin Card Sorting Task | Perservative errors
Planning | Tower of London | Problems solved with minimum moves
Social cognition/Response inhibition | Stroop Test | Stroop percentile
Verbal learning | Rey Auditory Verbal Learning Task | Immediate Recall percentile and Delayed Recall Percentile
For the younger age group, as shown in Table 3, all the domains were found to be affected by chronic sleep deprivation of less than 6 h. The middle age group participants were affected in the reaction time and memory span aspects. The elderly age group scores were mostly indifferent in all the domains except for executive function and planning. These results can be due to age-related poor baseline cognition. However, the above results need to be confirmed by doing a large cohort study. Chronic sleep deprivation has deteriorating effects on memory span, decision making, cognitive performance, planning, social cognition and verbal learning.
Table 3. Affected brain functions

Sr. No. | Function | Test | Results, 16–30 age group | 31–50 age group | 51–70 age group
1 | Sustained attention | Psychomotor Vigilance Test | Affected; higher average reaction time | Affected; higher average reaction time | Indifferent
2 | Working memory | Digit Span Test | Affected; lower memory span | Affected; lower memory span | Indifferent
3 | Decision making | IOWA Gambling test | Affected; lower score | Not affected | Indifferent
4 | Executive functioning | Wisconsin Card Sorting Test | Affected; lesser categories achieved, increased number of trials to achieve first category and increased Perservative errors | Not affected | Indifferent
5 | Planning | Tower of London | Affected; decreased number of problems solved | Not affected | Indifferent
6 | Response inhibition/social cognition | Stroop test | Affected; lowered percentile scores | Affected; lowered percentile scores | Indifferent
7 | Verbal memory | Rey auditory verbal learning test | Affected; all percentiles are lowered | Affected; all percentiles are lowered | Indifferent
However, for sustained attention the effects were observed only in the mean values and were not statistically significant. The baseline scores for sustained attention were extremely low, presumably because of numerous reasons, the foremost common being chronic sleep deprivation that is not acknowledged by the person. The average number of sleep hours in the 31–50 cohort was 6.79 and 5.28 in the not-sleep-deprived and sleep-deprived groups, respectively. Average reaction time, memory span, social cognition and verbal learning were considerably affected by chronic sleep deprivation. There was no difference between the sleep-deprived and not-sleep-deprived groups in the oldest cohort of the population, except in the number of perservative errors in the Wisconsin card sorting task and the number of problems solved with minimum moves in the Tower of London. The scores were, however, very low in the sleep-deprived group, which indicates poor baseline cognition. Earlier studies have reported sleep deprivation effects more among the younger population than the old; this study confirms the same.
8.1 Utilization of Neuropsychology Test Battery Most of the earlier studies on sleep deprivation have centered on the sustained attention task of the psychomotor vigilance test. This study utilizes a battery of neuropsychology tests to assess different frontal lobe functions such as set-shifting ability, decision making, planning and memory. These functions have real-world applications which, if deteriorated, can unbalance the social, mental, economic and health status of a person. The disadvantage with these tests is the lack of understanding of their administration; for example, in the Wisconsin card sorting test, individuals were unable to grasp the initial conceptualization. Some modification is warranted to administer these tests in the Indian population.
9 Conclusion Sleep deprivation has become one of the foremost important, unrecognized public health problems of the contemporary world. Lack of sleep is commonly chronic, because of the excessive social and work demands of modern lifestyle combined with poor sleep habits and sleep disorders, which usually go unrecognized. The present study aimed to evaluate the effect of chronic sleep deprivation among the general public on various neuropsychology functions, principally involving the frontal brain lobe. This analysis hypothesized the incidence of chronic sleep deprivation to be very high, as well as effects of the same on various brain domains across all age groups. For the younger cohort, all the domains were found to be affected by chronic sleep deprivation of less than six hours. The middle cohort participants were affected in the reaction time and memory span aspects. The older cohort scores were mostly indifferent in all the domains apart from executive function and planning. These results may be due to age-related poor baseline cognition. However, the results need to be confirmed by doing a large cohort study. This project was an endeavor to study chronic sleep deprivation among the Indian population, for the first time in real-world settings, by conducting neuropsychology tests. Earlier studies have principally been done by subjective questionnaires; this is one of its kind based on objective measures. Timely measures ought to be undertaken to curtail this neurocognitive lowering in the young generation. The foremost vital step towards this goal is creating population awareness of sleep disorders, as well as developing efficient ways to screen for unknown sleep disorders. Also, appropriate manpower and sleep science education supplemented in the medical curriculum would be useful to handle the issue of sleep deprivation, which has an impact on the social, mental, physical and economic health of the society. Ethical Approval. All procedures performed in this study involving human participants were in accordance with the ethical standards of University of Mumbai, the institute and the research committee of Department of Life Sciences where the research was carried out as part of the Ph.D work. Further, informed consent was obtained from all individual participants included in the study.
References 1. Shukla S.: “Waking up to sleep Therapy” Express Healthcare, June 2010 2. Devnani, P., Bhalerao, N.: Assessment of sleepiness and sleep debt in adolescent popultion in urban western India. Indian J. Sleep Med. 6(4), 140–143 (2011) 3. Rajaratnam, S.M.W., Arendt, J.: Health in a 24-h society. Lancet 358, 999–1005 (2001) 4. Chokroverty, S.: Overview of sleep and sleep disorders. Indian J. Med. Res. 131, 126–140 (2010) 5. Shah, N., Bang, A., Bhagat, A.: Indian research on sleep disorders. Indian J. Psychiatry 52, 255–259 (2010) 6. Shaikh, W., Patel, M., Singh, S.K.: Sleep deprivation predisposes Gujurati Indian adolescents to obesity. Indian J. Community Med. 34(3), 192–194 (2009) 7. Sharma, H., Sharma, S.K.: Overview and implications of obstructive sleep apnoea. Indian J. Chest Dis. Allied Sci. 50, 137–150 (2008) 8. Sharma, H., et al.: Pattern & correlates of neurocognitive dysfunction in Asian Indian adults with severe obstructive sleep apnea. Indian J. Med. Res. 132, 409–414 (2010) 9. Sharma, S.K., Ahluwalia, G.: Epidemiology of adult obstructive sleep apnoea. Indian J. Med. Res. 131, 171–175 (2010) 10. Surendra, S.: Wake-up call for sleep disorders in developing countries. Indian J. Med. Res. 131, 115–118 (2010) 11. Ramrakhiyani, V., Sanjay, D.: Study of the incidence and impact of chronic sleep deprivation in Indian population with special emphasis on neuropsychology testing. Indian J. Sleep Med. 14(2), 23–28 (2019) 12. Suri, J.C., Sen, M.K., Adhikari, T.: Epidemiology of sleep disorders in the adult population of Delhi: a questionnaire based study. Indian J. Sleep Med. 3(4), 128–137 (2008) 13. Udwadia, Z.F., et al.: Prevalence of sleep-disordered breathing and sleep apnea in middle-aged urban Indian men. Am. J. Respir. Crit. Care Med. 169, 168–173 (2004) 14. Balkin, T., Rupp, T., Picchioni, D., Wesenten, N.J.: Sleep loss and sleepiness: current issues. Chest 134, 653–660 (2008) 15. Murray, J.: New method for measuring daytime sleepiness: the Epworth sleepiness scale. Sleep 14(6), 540–545 (1991) 16. Banks, S., Dinges, D.: Behavioral and physiological consequences of sleep restriction. J. Clin. Sleep Med. 3(5), 519–528 (2007) 17. Durmer, J.S., Dinges, D.F.: Neurocogntive consequences of sleep deprivation. Semin. Neurol. 25(1), 117–129 (2005) 18. Rao, S.: Neuropyschology Test Battery Manual, National Institute of Mental Health Sciences (NIMHANS), Bangalore (2004) 19. Mueller, S.T., Piper, B.J.: The psychology experiment building language (PEBL) and PEBL test battery. J. Neurosci. Methods 222, 250–259 (2014) 20. Ulrich III, N.J.: Cognitive performance as a function of patterns of sleep, PEBL Technical report 2012-06 (2012)
Semantic-Based Process Mining: A Conceptual Model Analysis and Framework Kingsley Okoye1,2(B) 1 Tecnologico de Monterrey, Writing Lab, TecLabs, Vicerrectoría de Investigación y
Transferencia de Tecnología, 64849 Monterrey, NL, Mexico [email protected] 2 School of Architecture Computing and Engineering, College of Arts Technologies and Innovation, University of East London, London, UK
Abstract. Semantics has been a major challenge when applying the Process Mining (PM) technique to real-time business processes. In theory, efforts to bridge the semantic gap have given rise to the advanced notion of Semantic-based Process Mining (SPM). The SPM devotes its methods to the idea of making use of existing semantic technologies to support the analysis of PM techniques. Technically, the semantic-based process mining is applied through acquisition and representation of abstract knowledge about the domain processes in question. To this effect, this paper demonstrates how semantically focused process modelling and reasoning methods are used to improve the outcomes of PM techniques from the syntactic to a more conceptual level. Also, the work systematically reviews the current tools and methods that are used to support the outcomes of the process mining and, to this end, proposes an SPM-based framework that proves to be more intelligent with a higher level of semantic reasoning aptitude. In other words, this work provides a process mining approach that uses information (semantics) about the different activities that can be found in any given process to generate rules and patterns through the method of annotation, conceptual assertions, and reasoning. Moreover, this is done to determine how the various activities that make up the said processes depend on each other or are performed in reality. In turn, the method is applied to enrich the informative values of the resultant models. Keywords: Semantic modelling · Process mining · Annotation · Events log · Ontologies · Semantic reasoning · Process models
1 Introduction Analyzing the large volumes of datasets derived from the real-time (e.g. business) processes or information systems has raised intense debate within the research and industrial community. Perhaps, the challenges have been related to the unprecedented need for process mining methods that are capable of supporting the recent and rapid shift from big data to big analysis. This means that there is now a growing need for data engineering tools that are focused on not just the collection, organization, and analysis of the big
datasets that are constantly being generated in today's information systems, but also intelligent methods that could be used to understand and identify the abstract information that is contained in the different datasets. In theory, a typical example of the many areas in which such process-related research and/or search for insightful analysis has been applied is the process mining (PM) field [1]. Consequently, this paper shows that the value (useful insights) of data could be realized through the provision of an understandable and effective method for exploration (analysis) of the readily available data at a more conceptual level. For instance, the newly discovered insights or information could be used to understand correlations (associations) between the process instances and the actual business process in general [2, 3]. Thus, the main purpose of this work is to analyze the events data logs in terms of concepts (abstraction view) rather than focusing on labels (tags) in the events log about the processes [4, 5]. Perhaps, the common problem with the PM methods has been the technical focus of the available datasets (event logs), whereby a majority of the techniques rely on the tags (labels) in the event logs to produce the models or mappings. Moreover, semantic technologies such as ontologies have been shown to be among the least applied methods within the wider context of the PM field [6]. Therefore, the process mining methods appear to be somewhat limited when confronted with unstructured data. Consequently, most of the discovered models tend to support just machine-readable systems rather than machine-understandable systems at large. Perhaps, by machine-understandable systems, we refer to systems that aim to inclusively process the information that they contain or support. In other words, the input and output parameters/values are both (i) understandable by humans, and (ii) understandable by machines. Technically, with such a type of consideration, the captured events log and models are either semantically labelled (annotated) to ease the analysis process, or defined in a structured format (e.g. ontology) which allows a computer (the reasoning engine) to automatically compute newly or previously undiscovered information by making references to the pre-defined metadata or libraries. In summary, the paper draws its main contributions to knowledge from the following aspects: • Definition of a semantic-based process mining framework that uses metadata (conceptual information) about the datasets and models to support the PM analysis, and in turn, obtains more accurate results that are closer to human understanding. • Technique for semantic annotation and analysis of the process models towards the development of PM methods that exhibit a high level of semantic reasoning capabilities. • Systematic review/analysis of the current methods which support the semantic-based process mining. The rest of this paper is structured as follows: Sect. 2 discusses different technologies which are applicable to the process mining and semantic modelling approaches. In Sect. 3, the work provides a thematic analysis of the existing tools, algorithms, and methods which support the semantic-based process mining. Section 4 presents the proposed SPM-based framework, and how the work applies the method to analyze the real-time process and data based on concepts rather than event tags or labels about the processes.
The experimentation using a case study example of data about a real-time business process is presented in Sect. 5. Section 6 discusses the results and implications of applying the SPM-based framework, while the conclusion and directions for future work are provided in Sect. 7.
2 Preliminaries 2.1 Semantic Web Search Technologies (SWS) SWS technologies are used to define methods/tools which tend to integrate the concept of Information Extraction (IE) [7] and Information Retrieval (IR) [8] to find meaningful information (e.g. files, corpus) from large collections of data (databases, web pages, etc.), and then returns the outputs to the users (search initiator) based on the pre-specified information need. In theory, SWS simply means finding a set(s) of text or information that are pertinent to the users’ query [9]. Moreover, Cunningham [10] notes that the SWS technologies target to add a machine tractable and/or repurposable layer of annotations that are relative to ontologies. For instance, in terms of web mining, the method could be applied by creating semantically annotated terms that links the resultant web pages to an ontology. In turn, the process (web search) turns out to be an automatic or semi-automatic one owing to the formal design, development, and interrelation of the ontologies. Likewise, the work in this paper provides a method that is useful towards improving information values of the different datasets (events logs) and model through provision of semantically annotated terms that links to concepts defined in an ontology. Accordingly, the work of [10] notes that a typical illustration of the SWS in practice is the KIM (Knowledge and Information Management system) [11]. KIM offers a type of IE-based facility for creating metadata, storage, and semantically enriched web browsing or search mechanism. Equally, many other tools exist in the literature that supports the SWS. For example, SemTag system [12], Magpie [13] an add-on for the browser that functions by using ontologies to provide precise or tailored perspectives about web pages which the users might be interested in or wishes to browse. Nonetheless, OWL (web ontology language) [14, 15] has emerged as the standard format for defining SWS-based tools and has since been accepted and widely used for logical structuring of information (e.g. conceptual modelling) and/or knowledge engineering. Typically, OWL has proved useful in enriching datasets or depiction of inference rules (as illustrated in this paper in Sect. 4) to support the automatic assertions or reasoning of semantic models at a more conceptual level. Besides, as a set(s) of semantically annotated terms, the resulting ontologies are used to support abstract information extraction particularly allied to Ontology-based Information Extraction (OBIE) systems [16]. 2.2 Data Labelling and Linking (Integration) Indeed, semantic web technologies have matured over the last few decades. One of the many areas in which the technology has experienced substantial advancement is the Linked Open Data (LOD) cloud [17, 18]. LOD consists of a number of machine-readable data (e.g. the RDF triples) that are useful when describing classes and the underlying
properties. Poggi et al. [18] note that in some LOD applications it is difficult to understand the ontology alignments between multiple datasets. To resolve the identified problem, the work [18] introduces a domain-independent framework which decreases the heterogeneity in ontologies in terms of the linked datasets, retrieves core entities or objects, and automatically enriches the underlying ontology by adding the domain, range, and annotations. Besides, another problem with LOD is that the envisaged datasets are largely categorized according to domains and interlinked mainly with owl:sameAs (a built-in OWL property), with limited use of some other descriptive properties (such as owl:equivalentClass and owl:equivalentProperty) which are more useful for linking equivalent classes and their properties. However, a very distinctive property of ontologies (especially OWL) that makes the technology capable of reasoning is the types of relations that exist across the different concepts (e.g. functional, inverse functional, transitive, symmetric, asymmetric, reflexive and irreflexive) [15]. Interestingly, reference [19] measures the different perspectives of similar objects within a specified domain (IT benchmarking) by creating an ontology-based formalization/integration of all relevant properties, attributes, and elements using expressive functionalities of the OWL [14] and logical reasoning [20, 21]. Likewise, this paper applies descriptive languages such as OWL [14], SWRL (semantic web rule language) [20], and DL queries (description logics) [21] to propose an SPM-based method for real-time process modelling, descriptions (e.g. ontology-based) and formalization of meaning of the different process instances/entities. Perhaps, this process is achieved by allowing attributes or labels about the different entities (process instances) to be enriched through the metadata creation or data labelling. Thus, semantic-based annotations using ontological schema/vocabularies.
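As a small illustration of the linking properties discussed above, the sketch below uses the rdflib Python library with made-up IRIs (none of these names come from this work): owl:sameAs links two individuals that denote the same real-world object, while owl:equivalentClass and owl:equivalentProperty are the properties suited to aligning classes and properties across datasets.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

EX1 = Namespace("http://example.org/logA#")   # hypothetical vocabularies
EX2 = Namespace("http://example.org/logB#")

g = Graph()
# Two individuals denoting the same real-world case: owl:sameAs.
g.add((EX1.case_017, OWL.sameAs, EX2.trace_17))
# Aligning a class and a property across the two vocabularies:
g.add((EX1.Activity, OWL.equivalentClass, EX2.Task))
g.add((EX1.performedBy, OWL.equivalentProperty, EX2.executedBy))

print(g.serialize(format="turtle"))
```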
3 Systematic Mapping Study: Main Application Areas of the Process Mining and Semantic Modelling Tools/Methods The work methodically presents in Table 1 the related studies that are pertinent to the method and proposals of this paper. Essentially, the systematic analysis considers the tools and methods which cover either or both the process mining and semantic modelling fields as closely related to the proposed SPM-based method of this paper. As gathered in the table (Table 1), a number of works have been done within the process mining field, including the several tools and methods that are used to support the technique in real-world settings. Perhaps, the development of the semantic process mining tools/methods which are focused on supporting the real-time process modelling and analysis has emerged due to challenges associated with the existing PM methods. Besides, there has been a significant improvement in the process mining techniques over the years. This includes analysis and presentation of the process mining results at an abstraction level that is closer to human understanding. However, early research in this area has shown that conceptual analysis of the input datasets and models is only possible if the process scientists and/or analysts take the additional step of providing the real (semantic) information or knowledge that describes the said processes in question. Thus, the emergence of the semantic-based process mining methods.
Table 1. Systematic mapping study of the process mining and semantic modelling methods

Main contribution area: Semantic Web Search, OBIE, Information Extraction (IE), IR (Information Retrieval), Process Mining, and Database Management
Main tools/Instrument: XESame, ProMimport, eXtensible Event Streams (XES), OBDA (Ontology-Based Data Access) model, OWL, Balanced Distance Metric (BDM), Learning Accuracy (LA), KIM (Knowledge and Information Management system), SemTag system, Magpie
Studies: Calvanese, et al. [3], Yankova, et al. [22], Cunningham [10], Calvanese, et al. [7], Maynard, et al. [23]

Main contribution area: Process Querying, Trace Abstraction, Classifications, Ontology for BPM, BI, Semantic Modelling and Annotations
Main tools/Instrument: Ontologies, semantic annotations, Process Models, process querying tools, reasoners
Studies: Polyvyanyy, et al. [24, 25], Montani, et al. [26], De Giacomo et al. [27]

Main contribution area: Educational process mining (EPM), Sequential Pattern mining, Intentional Mining, and Graph mining
Main tools/Instrument: Process discovery, Conformance checking, Dotted Chart and Social Network Analysis, MOOCs (Massive Open Online Courses), LMS, Hypermedia LE's, Curriculum Mining, Computer-Supported Collaborative Learning, Software Repositories, etc.
Studies: Bogarín, et al. [28]

Main contribution area: Process discovery, IR, Semantics particularly ontologies, Learning Process Automation, EPM
Main tools/Instrument: Ontology abstract filter plugin (ProM), SA-MXML (Semantic Annotated Mining eXtensible Markup Language), Heuristic miner
Studies: Cairns, et al. [29]

Main contribution area: Semantic Process Mining (SPM), Events Logs, Annotation, Ontologies
Main tools/Instrument: Conformance Analysis plugin (LTL Checker in ProM), Semantic Reasoning, Semantic Annotation, LTL formulas and Template
Studies: de Medeiros et al. [4]

Main contribution area: Classification, Clustering techniques, Data mining (DM), Ontology, Classes, and Concepts
Main tools/Instrument: Data Mining (DM) techniques, Semantic labelling, Process Description logics, and multiple Classifiers
Studies: Han, et al. [30], d'Amato, et al. [31], Elhebir and Abraham [32]

Main contribution area: Fuzzy Logic, Fuzzy Sets, Fusion Theory, Fuzzy Mining and Reasoning, Standard percent of Classification
Main tools/Instrument: Extended Bayesian classifiers, Generalized Minimum-based (G-Min) algorithm, OWL Schema and declaration sentences Classes, Datatype Properties, Functional Properties, and Individuals or process instances
Studies: Baati, et al. [33, 34], Zadeh [35], Peña-Ayala and Sossa [36]

Main contribution area: Intelligent and Adaptive Educational learning systems (IAELS), AI's, User Modelling, Content Representation, Case studies Application
Main tools/Instrument: Computer-based systems and BI tools, WFM (Workflow management system), Learner Models, OWL, and property descriptions
Studies: Peña-Ayala [37]

Main contribution area: Process Mining, BI, BPM, BAM, CPM (Corporate Performance Management), PAIS (Process-Aware Information Systems)
Main tools/Instrument: Process Mining tools - PROM, Disco. Data mining techniques - Clustering, Regression, Classification, Association Rule Learning, Predictive Analytics, WFM systems, BI tools e.g. SAP, WebFOCUS, SQL Server, TIBCO, Pentaho, Tableau, etc.
Studies: Van der Aalst [1], de Leoni, et al. [38], de Leoni, et al. [39], Van Dongen, et al. [40], Ingvaldsen [9]

Main contribution area: Process Mining, Semantic Modelling, Input Model and Events Log Annotation, Ontologies, Semantic Reasoning, Fuzzy Mining, BPMN notation, SPM, etc.
Main tools/Instrument: PM tools/algorithms - Disco and PROM; Process Modelling tools - Bizagi Modeller, Fuzzy and BPMN Models, Ontologies and Process description Languages - OWL, SWRL, Description Logics (DL), Reasoners e.g. Pellet, OWL API
Studies: Okoye, et al. [2, 5, 41–44]
4 SPM-Based Framework and Design
The work in this paper shows that quality augmentation of the PM methods and the resultant models is achieved by applying data analysis techniques that combine those systems with three main components or building blocks, namely: (i) Semantic Labelling (Annotation), (ii) Semantic Representation (Ontology), and (iii) Semantic Reasoning (Reasoner) [5, 44]. Technically, the SPM-based method utilizes the benefits of
the rich semantics (annotations) [24, 25, 27] contained in the event logs and models about any given process (e.g. a business process), and links those properties to concept(s) in ontologies in order to allow for the extraction of useful patterns through semantic reasoning capabilities, as illustrated in Fig. 1.
Fig. 1. Conceptual framework of the SPM-based approach.
As illustrated in Fig. 1, the SPM-based framework is designed to show how the input datasets or models are extracted, prepared, and transformed into minable formats that allow for a more conceptual information retrieval/analysis. Moreover, on the one hand, the proposed framework can easily be applied to analyze any given process domain of interest, provided the readily available data contains the minimum requirements for process mining. On the other hand, the generalization of the framework and its implementation ensures the repeatability of the experiments and supports traceability. Fundamentally, the framework consists of the following main components/phases (a minimal sketch follows the list):
1. Model extraction from the event data logs, where the discovered models are described as sets of annotated properties that link to defined terms in an ontology.
2. Ontological classification and representation that allow for inferring the meaning (concept assertions) of the labels/attributes within the model.
3. Automated reasoning (inference engine) designed to support the computation and classification of the different entities in the model and, in turn, to present the underlying inferred class hierarchies or taxonomies.
4. Conceptual referencing, retrieval, and extraction, which allows for the automatic discovery of new knowledge (information) about the individual concepts, including the relationships that exist between the different process instances.
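As a rough illustration of these four phases, the following Python sketch is a minimal, hypothetical example only; it is not the author's implementation (which uses OWL ontologies, the OWL API, and the Pellet reasoner). Here, activity labels from an event log are annotated with concepts from a toy ontology, a simple is-a hierarchy stands in for the ontological representation, and a transitive subsumption check stands in for the reasoner.

```python
# Minimal, hypothetical sketch of the four SPM phases (annotation, representation,
# reasoning, retrieval). All names and data are invented for illustration; the
# actual method relies on OWL ontologies and a description-logic reasoner (Pellet).

# Phase 1: event log with annotated activity labels linking to ontology terms
event_log = [
    {"case_id": "c1", "activity": "submit_assessment", "concept": "AssessmentActivity"},
    {"case_id": "c1", "activity": "grade_assessment",  "concept": "GradingActivity"},
    {"case_id": "c2", "activity": "enrol_course",      "concept": "EnrolmentActivity"},
]

# Phase 2: toy ontology as an is-a (subclass -> superclass) hierarchy
is_a = {
    "AssessmentActivity": "LearningActivity",
    "GradingActivity": "LearningActivity",
    "EnrolmentActivity": "AdministrativeActivity",
    "LearningActivity": "Activity",
    "AdministrativeActivity": "Activity",
}

# Phase 3: "reasoning" here is simply a transitive subsumption check
def is_instance_of(concept, target):
    while concept is not None:
        if concept == target:
            return True
        concept = is_a.get(concept)
    return False

# Phase 4: conceptual retrieval - find all cases containing a LearningActivity
learning_cases = {e["case_id"] for e in event_log
                  if is_instance_of(e["concept"], "LearningActivity")}
print(learning_cases)  # {'c1'}
```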
In short, automated computation (reasoning) over the concepts and the different relationships that exist within the event logs or models is made possible by the well-defined ontologies (formal definitions) and annotations (semantic labels). Therefore, with the SPM-based framework, valuable information (semantics) about the different activities and how they are associated within the existing knowledge base becomes available and is essential for extracting models capable of producing new knowledge. Moreover, automatic reasoning by way of referencing concepts in the ontologies (i) provides a robust way to answer questions regarding the relationships the process instances share between themselves in the model, and (ii) supports a more conceptual analysis capable of providing real-world answers that are closer to human understanding.
5 Experimentation Analysis and Results
In order to test how the different components of the SPM-based framework fit together and whether the framework is capable of analyzing the event logs and models at a more conceptual level, the work makes use of the event data from a real-time business process in [45] to illustrate the SPM method [5]. Essentially, the implementation is performed in order to weigh up the capability of the SPM-based framework to produce a more accurate classification of the individual traces that constitute the derived models. The datasets used for the experimentation include a training set and a test set that were used to discover the models and for cross-validation purposes. Consequently, the resulting method (the semantic-based fuzzy mining approach) [5] allows the meaning of the process elements to be enhanced through the use of property descriptions and/or semantic assertions (e.g. Class_assertions, Object_property_assertions, and Data_property_assertions) that support the automatic classification of discoverable entities (taxonomies). In so doing, the method was able to generate inferred knowledge which is used to discover useful patterns (traces) in the models by means of the conceptualization method of analysis. Practically, the work implements the semantic fuzzy mining approach using the OWL API (Web Ontology Language Application Programming Interface) [46] for the extraction and loading of the inferred concepts and process parameters. We also use the semantic reasoner (Pellet) [47] to perform all the logical inferencing and classification of the different concepts. Clearly, the purpose of performing the automated inferences or concept classification is to match the questions one would like to answer about the different attributes and/or relationships the process instances share within the knowledge base, by linking the different entities or properties (classes, object and data properties, etc.) in the model with the concepts that they represent in the ontologies. Technically, for each ontology, all concepts were considered in turn using the reasoner (Pellet) and checked for consistency by referencing the process parameters. Based on the behavioural features of the provided datasets [45], which contain in each test log 10 traces that are considered allowed (true positives) and 10 other traces that are considered disallowed (true negatives) [45], a cross-validation method was performed with the goal of overcoming the variability in the composition of the different datasets. The traces were computed and recorded based on the reasoner's response in order to further weigh its performance with respect to determining the correctly classified traces. In other words, for each result of the classification process, the replayable (true positives)
and non-replayable (true negatives) traces were learned. The results of the experiments were recorded according to whether the specified trace was classified as a true positive (TP), false negative (FN), false positive (FP), or true negative (TN). Thus, the following performance metrics are utilized to determine the accuracy of the classification process [1, 48], whereby:
• TP denotes the true positives, i.e. traces that were correctly classified as positive;
• FN denotes the false negatives, i.e. traces that were predicted to be negative but ought to have been classified as positive;
• FP denotes the false positives, i.e. traces that were predicted to be positive but ought to have been classified as negative;
• TN denotes the true negatives, i.e. traces that were correctly classified as negative.
In general, the work notes from the outcomes of the experiment that, for every run set of parameters, the commission error, i.e. the false positive (FP) and false negative (FN) values, was null and thus equal to zero (FP + FN = 0) [7]. Clearly, this means that the classifier (reasoner) did not make unnecessary run-time mistakes, for example, settings where a trace is deemed to be an instance of a particular class whereas it really is an instance of another class. Moreover, the work also notes that the trace accuracy rate was very high, i.e., for the true positive (TP) and true negative (TN) values, and this was consistently observed for all the input parameters or test sets. Consequently, the experimental outcomes show that the SPM-based framework and the resulting semantic fuzzy mining approach exhibit a high level of accuracy when applied to classify the various process elements.
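For concreteness, the sketch below shows how such counts are typically turned into standard classification metrics (accuracy, precision, recall). The counts used here are invented for illustration only and are not the figures reported in the experiment.

```python
# Hypothetical confusion-matrix counts for one test log (illustrative only).
tp, tn, fp, fn = 10, 10, 0, 0  # e.g. all 20 traces classified correctly

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall    = tp / (tp + fn) if (tp + fn) else 0.0

# A zero commission error (fp + fn == 0), as observed in the experiment,
# implies that accuracy, precision, and recall are all equal to 1.0.
print(accuracy, precision, recall)  # 1.0 1.0 1.0
```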
6 Discussion
The semantic-based process mining methods appear to be one of the most closely related and promising areas that could be explored to resolve the challenges with the process mining techniques. To do this, the envisioned systems/methods should involve the process of extracting streamlined models from the recorded event logs that fit or represent the actual processes as performed in reality. In other words, to benefit from the real semantics behind the event logs and attributes (tags or labels), the SPM, which enforces PM and its analysis at the conceptual level, has to be employed. For this purpose, this work has introduced the SPM-based framework, which is directed towards the conceptual discovery and enhancement of process models. Essentially, the paper demonstrates how event logs from any specified process or domain are extracted, semantically prepared, and transformed into minable formats to support the discovery, monitoring and improvement of the processes through a semantically motivated (conceptual) mechanism of analysis. Therefore, the semantic-based approach supports and allows for the analysis of the event logs based on concepts rather than the event tags or labels about the process. For instance, the semantic-based design of the method allows the meaning of the different objects/data types in the model to be enhanced by making use of the property description languages (e.g., OWL, SWRL, DL Queries) to support the automatic representation, classification, or manipulation of the various entities. The method also proves to be useful by generating
inferred knowledge which is then utilized to determine the different patterns and/or improve the analysis of the derived models based on the domain (semantic) concepts. In summary, the analysis and experimentation of this paper is, on the one hand, concerned with how the semantic-based method is utilized to support the automatic generation and population of the ontologies. On the other hand, the cross-validation process (considering the performance metrics outlined in Sect. 5) is specifically centered on the capability of the system to analyze and determine similarities between the different concepts defined in the ontologies. Moreover, owing to the fact that the models are defined using process description languages, logic, and queries, the defined ontologies allow for semantic-based analysis and can measure the similarities of the different concepts. Indeed, this is achieved by referencing the metadata assertions or annotations (i.e. the process descriptions). In consequence, the process of determining or ascertaining the similarities (semantic information) amongst the different entities proves to be more effective than the traditional structure-based measures [23]. Besides, the SPM-based approach provides a more flexible analysis for task-specific processes or systems which are aware of the various processes they are used to support: that is, machine-understandable systems.
7 Conclusion
This paper shows that the process-related analysis often allied to process mining techniques spans not only the need for methods that can extract valuable information from the event logs, but also the necessity of designing novel approaches that can be utilized to perform a more abstract (conceptual) reasoning about the different processes in question. On the one hand, PM has become a very useful technique that supports process-related analysis or information exploration, whereby useful information on how the different activities within the processes depend on each other is made available. On the other hand, there remains the issue of semantic (abstraction) analysis of the resultant models, which the standard PM methods seem to lack. To this effect, this paper has shown that a combination of the PM methods with semantic technologies is more effective for extracting models capable of producing new and/or previously undiscovered information within the processes or models. Consequently, the paper introduces the SPM-based framework which integrates the main tools (i.e. annotated event logs/models, ontologies and a semantic reasoner) to analyze the models. As a result, the method proves useful for the discovery and enhancement of the sets of behaviours (patterns) that can be found within the process domains. In other words, the paper supposes that a system which is formally encoded with semantic labelling (annotation), semantic representation (ontologies) and semantic reasoning (reasoner) has the ability to lift the PM results from the syntactic to a more conceptual level. Future works could adopt the proposed SPM-based method to analyze data from any given process domain or setting. This may also involve refinement of the semantic-based approach developed in this paper, given that semantic process mining is a young field within the broader context of PM and there are not many algorithms or tools in the literature that support the method.
Acknowledgment. The authors would like to acknowledge the technical and financial support of Writing Lab, TecLabs, Tecnologico de Monterrey, in the publication of this work.
References 1. Van der Aalst, W.M.P.: Process Mining: Data Science in Action, 2nd edn. Springer, Heildelberg (2016) 2. Okoye, K., Islam, S., Naeem, U., Sharif, M.S., Azam, M.A., Karami, A.: The application of a semantic-based process mining framework on a learning process domain. In: Arai, K., Kapoor, S., Bhatia, R. (eds.) IntelliSys 2018. AISC, vol. 868, pp. 1381–1403. Springer, Cham (2019) 3. Calvanese, D., Kalayci, T.E., Montali, M., Tinella, S.: Ontology-based data access for extracting event logs from legacy data: the onprom tool and methodology. In: Abramowicz, W. (eds.) Business Information Systems. BIS 2017. LNBIP, vol 288, pp. 220–236. Springer, Cham (2017) 4. de Medeiros, A., van der Aalst, W.M.P., Pedrinaci, C.: Semantic process mining tools: core building blocks. In: ECIS, Galway, Ireland, June 2008, pp. 1953–1964 (2008) 5. Okoye, K., Naeem, U., Islam, S.: Semantic fuzzy mining: enhancement of process models and event logs analysis from Syntactic to Conceptual Level. Int. J. Hybrid Intell. Syst. (IJHIS) 14(1–2), 67–98 (2017) 6. Garcia, C.D.S., Meincheim, A., Junior, E.R.F., Dallagassa, M.R., Sato, D.M.V., Carvalho, D.R., Santos, E.A.P., Scalabrin, E.E.: Process mining techniques and applications – a systematic mapping study. Expert Syst. Appl. 133, 260–295 (2019) 7. Calvanese, D., Montali, M., Syamsiyah, A., van der Aalst, W.M.P.: Ontology-driven extraction of event logs from relational databases. In: Reichert, M., Reijers, H.A. (eds.) BPM 2015. LNBIP, vol. 256, pp. 140–153. Springer, Cham (2016) 8. Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008) 9. Ingvaldsen, J.E.: Semantic process mining of enterprise transaction data, Ph.D. thesis Norwegian University of Science and Technology, Norway (2011) 10. Cunningham, H.: Information Extraction, Automatic. University of Sheffield, Sheffield, UK (2005) 11. Popov, B., Kiryakov, A., Kirilov, A., Manov, D., Ognyanoff, D., Goranov, M.: KIM - semantic annotation platform. J. Nat. Lang. Eng. 10(3–4), 375–392 (2004) 12. Dill, S., Eiron, N., Gibson, D., Gruhl, D., Guha, R., Jhingran, A., Kanungo, T., Rajagopalan, S., Tomkins, A., Tomlin, J.A., Zien, J.Y.: SemTag and Seeker: bootstrapping the semantic web via automated semantic annotation. In: Proceedings of WWW 2003 Budapest (2003) 13. Domingue, J., Dzbor, M., Motta, E.: Magpie: supporting browsing and navigation on the semantic web. Funchal, Portugal, In: Nunes, N., Rich, C. (eds.) Proceedings of ACM Conference on Intelligent User Interfaces (IUI) (2004) 14. Bechhofer, S., van Harmelen, F., Hendler, J., Horrocks, I., McGuinness, D.L., PatelSchneider, P.F., Stein, L.A.: OWL web ontology language reference, Technical report W3C Recommendation (2004) 15. Motik, B., Patel-Schneider, P.F., Parsia, B., Bock, C., Fokoue, A., Haase, P., Hoekstra, R., Horrocks, I., Ruttenberg, A., Sattler, U., Smith, M.: OWL 2 Web Ontology Language Structural Specification and Functional-Style Syntax, 2nd edn. W3C Recommendation (2012). https:// www.w3.org/TR/owl2-syntax. Accessed Aug 2019 16. Wimalasuriya, D.C., Dou, D.: Ontology-based information extraction: an introduction and a survey of current approaches. J. Inf. Sci. 36(3), 306–323 (2010)
17. Poggi, A., Lembo, D., Calvanese, D., De Giacomo, G., Lenzerini, M., Rosati, R.: Linking data to ontologies. In: Journal on Data Semantics, vol. 4900, pp. 133–173 (2008) 18. Zhao, L., Ichise, R.: Ontology integration for linked data. J. Data Semant. 3(4), 237–254 (2014) 19. Pfaff, M., Neubig, S., Krcmar, H.: Ontology for semantic data integration in the domain of IT benchmarking. J. Data Semant. 7(1), 29–46 (2017) 20. Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M.: SWRL: A Semantic Web Rule Language Combining OWL and RuleML. W3C Member Submission (2004). http://www.w3.org/Submission/SWRL/. Accessed July 2019 21. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F.: Description Logic Handbook: Theory, Implementation, and Applications, 1st edn. Cambridge University Press, New York (2003) 22. Yankova, M., Saggion, H., Cunningham, H.: Semantic-based Identity Resolution and Merging for Business Intelligence. University of Sheffield, UK, Sheffield (2008) 23. Maynard, D., Peters, W., Li, Y.: Evaluating evaluation metrics for ontology-based applications: infinite reflection. In: Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May–1 June, Marrakech, Morocco (2008) 24. Polyvyanyy, A., Ouyang, C., Barros, A., van der Aalst, W.M.P.: Process querying: enabling business intelligence through query-based process analytics. Decis. Support Syst. 100(2017), 41–56 (2017) 25. Polyvyanyy, A., et al.: Process Querying. (2016). http://processquerying.com/. Accessed Feb 2019 26. Montani, S., Striani, M., Quaglini, S., Cavallini, A., Leonardi, G.: Knowledge-based trace abstraction for semantic process mining. In: ten Teije, A., Popow, C., Holmes, J.H., Sacchi, L. (eds.) AIME 2017. LNCS (LNAI), vol. 10259, pp. 267–271. Springer, Cham (2017) 27. De Giacomo, G., Lembo, D., Lenzerini, M., Poggi, A., Rosati, R.: Using ontologies for semantic data integration. In: Flesca, S., Greco, S., Masciari, E., Saccà, D. (eds.) A Comprehensive Guide Through the Italian Database Research Over the Last 25 Years. SBD, vol. 31, pp. 187–202. Springer, Cham (2018) 28. Bogarín, A., Cerezo, R., Romero, C.: A survey on educational process mining. Wiley Interdisc. Rev. Data Min. Knowl. Discovery (WIRES) 8(1), e1230 (2018) 29. Cairns, A.H., Ondo, J.A., Gueni, B., Fhima, M., Schwarcfeld, M., Joubert, C., Khelifa, N.: Using semantic lifting for improving educational process models discovery and analysis. In: SIMPDA of CEUR Workshop Proceedings, CEUR-WS.org, vol. 1293, pp. 150–161 (2014) 30. Han, J., Kamber, M., Pei, J.: Data Mining: Concepts and Techniques, 3rd edn. The Morgan Kaufmann Series in Data Management Systems. Morgan Kaufmann Publishers, Massachusetts (2011) 31. d’Amato, C., Fanizzi, N., Esposito, F.: Query answering and ontology population: an inductive approach. In: Bechhofer, S., Hauswirth, M., Hoffmann, J., Koubarakis, M. (eds.) ESWC 2008. LNCS, vol. 5021, pp. 288–302. Springer, Heidelberg (2008) 32. Elhebir, M.H.A., Abraham, A.: A novel ensemble approach to enhance the performance of web server logs classification. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. (IJCSIM) 7(2015), 189–195 (2015) 33. Baati, K., Hamdani, T.M., Alimi, A.M., Abraham, A.: Decision quality enhancement in minimum-based possibilistic classification for numerical data. In: Abraham, A, Cherukuri, A.K., Madureira, A.M., Muda, A.K. (eds.) Advances in Intelligent Systems and Computing Book Series (AISC). Proceedings of SoCPaR 2016, vol. 614, pp. 634–643. 
Springer (2018) 34. Baati, K., Hamdani, T.M., Alimi, A.M., Abraham, A.: A new possibilistic Classifier for heart disease detection from heterogeneous medical data. Int. J. Comput. Sci. Inf. Secur. 14(7), 443–450 (2016)
35. Zadeh, L.A.: Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 100(1), 9–34 (1999) 36. Peña-Ayala, A., Sossa, H.: Proactive sequencing based on a causal and fuzzy student model. In: Peña-Ayala, A. (ed.) Intelligent and Adaptive Educational-Learning Systems: Achievements and Trends, pp. 49–76. Springer, Berlin Heidelberg (2013) 37. Peña-Ayala, A.: Intelligent and Adaptive Educational-Learning Systems: Achievements and Trends, 1st edn. Springer-Verlag, Heidelberg (2013) 38. de Leoni, M., Van der Aalst, W.M.P., Dees, M.: A general process mining framework for correlating, predicting and clustering dynamic behaviour based on event logs. Inf. Syst. 56(1), 235–257 (2016) 39. de Leoni, M., Van der Aalst, W.M.P., Ter Hofstede, A.H.M.: Visual support for work assignment in process-aware information systems: framework formalisation and implementation. Decis. Support Syst. 54(1), 345–361 (2012) 40. van Dongen, B., Claes, J., Burattin, A., De Weerdt, J.: The 12th International Workshop on Business Process Intelligence (2016). http://www.win.tue.nl/bpi/doku.php?id=2016:start#org anizers. Accessed June 2019 41. Okoye, K., Tawil, A.R.H., Naeem, U., Islam, S., Lamine, E.: Semantic-based model analysis towards enhancing information values of process mining: case study of learning process domain. In: Abraham A., et al. (eds.) Advances in Intelligent Systems and Computing book series (AISC). Proceedings of SoCPaR 2016, vol. 614, pp. 622–633. Springer (2018) 42. Okoye, K., Islam, S., Naeem, U.: Ontology: core process mining and querying enabling tool. In: Thomas, C. (ed.) Chapter 7, Ontology in Information Science, pp. 145–168. InTechOpen Publishers (2018) 43. Okoye, K.: Process mining with semantics: real-time application on a learning process domain. J. Netw. Innov. Comput. (JNIC) 6(2018), 25–33 (2018). Machine Intelligence Research Labs (MIR Labs) USA, ISSN 2160–2174 44. Okoye, K., Tawil, A.R.H., Naeem, U., Lamine, E.: Discovery and enhancement of learning model analysis through semantic process mining. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. IJCISM 8(2016), 093–114 (2016) 45. Carmona, J., de Leoni, M., Depair, B., Jouck, T.: IEEE CIS Task Force on Process Mining Process Discovery Contest @ BPM 2016, 1st edn. (2016). http://www.win.tue.nl/ieeetfpm/ doku.php?id=shared:edition_2016. Accessed Jan 2018 46. Clark & Parsia LLC: University of Manchester, UK, University of Ulm, Germany.: The OWL API, Manchester, UK: Sourceforge.net - original version API for OWL 1.0 developed as part of the WonderWeb Project (2017) 47. Sirin, E., Parsia, B.: Pellet: An owl dl reasoner. Whistler, British Columbia. In: Canada, Proceedings of the 2004 Int. Workshop on Description Logics, vol. 104, CEUR-WS.org (2004) 48. Van der Aalst, W.M.P.: Process Mining: Discovery, Conformance and Enhancement of Business Processes, 1st edn. Springer, Berlin (2011)
Educational Process Intelligence: A Process Mining Approach and Model Analysis Kingsley Okoye1(B) and Samira Hosseini1,2 1 Writing Lab, TecLabs, Vicerrectoría de Investigación y Transferencia de Tecnología,
Tecnologico de Monterrey, 64849 Monterrey, NL, Mexico [email protected] 2 School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, NL, Mexico
Abstract. Over the past decade, information systems have proved useful in providing support for real-time business processes (e.g. educational processes). An ample understanding and interpretation of the educational models has become closely tied to the teaching-learning process and decision-making mechanisms. Moreover, higher educational institutions seek ways to manage and make the best business decisions in alignment with the performance of the teachers and the students' learning outcomes. On the one hand, process mining (PM) has proved to be one of the existing methods capable of analyzing learning activities or processes to provide useful information that can be used to support educational models. On the other hand, this work shows that, to address and support the complexities within the educational processes, adequate methods and technological support are needed. To this end, this paper proposes an educational process intelligence (EPI) model that is used to provide contextual-based and informed learning process analysis and decision making through the process mining approach. Technically, the work illustrates the application of the model using data about an online course for university students.
Keywords: Process intelligence · Educational innovation · Process mining · Learning activities · Educational models · Event logs
1 Introduction
Process intelligence has emerged as a new technology that can be leveraged to provide effective organizational management and business-related decision making. Nowadays, many overlapping terms such as artificial intelligence, machine learning, deep learning, etc., share one common feature: they constantly generate information or data about the different processes which they are used to support. For instance, the aforementioned technologies are used to identify objects or patterns in images, transcribe speech into text, match posts, news items, or products with consumers' interests, and/or retrieve relevant search results from the web. Interestingly, the unprecedented amount of data generated about those different processes is being recorded and stored in the different organizations'
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): IBICA 2019, AISC 1180, pp. 201–212, 2021. https://doi.org/10.1007/978-3-030-49339-4_21
information systems or databases. Besides, it has become the job of process analysts and business owners to derive useful insights from the captured datasets that can help drive the business operations forward, ensure the right decision-making strategies, and monitor potential deviations or bottlenecks. Indeed, process mining (PM) [1] has proved to be one of the existing methods that can be used to extract and analyze process-related information from the existing process domains. On the one hand, PM combines techniques from computational intelligence (which has lately been considered to encompass artificial intelligence (AI) or, more recently, augmented intelligence (AIs) systems) and data mining (DM) with process modelling in order to analyze the captured event logs and models [2]. On the other hand, process intelligence can be seen as the branch of computer science that combines the modelling, predictive and data mining features of PM to provide intelligent methods or systems that can think and act like humans (machine-understandable systems), and to perform different tasks requiring intelligence such as e-learning, content planning, speech recognition, problem solving, logical thinking, etc. For instance, a typical process intelligence system could focus on offering a personalized instruction or teaching platform that allows the users to select machine-led instruction, monitor students' progress, and provide support based on the learned patterns or behaviours. The resulting outcomes of such systems, which enable the discovery of the different patterns (such as process mining), can be used to promote better practices or approaches towards achieving contextual-based skillsets and to help the students realize better results [3]. Moreover, this paper proposes the educational process intelligence (EPI) model to demonstrate the capabilities of intelligent systems and models. For example, one of the main areas in which the technique is currently being applied and is gaining attention in recent years is Educational Process Mining (EPM) [4]. According to Bogarín et al. [4], EPM means the application of process mining (PM) to raw educational data by considering the end-to-end processes rather than local patterns. Interestingly, the idea of EPM emerges from the educational data mining (EDM) discipline [5], and the drive for its incentive is primarily to discover, analyze and improve educational processes based on the hidden or unobserved information within the event logs recorded in the IT systems; thus, the term Educational Process Intelligence. In summary, this paper makes the following contributions to knowledge:
• Definition of an Educational Process Intelligence (EPI) model that supports contextual-based analysis of the domain processes to improve decision-making strategies.
• An algorithm (procedure) that can be layered on top of existing information assets (e.g. educational process data) to provide a more process-related analysis that can be easily understood and adopted by the stakeholders (e.g. the higher educational institutions, process analysts/owners, etc.).
• A series of case study experimentations and implementation of the EPI model within the educational settings.
• Use of process mining techniques to find out patterns (e.g. students' learning styles and behaviours) from event logs and predict outcomes through further analysis of the discovered models.
The rest of the paper is structured as follows: in Sect. 2, the paper discusses appropriate related works in this topic area. Section 3 introduces the main components of the educational process intelligence (EPI) model, and Sect. 4 demonstrates the steps/procedures for its application in real-time. Section 5 presents a case study implementation and analysis using the proposed method, which integrates process mining. In Sect. 6, the work discusses the outcomes and impact of the experimentations and analysis, and then concludes and points out directions for future works in Sect. 7.
2 Related Works
Nowadays, information derived from different organizational processes is used to support and manage user-related activities. Characteristically, the supporting systems, otherwise referred to as process intelligence systems, allow for the identification of useful knowledge about the users by making use of the sequences of activity executions (information) available in the organizations' databases. Besides, by using this information, the organizations define procedures or strategies that allow them to maintain a strong relationship with the users and to make insightful business decisions [6, 7]. Equally, the resultant methods such as process intelligence have also been leveraged to promote the personalization of the users' experience (e.g. the learning process) towards a better outcome [8]. According to Ascione et al. [9], from the educational and lifelong learning point of view, the resulting innovative methodologies have become effective within the higher educational institutions, ranging from the adoption of intelligent practices to the transformation of the students' learning experiences by discovering their potentialities and expressive-communication-relational skills. Interestingly, Piedade et al. [6] note that such methodologies are already being used, in both the theoretical and technological sense, to designate solutions that are primarily targeted at supporting the different activities related to the students within the academic settings, for instance, course management and e-learning, students' information management and storage, etc. However, Nulhakim et al. [10] note that the implementation of the emerging technologies, or yet still, of a good education is a tough challenge for educators because the definition of what a quality and/or good educational process is remains a matter of debate amongst the future generations of institutions or learning communities. According to [10], modern education is more concerned with the development of the students' potential. In other words, the resulting models are not only focused exclusively on the technical abilities of the students but are also concerned with the students' experience throughout the entire period of learning. For example, process-related information derived from the students' learning styles or behaviours can be used to provide personalized content or guidance to ensure that the students are able to solve problems, think critically, locate or evaluate information, and productively collaborate and communicate with other students [11]. Interestingly, Process Science has emerged due to such a process perspective that may be missing within the educational initiatives or curricula [1]. Moreover, the works of [1, 12–17] show that the data logs extracted and stored in several organizations' information systems (such as in the education sector) should be utilized to enhance the end-to-end processes in reality by focusing on analysing the underlying patterns or behaviours
based on the information present in the logs; hence the incentive for process mining. Indeed, many definitions of process mining have been proposed in the literature [1, 17–19]. Notably, Van der Aalst [17] refers to process mining as the new technology that makes use of data mining and process modelling techniques to find patterns (models) in the event logs and predict outcomes through further analysis of the discovered models. Likewise, this level of analysis applies to the context of Educational Process Mining (EPM) [4]. EPM pursues the mining and analysis of educational data at the process level. For instance, the process analysts may focus on performing process-related analysis that references the individual students' activities in order to help improve the learning process and/or provide useful knowledge about how different learners interact with each other within the learning execution environment. A number of researchers have also directed their work towards the use and application of PM within the educational settings [4, 5, 14, 20–22]. Moreover, Bogarín et al. [4] note that the EPM methods apply specific algorithms to data (e.g. the fuzzy miner, as utilized in this paper) in order to discover hidden patterns or relationships (attributes) that describe the data. In fact, whichever tool or method one chooses to adopt, the key focus should be on achieving the purpose for which the tool/method was adopted. Besides, Holzhüter et al. [23] believe that a way of supporting the learners within e-learning settings is to adopt the combined approach of using process mining with concepts of the learning patterns discovered from the event logs, as described in this paper.
3 Educational Process Intelligence (EPI) Model
This section introduces the EPI model, which the work proposes for the effective management and modelling of the educational processes. The EPI model is grounded on the process mining framework [1, 20, 21, 23], which combines data mining and process modelling techniques to analyze datasets captured about the educational processes, and it can also be applied to any given real-time process irrespective of the process domain. This is done through automated mapping or modelling of how the different activities that make up the processes have been performed in real-time. As gathered in Fig. 1, the EPI model integrates data from the educational process with the process mining technique in order to create models or visualizations (mappings) that are used to provide contextual-based analysis/monitoring of the said processes. In fact, the EPI model consists of the following main components (a minimal sketch of an event-log record, the first of these components, follows the list):
1. Event Logs: captured about the different activities that underlie the educational process.
2. Mining Algorithms: used to perform the process mapping and analysis of the event logs.
3. Process Maps: used to visualize how the different activities that make up the process are performed.
4. Process Intelligence: components used to realize (perform and understand) the contextual-based analysis and recommendation of the different learning activities and content.
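As a minimal, hypothetical illustration of the first component, the sketch below models a single event-log record in Python; the field names (case_id, activity, timestamp) follow the minimum process-mining requirements discussed in Sect. 4 and are not taken from the actual dataset.

```python
# Minimal sketch of an educational event-log record (illustrative only).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    case_id: str         # e.g. the student identifier
    activity: str        # e.g. the assessment stage or grade reached
    timestamp: datetime  # when the activity was recorded

log = [
    Event("student_001", "Evaluación del tema 1", datetime(2019, 3, 1, 10, 0)),
    Event("student_001", "Evaluación del tema 2", datetime(2019, 3, 8, 10, 0)),
    Event("student_002", "Evaluación del tema 1", datetime(2019, 3, 2, 9, 30)),
]
```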
Fig. 1. The Educational Process Intelligence (EPI) Model.
4 Formalization of the EPI Algorithm
This section presents the steps or procedures (Algorithm 1) that the work applies for the real-time implementation of the EPI design framework.
Algorithm 1: Developing EPI Models and Process Analysis
1:  Input: E, educational events data; PM, process mining
2:  Output: M, process maps/models; EPI, educational process intelligence and analysis
3:  Procedure: process modelling and analysis
4:  Begin
5:  For all events log E
6:      Apply PM methods
7:      Extract patterns or maps M ← from E
8:      while no more attributes or considerations (A) is left do
9:          Analyze M to provide EPI
10:         If M interpretation ← Null then
11:             obtain the relevant A from E and loop to line 7
12:         Else If M interpretation ← 1 then
13:             create the necessary EPI and M analysis
17:             Return: output and process improvements
18:         End If statements
19:     End while
20: End For
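To make the control flow of Algorithm 1 concrete, the following Python sketch is one possible, hypothetical reading of it; function names such as apply_pm and analyze_map are invented placeholders supplied by the caller, not part of the published algorithm or of any particular process mining library.

```python
# Hypothetical translation of Algorithm 1 (illustrative only).
def develop_epi_model(event_log, apply_pm, analyze_map, extra_attributes):
    """apply_pm(log, attrs) and analyze_map(process_map) are caller-supplied
    functions standing in for the PM method and the EPI analysis; extra_attributes
    is a list of additional attributes that may be folded into the model."""
    attrs = []
    process_map = apply_pm(event_log, attrs)         # lines 6-7: extract map M from E
    remaining = list(extra_attributes)
    while True:                                      # line 8: consider remaining attributes
        interpretation = analyze_map(process_map)    # line 9: analyze M to provide EPI
        if interpretation is not None:               # lines 12-13/17: EPI obtained
            return interpretation                    # output and process improvements
        if not remaining:                            # nothing left to add (assumption)
            return None
        attrs.append(remaining.pop(0))               # lines 10-11: obtain the relevant A
        process_map = apply_pm(event_log, attrs)     # loop back to line 7
```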
As gathered in Algorithm 1, we recognize that, to develop a process-based intelligent system and/or analysis method that integrates the process mining technique, the following fundamental elements must be taken into account (a minimal sketch of the resulting process-map extraction follows the list):
• Event logs, E, from the process domains (e.g. educational data) used to discover the models, M. Most often the event logs are ordered sequentially and must have at least a case identification (Case_id) and an activity name (Act_name) attribute to allow the process model discovery and analysis to follow.
• Other additional information or attributes may be required for an ample implementation of the process mining, e.g. Event ID, Timestamp, Resources, Roles, etc.
• The process mining methods (e.g. fuzzy miner, heuristics miner, genetic miner, alpha algorithm, inductive miner, etc.) are used to perform the different types of process mining task (process discovery, conformance checking, model enhancement) to create the process models, M (mappings), and for further improvement analysis.
• A typical model, M, consists of traces, paths, or patterns of information which are referenced to provide the process-based analysis (EPI), thus, process intelligence.
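As a simple, hypothetical illustration of how a process map M can be extracted from an event log E carrying only the Case_id and Act_name attributes, the sketch below counts directly-follows relations (which activity follows which within each case) using pandas. The column names and data are invented, and the actual analysis in Sect. 5 is performed with the fuzzy miner in Disco rather than with this code.

```python
# Hypothetical directly-follows extraction from a minimal event log (illustrative only).
import pandas as pd
from collections import Counter

log = pd.DataFrame({
    "Case_id":  ["s1", "s1", "s1", "s2", "s2"],
    "Act_name": ["Eval 1", "Eval 2", "Eval 3", "Eval 1", "Eval 2"],
})

dfg = Counter()                        # the "process map" M: edge -> frequency
for _, trace in log.groupby("Case_id"):
    acts = trace["Act_name"].tolist()  # activities in recorded order per case
    for a, b in zip(acts, acts[1:]):
        dfg[(a, b)] += 1

print(dfg)  # Counter({('Eval 1', 'Eval 2'): 2, ('Eval 2', 'Eval 3'): 1})
```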
5 Analysis and Case Study Implementation
The work makes use of Massive Open Online Course (MOOC) data (Fig. 2) recorded about 281 students enrolled in the Critical thinking: reasoned decision making course (edX online) offered by Tecnologico de Monterrey [24] at the time of the analysis. As shown in Fig. 2, students enrolled in the edX online course have to complete four stages/phases of assessment (Evaluación del tema 1, Evaluación del tema 2, Evaluación del tema 3, and Evaluación del tema 4) in order to be awarded the certification for the course if they meet the final pass mark. The average mark of the evaluation stages
(assessments), and the current grade of the students, irrespective of whether they have completed the course, are also recorded in the data, including the final exam scores for the students who completed the course.
Fig. 2. Fragment of the event log about the students' online course.
Technically, the work applies the fuzzy miner algorithm [25, 26] in Disco [27] in order to discover and analyse the event log and the resultant process model. Indeed, the data used for the analysis meets the minimum requirements for any process mining task as defined in [1, 17]. To this end, the work assigns the different attributes contained in the event log for the purpose of the analysis as follows (a hypothetical preparation of such a log is sketched after this list):
• Student ID is assigned as the Case ID;
• the current Grade is assigned as the Activity; and
• the Evaluación del tema 1, Evaluación del tema 2, Evaluación del tema 3, Evaluación del tema 4, Evaluación del tema (Avg), Examen final, and Enrollment Status variables are assigned as custom variables.
Consequently, Fig. 3 represents the fuzzy model which the work discovers for the different learning activities (process instances), whereas Fig. 4 and 5 represent the statistical results and analysis of the outcomes. As gathered in the fuzzy model (Fig. 3) and the analysis in Fig. 4 and 5, the work determines the frequency of the students' grades and learning activities by establishing the threshold of each of the activities. The logic is that, by applying the fuzzy mining algorithm to the dataset (Fig. 2), the method allows us to see in detail how the learning processes have been performed by revealing the underlying mappings of the different activities (workflows) as performed in reality. Moreover, the technique also provides us with the opportunity to focus on the streams (frequency) of the learning patterns or behaviours as well as to visualize the paths they follow in the process. In Fig. 3, we utilize the filtering capabilities of the fuzzy miner [25, 26] to abstract the most frequent activities in the dataset.
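The following sketch shows, purely hypothetically, how such an export could be reshaped into a Case ID/Activity table before importing it into a mining tool; the column names and values are invented for illustration and do not reproduce the actual MOOC dataset, which was analysed directly in Disco.

```python
# Hypothetical reshaping of the MOOC export into a Case ID / Activity table
# (illustrative only; column names and values are invented).
import pandas as pd

raw = pd.DataFrame({
    "student_id":    ["s001", "s002", "s003"],
    "current_grade": [0.24, 1.0, 0.5],
    "exam_final":    [None, 0.9, None],
})

event_log = pd.DataFrame({
    "Case ID":  raw["student_id"],                 # Student ID -> Case ID
    "Activity": raw["current_grade"].astype(str),  # current Grade -> Activity
})

# Relative frequency of each grade-activity across the cohort
rel_freq = event_log["Activity"].value_counts(normalize=True)
print(rel_freq)
```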
Fig. 3. Fuzzy model representation of the most frequent student activities and paths.
Fig. 4. Fragment of the summary of the students' final exam scores and relative frequency.
For example, as represented in the fuzzy model (Fig. 3) and the summary of the students' grades (Fig. 5), we observe that the students whose current score (grade) is 0.24 (24% of the pass mark of 100) are dominant on the course, with a frequency of 34 out of the 281 students and a relative frequency of 12.1% across the dataset. However, we note that this number applies only to the students who are still undergoing the course, i.e., who have not attempted all four assessment phases (Evaluación del tema 1, Evaluación del tema 2, Evaluación del tema 3, and Evaluación del tema 4). On the one hand, as shown for the Examen final (Fig. 4), we note that 57.3% (161 out of 281) of the students have not attempted any of the four assessment phases (Evaluación del tema 1 to 4), as distributed across the analysis in Fig. 5. On the other hand, when considering the grades of the students who have completed the course (all four assessments), there appears to be a high pass rate (see Fig. 4) for the remaining 42.7% of the population (i.e. 100% – 57.3% uncompleted), with 81 students (28.83% of the cohort) out of the 120 who completed (281 – 161 uncompleted) scoring a total of 1 (the 100% pass mark).
Fig. 5. Summary of the different assessment scores, relative frequency, and average scores.
In general, the purpose of the process mining technique, particularly as illustrated in this section, is to define a method which provides stakeholders (e.g. the educators, process analysts and process owners) with reliable and extendible results (insights) about the captured datasets stored in the information systems, in order to understand how the different activities that make up the process relate and, in turn, to use those insights for further monitoring of the said processes and for decision making. For instance, the results of the analysis in this paper suggest that a majority of the students enrolled in the MOOC (edX online) course have not attempted all four assessments (161 of 281), and that the students with a score of 0.24 (24% of the pass mark of 100) are the most frequent amongst those who have not completed. In turn, the course instructors/developers can make use of such information to monitor bottlenecks or constraints and to recommend e-content materials and guidance.
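As a quick arithmetic cross-check of the figures quoted above (and only of those figures; the counts are taken straight from the text, not from re-running the analysis), the percentages can be reproduced as follows.

```python
# Cross-check of the percentages quoted in Sect. 5 (counts taken from the text).
total          = 281  # students enrolled in the course
uncompleted    = 161  # students who attempted none of the four assessments
top_grade_freq = 34   # students currently at grade 0.24
full_marks     = 81   # completed students scoring 1 (100%)

print(round(100 * uncompleted / total, 1))     # 57.3 (% not attempting the assessments)
print(round(100 * top_grade_freq / total, 1))  # 12.1 (% relative frequency of grade 0.24)
print(round(100 * full_marks / total, 2))      # 28.83 (% of the cohort with full marks)
print(total - uncompleted)                     # 120 students who completed the course
```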
6 Discussion
Process mining has proved to be the missing link between model-based process analysis and data-oriented analysis techniques, as illustrated in this paper. Moreover, since the method allows for the extraction of non-trivial information from captured datasets, as shown in Sect. 5, the technique could also be seen as one of the main mechanisms of data science or, better still, of process intelligence. In this paper, the work introduces Educational Process Intelligence (EPI) as a mechanism for discovering and improving the sets of recurrent behaviours or patterns that can be found within the different process domains. Ultimately, the aim is to help ascertain the frequency and/or attributes (see Fig. 3, 4, and 5) the process elements share amongst themselves, or that distinguish certain entities from others. Consequently, with such a system, the resulting methods offer solutions that carry the characteristics of "intelligence", attributed not just to the idea of artificial intelligence but to a specific feature of the broader field of computational intelligence. Besides, the main benefit of the EPI model and its implementation, in comparison to the other benchmark algorithms used for process mining, is that the method can be described as a fusion theory which integrates the derived process models (e.g. fuzzy models) with other tools or methods for
modelling/visualization of the real-time processes, and thus supports a hybrid intelligent system. Currently, the process mining methods, e.g. the fuzzy miner, have become mature and are being used in different areas of application within different organizations for the purpose of process modelling and analysis. Indeed, the applications of the method are not limited to business processes but also provide new and augmented ways to discover, monitor, and enhance any given process [28, 29]. There are two main drivers for the growing interest in process mining [30]. First, data about the different organizational processes (e.g. business operations) are captured and stored at an unprecedented rate, creating the need for an effective method for transforming the readily available event logs into minable formats for a more contextual-based process analysis [31]. Secondly, there is an ever-increasing need to improve and support the business processes in competitive and rapidly changing environments [17, 30, 32, 33]. Moreover, the results of the EPI method as described in this paper can easily be adapted and applied by educational process owners, innovators, process analysts, IT experts and software developers in understanding/analysing their everyday processes, especially for the purpose of informed process-related decision-making in diaspora.
7 Conclusion
This paper shows that the unabridged notion of process intelligence can be layered on top of existing information assets (e.g. the event logs about the educational process) to provide a more informed analysis of the processes in question which can be easily understood by the process owners/analysts. For this reason, this paper proposes an educational process intelligence (EPI) model that can be used to provide a contextual-based analysis of the learning process and decision making through process mining. Practically, the work applies the model to a case study of an online course for university students in order to demonstrate the application of the method in real-time. For all intents and purposes, the work presumes that it must become the responsibility of the process owners/analysts to adopt and apply such an intelligent method in their operational processes/analyses in order to ensure an ample implementation and understanding of the different activities that make up the said processes. The adoption of the aforementioned practice and infrastructure could help maximize the influence and/or opportunities offered through the process intelligence method as illustrated in this paper. Future works can apply the proposed model and analysis of this paper to analyse event logs for any given process, or conduct the experimentations using a different set of algorithms or process mining techniques.
Acknowledgment. The authors would like to acknowledge the technical and financial support of Writing Lab, TecLabs, Tecnologico de Monterrey, in the publication of this work. We would also like to acknowledge The MOOC's, Alternative Credentials Unit of the TecLabs for the provision of the datasets used for the analysis in this paper.
References 1. Van der Aalst, W.M.P.: Process Mining: Data Science in Action, 2nd edn. Springer, Berlin (2016) 2. Okoye, K., Islam, S., Naeem, U., Sharif, M.S., Azam, M.A., Karami, A.: The application of a semantic-based process mining framework on a learning process domain. In: Arai, K., Kapoor, S., Bhatia, R. (eds.) IntelliSys 2018. AISC, vol. 868, pp. 1381–1403. Springer, Cham (2019) 3. Lau, J., Zimmerman, B., Schaub, F.: Alexa, are you listening? Privacy perceptions concerns and privacy seeking behaviors with smart speakers. In: Proceedings of ACM Human Computer Interaction 2, CSCW, Article 102 (2018) 4. Bogarín, A., Cerezo, R., Romero, C.: A survey on educational process mining. In: Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery (WIRES), p. e1230. Wiley Periodicals (2017) 5. Cairns, A.H., Ondo, J.A., Gueni, B., Fhima, M., Schwarcfeld, M., Joubert, C., Khelifa, N.: Using semantic lifting for improving educational process models discovery and analysis. In: SIMPDA, vol. 1293 of CEUR Workshop Proceedings, pp. 150–161. CEUR-WS.org (2014) 6. Piedade, M.B., Santos, M.Y.: Business intelligence in higher education: enhancing the teaching-learning process with a SRM system. In: 5th Iberian Conference on Information Systems and Technologies, Santiago de Compostela, 2010, pp. 1–5 (2010) 7. Payne, A.: Handbook of CRM. Achieving Excellence in Customer Management. Elsevier BH, Oxford (2006) 8. Pedró, F., Subosa, M., Rivas, A., Valverde, P.: Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Education Sector - UNESCO-ED2019/WS/8, p. 46 (2019) 9. Ascione, A., Di Palma, D., Rosa, R.: Innovative educational methodologies and corporeity factor. J. Hum. Sport Exerc. 14(2), 159–168 (2019) 10. Nulhakim, L., Wibawa, B., Erwin, T.N.: Relationship between students’ multiple intelligencebased instructional areas and assessment on academic achievements. J. Phys: Conf. Ser. 1188(2019), 012086 (2018) 11. Bråten, I., Braasch, Jason L.G.: Key issues in research on students’ critical reading and learning in the 21st century information society. In: Ng, C., Bartlett, B. (eds.) Improving Reading and Reading Engagement in the 21st Century, pp. 77–98. Springer, Singapore (2017) 12. Calvanese, D., Kalayci, T.E., Montali, M., Tinella, S.: Ontology-based data access for extracting event logs from legacy data: the onprom tool and methodology. In: Abramowicz, W. (ed.) BIS 2017. LNBIP, vol. 288, pp. 220–236. Springer, Cham (2017) 13. Montani, S., Striani, M., Quaglini, S., Cavallini, A., Leonardi, G.: Knowledge-based trace abstraction for semantic process mining. In: ten Teije, A., Popow, C., Holmes, John H., Sacchi, L. (eds.) AIME 2017. LNCS (LNAI), vol. 10259, pp. 267–271. Springer, Cham (2017) 14. Okoye, K., Tawil, Abdel-Rahman H., Naeem, U., Islam, S., Lamine, E.: Semantic-based model analysis towards enhancing information values of process mining: case study of learning process domain. In: Abraham, A., Cherukuri, A.K., Madureira, A.M., Muda, A.K. (eds.) SoCPaR 2016. AISC, vol. 614, pp. 622–633. Springer, Cham (2018) 15. Okoye, K., Tawil, A.R.H., Naeem, U., Islam, S., Lamine, E.: Using semantic-based approach to manage perspectives of process mining: application on improving learning process domain data. In: 2016 IEEE International Conference on Big Data (BigData), Washington, DC, pp. 3529–3538 (2016) 16. Lautenbacher, F., Bauer, B., Forg, S.: Process mining for semantic business process modeling. 
In: 13th Enterprise Distributed Object Computing Conference, Auckland, pp. 45–53 (2009)
17. Van der Aalst, W.M.P.: Process Mining: Discovery, Conformance and Enhancement of Business Processes, 1st edn. Springer, Berlin (2011) 18. Cairns, A.H., Gueni, B., Fhima, M., Cairns, A.A., David, S., Khelifa, N., Dautier, P.: Process mining in the education domain. Inte. J. Adv. Intell. Syst. 8(1 & 2), 219–232 (2015) 19. Ingvaldsen, J.E.: Semantic process mining of enterprise transaction data, Norway: Ph.D. Thesis - Norwegian University of Science and Technology (2011) 20. Trˇcka, N., Pechenizkiy, M., van der Aalst, W.M.P.: Process mining from educational data. In: Romero, C., et al. (eds.) Handbook of Educational Data Mining. Chapman & Hall/CRC Data Mining & Knowledge Discovery Series, pp. 123–142. CRC Press, Florida (2010) 21. Okoye, K., Naeem, U., Islam, S.: Semantic fuzzy mining: enhancement of process models and event logs analysis from syntactic to conceptual level. Int. J. Hybrid Intel. Syst. 14(1–2), 67–98 (2017) 22. Bogarín, A., Romero, C., Cerezo, R., Sánchez-Santillán, M.: Clustering for improving educational process mining, pp. 11–15. ACM, NY (2014) 23. Holzhüter, M., Frosch-Wilke, D., Klein, U.: Exploiting learner models using data mining for elearning: a rule based approach. In: Peña-Ayala, A. (ed.) Intelligent and Adaptive EducationalLearning Systems. Smart Innovation, Systems and Technologies, vol. 17. Springer, Heidelberg (2013) 24. edX program: MOOC unit Tecnologico de Monterrey. https://www.edx.org/school/tecnol ogico-de-monterrey. Accessed 10 Aug 2019 25. Günther, C.W., van der Aalst, W.M.P.: A generic import framework for process event logs. In: Eder, J., Dustdar, S. (eds.) BPM 2006. LNCS, vol. 4103, pp. 81–92. Springer, Heidelberg (2006) 26. Günther, C.W., van der Aalst, W.M.P.: Fuzzy mining – adaptive process simplification based on multi-perspective metrics. In: Alonso, G., Dadam, P., Rosemann, M. (eds.) BPM 2007. LNCS, vol. 4714, pp. 328–343. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3540-75183-0_24 27. Rozinat, A., Gunther, C.: Disco User Guide - Process Mining for Professionals, Eindhoven, The Netherlands. Fluxicon.com (2012) 28. Van der Aalst, W.M.P., Adriansyah, A., de Medeiros, A.K.A., et al.: Process mining manifesto. In: Daniel, F., Barkaoui, K., Dustdar, S. (eds.) Business Process Management Workshops. BPM 2011. LNBIP, vol. 99, pp. 169–194. Springer (2012) 29. De Leoni, M., Van der Aalst, W.M.P.: Data-aware process mining: discovering decisions in processes using alignments. In: Shin, S.Y., Maldonado, J.C. (eds.) ACM Symposium on Applied Computing, Coimbra, Portugal, pp. 1454–1461. ACM Press, New York (2013) 30. Santos-Garcia, C., Meincheim, A., Faria Junior, E.R., Dallagassa, M.R., Vecino-Sato, D.M., Carvalho, D.R., Portela-Santos, E.A., Scalabrin, E.E.: Process mining techniques and applications – a systematic mapping study. Expert Syst. Appl. 133, 260–295 (2019) 31. Okoye, K.: Technique for annotation of fuzzy models: a semantic fuzzy mining approach. In: Tallón-Ballesteros, A.J. (ed.) Frontiers in Artificial Intelligence and Applications, vol. 320, Fuzzy Systems and Data Mining V, pp. 65–75 (2019) 32. Spengler, A.J., Alias, C., Magallanes, E.G.C., Malkwitz, A.: Benefits of real-time monitoring and process mining in a digitized construction supply chain. In: Proff, H. (ed.) Mobilität in Zeiten der Veränderung, pp. 411–435. Springer, Wiesbaden (2019) 33. Knoch, S., Herbig, N., Ponpathirkoottam, S., Kosmalla, F., Staudt, P., Fettke, P., Loos P.: Enhancing process data in manual assembly workflows. In: Daniel, F., Sheng, Q., Motahari, H. 
(eds.) Business Process Management Workshops. BPM 2018. Lecture Notes in Business Information Processing, vol. 342, pp. 269–280. Springer, Cham (2019)
Design and Development of a Mobile App as a Learning Strategy in Engineering Education Yara C. Almanza-Arjona1(B) , Leonel A. Miranda-Camargo1 , Salvador E. Venegas-Andraca2 , and Beatriz E. García-Rivera3 1 Escuela de Ingenieria y Ciencias, Tecnologico de Monterrey, Monterrey, Mexico
[email protected] 2 Escuela de Ingenieria y Ciencias, Writing Lab, TecLabs, Vicerrectoría de Investigación y
Transferencia de Tecnología, Tecnologico de Monterrey, Monterrey, Mexico 3 Instituto de Ciencias Aplicadas y Tecnología, Universidad Nacional Autónoma de México,
Mexico City, Mexico
Abstract. The goal of the present innovation is to give engineering students an innovative educational experience through their natural engagement with mobile devices. It aims to motivate the integration of technology in the teaching-learning process, where students design, develop and use a mobile app in order to build up knowledge in Thermodynamics. It also enhances their level of mastery of soft skills such as teamwork, creativity, self-management, and analytical, abstract and critical thinking. This work suggests the use of Information and Communication Technologies (ICT) to develop Open Educational Resources (OERs), which could be helpful in the teaching-learning process of Equilibrium Thermodynamics (ET). During 2016, chemical and biotechnology engineering students from ITESM CEM developed a mobile app to facilitate the comprehension of the physicochemical phenomena of phase equilibria. Its design and implementation were the result of a four-step process: 1. Selection of the technological platform; 2. Definition of the thematic content; 3. Creative process for the design and information setup; and 4. Development of the app. Results of this study show quantitatively that implementing technology-based learning experiences as a learning tool has a positive impact on the educational performance and skills development of students. It was learnt that the students' performance was influenced by the type of technology selected, which promoted the development of distinct skills and the construction of knowledge at different levels of mastery.
Keywords: Educational innovation · Open Educational Resources · Education 4.0 · Mobile learning · Technology in education · Engineering education
1 Introduction 1.1 Innovation in Engineering Education Over the last decades, humankind has been experiencing massive changes at a rate never seen before; just about one century ago, mass production lines became possible thanks to
the availability of electricity. Later on, during the early years of the 1970s, mechanical and electrical production machines were replaced by electronic devices that could be programmed, and the development of IT became possible [1]. The transformation that we have been experiencing during the current century, in which the fusion of several technologies has enabled intelligent production, has been called the fourth Industrial Revolution (IR4.0). Also known as Industry 4.0, this technological era is described by six essential principles: interoperability, transparency of information, technical support, real-time data acquisition and processing, modularity and distributed decision-making [2]. Its basic elements are machines, devices, sensors and people, all communicating with each other through the Internet (Internet of Things - IoT) [2, 3]. Human-machine interfaces are now regularly present in our daily life thanks to artificial intelligence (AI) and cyber-physical frameworks. Based on a discussion made in an earlier work [3], this novel scientific and technological landscape demands a revolution in engineering education, as new concepts, methods and technologies not previously taught in college are meant to either substitute or complement current syllabi. This advancement in education is important and requires learning, discovering and designing techniques that will allow future engineers to solve problems we have not faced yet. New educational models are emerging from higher education institutions, since they have to train engineering graduates for a future life and work defined by IR4.0; engineers will address complex technological challenges by producing new generations of machines, designing new materials, and creating intelligent systems. Educational innovation requires the implementation of educational experiences that help students gain a deep understanding of academic content as well as develop solid techniques for intellectual achievement and knowledge construction, preparing forthcoming generations to solve problems that do not exist today [4]. The term Education 4.0, according to M. Ciolacu and co-workers [5], introduces technology into the teaching-learning process with different approaches such as Blended Learning (virtual courses that may include personal interaction) and the seven AI-driven features of educational technology: personalized learning processes, game-based learning using Virtual Reality/Augmented Reality (VR/AR), communities of practice, adaptive technologies, learning analytics, intelligent chatbots and e-assessment [6]. Tecnológico de Monterrey, the top private university in Mexico and top 5 in Latin America according to the Times Higher Education Latin America University Rankings [7], is responding to such demands by introducing a new educational model known as Modelo Tec 21, which promotes the creation of new approaches and tools for innovative learning experiences and focuses on the development of students' cognition, skills, competencies and attitudes to harness pertinent information, knowledge construction and abilities that could not be replaced by automata.
In this paper, we report the results obtained from an educational innovation that consisted of designing and implementing a mobile app as an open electronic tool to support the teaching-learning process in the course Equilibrium Thermodynamics (ET), which is a junior year course for students reading for chemical and biotechnology engineering degrees at Tecnológico de Monterrey. This exercise intended to explore the technological potential of using smartphones and tablets to facilitate the comprehension of physicochemical phenomena of phase equilibria and at the same time promoting ethical use of technology by creating original content, using and managing appropriate technological resources.
1.2 Educational Innovation: Integration of Technology in the Classroom Technology has been gradually introduced into the classroom; in the early 2000s it was used to enhance instructional practice through web-based courses, distance learning and CD-ROM video/music material during lectures [8]. Later on, the use of ICT tools was directed to support the teaching-learning process through the virtualization of education, for example, through the design of OERs, the support of online learning processes, the design of different pedagogical methods with ICT (e.g., problem-based or project-based learning), and students' evaluation of educational innovation experiences with different tools (e.g., blogs). In the past few years, the focus of technology use in higher education has shifted towards the design and development of learning strategies and activities based on digital environments and tools [9]. Mobile learning is a rapidly expanding practice across higher education institutions and a fertile field of educational research. It is not merely the process of transferring and delivering educational content through mobile devices that can be accessed anytime and anywhere, and it is certainly not centered on the technology itself. Rather, it is focused on the knowledge, skills and understanding that are fostered by the analytical engagement of students with mobile devices. Mobile learning is about understanding and knowing how to use our everyday technological tools as learning spaces [10]. The aim of the innovation reported in this paper was to evaluate the feasibility of creating an OER in the format of a mobile app that supports students in using the mobile technology of their everyday life as a guided tool to enable knowledge building in the subject of Equilibrium Thermodynamics within a virtual environment. Furthermore, this experience was designed to achieve educational goals, reach the desired level of engineering knowledge, and promote students' awareness of the ethical use of information, as well as attitudes and competences such as analytical thinking (e.g., analyzing the validity of the vast information found online), self-management, self-learning, teamwork, decision making and technology literacy (in this case for both students and teachers). It is worth noting that some of these skills were developed not only by the student community; students and faculty members also worked together, since this work was made possible by the synergy between the core engineering knowledge of faculty members (as guidance to develop the technical content of the app) and the natural ICT skills of students (regarding the use of technology and programming on mobile devices).
2 A Novel Approach to Learn Thermodynamics 2.1 Thermodynamics Course in Engineering Curricula Equilibrium Thermodynamics is a course typically taken by junior undergraduates reading for chemical and biotechnology engineering degrees at Tecnologico de Monterrey; its thematic content is comprised of phase equilibria principles and physical chemical phenomena. This branch of Thermodynamics is characterized by its barely intuitive abstract concepts and complex mathematical models, which aim at describing molecular processes in phases that are in equilibrium, and it is the basis for the design of chemical separation processes. This subject may be challenging for students, and sometimes even
boring as complex ideas and concepts are constantly presented and analyzed, being those same concepts hard to visualize in daily life. During the traditional teaching practice of this subject, we found that some students were reluctant to fully engage during lectures when the focus of lecture content is on the mathematical resolution of complex partial differential equations rather than having a balance between mathematical analysis and full understanding of the molecular processes and corresponding physical chemical phenomena [11, 12]. In this scenario, students are not achieving the expected level of skills that correspond to this course (for instance, problem solving), since it is difficult to relate these thermodynamic concepts with real-life situations. Accordingly, students are able to solve complex thermodynamic models but they do not necessarily know how and when to apply them to solve chemical and biotechnology engineering problems—that is, moving from knowledge to skills—. The motivation of the present work comes from the interest to engage students in innovative educational activities, shifting from a traditional teaching approach to a technological learning environment where millennial and Z-generation students are naturally attracted to. The authors worked with project-based and collaborative learning techniques along with ICT motivated by results previously obtained [13–15]. Furthermore, the literature shows that by combining these techniques with digital tools in teaching engineering and sciences, the development of soft skills is also achieved [16–19]. As previously stated, OERs are electronic resources that are freely shared online and are able to be redistributed to other students [20]. According to Tyler DeWitt—an MIT Ph.D. student and a student coordinator for the MIT+K12 video outreach project—the trick to actually creating OER “is realizing that everything that goes into OER must be your work, in public domain, another OER, or similarly-licensed material” [21]. Images pulled from an internet search, for instance, could be copyrighted, and therefore all parts of the OER would not truly be open. The authors believe that developing an OER can be used as a tool to promote mobile learning in the teaching-learning process of this subject, and in the making, this activity may raise awareness in students about the ethical use of information found on the internet, promote creativity in the creation and organization of original technical content, while maintaining the rigor of the engineering fundamentals. 2.2 Case of Study: Developing a Mobile App as an OER by Undergrad Engineering Students The goal of the educational experience we report in this paper was to provide students with a context to live an innovative educational experience through their natural engagement with mobile devices so as to integrate this technology as a new learning space of the ET course. Additionally, we aimed at motivating the integration of technology in the teaching-learning process by placing students in the center of an academic strategy where they design, develop and use a mobile app to learn thermodynamics and enhance their level of domain of soft skills such as teamwork, creativity, self-management, as well as analytical, abstract and critical thinking. Nowadays, there is ample availability of digital resources with academic content such as blogs, forums, and videos, which are tools that help students to comprehend technical concepts. 
However, not all of those resources necessarily contain reliable information and the very fact of organizing the content of multiple multimedia sources into a coherent body of knowledge is a challenging task.
The innovation presented in this paper is also aiming to make this easy-to-handle app available for engineering students, whose goal is to support the development of abstract thinking competencies applied to ET based on reliable and accurate information. Characteristics of the Group Under Study. The group that participated in this educational experience was Group IQ2003-3 of the term August - December 2016 from Tecnologico de Monterrey Campus Estado de Mexico. It consisted of 26% males and 74% females, ages between 19 and 20 years old. For all students in this study, it was their first course on ET and the entire group had similar academic background. The academic content studied during the period August – December 2016 comprised thermodynamic concepts and their applications in phase and chemical reaction equilibrium systems, according to Tecnologico de Monterrey syllabus. All 23 students participated in lectures, self-learning activities, classroom demonstrations and problem solving during 16 weeks, 3 h per week. The activities reported in this paper were developed along the course (16 weeks), and every new topic was included in the learning activity. The offer to participate in the creation of OERs was open to the entire class, and only one team (experimental group) of five members, three males and two females, chose to design and implement an OER in the form of a mobile app, which could be used and shared for free. The rest of the students (control group) opted for a website modality. Thus, the study group was comprised of a single team of 5 members: three chemical engineering and two biotechnology engineering students. On the other hand, the control group was comprised of 5 chemical engineering and 13 biotechnology engineering students, organized in 2 teams of 4 students and 2 teams of 5 students. To evaluate the activity, a rubric was designed to ascertain the performance of the students and generate evidence of the skills they developed, including selecting and organizing technical information, see Table 1. This rubric divides the App development into eight defined component parts and provides clear descriptions of the characteristics of the work associated with each component, at five varying levels of mastery, ranging from incomplete up to exemplary level.
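The short sketch below illustrates how a rubric of this kind can be turned into a single grade. Only the criteria names and percentage weights come from Table 1; the mapping from mastery level to a fraction of the weight, and the function names, are illustrative assumptions, not part of the authors' evaluation method.

```python
# Criteria and percentage weights taken from Table 1; the level scores are an assumption.
WEIGHTS = {
    "Content": 25,
    "Original ideas & contribution": 25,
    "Writing": 10,
    "Layout": 10,
    "Images": 5,
    "Teamwork": 5,
    "Citations and References": 5,
    "OER characteristics": 15,
}

# Hypothetical mapping from mastery level to the fraction of the weight awarded.
LEVEL_SCORE = {
    "incomplete": 0.0,
    "poor": 0.25,
    "partially proficient": 0.5,
    "proficient": 0.75,
    "exemplary": 1.0,
}


def rubric_score(assessment: dict) -> float:
    """Weighted score (0-100) for one OER, given a mastery level per criterion."""
    return sum(WEIGHTS[c] * LEVEL_SCORE[level] for c, level in assessment.items())


if __name__ == "__main__":
    example = {criterion: "proficient" for criterion in WEIGHTS}
    example["OER characteristics"] = "exemplary"
    print(f"Weighted rubric score: {rubric_score(example):.1f} / 100")
```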
2.3 Design and Implementation of the Mobile App This practical activity required extra hours to conduct the programming and evaluation of the app. As suggested in [22], the app's design and development processes conducted by the students required several steps, ranging from the conception of its structure to post-analysis and publication to the final user. Functionality, scope and the user objective were also defined. Both design and development processes were carried out simultaneously, which allowed the continuous improvement of functional details and contents in order to optimize performance. Selection of the Tech Platform. A shortcoming of several OERs is that their contents are confusing and extensive, context may not be properly presented, or there is no possibility to verify references. As for videos, the drawback comes from the great amount of time one must invest in checking different channels until finding something that resembles the concept searched for, at the level required. App development for smart devices gives the student the opportunity to have a reference for these topics in a simple and reliable way.
Table 1. Rubric for the evaluation of the OERs

Content (weight 25%)
Incomplete: Incomplete/incorrect information. The OER does not have a clear purpose.
Poor: Information is not always clear or correct. The theme is somewhat clear but does not relate to the purpose of the project.
Partially proficient: Information is clear and correct. The main idea is more or less clear and related to the purpose of the project.
Proficient: The content has accurate and useful information. The theme is clear and related to the purpose of the project.
Exemplary: The theme is clear. The content has accurate, high-quality and very useful information.

Original ideas & contribution to the learning process of ET (weight 25%)
Incomplete: It is not possible to identify the material created by the student. The information is not relevant to the topic and does not contribute to the understanding of ET.
Poor: It is possible to identify original material produced by the student. The information is somewhat relevant to the topic and does not contribute significantly to the understanding of ET.
Partially proficient: It is possible to identify the original material produced by the student. The information is relevant to the topic and makes some contribution to the understanding of ET.
Proficient: The content made and produced by the student is good and easily identified. The information is relevant to the topic and contributes to the understanding of ET.
Exemplary: The content made and produced by the student is excellent and hard work is evident. The information is relevant to the topic and contributes significantly to the understanding of ET.

Writing (weight 10%)
Incomplete: Difficult to understand, many errors in spelling or grammar.
Poor: Many errors, but a reader can understand the main idea.
Partially proficient: Easy to understand, with some errors.
Proficient: Clear, concise and basically well written; still has a few errors.
Exemplary: Clear, concise, well written and edited, with no serious errors.

Layout (weight 10%)
Incomplete: Layout has no structure or organization.
Poor: Text broken into paragraphs and/or sections.
Partially proficient: Uses headings; sections labeled; some formatting.
Proficient: Organized and consistent.
Exemplary: Organized and consistent; good formatting.

Images (weight 5%)
Incomplete: No images, or images that are not relevant to the subject.
Poor: Images unrelated to the page; images recycled from other websites with no reference; images too big/small, poorly cropped or with resolution problems.
Partially proficient: Images recycled from other pages on the Internet with no reference.
Proficient: Images are relevant to the topic; some images are produced by the student. Most images have correct size or resolution.
Exemplary: Images have a strong relation to the topic and enrich it; some images are produced by the student; images have proper size, resolution, colors and editing.

Teamwork (weight 5%)
Incomplete: The OER suggests that there was no teamwork, only sections of information put together.
Poor: There is some evidence that shows teamwork; several format errors or non-uniform differences in color.
Partially proficient: The OER shows organization and teamwork; the format is uniform and the layout tidy.
Proficient: The teamwork was properly done, as there is no evidence that indicates that more than one person designed the OER.
Exemplary: The teamwork was properly done, as there is no evidence that indicates that more than one person designed the OER, and credits to the authors are given.

Citations and References (weight 5%)
Incomplete: No citations are made and no references are mentioned.
Poor: Some references are shown, but it is not possible to identify the sources of information used.
Partially proficient: The information presents some citations, but the references are incomplete.
Proficient: The citations and references are adequate, but the format is inconsistent.
Exemplary: Citations and references are properly made.

OER characteristics (weight 15%)
Incomplete: Not capable of working as an OER, as it contains copyrighted material.
Poor: Several sections are not capable of working as an OER, as they contain copyrighted material.
Partially proficient: Capable of working as an OER, but some material has to be properly referenced.
Proficient: Capable of working as an OER.
Exemplary: Capable of working as an OER and additionally contributes significantly to the material available online.
Thus, the research and analysis of several platforms for building a functional prototype considered the following characteristics: User-Friendly Development Environment. Any member of the team should be able to carry out modifications easily, without much complication or thorough programming knowledge. Resilience for In-Situ Changes Along the Development Process. Since this is an educational prototype, approval and feedback from a leading professor are needed; hence, the platform must allow real-time changes to handling and visualization functionality. Cross-Platform Applicability. The app must be ready to be used on any mobile platform. Based on our due diligence process, we chose a free platform called Pixate (Pixate Inc., 2012), whose prototyping environment is highly friendly, since it barely requires any coded programming for ComboBox widgets and object repositioning on the screen. An attractive characteristic of Pixate is its capacity to add images and convert them into buttons, which allows better user interaction and easy organization. The incorporation of multimedia objects, e.g., .gif files, Flash-based videos and music, is also allowed. Nonetheless, a drawback is that external links are not permitted; in a prototype phase this issue could be irrelevant, though, so effort was focused on monitoring user behavior with the app. Likewise, the incorporation of multimedia objects requires some effort, and the impossibility of adding text or receiving user input limits the resources available to improve app-user interaction. Definition of Thematic Content. Technical contents were developed based on the syllabus established by the university and enriched with information taken from the book "Engineering and Chemical Thermodynamics" [23]. Creative Process for the Design and Organization of the Information. This process was carried out in a collaborative way by the experimental group. Each member's ideas were discussed, and the identification of needs was achieved by considering different learning styles and selecting the contents and materials that the team considered should be present in the app. The conception and development of the app took advantage of each student's individual skills, as well as of the synergy generated by the interaction between students with two different professional approaches (chemical and biotechnology engineering). Priority was given to content readability and ease of navigation through the tool; the design focused on letting the user reach the information with only two navigation actions. Legibility was carefully designed, and core thermodynamic concepts were included with strong theoretical rigor but kept visually attractive with complementary graphics (schemes, tables, images). The creative process also involved the freehand design of a wireframe (a simplified representation of the screen limits), followed by a prototype created with computer-assisted design (Photoshop); finally, the visual design was incorporated into the code created by the developer. Color selection considered the representative colors of Tecnológico de Monterrey and made sure that the palette selection
was sight-friendly. Everything stated above was integrated into the basic elements of the app, as well as into the User Interface elements. Information was organized following the concept of «information architecture» [21], a way of organizing content and functions that considers the relationship between the contents of different screens and the physical screen itself. Subtopics were organized inside general topics, and at the end of each one, links and supporting bibliography were included. Development of the Equilibrium Thermodynamics App. Use of the software architecture turned out to be quite simple, since the only action to be performed by the user was to work with images that carried out the role of interacting with other items in the environment. A horizontal drop-down menu was placed on the side of the screen to give access to the topics in one single movement and could be hidden again on another part of the screen (Fig. 1). When selecting a topic, an image with text and graphic content was displayed, sometimes with original animations, avoiding copyright violations from the use of external material. Each topic was represented by an image that adapts itself to the device screen and can be slid to improve its visualization (Fig. 1c).
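The snippet below is a minimal sketch (not taken from the prototype) of the "information architecture" constraint just described: organizing content as topics and subtopics so that any screen is reachable in at most two navigation actions. The topic and subtopic names are hypothetical placeholders, not the app's actual contents.

```python
# Hypothetical topic/subtopic tree: one tap opens a topic, a second tap opens a subtopic.
APP_CONTENT = {
    "Pure-component properties": ["Vapor pressure", "Antoine equation"],
    "Phase equilibria": ["Raoult's law", "Txy and Pxy diagrams", "Flash calculations"],
    "Activity models": ["Margules", "Van Laar"],
}


def navigation_depth(content: dict) -> int:
    """Number of navigation actions needed to reach the deepest piece of content."""
    return 2 if any(content.values()) else 1


def screens(content: dict) -> list:
    """Flat list of screens the prototype must provide (topic menus plus subtopic views)."""
    return list(content) + [sub for subs in content.values() for sub in subs]


if __name__ == "__main__":
    assert navigation_depth(APP_CONTENT) <= 2, "design goal: two navigation actions"
    print(f"{len(screens(APP_CONTENT))} screens, depth {navigation_depth(APP_CONTENT)}")
```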
Fig. 1. (a) App main screen; (b) app main menu; (c) topic visualization; (d) freehand style images.
As shown in Fig. 1d, the text format included both computer and freehand styles; images were also freehanded and animations were designed using Macromedia Flash (Adobe Systems Software, Ireland Ltd.), by inserting them in multimedia widgets with reproduction controls. The user could clearly visualize the way real-life problems were solved step by step, as solved examples were also included in the app so that students could relate theoretical concepts with real engineering situations. 2.4 Results: Academic Performance of the Group Table 2 shows the performance of students who took part in the design and implementation of OERs, as part of the ET course, compared to a group that followed the traditional teaching approach, without the integration of technology as a teaching tool (labelled as Contrast Group). The contrast group was comprised of 13 men and 14 women, all with similar characteristics to the group under study, and working under the same educational model.
Table 2. Academic performance of the students who participated in the learning experience.

                                          Contrast group          Group IQ2003-3
Term                                      August–December 2015    August–December 2016
Number of students                        27                      23
% of students who accredited the course   85                      100
Final group average                       81                      80
As expected, the integration of technology in the learning experience motivated students to perform better than a group that followed the traditional teaching model based on lectures, which used technology only as audio-visual support material (e.g., videos). However, a deeper insight was obtained when analyzing the performance of the control and experimental groups, detailed in Table 3. The performance over the entire course was reflected in the final grade of both groups; in addition, an extra assignment was designed to evaluate engineering problem-solving skills through a real-world case: the design of a separation process for common solvent mixtures used in the chemical industry. A binary mixture of different solvents was given to each student and, through the analysis of the physical chemical properties of the system along with the proper selection of thermodynamic models, they were asked to simulate the process in a professional simulator, suggest and calculate the equilibrium stages needed to perform the separation, and finally compare their answers with information reported in the literature.

Table 3. Academic performance of the students who participated in the design and development of a mobile app.

                                      Control group           Experimental group
Term                                  August–December 2016    August–December 2016
Number of students                    18                      5
Final grade average                   88                      97
Average grade of a real-world case    94                      100
As shown, we found that the performance of the experimental group was higher than the control group; students involved in the creation of a mobile app as a learning tool of ET showed a final grade average 10% higher than the control group. Also, the development of skills, such as engineering problem solving, of the experimental group was 6% higher than the control group. The app prototype that the students developed was evaluated firstly by the 23 students of the group under study. A second test was conducted by the university’s faculty, Chemical and Biotechnology Engineering professors and finally the head of the Department. In all cases, the users gave positive feedback, as their experience with the app was perceived as successful.
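To give a feel for the kind of calculation behind the real-world separation case described above, the sketch below estimates the relative volatility of a binary mixture from Antoine vapor pressures and then the minimum number of equilibrium stages with the Fenske equation. It is only an illustration of the workflow, not the students' actual assignment; the benzene–toluene pair, the Antoine constants and the purity targets are illustrative assumptions and should be checked against a data bank before use.

```python
from math import log

# Antoine equation: log10(P_sat [mmHg]) = A - B / (C + T [degC])
# Illustrative constants for a benzene-toluene mixture (verify against a data source).
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.79),
    "toluene": (6.95464, 1344.800, 219.48),
}


def p_sat(component: str, t_celsius: float) -> float:
    """Pure-component vapor pressure in mmHg from the Antoine equation."""
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t_celsius))


def relative_volatility(light: str, heavy: str, t_celsius: float) -> float:
    """Ideal (Raoult's law) relative volatility of the light to the heavy component."""
    return p_sat(light, t_celsius) / p_sat(heavy, t_celsius)


def fenske_min_stages(x_dist: float, x_bott: float, alpha: float) -> float:
    """Fenske equation: minimum number of equilibrium stages at total reflux."""
    return log((x_dist / (1 - x_dist)) * ((1 - x_bott) / x_bott)) / log(alpha)


if __name__ == "__main__":
    alpha = relative_volatility("benzene", "toluene", t_celsius=95.0)
    n_min = fenske_min_stages(x_dist=0.95, x_bott=0.05, alpha=alpha)
    print(f"alpha = {alpha:.2f}, minimum equilibrium stages = {n_min:.1f}")
```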
3 Conclusions The results of this study show quantitatively that implementing technology-based learning experiences as a learning tool has a positive impact on the educational performance and skills development of students. However, there is a deeper insight gained from the experience: the type of technology selected achieves different results, promotes the development of distinct skills and leads to the construction of knowledge at different levels of mastery. As explained in the introduction, using technology in higher education is not about the technology itself, or just for the sake of motivating the use of technology within the classroom; it is about creating significant learning experiences and new learning environments that can help build knowledge and master skills to prepare graduates for the future. The development of an OER, regardless of the modality selected, required the maturation of research skills, especially when conducting a literature review and when selecting and organizing technical data. In terms of the technological platform selected, it was observed that the level of mastery achieved by students who worked on a website, regarding the organization of information, depth of analysis and contextualization of the mathematical models in real-life situations, was not as high as that achieved by the group that developed the mobile app. This is possibly because students working on a website conceived the OER as a mere production of electronic notes collected from already existing online material, and the creation of original material was very limited; they may have been influenced by popular educational websites. Students in this group were not able to fully understand abstract concepts, and the way they presented the information showed no evidence of transferring theoretical knowledge into practical problem-solving abilities. Furthermore, they were not fully aware of the ethical use of copyrighted material. In contrast, the performance of the experimental group, the depth of comprehension of abstract concepts, and the ability to organize, correlate, rank and present the information were considerably better than those of the students working with websites. This is possibly attributed to the level of mastery required to build the app, to work within the technological restrictions of the platform, and to think about strategies for presenting the information in a user-friendly environment. Students working in this group developed creative thinking in terms of the visual attractiveness of the contents, easy navigation to find related information, and the organization of the menus in the app. Finally, it was possible to identify the need to create new evaluation concepts and tools when working with technological environments. The results obtained in this study have generated innovative didactic technological material that enables mobile learning experiences for other students and showed that it is feasible to implement this type of resource within the new educational model Tec 21. Acknowledgements. The authors acknowledge the financial and technical support of Writing Lab, TecLabs, Tecnologico de Monterrey, Mexico, in the production of this work. We also thank Alejandra Carvajal Treviño, Christian Walter Deisenroth Martínez, Emilio García Valdés and Denise Montserrat Piliado Hernández for their keen participation and valuable contribution to this project.
References 1. Gleason, N.W.: Singapore’s Higher Education Systems in the Era of the Fourth Industrial Revolution: Preparing Lifelong Learners, pp. 145–149. Palgrave Macmillan, Singapore (2018) 2. Baygin, M., Yetis, H., Karakose, M., Akin, E.: An effect analysis of industry 4.0 to higher education. In: 2016 15th International Conference on Information Technology Based Higher Education and Training (ITHET), pp. 1–4 (2016) 3. Rojas-Aguirre, Y., Almanza-Arjona, Y.C., Alejandro-Cruz, J.S., Meza-Puente, L., Covarrubias-Sánchez, L.: Challenges and Opportunities in the Development of Data Science Skills in Undergraduate Materials Education—A Perspective from Mexico (2019). http:// www.mrs.org/fall2019/symposium-sessions/symposium-sessions-detail?code=BI01 4. Almanza-Arjona, Y.C., Vergara-Porras, B., Garcia-Rivera, B.E., Venegas-Andraca, S.E.: Research-Based Approach to Undergraduate Chemical Engineering Education (2019). http:// dx.doi.org/10.1109/educon.2019.8725195 5. Ciolacu, M., Tehrani, A.F., Beer, R., Popp, H.: Education 4.0—fostering student’s performance with machine learning methods. In: 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), pp. 438–443 (2017) 6. Ciolacu, M., Svasta, P.M., Berg, W., Popp, H.: Education 4.0 for tall thin engineer in a data driven society. In: 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), pp. 432–437 (2017) 7. Guijosa, C.: Tec de Monterrey among the top 5 universities in Latin America—Observatory of Educational Innovation. https://observatory.tec.mx/edu-news/tec-de-monterrey-among-thetop-5-universities-in-latin-america 8. Resnick, H.: Introduction to Technology in Education (2002). http://dx.doi.org/10.1300/j01 7v20n03_01 9. Marín, V.I., Duart, J.M., Galvis, A.H., Zawacki-Richter, O.: Thematic analysis of the international journal of educational Technology in Higher Education (ETHE) between 2004 and 2017 (2018). https://doi.org/10.1186/s41239-018-0089-y 10. Pachler, N., Bachmair, B., Cook, J.: Charting the conceptual space. In: Kress, G. (ed.) Mobile Learning: Structures, Agency, Practices, pp. 3–27. Springer (2009) 11. González Arias, A.: Calor y trabajo en la enseñanza de la termodinámica. Revista Cubana de Física. 20 (2003) 12. Durán-Aponte, E., Durán-García, M.: Aprendizaje cooperativo en la enseñanza de termodinámica: estilos de aprendizaje y atribuciones causales. J. Learn. Styles 6 (2013) 13. Gerardo, C.M.M., Martínez, I.O., López, J.L.G.: The efficiency of cooperative learning in teaching chemistry at the high school level. RIDE Revista Iberoamericana para la Investigación y el Desarrollo Educativo. 6, 309–318 (2015) 14. Pérez-Poch, A.: Las técnicas de Aprendizaje Cooperativo mejoran y consolidan la calidad docente en la asignatura “Telemática” de EUETIB. Actas del JENUI 6 (2006) 15. Morantes, P., Suárez, R.R.: Conceptualización del trabajo grupal en la enseñanza de las ciencias. Lat. Am. J. Phys. Educ. 3, 361 (2009) 16. Daza Pérez, E.P., Gras-Marti, A., Gras-Velázquez, À.: Experiencias de enseñanza de la química con el apoyo de las TIC. Química (2009) 17. Jiménez-Valverde, G., Llitjós, A.: Cooperación en entornos telemáticos y la enseñanza de la química. Revista Eureka sobre enseñanza y divulgación de las ciencias. 3, 115–133 (2006) 18. Gavilán, I., Cano, S., Aburto, S.: Diseño de herramientas didácticas basado en competencias para la enseñanza de la química ambiental. Educación Química. 24, 298–308 (2013) 19. 
Valverde, G.J., Viza, A.L.: Una revisión histórica de los recursos didácticos audiovisuales e informáticos en la enseñanza de la química. Revista Electrónica de Enseñanza de las Ciencias. 5 (2006)
20. OER Commons: Open educational resources. https://www.oercommons.org/about 21. DeWitt, T.: TED Talks: Hey science teachers–make it fun (2013) 22. Cuello, J., Vittone, J.: Las Aplicaciones. In: Cuello, J., Vittone, J. (eds.) Diseñando apps para móviles, pp. 12–25. José Vittone—Javier Cuello (2013) 23. Koretsky, M.D.: Engineering and Chemical Thermodynamics. Wiley, Hoboken (2004)
Beyond Things: A Systematic Study of Internet of Everything K. Sravanthi Reddy1 , Kavita Agarwal2 , and Amit Kumar Tyagi3(B) 1 Department of Computer Science and Engineering, Malla Reddy Engineering College,
Hyderabad, India [email protected] 2 Department of Computer Science and Engineering, Lingaya’s Vidyapeeth, Faridabad, Haryana, India [email protected] 3 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai Campus, Chennai 600127, Tamilnadu, India [email protected]
Abstract. The increasing needs of humans have a large impact on the development of technology. Compared with 1950, we are far ahead of those scenarios today: humanity has produced several great innovations that make everyday life easier, and among these developments, device sensing is a major one. Sensing technologies are now present in almost every application, and the best example is the Internet of Things, in which devices communicate with each other and work efficiently using sensing functions. Further, several upcoming technologies such as the Internet of Everything and the Internet of Nano-Things are also emerging. The Internet of Things (IoT) is still in a developing phase, so the Internet of Everything is even further from maturity. Today only about 0.6% of devices are connected to the internet (out of a total of roughly 50 billion internet-connectable devices), but this number will increase in the near future, with around 25 billion devices expected to be connected by 2025. The increasing integration of devices with the internet creates several challenges such as security, privacy and huge volumes of data. Several researchers have made serious attempts with the IoT, but much less has been done for the IoE; in particular, no article gathers the research gaps, issues and challenges of the IoE in a single place. Hence, this article provides a systematic study of the significant current and future challenges of the IoE, including the possible future expansion of its applications. Keywords: Internet of Things · Internet of Objects · Internet Connected Things · Internet of Everything
1 Introduction Today, several emerging technologies such as the Internet of Things (IoT), the Internet of Nano-Things (IoNT) and the Internet of Everything (IoE) are being adopted in various sectors and applications to provide convenient and reliable communication and services to users. The Internet of
Things (IoT) is the extension of the Internet that provides full-time connectivity to physical devices and everyday objects; for example, electronic items are increasingly embedded with internet connectivity for monitoring environments and activities and for controlling these smart devices remotely. The Internet of Nano-Things (IoNT) concept has also evolved in the past decade. IoT variants cover many application areas, such as Medical IoT, Human IoT, Industrial IoT and Consumer IoT. Next, the Internet of Everything (IoE) is the integration of many Internet of Things and machine-to-machine systems to perform tasks (as in the industrial Internet or Industry 4.0) in an automated and smart way. The widely accepted definition of the IoE is: "the Internet of Everything (IoE) is bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before-turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries [1]". Similarly, the Information Technology (IT) company Cisco claims that the IoE brings people, systems, data and things together to make networked communications more important and meaningful than ever before, turning knowledge into behavior that creates new technologies, richer interactions, and unprecedented economic opportunities for enterprises, individuals, and countries. Hence, more focus on the study of the IoE (and the IoT) is helpful in identifying the dangers of criminal and other malicious activities, as well as hardware and software errors, which produce serious concerns and challenges that can be considered for future research. Finally, after evaluating the current trends of advancement in the fields of IoT, IoNT and IoE, we concluded that many challenges and complex tasks in the IoE environment/ecosystem still need to be identified and addressed by IoT/smart devices and by the research community. 1.1 Internet of Everything: Connecting the Unconnected Things and People The Internet of Everything (IoE) is a network of networks in which billions of devices are connected and share information with each other (to perform a task or solve a problem). These devices generate and share a large amount of information in order to provide efficient services to users (learning by themselves). This collected information creates many opportunities, but also new risks. In general, the IoE comprises these concepts as well as connecting people, objects and systems. The integration of IoT devices (creating the IoE ecosystem) provides automated decision-making for industries, machines and environments, as in Industry 4.0 (I4.0, or simply I4). I4 uses the industrial internet as a commercial, not a consumer or individual, application, i.e., it drives the digital transformation of manufacturing. By connecting devices together via the internet, the Internet offers solutions to most of the challenges we face today. Nowadays, the Internet of Things is used in many popular sectors such as smart healthcare, defence and smart cyber infrastructure. The Internet of Everything (IoE) has the ability to address the remaining challenges and problems raised by devices and humans. For example, the internet is already solving several problems by improving education through the democratization of information, enabling economic growth through electronic commerce, and enabling greater collaboration or improving business possibilities by reaching each and every consumer in a region.
As another example, the internet is changing the face of the aerospace industry through the use of cloud computing for storage purposes (storing data virtually so that it can be accessed anytime, anywhere). But, despite the transformative capabilities of the IoT (in sectors such as
science, medicine, communications, and other disciplines), we still face several critical problems such as hunger, access to potable water, and various diseases, for which solutions are still being sought. • The human population has almost tripled, while the overall water supply has been reduced by industrial pollution, unsustainable agriculture, and poor civic planning. • Rising energy costs are causing instability between countries, growing business expenses, and adding to consumers' financial burden. Moreover, due to the increasing population around the world, we are facing rapid climate change, which seriously threatens our way of life (by impacting the weather, agriculture, etc.) through events such as tsunamis and floods. To help solve such critical problems, the authors in [2, 3] introduced the concept of the IoE, i.e., using smart devices in every possible application and thereby providing intelligence everywhere. 1.2 Internet of Everything Ecosystem: Role of Big Data and Cloud In the section above, we discussed that the IoE, built on IoT devices, is the future of the coming decades. An IoE ecosystem consists of billions of sensors and millions of apps that gather information on energy consumption, plant growth, blood pressure, and much more, and it uses many smart devices together to change the industrial environment. Among the technologies that support the IoE, cloud computing and big data are two of the most important innovations of the decade, and these two services help other technologies to emerge. The cloud can store and secure generated data so that it can be analyzed and turned into actionable information (i.e., data analytics). Cloud computing provides users with a computing environment, offering dynamically distributed and virtualized services over the Internet. It is increasingly being used in many sectors, bringing many benefits to societies and industries; cloud systems allow data to be stored remotely and accessed via secure login technologies. For instance, healthcare providers can easily scale cloud storage to handle patient data, and patients can also access their medical records on mobile apps. Moreover, big data analytics is a key component for providing effective decisions and delivering predictive insights to other sectors and applications. Here the term "big data" refers to ever-larger and more diverse data sets that are updated in real time. The data generated by connected devices comes in various forms: unstructured (e-mails, videos, audio), structured (any data that sits in a fixed field within a record or file), semi-structured, and numerical data from traditional databases [4]. The main characteristics of big data [5] are volume, variety, and velocity. Big data is crucial to the IoE because it enables the efficient and productive handling of the large amounts of data generated by IoT, industrial internet, and Machine-to-Machine (M2M) technologies. The IoE ecosystem gathers more knowledge about this planet than we have ever accessed before, and this interconnectivity produces large quantities and varieties of data from many devices, objects, people and systems at
the highest volume and speed. In summary, the IoE ecosystem (through all of its interconnected smart devices) needs to be securely integrated, function as a whole, and interact seamlessly and efficiently with all connected systems and networks. Therefore, the data generated from these multiple interactions must be properly secured, analyzed, integrated and made actionable with modern tools. 1.3 Role of Internet of Things in Internet of Everything Ecosystem As discussed above, Internet of Things devices are smart objects used to create smart environments and to perform tasks or solve problems efficiently. The working infrastructure of IoT devices is created by the combination of sensors, actuators and the internet; when these devices are connected through the Internet, they create a smart environment, which is implemented through Machine-to-Machine (M2M) communication. Note that, to date, nearly 0.6 billion devices have been connected globally, and a total of 75 billion devices are expected to be connected by 2025 [6]. As discussed above and in [11], the integration of such devices generates a lot of data at a huge rate, which is difficult to handle yet very important for solving future, real-world problems. We therefore require innovative solutions, efficient tools and algorithms to solve the problems raised in these applications, and we need to clearly define the role of the internet of things, smart objects, sensors, etc., in the IoE ecosystem for the next generation. Internet of Things: The Internet of Things (IoT) is a network of physical objects that are linked and whose data is accessed and communicated via the Internet. Such objects contain embedded software to communicate their internal or external states; in other words, they can sense and interact, deciding how, where and by whom decisions are made (the Nest thermostat is a typical example). The IoT consists of networks of sensors and actuators connected to objects and devices, providing data that can be processed in order to make valuable decisions (such as predictions or projections) for the future, with automated actions initiated. The data also generates vital intelligence for planning, management, policy and decision-making. When these smart devices are used everywhere, intelligence (the generation of useful information) is initiated everywhere; this scenario or ecosystem is called the Internet of Everything. The IoE reinvents current industries at three levels, i.e., the business system, the business model, and the business moment. The IoT will therefore produce tens of millions of new devices and sensors, all of which will generate real-time data. Data is the new oil of the 21st century, and it creates value for industries [7]. In the near future, we will require big data and storage technologies to collect, analyse and store large amounts of information. Each component shown in Fig. 1 is discussed in detail as follows: • People: In the IoE, people can connect to the Internet in many ways. Many people today connect to the Internet through their devices (e.g., PCs, laptops, TVs, smartphones) and social networks (e.g., Facebook, Twitter, LinkedIn). We will be linking people in more appropriate and useful ways, as people are central and necessary to the IoE. For example, in the future, people will be able to swallow a pill that senses and reports the condition of their digestive tract to a doctor over a secure Internet connection.
Therefore, sensors mounted on the skin or sewn into
Fig. 1. Components of Internet of Everything [8]
clothing will provide data on the vital signs of a human. Individuals themselves will therefore become nodes on the Internet, continuously emitting both static data and an activity stream [9]. • Data: IoT devices collect data and stream it to a central source over the Internet, where it is analyzed and processed. As the capabilities of devices connected to the Internet continue to advance and improve, they will become more knowledgeable by transforming data into more useful information. Instead of merely collecting raw data, Internet Connected Things will soon return higher-level information to machines, computers, and individuals for further evaluation and decision-making. This transition from data to information is important for the IoE because it will allow us to make quicker, smarter decisions and to manage our environment more successfully. • Things: This group is constructed from physical things such as sensors, consumer devices, and enterprise assets that are connected to both the Internet and each other. In the IoE, these things will sense more data, become context-aware, and provide more experiential information to help people and machines make more relevant and valuable decisions. For example, "things" in the IoE include smart sensors built into structures like bridges, as well as disposable sensors that can be placed on everyday items like milk cartons and dustbins. • Process: Process plays an important role in how each of the other entities (people, data, and things) interacts with the rest to deliver value to the IoE world. With the correct process, connections become meaningful and add value because the right information is provided in the appropriate way to the right person at the right time. This paper evaluates the benefits and risks caused by the application of these components, considering the governance models and industry practices emerging in support of the IoE. In the near future, the IoE will provide relevant and valuable connections to machines and devices to make human life easier and work more efficient. The IoE has to work efficiently with machines, and each machine must communicate efficiently with other systems and machines (M2M
communication). Cisco [10] predicts that M2M (the non-human-mediated application of the IoT), the industrial Internet and the IoT are all components of the Internet of Everything (IoE), and that there will be $14.4 trillion (USD) worth of "value at stake" over the next decade in the IoE economy, driven by "connecting the unconnected", which is a good sign for a nation's growth in terms of its contribution to GDP. This shows the value of the IoE for many sectors, and for this century. The remainder of the paper is organised as follows: Sect. 2 discusses work related to the internet of things and the internet of everything, including components such as cloud, big data and sensing devices. Section 3 discusses the motivation behind this work. Section 4 discusses the importance and scope of the IoE. Section 5 discusses several relevant and valuable connections of the Internet of Everything in the near future. Section 6 discusses several issues, opportunities and challenges in the IoE in detail. Section 7 presents an essential part of this article: an open discussion of the use of the IoE over the IoT in the near future, considering all possible advantages and disadvantages. Finally, this work is concluded in Sect. 8 with some future enhancements, including research gaps. Note that the terms "Internet of Things", "Internet of Objects", "Smart devices", "Smart things" and "Internet Connected Things" will be used interchangeably throughout this work.
2 Related Work As discussed in Sect. 1, the four pillars of the Internet of Everything are people, data, process and things. The Internet of Things (IoT) is the networked connection of physical objects, whereas the Internet of Everything means "intelligence everywhere". The internet era began in the late 1960s with the introduction of the Advanced Research Projects Agency Network (ARPANET). Today the Internet has reached every possible sector and application, increasing productivity and making communication easier. Mobile and smart devices are among the best innovations of the past century: with such devices, people can communicate with each other in minimal time and without travelling. These smart devices are used in many sectors to work effectively and to increase the productivity of an industry or organisation. For example, many smart devices or internet-connected objects are used in the satellite and aerospace sectors; satellites are used to watch over national borders and to protect humanity against natural hazards. They provide a live view of the Earth from space, which is very useful for moving people from a densely affected area to other locations so that losses are minimised. Such solutions can save millions of lives even in the face of repeated natural hazards. For example, Japan experiences many earthquakes every year, so the IoE can be used in smart homes to alert users in critical situations and protect them. In summary, the pillars of the Internet of Everything (IoE) are: • People: people getting connected in more valuable and appropriate ways • Data: conversion of information into intelligence in order to make better decisions • Process: providing the right information at the right time to the right person (or machine)
• Things: intelligent decision making by physical devices and objects connected to the Internet and to each other; also called the Internet of Things (IoT). Hence, this section has discussed work related to the internet of everything and its related components and terms, such as cloud, big data and sensing devices, as well as several attempts made in the previous decade to advance the Internet of Everything. The next section discusses the motivation behind this work on the internet of everything.
3 Motivation Nowadays the IoT is used in almost every sector and application to increase productivity and fulfil the needs of society. The IoT is the infrastructure that allows all types of devices and machines to communicate with each other, for example, cyber infrastructure, medical cyber-physical systems, etc. It links physical systems around the world such as power meters, cars, containers, pipes, wind turbines, point-of-sale devices and personal accessories. Today's IoT technologies are used in every possible field or industry and provide many possibilities for sectors such as fleet management, energy management, connected vehicles, health monitoring, and cargo management. The Internet of Everything is the enhanced version of the Internet of Things (or internet-connected things) that places intelligence everywhere by using smart objects in real-world applications. A daily-life example is discussed in detail in [11]. As another real-world example, such devices are very useful for decreasing the total number of road accidents by continuously sensing nearby objects and responding to users (e.g., the autonomous car). In the near future, the IoE will be everywhere, and each object will have intelligence and be able to respond immediately with the help of IoT devices. On the other hand, using such devices raises many serious concerns, such as the security of our personal information, the privacy of our identity and location, trust in devices and smart things, and the lack of standardization of tools. Serious vulnerabilities are exposed by these smart devices, which is a major issue because, via hacking or other threats, an attacker can try to steal users' information and use it for financial gain. Finally, we should not forget that in the future we will be able to use IoT and IoE tools for sustainable development and to take urgent action to combat climate change and its effects on nature. Hence, this section has discussed the motivation behind writing about this emerging area: we found that the IoE is a necessity for the next decade and will be implemented in every system and device to make people's lives easier. The next section discusses the importance of the IoE over the IoT in the 21st century in detail.
4 Importance of Internet of Everything (Over Internet of Things) in the 21st Century In general, the terms IoE and IoT are used interchangeably in this work (as in general use), but IoT is not a synonym for IoE; it is an essential component of IoE. For
Beyond Things: A Systematic Study of Internet of Everything
233
example, IoE includes people, artifacts, and system interactions, of which IoT is one component. Internet-connected things create the environment for IoE and enable communication paradigms such as Machine-to-Machine (M2M) communication, the Industrial Internet, Industry 4.0, etc., for many industries. These smart technologies are being used in several critical applications like healthcare, defence, and aerospace. They therefore have an essential role in making effective decisions and predicting accurate results for the respective application, saving the maximum number of human lives around the world. Using IoE in different industries increases productivity, reduces production cost and saves the time needed to complete a task. Hence, keeping the importance of IoE in mind, we explain several essential terms here: Machine to Machine (M2M) Communication Today and in the Future: Machine-to-Machine (M2M) communication is achieved by integrating many devices (called IoT or smart devices). The key components of an M2M system include sensors, such as radio frequency identification (RFID) tags, a Wi-Fi or cellular communications connection, and (programmed) autonomous computing software used to communicate between devices and make effective decisions without human assistance, i.e. connected only via the internet or other networks. Even today, M2M is an important aspect of warehouse management, remote control, automation, traffic control, logistics, supply chain management, fleet management and telemedicine. M2M communication is also important in several business models, which include video-based security, in-vehicle information services, assisted living and mobile health solutions, energy solutions, manufacturing solutions, and the creation of smart cities. Several industries and organizations can generate revenue by using M2M technology, or by providing new choices and services to customers. For example, operational costs in manufacturing, automation and logistics are decreasing day by day, while M2M communication is increasing in applications and sectors like healthcare, automotive, and consumer electronics. M2M development is also allowing businesses to focus on providing end-to-end global solutions. Transportation companies are saving millions by reducing fuel consumption using data captured, transmitted, and analysed in real time with efficient tools. Industrial Internet: The Industrial Internet provides enhanced visibility and deeper insight into equipment and resource quality. Asset performance management can answer which equipment is most relevant, how it should be maintained and how unexpected failures can be prevented. By using data and analytics in new ways to drive efficiency gains, increase performance and achieve overall operational excellence, the Industrial Internet improves the way people and machines communicate, providing valuable new insights through the integration of machines with powerful analytics. A popular feature of the Industrial Internet is that it builds knowledge above the level of individual machines. Internet-connected smart devices (IoT, Internet of Services and cyber-physical systems) can automatically improve performance, security, reliability, and energy efficiency by collecting data, interpreting it, taking appropriate action and transmitting the information to the respective user.
Industrial Internet solutions enable sustainable development through enhanced resource efficiency, resulting in savings in energy and water, increased performance, and higher output rates
234
K. S. Reddy et al.
of industrial machines. Simply put, through the internet convergence of smart devices, M2M interaction maximizes the use of all industrial tools. For instance, energy-efficiency initiatives in larger cities use street lighting with Wireless Control Systems (WCS), which also helps relieve traffic congestion, to enable remote operation and monitoring of lighting fixtures through a web-enabled central management system. This not only saves energy and money, but also enables controllers to switch off or dim streetlights when required, providing unique versatility and utilization of resources. Street lights can also sense vibrations, which can help identify structural integrity problems when they are placed on bridges. Through similar examples, IoE can be very useful and provide a different experience to users and citizens. Industry 4.0: Near-future technology moves beyond a centralised structure in which machines are simply told "what to do". For example, we can connect embedded-system production technologies to other business industries (for smart production processes), which creates a new technological age, transforms industry (and business models), and operates as a smart factory. Automation (enhanced by M2M interaction) would mean advanced robotics that make automation more efficient and cheaper. This interaction and automation phase is enabled by technologies such as sensors and actuators, wireless networks, high-performance cloud computing and big data analytics. Industry 4.0 (which emerged in 2011) is this digital industrial transition, the industry's new revolution. Industry 4.0 innovation, for example, helps farmers in developing countries keep pace with increasing demand for milk products, which has been a factor in improving quality of life and boosting rural economic growth. Automation also calculates the percentage of milk, cream fat and non-fat solids, while queue management ensures prompt refilling of silos without delay to maintain continuity. Note that M2M communication and automation help ensure sustainable consumption and production patterns in an organisation. The next phase of digitization of the Industry 4.0/manufacturing sector includes four trends: • Increased data volumes, computational power and connectivity, particularly new wide-ranging low-power networks, • The growth of skills in analytics and business intelligence, • New forms of communication between human and machine, such as touch interfaces and augmented reality systems, and, • Improvements in transferring digital instructions to the physical world, such as advanced robotics and 3-D printing. Note that digitalisation of industry here means the ability to process real-time data generated by machines using efficient analytics tools. In the production environment, for example, cyber-physical systems (CPSs) include smart machines, storage systems, and production facilities that exchange information autonomously, trigger actions, and independently control each other. This improves the manufacturing, engineering, material use, supply chain and life-cycle management processes involved in industrial production. On the other hand, data generated from Global Positioning Systems (GPS) and agricultural sensors (combined with big data analytics) will allow farmers to improve their crop productivity through proper water utilization in the field. Farmers can also benefit from reliable
Beyond Things: A Systematic Study of Internet of Everything
235
guidance on the seeds to be planted, the time to harvest, and the estimated yield using data and analysis. In the near future, crops and weather patterns can be monitored down to specific regions to issue early warnings of drought or to protect crops from extreme natural disasters. Such efforts may help governments take preventive measures in risk areas. Industry 4.0 therefore offers new tools for smarter energy consumption, greater storage of information in products and pallets (so-called smart lots), and optimization of real-time yields. The Industrial Internet and Industry 4.0 can be used to improve health, resource efficiency and sustainable development in the near future. IoE Today: By describing the current and evolving elements of IoE (IoT, M2M, the Industrial Internet, Industry 4.0) and the environment they work in (i.e., cloud and big data analytics), we discuss several benefits and risks for the respective applications. Today the internet has changed various applications and sectors in terms of efficiency. What's next, then? How is the Internet going to evolve and keep changing and improving the world? The developments supporting IoE today are discussed here. Transforming the World's Largest Cities by Providing Smart Solutions to the Transportation Sector: Smart screens along highways and roads can be accessed via Wi-Fi on nearby smartphones, tablets and laptops. The aims of smart screens in cities are to: • Notify, by connecting people immediately with information relevant to their immediate proximity • Secure, by providing local police and fire departments with a citywide network of sensing, communications and response capable of directing required staff and assets precisely where and when necessary • Revitalize, by increasing levels of commerce, investment, and tourism We need to create innovative solutions to the major environmental, social and health challenges facing cities [12]. Smart traffic can also solve several critical problems like traffic management. Rathore et al. [13] explained the application of IoT/IoE devices in providing real-time data to help urban drivers find parking more quickly and efficiently, avoiding unnecessary driving. Also, logistics companies are able to use similar initiatives to obtain traffic footprints of urban centres to help define cost- and time-efficient routes for delivery vehicles. Resource Efficiency: The key area where IoT can bring significant benefits is energy management. The control of water is a good example. Usually, large-scale water systems can lose around 20% of water to leakage before it reaches the end customer. An IoT-enabled water system would bring a whole new level of efficiency to water consumption around the world. Better optimization of capacity and demand, better management of the network and leakage, and a lower unbilled volume of water are some advantages of resource efficiency. The new reality has many connected devices rapidly improving computing power and economies of scope and scale (along with increasing use of cloud computing and big data
236
K. S. Reddy et al.
analytics). This technology transition provides multiple opportunities, not seen before, for both the public and private sectors to develop new technologies, enhance productivity and efficiency, improve real-time decision-making, solve issues critical to society, and develop new and innovative user experiences. IoE covers a wide range of elements, including M2M and the Industrial Internet. This chapter addresses the nature and significance of the IoE age and the near future of different applications. The next section addresses important and useful connections of IoE with other smart devices and objects in the near future.
5 Relevant and Valuable Connections of Internet of Everything to Other Applications in Near Future IoE will impact individuals, businesses, and countries in various ways; by bringing people, processes, data, and things together, it will bring several benefits to humanity. Each term can be described as follows: • Individuals: People experience the world through their senses (hearing, touch, sight, taste and smell). IoE becomes an exponential proxy in this context for sensing, understanding and managing our world. Something that was silent now has a voice with IoE. • Businesses: Success in business is about making a profit. IoE should help businesses achieve this goal by creating new automation and improvement opportunities. • Countries: While there are many forms of government, accountability is essential for countries to provide their people with services. If properly applied to protect confidentiality, safety and security, IoE would allow all levels of government to increase transparency in order to benefit everyone. For example, billions of devices help the manufacturing sector by tracking materials efficiently. For healthcare, smart slippers and other wearable devices for the elderly include sensors that detect falls and different medical conditions. If something is wrong, the system will alert a doctor via email or text message, helping to avoid a fall and a costly ride to the emergency room. Another example is the installation of sensors in cars offering a pay-as-you-drive policy that ties the insurance premium to the risk profile of the driver. This will lead to increased safety, protection and avoidance of losses in the insurance industry. IoE will also facilitate new business models like usage-based insurance, calculated on the basis of real-time driving data. IoE will also improve its ecosystem capabilities and support sustainable development, as well as expand applications in areas such as healthcare, elderly care, medical research, urban planning, logistics, environmental protection, resource management, education, strategic planning and effectiveness assessment across all disciplines [12]. The applications that may support IoE are discussed here with some examples. IoE Tomorrow: Several decisions need to be made for IoE in the coming decade. Some changes expected in the near future are included here:
Beyond Things: A Systematic Study of Internet of Everything
237
Conquering Climate Change: Using IoE, we can make the best use of our limited resources by enhancing how we think about, perceive, and even control our climate. As billions of sensors are connected around the world (and in the atmosphere), we are going to learn the "heartbeat" of our world. Indeed, we will know when our planet is healthy or ill. With this intimate understanding, we can start to address some of our most pressing problems, including hunger and the availability of drinking water. • Hunger: Farmers will be able to plant crops that have the greatest chance of success by understanding and predicting long-term weather patterns. And, once the fields are harvested, more efficient (and therefore less expensive) transport systems will enable food to be distributed and delivered from places of abundance to places of scarcity. • Drinkable Water: Although IoE may not be able to create water where it is most needed, it will be able to address many of the issues that limit our clean water supply, such as industrial waste, wasteful farming, and poor urban planning. For example, when a leak occurs, smart sensors installed throughout a city's water system can detect it and automatically divert water to avoid unnecessary waste. The same sensors would warn the service workers so that, as long as resources are available, the problem can be solved. In an IoT-based cloud infrastructure, all types of devices and machines communicate with each other to solve complex or difficult problems and tasks (such as in healthcare). New heart-monitoring devices can be worn at home for extended periods of time, giving physicians much more visibility into the function of the heart (including times and activities). Such tests in a hospital are very costly, so in the near future efficient home-based medical care using smart devices will be an alternative option for patients. Different systems move into households, allowing remote and continuous monitoring of essential information from patients. Such medical information is wirelessly transmitted to a regional monitoring center (e.g., from a phone to a router) in the patient's home, which then sends the information over the broadband network and forwards it to the cloud, where the sensed data is constantly monitored to track a patient's condition and alert a health care provider to any problems. The telehealth field [14] thus holds the potential to expand healthcare practitioners' reach to rural, underserved and high-risk areas. IoT improves the quality of life and integration of elderly people, improving safety and lifestyle (supported by home sensors). In summary, IoT technology can collect, analyze and automate appropriate responses and actions to real-time data from sensors and other devices in homes or other properties in a secure manner. In the near future, this technology can identify emergencies requiring urgent care and send alerts, by identifying, tracking and monitoring a patient's health (e.g., for patients with mental disorders). Similarly, in case of an accident, IoT devices can send alert messages to nearby hospitals and help save many lives. The network can also host live video conferences between doctors and patients, and share pictures and medical records (or reviews, etc.). In another example, traffic software can use virtual real-time maps and autonomous driving to help solve simple real-world problems such as congestion, a serious issue that can be addressed by deploying IoT devices in an IoE environment.
Hence, this section discusses several relevant and useful interconnections with other
238
K. S. Reddy et al.
devices/systems/objects in the near future (to work efficiently). The next section discusses several issues, challenges and essential opportunities for future researchers.
6 Issues, Opportunities and Challenges in Internet of Everything in Near Future As discussed above, Machine-to-Machine (M2M) communication, the Industrial Internet and IoT are all components of the Internet of Everything (IoE), and it is predicted that IoE will be used in every sector in the next decade. Some popular issues and challenges in IoE are: • Identification of each device, i.e., providing an address to each device, is a challenging issue (for example, for finding the attacker when an attack happens). IPv6 must become a reality as the number of connections moves from billions to trillions. Also, efficient network protocols, storage mechanisms, and analytics processes and tools pose several challenges for IoE. • Preserving privacy in IoE is an essential issue. A privacy problem arises here because of the often imperceptible collection of information. The environment (both indoors and outdoors), for example, becomes more computer-aware with sensors that are not managed by any person and for which data collection or use is not apparent. • IoE also raises security challenges along with confidentiality. Protection of the computing systems, and the potential to break into the system or the data flow that comes from it, are serious issues. Security and privacy are serious concerns for health devices and medical care applications [19] or those involved in critical systems. • Finding energy sources to power the huge number of miniature (even microscopic) devices is another problem. In summary, we note that privacy, security, trust, addressing, etc., are the main challenges and issues in IoE. Suppose IoE is part of a larger system and used by many users every day. For example, a self-monitoring, analysis and reporting or 'smart' house may include a number of enabled devices, smart appliances, outlets and cables, all of which can be controlled through a centralized house system. In such applications, leaking of personal information by internet-of-things devices, and the security of IoT against many popular threats, is a serious issue. For example, in the Hollywood movie "Live Free or Die Hard", the villain tries to take control of every system in a country and use all devices according to his plans. In another movie, "I.T." (released in 2016), an attacker breaches and takes control of one man's house and uses the collected information to blackmail him against his wishes. Hence, the main issues in IoE over the next 10 years will be security, privacy and reliability, which call for open social and political discussions. However, note that government plays an important role in embracing every emerging technology by eliminating policy obstacles, reducing excessive costs, and mitigating unintended consequences where they are critical factors. Government's role is to promote and encourage the use of emerging technologies in a number of ways, and acting as an early adopter will help create trust and confidence in new technologies. Overall, a policy issue of serious concern is providing sufficient rules and regulations to protect personal
Beyond Things: A Systematic Study of Internet of Everything
239
data from attackers and to provide privacy assurances. Today's industries and organizations are data driven, and their data contain information about the preferences, networks, habits and behaviors of individuals (e.g., from mobile apps). The concern depends on the nature of the information being collected and used. For example, de-identified data can be used in healthcare for the good of all people for certain research purposes and for big data analysis. Today, protecting patient privacy and protecting sensitive information is a critical issue. A successful implementation strategy is always necessary for IoE to overcome this serious privacy concern. Privacy is preserved with rules, regulations and restrictions. Nonetheless, overly strict constraints can block the successful implementation of innovative technologies, causing confusion and often legal ambiguity. To achieve a successful IoE ecosystem (or system of systems), we need to provide solutions like: • Globally recognized, market-driven, consensus-based standards, which can accelerate adoption, drive competition, and enable new technologies to be introduced cost-effectively, • Protecting the sensitivity and confidentiality of data and improving the security of devices, • Distributed edge-system and data-center analytics solutions, • The capacity to categorize and manage data as public or private (i.e., providing robust and appropriate data protection involving trusted environments). Therefore, in order to overcome these obstacles, government agencies, standards bodies, companies, and even individuals will need to come together in a spirit of cooperation. By integrating people, processes, data, and things (IoE's pillars), the internet allows us to do something remarkable and seek innovative solutions to the problems and challenges of individuals, companies, and countries. Hence, this section has discussed several issues and challenges and has listed opportunities in the Internet of Everything for the next decade. The next section provides an open discussion comparing the two emerging technologies from the perspective of users and industry.
7 Argument Between Internet of Things and Internet of Everything In the above discussion, we have seen the growth and rapid change in the Internet of Things and the Internet of Everything, today and in the upcoming decade. IoT is a component of IoE and of a smart infrastructure. We found that privacy and security are critical building blocks for the IoE ecosystem, and such capabilities must be designed into IoT systems from the outset. Security solutions will also be an essential component of personal data protection; both security and privacy should be considered basic elements in the design of IoT systems [18]. Note that trust can also be improved by improving privacy in a computing environment. IoT devices used in IoE have the ability to connect the physical world and human activity with sensors and networks (and the Internet), attracting high attention from citizens and governments and transforming people's lives and daily routines. These smart devices are being used in e-healthcare applications and the bio-medical imaging area on a large
240
K. S. Reddy et al.
scale. Electronic Health Records (EHR) lead to a greater and more streamlined flow of information within an electronic health care system and can transform the way care is delivered. With electronic health records, information is available whenever and wherever it is needed, facilitating enhanced patient care and coordination, better treatment and patient outcomes, and significant cost savings and efficiencies. Patient information can be stored directly on the cloud through devices and machines to improve efficiency and minimize paperwork and costs. Through mobile applications, patients can also access their medical records. Big data is important for IoE because it allows efficient and productive handling of the vast amount of data produced by IoT, Industrial Internet and M2M technologies. On the IoE side, in many applications a variety of sensors embedded in the IoE ecosystem, together with connected environments (like the cloud and big data analytics), provide a continuously updated picture of conditions in real time across a number of variables. IoE brings together big data, the cloud and IoT to enable communication and early, efficient decision-making. By analogy with the human body, sensor inputs correspond to the senses, while the internet, communication systems and the cloud can be compared to the human nervous system carrying impulses. Analytics and big data processing can then be likened to the brain functions that process data for decision-making. Big data analysis has a significant impact on medical quality and personalized medicine. Predictive analytics [15] plays a key role in early disease identification and in ensuring that patients receive personalized and efficient treatment. It helps in predicting diseases, assessing the risk of illness, assisting a physician with a diagnosis, and predicting future health. Data analytics and forecasting models that identify useful trends from raw data often play a similar role. Today's big data helps make use of large quantities of IoE resources, whether through data, processes or facilities, thus improving service quality. As discussed in [16], big data is present in structured and unstructured form. Both structured and unstructured data increase the efficiency of the analytics process (improving service efficiency and the effectiveness of outcomes) by discovering and communicating meaningful correlations in the data. The Internet of Nano-Things uses multiple sensors to ensure proper timing and dosage of medicine and to support doctors during clinical trials; in-flight data collected from jet engines is used in real time to recognize and plan required preventive maintenance; embedded sensors and related analytics help enhance the independence of the elderly and visually impaired, and improve the quality of life in cities and rural areas. In addition, the information produced (through D2D/M2M communication) enables scientists, politicians, developers and residents to collaborate and make cities safer, more productive and more livable, i.e. to save costs by predicting and proactively solving potential problems such as urban floods. Hence, in summary, IoE will be more popular and useful than IoT in every possible application. Also, remember that without IoT, IoE is nothing, i.e., IoE depends completely on internet-connected devices and objects. Emerging technologies like IoE have a major impact on the economy of a nation.
According to McKinsey's estimate [17], the Internet of Things (IoT) could have a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025; at the high end, that value, including
Beyond Things: A Systematic Study of Internet of Everything
241
the consumer surplus, would amount to about 11% of the world economy. In the near future, we need to develop new business models that accelerate economic growth and maximize societal benefits. To solve many real-world problems, we require robust data exchange solutions with IoE in many sectors and applications, while ensuring the long-term scalability and sustainability of infrastructure and technological innovation. If we develop this technology (the Internet of Everything) to the best of our ability, it will solve many of the challenges and problems we currently face. Hence, this section has provided an open discussion comparing IoT and IoE and has addressed several questions such as "Which technology will be more popular in the near future?" and "Which technology has the greatest challenges or negative impacts on society?". The next section concludes this work in brief, together with some future research directions.
8 Conclusion and Future Scope Today's emerging technologies like IoT and IoE are being utilised in several sectors and ways for the benefit of society and are strengthening the network day by day. As we have seen, many applications are being transferred to IoT (a network of objects/smart devices connected to the internet). Further, IoE (a network of networks of IoTs, or intelligence built on IoT), based upon billions and someday trillions of connections, creates opportunities that did not exist before. Along with these opportunities, it may also face some serious concerns, like the security of devices, the privacy of data (in motion/at communication time), the unique addressing of each smart device, etc. Exponential changes in technology create many opportunities for the near future. Hence, the rapid pace of change in technology requires a lot of attention from the research community to look into the respective issues, challenges and research gaps. To overcome these challenges, we (citizens), government organizations, standards bodies, and businesses need to come together with energy and cooperation to face such issues. Acknowledgment. Parts of this work have been funded by the Ministry of Science, Education, and Culture of the German State of Rhineland-Palatinate in the context of the project MInD and the Observatory for Artificial Intelligence in Work and Society (KIO) of the Denkfabrik Digitale Arbeitsgesellschaft in the project "KI Testing & Auditing".
References 1. http://www.cisco.com/web/about/ac79/innov/IoE.html 2. Weissberger, A.: TiECon 2014 Summary-Part I: Qualcomm Keynote & loT Track Overview. IEEE Com Soc (2014) 3. Evans, D.: The Internet of Everything: How More Relevant and Valuable Connections Will Change the World. Cisco Internet Business Solutions Group (TBSG), Cisco Systems, Inc., San Jose, CA, USA, White Paper (2012) 4. Han, J., Haihong, E., Le, G., Du, J.: Survey on NoSQL database. In: 6th International Conference on Pervasive Computing and Applications (2011) 5. Oguntimilehin, A., Ademola, E.O.: A review of Big Data management, benefits and challenges. J. Emerg. Trends Comput. Inf. Sci. 5, 433–437 (2014)
242
K. S. Reddy et al.
6. Alam, T.: A reliable communication framework and its use in internet of things (IoT). Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. (IJSRCSEIT) 3(5), 450–456 (2018) 7. Azencott, C.-A.: Machine learning and genomics: precision medicine versus patient privacy. Phil. Trans. R. Soc. A 376, 20170350 (2018) 8. Evans, D.: How the Internet of Everything Will Change the World. Cisco Blog (November 2012) 9. Gubbi, J., Buyya, R., Marusic, S., Palaniswami, M.: Internet of things (IoT): a vision, architectural elements, and future directions. Fut. Gener. Comput. Syst. 29(7), 1645–1660 (2013) 10. Bradley, J., Loucks, J., Macaulay, J., Noronha, A.: Internet of Everything (IoE) Value Index: How Much Value Are Private-Sector Firms Capturing from IoE in 2013? Cisco Internet Business Solutions Group (TBSG), Cisco Systems, Inc., San Jose, CA, USA, White Paper (2013) 11. Tyagi, A.K., Shamila, M.: Spy in the crowd: how user’s privacy is getting affected with the integration of internet of thing’s devices. In: Proceedings of International Conference on Sustainable Computing in Science, Technology and Management (SUSCOM), 26–28 February 2019. Amity University Rajasthan, Jaipur, India. http://dx.doi.org/10.2139/ssrn.335 6268. https://ssrn.com/abstract=3356268 12. https://cdn.iccwbo.org/content/uploads/sites/3/2016/10/ICC-Policy-Primer-on-the-Internetof-Everything.pdf 13. Mazhar Rathore, M., Ahamed, A., et al.: Urban planning and building smart cities based on the internet of things using big data analytics. Comput. Netw. 101, 63–80 (2016) 14. Al-Majeed, S.S., Al-Mejibli, I.S., et al.: Home telehealth by internet of things (IoT). IEEE (2015) 15. Ghosh, R., Naik, V.K.: Biting off safely more than you can chew: predictive analytics for resource over-commit in IaaS cloud. IEEE (2012) 16. Raghupathi, W., Raghupathi, V.: Big data analytics in healthcare: promise and potential. Health Inf. Sci. Syst. 2(1), 3 (2014) 17. https://www.mckinsey.com/business-functions/digital-mckinsey 18. Tyagi, A.K., Rekha, G., Sreenath, N.: Beyond the hype: internet of things concepts, security and privacy concerns. In: Satapathy, S.C., Raju, K.S., Shyamala, K., Krishna, D.R., Favorskaya, M.N. (eds.) ICETE 2019. LAIS, vol. 3, pp. 393–407. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-24322-7_50 19. Shamila, M., Vinuthna, K., Amit Kumar, T.: A review on several critical issues and challenges in IoT based e-healthcare system. In: International Conference on Intelligent Computing and Control Systems, ICICCS 2019. IEEE (2019)
Flower Shaped Patch with Circular Defective Ground Structure for 15 GHz Application Ribhu Abhusan Panda1(B) , Priya Kumari2 , Janhabi Naik2 , Priyanka Negi2 , and Debasis Mishra2 1 Department of Electronics and Telecommunication Engineering,
V.S.S. University of Technology, Burla, Odisha, India [email protected] 2 Department of Electronics and Communication Engineering, GIET University, Gunupur, Odisha, India [email protected], [email protected], [email protected], [email protected]
Abstract. This article presents an innovative design based on alterations to a conventional circular patch. The modifications have been made in such a way that the resulting shape includes two biconvex patches perpendicular to each other, resembling a flower. The maximum arc-to-arc distance has been taken as 20 mm, which equals the wavelength λ calculated from the frequency at which the antenna is designed to operate. The operating frequency has been chosen as 15 GHz, which is suitable for different applications like military use, satellite communication, 5G communication, etc. Ansys HFSS has been used for the design and simulation, and emphasis is placed on the return loss vs frequency plot (S11 < −10 dB) to determine the resonant frequency and the bandwidth of the designed antenna. Other parameters like the surface current distribution, the standing wave ratio measured in terms of voltage, antenna gain, directivity, etc. have been determined from the simulation results. Keywords: Flower shaped patch · Arc to arc distance · 5G · S11 · Antenna bandwidth · Antenna gain
1 Introduction Antenna designers have provided a large variety of designs in the past few years, including perturbations of conventionally shaped patches in accordance with the corresponding wavelength [1–4]. To operate the antenna over a large frequency range, log-periodic implementations of some uniquely shaped patches have been reported in the last two years [5–7]. A few perturbed arrays have also been proposed in recent years to enhance the antenna gain [8, 9]. Gain enhancement of biconvex and biconcave shaped patches has been performed by implementation of metallic rings and split-ring resonating slots [10, 11]. Some more unique techniques have been proposed to enhance the antenna gain [12–15]. Defective © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): IBICA 2019, AISC 1180, pp. 243–249, 2021. https://doi.org/10.1007/978-3-030-49339-4_24
244
R. A. Panda et al.
ground structure (DGS) refers to a structure made by implementing slots in the conformal ground plane. A DGS provides better bandwidth, enhances the antenna gain and also makes the antenna operable at the desired frequency. Diverse defective ground structures have been proposed for specific applications [16–20]. In this paper the conformal circular patch has been modified in such a way that two biconvex shaped patches, with a maximum arc-to-arc distance equal to the corresponding wavelength, are perpendicular to each other. To make the fabrication procedure easy, a simple line-feed technique is used and a circular DGS has been implemented on the ground plane.
2 Antenna Design with Specific Design Parameters The proposed design includes two biconvex shaped patches. The maximum arc-to-arc distance of these patches is 20 mm, which is equal to the corresponding wavelength λ calculated from the design frequency of 15 GHz (λ = c/f). The substrate has the dimensions 50 mm × 50 mm × 1.6 mm and the ground plane has the dimensions 50 mm × 50 mm × 0.01 mm. The proposed flower-shaped patch also has a height of 0.01 mm, which is chosen by taking the skin depth of copper into consideration, as copper has been used for both the ground plane and the patch. FR4 epoxy, a widely available dielectric material with a dielectric constant of 4.4, is used for the substrate. The circular DGS is implemented on the ground plane, and its optimized dimension has been determined with the optimetrics tool included in the High Frequency Structure Simulator (HFSS), which uses the finite element method for simulation. The radius of the circular DGS best suited for the design has been found to be 2 mm. Figure 1 shows the geometry of the proposed design.
Fig. 1. Pictorial representation of the geometry associated with the Antenna
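As a quick sanity check of these dimensions, the short sketch below (an illustrative aside, not part of the original paper) computes the free-space wavelength at the 15 GHz design frequency and the copper skin depth that motivates the 0.01 mm conductor thickness; the copper resistivity is an assumed textbook value.

import math

c = 3.0e8            # speed of light (m/s)
f = 15.0e9           # design frequency (Hz)
wavelength = c / f   # 0.02 m, i.e. the 20 mm arc-to-arc distance

rho_cu = 1.68e-8                  # copper resistivity (ohm*m), assumed textbook value
mu0 = 4 * math.pi * 1e-7          # permeability of free space
skin_depth = math.sqrt(rho_cu / (math.pi * f * mu0))

print(f"lambda = {wavelength * 1e3:.1f} mm")        # 20.0 mm
print(f"skin depth = {skin_depth * 1e6:.2f} um")    # ~0.53 um, well below the 0.01 mm thickness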
Flower Shaped Patch with Circular Defective Ground Structure
245
3 Outcomes from the Simulation 3.1 S-Parameter (Return Loss and Resonant Frequency) Emphasis has been placed on the S11 plot, which provides the return loss and resonant frequency. From the S11 plots of the proposed patch with and without DGS, it is clear that after implementation of the DGS the performance of the antenna improves in terms of return loss and resonant frequency. The return loss of the proposed patch has been found to be −30.388 dB at the resonant frequency of 14.9 GHz, which is almost equal to the design frequency of 15 GHz. Figure 2 and Fig. 3 show the S-parameter variation with frequency.
Fig. 2. Return loss vs frequency of proposed patch
Fig. 3. Comparison of S-Parameter
3.2 S11 and Standing Wave Ratio Comparison The proposed model can be considered as a terminated transmission line, so the standing waves that are produced can be measured in terms of voltage, leading to a specific figure of merit known as the voltage standing wave ratio (VSWR). For the proposed design the VSWR has been found to be 1.0624, which is very near the ideal value of 1. Figure 4 illustrates the standing wave ratio variation with frequency.
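For readers who wish to relate the two figures of merit, the small sketch below (illustrative only, not from the paper) converts the reported return loss into VSWR through the reflection-coefficient magnitude and reproduces the simulated value of about 1.062.

def vswr_from_s11_db(s11_db):
    gamma = 10 ** (s11_db / 20.0)        # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)     # VSWR = (1 + gamma) / (1 - gamma)

print(vswr_from_s11_db(-30.388))         # ~1.062, matching the reported VSWR of 1.0624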
246
R. A. Panda et al.
Fig. 4. Standing wave ratio
3.3 Surface Current and Radiation Patterns The antenna gain, directivity and radiation efficiency for the proposed design have been found to be 5.72 dB, 9.34 dB and 43.57% respectively. Illustrations of the radiation patterns are provided in Fig. 5 and Fig. 6. The surface current distribution gives a clear picture of the current flow in the patch. From Fig. 7 it is clear that the patch carries sufficient current and radiates properly (Table 1).
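These three figures are mutually consistent, since gain equals directivity multiplied by radiation efficiency; the brief check below (an illustrative aside) recovers the efficiency from the reported gain and directivity in dB.

gain_db = 5.72
directivity_db = 9.34
efficiency = 10 ** ((gain_db - directivity_db) / 10.0)   # G = efficiency * D
print(f"{efficiency * 100:.1f} %")                       # ~43.5 %, close to the reported 43.57 %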
Fig. 5. 2D radiation pattern of proposed patch
Flower Shaped Patch with Circular Defective Ground Structure
247
Fig. 6. 3D radiation pattern of proposed patch
Fig. 7. Surface current density

Table 1. Comparison of antenna parameters

Parameters                   With DGS   Without DGS
Peak Antenna Gain (dB)       5.72       5.4
Peak Directivity (dB)        9.34       9.2
Resonant Frequency (GHz)     14.9       15
Return Loss (dB) (S11)       −30.388    −25.07
Radiation efficiency (%)     43.57      43
Bandwidth (MHz)              1090       760
4 Conclusion From the S-parameter plot and the other plots it can be concluded that the designed antenna has a high antenna gain and directivity at the frequency of 15 GHz, so it can be used efficiently
248
R. A. Panda et al.
for many applications like 5G, military, satellite communication etc. Implementation of a simple circular DGS also increases the antenna bandwidth by an amount of 330 MHz.
References 1. Panda, R.A., Mishra, D., Panda, H.: Biconcave lens structured patch antenna with circular slot for Ku band application. In: Lecture notes in Electrical Engineering, vol. 434, pp. 73–83. Springer, Singapore (2018) 2. Panda, R.A., Mishra, D.: Reshaped patch antenna design for 60 GHz WPAN application with end-fire radiation. Int. J. Modern Electron. Commun. Eng. 5(6), 5–8 (2017) 3. Panda, R.A, Mishra, D., Panda, H.: Biconvex patch antenna with circular slot for 10 GHz application. In: SCOPES 2016, pp. 1927–1930. IEEE (2016) 4. Panda, R.A.: Multiple line feed perturbed patch antenna design for 28 GHz 5G application. J. Eng. Technol. Innov. Res. 4(9), 219–221 (2017) 5. Panda, R.A., Panda, M., Nayak, P.K., Mishra, D.: Log periodic implementation of butterfly shaped patch antenna with gain enhancement technique for X-Band application. In: ICICCT 2019, System Reliability, Quality Control, Safety, Maintenance and Management, pp 20–28. Springer, Singapore (2020) 6. Panda, R.A., Mishra, D.: Log periodic implementation of star shaped patch antenna for multiband application using HFSS. Int. J. Eng. Tech. 3(6), 222–224 (2017) 7. Panda, R.A., Mishra, D.: Modified circular patch and its log periodic implementation for Ku band application. Int. J. Innov. Technol. Explor. Eng. 8(8), 1474–1477 (2019) 8. Panda, R.A., Mishra, D., Kumar Nayak, P., Mohapatro, U.N.: Perturbed 2-element patch antenna for 25 GHz application. In: IEEE Second International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 240–244 (2018) 9. Mishra, R.K., Panda, R.A., Mohapatro, U.N., Mishra, D.: A novel 2-Element array of perturbed circular patch for 5G application. In: IEEE 5th International Conference on Advanced Computing and Communication Systems (ICACCS), pp. 138–142 (2019) 10. Panda, R.A., Panda, M., Nayak, S., Das, N., Mishra, D.: Gain enhancement using complimentary split ring resonator on biconcave patch for 5G application. In: International Conference on Sustainable Computing in Science, Technology & Management (SUSCOM-2019), pp. 994–1000 (2019) 11. Panda, R.A., Dash, P., Mandi, K., Mishra, D.: Gain enhancement of a biconvex patch antenna using metallic rings for 5G application. In: 6th International Conference on Signal Processing and Integrated Networks (SPIN), pp. 840–844 (2019) 12. Attia, H., Yousefi, L.: High-gain patch antennas loaded with high characteristic impedance superstrates. IEEE Antennas Wirel. Propag. Lett. 10, 858–861 (2011) 13. Rivera-Albino, A., Balanis, C.A.: Gain enhancement in microstrip patch antennas using hybrid substrates. IEEE Antennas Wirel. Propag. Lett. 12, 476–479 (2013) 14. Kumar, A., Kumar, M.: Gain enhancement in a novel square microstrip patch antenna using metallic rings. In: International Conference on Recent Advances and Innovations in Engineering (ICRAIE 2014), Jaipur, pp. 1–4 (2014) 15. Ghosh, A., Kumar, V., Sen, G., Das, S.: Gain enhancement of triple-band patch antenna by using triple-band artificial magnetic conductor. IET Microwaves Antennas Propag. 12(8), 1400–1406 (2018) 16. Kumar, A., Machavaram, K.V.: Microstrip filter with defected ground structure: a close perspective, Int. J. Microwave Wirel. Technol. 5(5), 589–602 (2013) 17. Nouri, A., Dadashzadeh, G.R.: A compact UWB bandnotched printed monopole antenna with defected ground structure. IEEE Antennas Wirel. Propag. Lett. 10, 1178–1181 (2011)
Flower Shaped Patch with Circular Defective Ground Structure
249
18. Kildal, P.-S.: Foundations of Antennas: A Unified Approach. Student Literature, p. 394 (2000) 19. Yadav, M.B., Singh, B., Melkeri, V.S.: Design of rectangular microstrip patch antenna with DGS at 2.45 GHz. In: IEEE International Conference of Electronics Communication and Aerospace Technology (ICECA) (2017) 20. Cao, S., Han, Y., Chen, H., Li, J.: An ultra-wideband stop band LPF using asymmetric pishaped Koch fractal DGS. IEEE Access 5, 27126–27131 (2017)
Classification of Seagrass Habitat Using Probabilistic Neural Network Anand Upadhyay(B) , Prajna Tantry, and Aarohi Varade Department of Information Technology, Thakur College of Science and Commerce, Thakur Village, Kandivali East, Mumbai 400101, Maharashtra, India [email protected], [email protected], [email protected]
Abstract. Seagrasses are a very valuable asset that maintains the ecological and economic components of the marine ecosystem. Seagrasses provide food and shelter for marine animals, purify water, provide nutrients, and reduce the force of storms. As the human population increases, various types of threats to seagrass are also increasing. Different human activities, such as sewage input, dumping of solid waste on the shoreline and anchoring of boats, are the main reasons for the reduction in seagrass populations. Remote sensing is a technique through which the geospatial data of any location can be captured. For the proposed research, a remotely sensed seagrass image of the Andaman & Nicobar Islands of India was collected using Google Earth. The image collected from Google Earth is a high-resolution image containing features in the form of RGB values. The Probabilistic Neural Network (PNN) is a machine learning algorithm used here for the classification of seagrass from the data. The algorithm is applied to the extracted RGB values of the image. After applying the PNN algorithm to the remotely sensed data, the classification of seagrass was successfully performed with an accuracy of 99% and a Kappa coefficient of 0.99. The result shows the very good accuracy of the classification. Keywords: Seagrass · Google Earth · PNN · Remote sensing · Machine learning
1 Introduction Seagrasses are marine flowering plants which grow in the sea [8]. The name likely reflects their appearance, as most species bear long, broad grass-like leaves. Despite drawing little attention, they form one of the most productive ecological communities on earth. Seagrass forms meadows which support high biodiversity in the marine environment. Seagrasses provide food and shelter to marine creatures, purify water, supply nutrients, and reduce the force of storms. There are 11 species of seagrasses in the Andaman and Nicobar Islands. As the human population expands, seagrasses are facing many problems. Various human activities, such as dumping waste into the sea and discharging chemicals into it, cause danger © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): IBICA 2019, AISC 1180, pp. 250–257, 2021. https://doi.org/10.1007/978-3-030-49339-4_25
Classification of Seagrass Habitat Using PNN
251
to the life of seagrass [1]. Consequently, seagrass species have been declining steadily over the past few years. The mapping and monitoring of seagrasses is important for knowing their condition, such as the land area covered, the seagrass species composition, and the seagrass biomass. Researchers are attempting to determine the current state of seagrasses in these areas so that the seagrasses can be saved. Remote sensing is the inspection or collection of information about an area from far away [3]. In remote sensing, the identification of objects is based on the energy reflected from the earth. There are two ways of doing remote sensing. In active remote sensing, the sensor emits its own source of energy towards the objects and then detects the radiation reflected from them. In passive remote sensing, the sensor does not emit its own energy; it detects the natural energy (radiation) that is transmitted or reflected by the object, sunlight being the most common natural source of such energy. In the proposed research, passive remotely sensed data has been used. The region of the Andaman and Nicobar Islands has been chosen for this research, since there is very little research on seagrass mapping that covers this study region. The data is a high-resolution satellite image taken from Google Earth, from which the classification of seagrass is performed. In this work, the Probabilistic Neural Network (PNN) algorithm is used for the classification of seagrass. The Probabilistic Neural Network belongs to the family of neural networks, which is a branch of machine learning. There are various applications of neural networks, such as pattern recognition, classification, regression, estimation, prediction and forecasting. The Probabilistic Neural Network is a multilayered feed-forward neural network consisting of three layers: an input layer, a hidden layer, and an output layer. The main advantage of using PNN is that its training process is fast. This article presents an effort to map seagrass in the Andaman and Nicobar Islands and to classify it using PNN.
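To make the pixel-level workflow concrete, the outline below is an illustrative sketch only; the file name, class labels and smoothing parameter sigma are assumptions and not details from the original study. It extracts per-pixel RGB features from a Google Earth image and classifies them with a simple Gaussian-kernel PNN built from labelled training pixels.

import numpy as np
from PIL import Image

def pnn_predict(X_train, y_train, X_test, sigma=10.0):
    # sigma is an assumed smoothing parameter; it must be tuned on real data
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # pattern layer: Gaussian kernel around each training sample;
        # summation layer: average the kernel responses per class
        scores = [np.mean(np.exp(-np.sum((X_train[y_train == c] - x) ** 2, axis=1)
                                 / (2 * sigma ** 2))) for c in classes]
        preds.append(classes[int(np.argmax(scores))])   # output layer: pick the best class
    return np.array(preds)

# Hypothetical usage with labels 1 = seagrass, 0 = other cover types
img = np.asarray(Image.open("andaman_google_earth.png").convert("RGB"), dtype=float)
pixels = img.reshape(-1, 3)          # one RGB feature vector per pixel
# X_train, y_train would come from manually labelled training pixels;
# predicted = pnn_predict(X_train, y_train, pixels)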
2 Literature Review Understanding the dynamic nature of seagrass biodiversity, such as species composition, abundance and spatial arrangement, relies on geospatial data about seagrasses. In the study considered, the investigators assessed the capability of airborne hyperspectral and satellite multispectral image datasets for mapping seagrass species composition, projected foliage cover (%) and above-ground dry-weight biomass. The work was carried out on the Eastern Banks in Moreton Bay, Australia, an area of clear and shallow coastal waters containing a range of seagrass species, cover and biomass levels [4]. The satellite image data used were Quickbird-2 multispectral and Landsat-5 Thematic Mapper multispectral, while the CASI-2 sensor provided airborne hyperspectral image data with a pixel size of 4.0 m. The mapping was restricted to water shallower than 3.0 m, based on previous modelling of the separability of seagrass reflectance signatures with increasing water depth [7]. Their results show that seagrass species, cover and biomass can be mapped to a high precision level.
VSWR > 2 and S11 > −10 dB at the WiMAX and WLAN bands. The optimal parameters are obtained by using the following relations for the U- and J-shaped slots:
Lj1 + Lj2 + Wj2 − 2Wj1 = L1 ≈
C / (4 Fn √εeff)    (1)
Design of Two Slot Multiple Input Multiple Output UWB Antenna
327
Here the equivalent length of the J-shaped slot is represented as L1 in the above equation, C is the velocity of light and Fn is the notched frequency. For the optimal length of the U-shaped slot, the following relations are used:

Lu + 2Wu − 2Lu1 = L2 ≈ λg / 2    (2)

λg = C / (Fn √εeff)    (3)
where λg represents the guided wavelength at the optimal notched frequency Fn, and L2 is the optimal equivalent length of the U-shaped notch.
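As a rough illustration of how these relations are applied (not taken from the paper), the sketch below computes the equivalent slot lengths for assumed notch frequencies and an assumed effective permittivity; the actual εeff depends on the substrate and feed geometry and is not stated explicitly here.

import math

C = 3e8  # speed of light (m/s)

def j_slot_length(fn_hz, eps_eff):
    # Eq. (1): L1 = C / (4 * Fn * sqrt(eps_eff))
    return C / (4 * fn_hz * math.sqrt(eps_eff))

def u_slot_length(fn_hz, eps_eff):
    # Eqs. (2)-(3): L2 = lambda_g / 2, with lambda_g = C / (Fn * sqrt(eps_eff))
    return C / (2 * fn_hz * math.sqrt(eps_eff))

eps_eff = 3.3  # assumed effective permittivity, for illustration only
print(j_slot_length(3.5e9, eps_eff) * 1e3, "mm")  # J-slot length for an assumed 3.5 GHz WiMAX notch
print(u_slot_length(5.5e9, eps_eff) * 1e3, "mm")  # U-slot length for the 5.5 GHz WLAN notch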
3 Performance Analysis of Introduced Antenna In this work the characteristics and performance of the introduced MIMO antenna are investigated in detail with respect to the loaded slots U, J and both slots (U and J) together, in terms of S-parameters, return loss, VSWR and reflection coefficients. The S-parameters of the introduced multiple-input multiple-output (MIMO) antenna are simulated for various structures between elements, with port 1, port 2, port 3 and port 4 excited and terminated with proper impedance matching. When a single antenna element is excited, the remaining element ports are terminated with a matched load of 50 ohms. The separation between the two orthogonal elements is large, and the coupling between the antenna elements is reduced to an optimum level for better performance characteristics, as shown in Fig. 4. The U-shaped slot is adjusted by varying its dimensions for the WLAN (5.5 GHz) band, and the slot lengths Lu and Lj1 are increased for the notched frequency bands; the J-shaped slot is adjusted to the design length from the above equations for frequency rejection in the WiMAX (3-4 GHz) band [17, 18].
Fig. 4. S parameters of antenna excited at port 1 (a) U slot (b) J slot (c) both
Figure 5 shows the VSWR plots for the different slot configurations: U, J and the combination of U and J. From these graphs it is clear that the introduced antenna (with both U and J slots) provides a VSWR < 2 over the frequency band from 3.1 to 11 GHz.
328
S. Malathi et al.
Fig. 5. Simulated VSWR for different structures
The impedance matching response of the introduced antenna with the central isolating structure is shown in Fig. 6. The changes in the lengths of the slots and the decoupling structure help in achieving the desired impedance matching over the UWB frequency band. It is observed in the graph that the reflection coefficients of the four ports, S11, S22, S33 and S44, are almost the same, with slight mismatches. The simulated results show that the introduced antenna is well matched over the range from 2 GHz to 11 GHz, where the scattering coefficients