Hybrid Intelligent Systems: 19th International Conference on Hybrid Intelligent Systems (HIS 2019) held in Bhopal, India, December 10-12, 2019 [1st ed.] 9783030493356, 9783030493363

This book highlights recent research on hybrid intelligent systems and their various practical applications.


English Pages XIV, 456 [470] Year 2021


Table of contents :
Front Matter ....Pages i-xiv
Dimension Reduction with Extraction Methods (Principal Component Analysis - Self Organizing Map - Isometric Mapping) in Indonesian Language Text Documents Clustering (Muhammad Ihsan Jambak, Ahmad Ikrom Izzuddin Jambak, Rahmad Tirta Febrianto, Danny Matthew Saputra, Muhammad Irfan Jambak)....Pages 1-9
Reducing Data Volume in Instance Based Learning (Maria Do Carmo Nicoletti, Luis Andre Claudiano)....Pages 10-20
State Estimation of Moving Vehicle Using Extended Kalman Filter: A Cyber Physical Aspect (Ankur Jain, Binoy Krishna Roy)....Pages 21-30
ADAL System: Aspect Detection for Arabic Language (Sana Trigui, Ines Boujelben, Salma Jamoussi, Yassine Ben Ayed)....Pages 31-40
Modelling, Analysis and Simulation of a Patient Admission Problem: A Social Network Approach (Veera Babu Ramakurthi, Vijayakumar Manupati, Suraj Panigrahi, M. L. R. Varela, Goran Putnik, P. S. C. Bose)....Pages 41-51
Short-Term Load Forecasting: An Intelligent Approach Based on Recurrent Neural Network (Atul Patel, Monidipa Das, Soumya K. Ghosh)....Pages 52-62
Design and Analysis of Anti-windup Techniques for Anti-lock Braking System (Prangshu Saikia, Ankur Jain)....Pages 63-71
Wind-Power Intra-day Statistical Predictions Using Sum PDE Models of Polynomial Networks Combining the PDE Decomposition with Operational Calculus Transforms (Ladislav Zjavka, Václav Snášel, Ajith Abraham)....Pages 72-82
Heterogeneous Engineering in Intelligent Logistics (Yury Iskanderov, Mikhail Pautov)....Pages 83-91
Extracting Unknown Repeated Pattern in Tiled Images (Prasanga Neupane, Archana Tuladhar, Shreeniwas Sharma, Ravi Tamang)....Pages 92-102
Convolutional Deep Learning Network for Handwritten Arabic Script Recognition (Mohamed Elleuch, Monji Kherallah)....Pages 103-112
Diversity in Recommendation System: A Cluster Based Approach (Naina Yadav, Rajesh Kumar Mundotiya, Anil Kumar Singh, Sukomal Pal)....Pages 113-122
Contribution on Arabic Handwriting Recognition Using Deep Neural Network (Zouhaira Noubigh, Anis Mezghani, Monji Kherallah)....Pages 123-133
Analyzing and Enhancing Processing Speed of K-Medoid Algorithm Using Efficient Large Scale Processing Frameworks (Ayshwarya Jaiswal, Vijay Kumar Dwivedi, Om. Prakash Yadav)....Pages 134-144
Multiple Criteria Fake Reviews Detection Based on Spammers’ Indicators Within the Belief Function Theory (Malika Ben Khalifa, Zied Elouedi, Eric Lefèvre)....Pages 145-155
Data Clustering Using Environmental Adaptation Method (Tribhuvan Singh, Krishn Kumar Mishra, Ranvijay)....Pages 156-164
Soft Computing, Data Mining, and Machine Learning Approaches in Detection of Heart Disease: A Review (Keshav Srivastava, Dilip Kumar Choubey)....Pages 165-175
A Novel CAD System for Breast DCE-MRI Based on Textural Analysis Using Several Machine Learning Methods (Raouia Mokni, Norhene Gargouri, Alima Damak, Dorra Sellami, Wiem Feki, Zaineb Mnif)....Pages 176-187
An Adversarial Learning Mechanism for Dealing with the Class-Imbalance Problem in Land-Cover Classification (Shounak Chakraborty, Indrajit Kalita, Moumita Roy)....Pages 188-196
An Integrated Fuzzy ANP-TOPSIS Approach to Rank and Assess E-Commerce Web Sites (Rim Rekik)....Pages 197-209
Implementation of Block Chain Technology in Public Distribution System (Pratik Thakare, Nitin Dighore, Ankit Chopkar, Aakash Chauhan, Diksha Bhagat, Milind Tote)....Pages 210-219
Chaotic Salp Swarm Optimization Using SVM for Class Imbalance Problems (Gillala Rekha, V. Krishna Reddy, Amit Kumar Tyagi)....Pages 220-229
Three-Layer Security for Password Protection Using RDH, AES and ECC (Nishant Kumar, Suyash Ghuge, C. D. Jaidhar)....Pages 230-239
Clothing Classification Using Deep CNN Architecture Based on Transfer Learning (Mohamed Elleuch, Anis Mezghani, Mariem Khemakhem, Monji Kherallah)....Pages 240-248
Identification of Botnet Attacks Using Hybrid Machine Learning Models (Amritanshu Pandey, Sumaiya Thaseen, Ch. Aswani Kumar, Gang Li)....Pages 249-257
Congestion Control in Vehicular Ad-Hoc Networks (VANET’s): A Review (Lokesh M. Giripunje, Deepika Masand, Shishir Kumar Shandilya)....Pages 258-267
Advances in Cyber Security Paradigm: A Review (Shahana Gajala Qureshi, Shishir Kumar Shandilya)....Pages 268-276
Weighted Mean Variant with Exponential Decay Function of Grey Wolf Optimizer on Applications of Classification and Function Approximation Dataset (Alok Kumar, Avjeet Singh, Lekhraj, Anoj Kumar)....Pages 277-290
Enhanced Homomorphic Encryption Scheme with Particle Swarm Optimization for Encryption of Cloud Data (Abhishek Mukherjee, Dhananjay Bisen, Praneet Saurabh, Lalit Kane)....Pages 291-298
Detection and Prevention of Black Hole Attack Using Trusted and Secure Routing in Wireless Sensor Network (Dhananjay Bisen, Bhavana Barmaiya, Ritu Prasad, Praneet Saurabh)....Pages 299-308
Recursive Tangent Algorithm for Path Planning in Autonomous Systems (Adhiraj Shetty, Annapurna Jonnalagadda, Aswani Kumar Cherukuri)....Pages 309-318
Marathi Handwritten Character Recognition Using SVM and KNN Classifier (Diptee Chikmurge, R. Shriram)....Pages 319-327
Whale Optimization Algorithm with Exploratory Move for Wireless Sensor Networks Localization (Nebojsa Bacanin, Eva Tuba, Miodrag Zivkovic, Ivana Strumberger, Milan Tuba)....Pages 328-338
Facial Expression Recognition Using Histogram of Oriented Gradients with SVM-RFE Selected Features (Sumeet Saurav, Sanjay Singh, Ravi Saini)....Pages 339-349
Automated Security Driven Solution for Inter-Organizational Workflows (Asmaa El Kandoussi, Hanan El Bakkali)....Pages 350-361
Network Packet Analysis in Real Time Traffic and Study of Snort IDS During the Variants of DoS Attacks (Nilesh Kunhare, Ritu Tiwari, Joydip Dhar)....Pages 362-375
Securing Trustworthy Evidences for Robust Forensic Cloud in Spite of Multi-stakeholder Collusion Problem (Sagar Rane, Sanjeev Wagh, Arati Dixit)....Pages 376-386
Threat-Driven Approach for Security Analysis: A Case Study with a Telemedicine System (Raj kamal Kaur, Lalit Kumar Singh, Babita Pandey, Aditya Khamparia)....Pages 387-397
Key-Based Obfuscation Using Strong Physical Unclonable Function: A Secure Implementation (Surbhi Chhabra, Kusum Lata)....Pages 398-408
A Survey on Countermeasures Against Man-in-the-Browser Attacks (Sampsa Rauti)....Pages 409-418
Towards Cyber Attribution by Deception (Sampsa Rauti)....Pages 419-428
Tangle the Blockchain: Toward IOTA and Blockchain Integration for IoT Environment (Hussein Hellani, Layth Sliman, Motaz Ben Hassine, Abed Ellatif Samhat, Ernesto Exposito, Mourad Kmimech)....Pages 429-440
Towards a Better Security in Public Cloud Computing (Sonia Amamou, Zied Trifa, Maher Khmakhem)....Pages 441-453
Back Matter ....Pages 455-456

Advances in Intelligent Systems and Computing 1179

Ajith Abraham Shishir K. Shandilya Laura Garcia-Hernandez Maria Leonilde Varela   Editors

Hybrid Intelligent Systems 19th International Conference on Hybrid Intelligent Systems (HIS 2019) held in Bhopal, India, December 10–12, 2019

Advances in Intelligent Systems and Computing Volume 1179

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Nikhil R. Pal, Indian Statistical Institute, Kolkata, India Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba Emilio S. Corchado, University of Salamanca, Salamanca, Spain Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil Ngoc Thanh Nguyen , Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuroscience, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning paradigms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **

More information about this series at http://www.springer.com/series/11156

Ajith Abraham · Shishir K. Shandilya · Laura Garcia-Hernandez · Maria Leonilde Varela





Editors

Hybrid Intelligent Systems 19th International Conference on Hybrid Intelligent Systems (HIS 2019) held in Bhopal, India, December 10–12, 2019


Editors Ajith Abraham Scientific Network for Innovation and Research Excellence Machine Intelligence Research Labs (MIR) Auburn, WA, USA

Shishir K. Shandilya School of Computer Science and Engineering VIT Bhopal University Bhopal, Madhya Pradesh, India

Laura Garcia-Hernandez Area of Project Engineering University of Cordoba Córdoba, Spain

Maria Leonilde Varela Escola de Engenharia Universidade do Minho Guimarães, Portugal

ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-3-030-49335-6 ISBN 978-3-030-49336-3 (eBook) https://doi.org/10.1007/978-3-030-49336-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Welcome Message

Welcome to VIT Bhopal University, India, and to the 19th International Conference on Hybrid Intelligent Systems (HIS 2019) and the 14th International Conference on Information Assurance and Security (IAS 2019). In 2018, HIS and IAS were held at Instituto Superior de Engenharia do Porto (ISEP), Portugal, during December 13–15. Hybridization of intelligent systems is a promising research field of modern artificial/computational intelligence concerned with the development of the next generation of intelligent systems. A fundamental stimulus to the investigations of hybrid intelligent systems (HIS) is the awareness in the academic communities that combined approaches will be necessary if the remaining tough problems in computational intelligence are to be solved. Recently, hybrid intelligent systems are getting popular due to their capabilities in handling several real-world complexities involving imprecision, uncertainty, and vagueness. HIS 2019 received submissions from 15 countries, and each paper was reviewed by at least five reviewers in a standard peer-review process. Based on the recommendation by five independent referees, finally 34 papers will be presented during the conference (acceptance rate of 35% including virtual presentations). Information assurance and security have become an important research issue in networked and distributed information sharing environments. Finding effective ways to protect information systems, networks, and sensitive data within the critical information infrastructure is challenging even with the most advanced technology and trained professionals. IAS aims to bring together researchers, practitioners, developers, and policy makers involved in multiple disciplines of information security and assurance to exchange ideas and to learn the latest development in this important field. IAS 2019 received submissions from ten countries, and each paper was reviewed by at least five reviewers in a standard peer-review process. Based on the recommendation by five independent referees, finally eight papers will be presented during the conference (acceptance rate of 30% including virtual presentations). Conference proceedings will be published by Springer Verlag, Advances in Intelligent Systems and Computing Series, which is now indexed by ISI Proceedings, DBLP, SCOPUS, etc. Many people have collaborated and worked v


hard to produce this year successful HIS–IAS conferences. First and foremost, we would like to thank all the authors for submitting their papers to the conference, for their presentations and discussions during the conference. Our thanks to program committee members and reviewers, who carried out the most difficult work by carefully evaluating the submitted papers. We are grateful to our three plenary speakers for the wonderful talks: • Prof. Dr. Arturas Kaklauskas, Vilnius Gediminas Technical University, Lithuania • Prof. Dr. Pawan Lingras, Saint Mary’s University, Halifax, Canada • Prof. Dr. Stephen Huang, University of Houston, USA Our special thanks to the Springer Publication team for the wonderful support for the publication of these proceedings. We express our sincere thanks to the session chairs and local organizing committee chairs for helping us to formulate a rich technical program. We are thankful to administrative officers of VIT University Bhopal for hosting HIS–IAS 2019. Special thanks to Dr. Shishir Shandilya (General Chair: HIS–IAS 2019, VIT University Bhopal) and his team for the great local organization. Looking forward to interacting with all of you during the conferences. Ajith Abraham Steering Committee Chairs (HIS–IAS Conference Series)

HIS–IAS 2019 Organization

Chief Patron G. Viswanathan (Chancellor)

VIT Bhopal University, India

Patrons Sankar Viswanathan (Vice President) Kadhambari S. Viswanathan (Assistant Vice President)

VIT Bhopal University, India VIT Bhopal University, India

Advisors P. Gunasekaran (Vice Chancellor) Jayasankar Variyar (Executive Director (Academics))

VIT Bhopal University, India VIT Bhopal University, India

General Chairs Ajith Abraham Shishir K. Shandilya

Machine Intelligence Research Labs (MIR Labs), USA VIT Bhopal University, India


Program Chairs Laura Garcia-Hernandez Maria Leonilde Varela

University of Cordoba, Spain Universidade do Minho, Portugal

Organizing Chairs Sanju Tiwari S. Sountharrajan

University of Polytecnica, Madrid, Spain VIT Bhopal University, India

Web Master Kun Ma

University of Jinan, China

Program Committee Ajith Abraham Laurence Amaral Babak Amiri Heder Bernardino Jànos Botzheim Joseph Alexander Brown Alberto Cano Paulo Carrasco Oscar Castillo Lee Chang-Yong Phan Cong-Vinh Gloria Cerasela Crisan Alfredo Cuzzocrea Haikal El Abed El-Sayed M. El-Alfy Carlos Fernandez-Llatas Xiao-Zhi Gao Laura Garcia-Hernandez Elizabeth Goldbarg Thomas Hanne Leticia Hernando Biju Issac Atif Ali Khan Kyriakos Kritikos Vijay Kumar

Machine Intelligence Research Labs (MIR Labs) Federal University of Uberlandia The University of Sydney Universidade Federal de Juiz de Fora Budapest University of Technology and Economics Innopolis University Virginia Commonwealth University Univ. Algarve Tijuana Institute of Technology Kongju National University Nguyen Tat Thanh University “ Vasile Alecsandri” University of Bacau ICAR-CNR and University of Calabria German International Cooperation (GIZ) GmbH King Fahd University of Petroleum and Minerals Universitat Politècnica de València Aalto University University of Córdoba Federal University of Rio Grande do Norte University of Applied Sciences Northwestern Switzerland University of the Basque Country Teesside University University of Chicago Institute of Computer Science, FORTH VIT University,Vellore


Simone Ludwig Ana Madureira Efrén Mezura-Montes Jolanta Mizera-Pietraszko Holger Morgenstern Paulo Moura Oliveira Diaf Moussa Ramzan Muhammad Akila Muthuramalingam Janmenjoy Nayak C. Alberto Ochoa-Zezatti Varun Ojha George Papakostas

Konstantinos Parsopoulos Carlos Pereira Eduardo Pires Dilip Pratihar Radu-Emil Precup Shishir Kumar Shandilya (Division Head) Mansi Sharma Tarun Kumar Sharma Mohammad Shojafar Patrick Siarry Shing Chiang Tan Sanju Tiwari Shu-Fen Tu Eiji Uchino Leonilde Varela Lin Wang Daniela Zaharie


North Dakota State University Departamento de Engenharia Informática University of Veracruz Wroclaw University of Technology Sachverstaendigenbuero Morgenstern, GI, ACM, IEEE UTAD University UMMTO Maulana Mukhtar Ahmad Nadvi Technical Campus KPR Institute of Engineering and Technology Aditya Institute of Technology and Management (AITAM) Universidad Autónoma de Ciudad Juárez University of Reading Human-Machines Interaction (HMI) Laboratory, Department of Computer and Informatics Engineering, EMT Institute of Technology University of Ioannina ISEC UTAD University Department of Mechanical Engineering Politehnica University of Timisoara Cyber Security & Digital Forensics, SCSE, VIT Bhopal University, India Indian Institute of Technology, Delhi Amity University Rajasthan University of Surrey Universit de Paris 12 Multimedia University National Institute of Technology Kurukshetra Department of Information Management, Chinese Culture University Yamaguchi University University of Minho University of Jinan West University of Timisoara


Additional Reviewers Das Sharma, Kaushik Diniz, Thatiana Graff, Mario Lee, Huey-Ming Medeiros, Igor Mizera-Pietraszko, Jolanta Santos, André


Dimension Reduction with Extraction Methods (Principal Component Analysis - Self Organizing Map - Isometric Mapping) in Indonesian Language Text Documents Clustering Muhammad Ihsan Jambak1(B) , Ahmad Ikrom Izzuddin Jambak1 , Rahmad Tirta Febrianto1 , Danny Matthew Saputra1 , and Muhammad Irfan Jambak2 1 Faculty of Computer Science, Sriwijaya University, Inderalaya, Indonesia

[email protected] 2 Faculty of Engineering, Sriwijaya University, Inderalaya, Indonesia

Abstract. Clustering algorithms such as k-Means fail to function appropriately when used to analyze high-dimensional data. Therefore, in order to achieve a good clustering, dimensional reduction by feature selection or feature extraction is needed. The Principal Component Analysis (PCA) algorithm is the extraction method most often used; however, its reduction results are not always good, leading to low clustering quality and lengthy processing time. It is therefore necessary to study other algorithms as alternatives to PCA. This study was conducted by comparing the results of clustering Indonesian-language text documents whose dimensions had been reduced by PCA, Self-Organizing Map (SOM), and Isometric Feature Mapping (Isomap). The measurements covered clustering quality (Davies-Bouldin Index), computational time, and number of iterations. The results show that SOM improves cluster quality to 269.084% better than plain k-Means, while Isomap speeds up the clustering computing time by about 190 times. These outcomes indicate which extraction method is most appropriate for reducing the features of Indonesian-language text documents before clustering. Keywords: Dimensional reduction · Feature extraction · Principal component analysis · Self-Organizing map · Isometric mapping · k-Means

1 Introduction

When data derived from text documents are converted to numeric form, each word becomes a separate feature, so the resulting representation has many dimensions [1]. These numeric data are high-dimensional, with a large number of features, which causes several problems related to noise, anomalies (outliers), missing values and discontinuities [2, 3]. Every feature of a text document significantly influences where the document is placed during text document clustering. Furthermore, when the data have many diverse features, clustering algorithms find it difficult to determine the closeness or similarity between instances and, therefore, the data are not grouped appropriately.

This condition is often called the curse of dimensionality [4]. To obtain useful clusters, high-dimensional data need to pass through a preprocessing stage in which nonessential variables are reduced according to specific considerations [3, 5–7]. Dimension reduction methods fall into feature selection and feature extraction. Previous studies have found that feature selection has a better influence on the cluster results obtained with the k-Means algorithm than the extraction method [8]. Principal Component Analysis (PCA) is one of the most commonly used feature extraction methods; however, the PCA algorithm requires a lengthy processing time and its reduction results are not always acceptable [3, 8]. Therefore, it is necessary to study other algorithms as alternatives to PCA for feature extraction. This study was therefore conducted by comparing the results of clustering Indonesian text documents whose dimensions had been reduced by PCA, Self-Organizing Map (SOM), and Isometric Feature Mapping (Isomap).

2 Research Methodologies

2.1 Data

The data tested were Indonesian-language texts sourced from the portal garuda.ristekdikti.go.id, comprising a total of 100 research journal articles. All documents were saved with a .txt extension, and the text was then converted into numerical data. In this research, the conversion consisted of the following stages: case folding, tokenizing, stop-word removal, and Indonesian-language stemming using the Nazief-Adriani method [9]. Finally, weights were assigned using tf-idf (term frequency-inverse document frequency), which turned the 100 text documents into 8,250 terms or features after conversion. A minimal sketch of this preprocessing pipeline is shown below.

2.2 Clustering by k-Means with an Optimal k Value

The data converted into numerical form through the tf-idf weighting process were clustered using the k-Means algorithm, one of the most frequently used techniques for partitioning objects into k groups, after determining the optimum value of k. This optimum k value serves as a reference for comparing the quality of the other clusterings. The k-Means algorithm works by randomly determining the initial centroids (cluster center points), followed by calculating the distance between each journal document and each centroid using the Euclidean Distance (ED). This measurement determines the level of similarity between journal documents and assigns each document to one cluster. The average value of the data in each cluster is then calculated to determine a new centroid. Whenever the centroids change, the assignment step is repeated until no data point moves between the previous and the current clusters [10, 11]. DBI values were used to determine the optimum k value; the DBI results from a comparison of the intra-cluster value (average distance of the data from their centroids) and the inter-cluster value (average distance between centroids). The test was performed with varied values of k to observe the changes in the clustering results.
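The following is a minimal, illustrative Python sketch of the Sect. 2.1 preprocessing pipeline (case folding, tokenizing, stop-word removal, stemming and tf-idf weighting). The tiny stop-word list, the stem placeholder (standing in for a full Nazief-Adriani confix-stripping stemmer [9]) and the toy documents are assumptions for illustration only; the study's own implementation was written in Java.

```python
import math
import re
from collections import Counter

STOP_WORDS = {"dan", "yang", "di", "ke", "dari", "untuk"}  # tiny illustrative list

def stem(token: str) -> str:
    # Placeholder for a Nazief-Adriani stemmer; here we only strip one common
    # suffix so the sketch stays self-contained.
    return token[:-3] if token.endswith("nya") else token

def preprocess(text: str) -> list:
    tokens = re.findall(r"[a-z]+", text.lower())              # case folding + tokenizing
    return [stem(t) for t in tokens if t not in STOP_WORDS]   # stop-word removal + stemming

def tfidf_matrix(docs: list) -> list:
    """Turn preprocessed documents into tf-idf weighted feature vectors."""
    tokenized = [preprocess(d) for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    n = len(docs)
    df = Counter(t for doc in tokenized for t in set(doc))    # document frequency per term
    matrix = []
    for doc in tokenized:
        tf = Counter(doc)
        matrix.append([tf[t] * math.log(n / df[t]) for t in vocab])
    return matrix

docs = ["Pengelompokan dokumen teks dan reduksi dimensi",
        "Reduksi dimensi untuk pengelompokan dokumen"]
print(len(tfidf_matrix(docs)[0]))  # number of term features
```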

Furthermore, the optimum number of clusters was determined using the elbow method, by looking for the most significant change in this value as the number of clusters grows [12]. Figure 1 shows that the optimum number read from the curve was k = 6, and this value was used to cluster the data.

Fig. 1. Mean of intra-cluster distance for several k values.
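As a hedged illustration of Sects. 2.1-2.2, the sketch below runs k-Means for several values of k on a tf-idf matrix and reports the quantities behind the elbow curve of Fig. 1 and the Davies-Bouldin Index (DBI). The scikit-learn calls and the toy corpus are assumptions for illustration only; the study itself was implemented in Java.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Toy stand-ins for the 100 preprocessed Indonesian journal documents.
documents = [
    "reduksi dimensi dokumen teks bahasa indonesia",
    "pengelompokan dokumen dengan algoritma k means",
    "analisis komponen utama untuk ekstraksi fitur",
    "pemetaan isometrik dan self organizing map",
] * 10

X = TfidfVectorizer().fit_transform(documents)  # ~8,250 term features in the paper

for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    dbi = davies_bouldin_score(X.toarray(), km.labels_)
    # km.inertia_ (within-cluster sum of squared distances) drives the elbow curve;
    # the smallest DBI indicates the most compact, best-separated clustering.
    print(f"k={k}  inertia={km.inertia_:.3f}  DBI={dbi:.3f}")
```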

Cluster validation is one of the critical issues for an accurate and successful grouping process. It is generally categorized into external and internal clustering validation. Internal clustering validation is based on two criteria: the first is compactness, meaning that the members of each cluster need to be close to each other (the standard measure of compactness is the variance); the second is separateness, which measures the separation between clusters [13]. One method used to compute internal clustering validation is the Davies-Bouldin Index (DBI), where a smaller index indicates a better clustering.

2.3 Dimension Reduction Using PCA

Principal Component Analysis is a technique for creating new variables that are linear combinations of the original variables. The purpose of PCA is to project the original data onto a lower-dimensional space.

The projected data are those with the highest variance, and the dimensions of the projected data are decorrelated [14, 15]. PCA uses the covariance of the original data to find the eigenvectors and eigenvalues, and the data are then ordered by eigenvalue, from highest to lowest. If a dimension of k is desired, the k eigenvectors with the largest eigenvalues are taken and arranged into a matrix. The data mapped onto this lower dimension are called the principal components. The PCA equation is as follows:

$z = xw$   (1)

where z is the new principal component value, x is the original input value and w is the projection matrix. The calculation steps are as follows:

1. Calculate the average of the input data X:

$\bar{X} = \frac{\sum_{i=1}^{N} X_i}{N}$   (2)

where $\bar{X}$ is the average value of the input X, X are the original data (input values) and N is the number of data instances.

2. Calculate the covariance S of the input X:

$S = \frac{(x - \bar{x})^{T}(x - \bar{x})}{N}$   (3)

where S is the covariance of the input X and T denotes the transpose.

3. Calculate the eigenvectors and eigenvalues of the covariance matrix.
4. Determine the number of new components k and take the k eigenvectors with the largest eigenvalues. Arrange these eigenvectors as the columns of a matrix, which is then used as the projection matrix w.
5. Finally, multiply the original data x by the projection matrix w to obtain the reduced data z.
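As a minimal illustration of Eqs. (1)-(3) and steps 1-5, the sketch below performs PCA by eigendecomposition of the covariance matrix with NumPy. The function name pca_reduce and the random stand-in data are assumptions for illustration; the study's own implementation was written in Java.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (n_samples x n_features) onto the top-k principal components."""
    X_centered = X - X.mean(axis=0)          # step 1: center the data
    S = np.cov(X_centered, rowvar=False)     # step 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)     # step 3: eigenvalues/eigenvectors (S is symmetric)
    order = np.argsort(eigvals)[::-1][:k]    # step 4: k largest eigenvalues
    W = eigvecs[:, order]                    # projection matrix w
    return X_centered @ W                    # step 5: z = x w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((100, 50))                # stand-in for the 100 x 8,250 tf-idf matrix
    print(pca_reduce(X, k=6).shape)          # (100, 6)
```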


2.4 Dimension Reduction Using SOM

The Self-Organizing Map is a neural network technique that uses unsupervised, competitive learning. It aims to map high-dimensional data into low-dimensional data using a structured topology of units based on similarity [16]. In order to project data into lower dimensions, it uses weight vectors, initialized randomly, to group the input vectors. The Euclidean Distance is used to find the weight vector closest to an input vector, and the corresponding unit is selected as the winning neuron. The weight vectors are then updated, iteration after iteration, using the alpha (learning rate) multiplier, which keeps decreasing. SOM involves three essential components [17]: competition, in which the winning neuron is the one with the smallest value of the discriminant function; cooperation, in which the spatial location of the excited neuron defines a topological neighbourhood of cooperating neurons; and synaptic adaptation, which decreases the value of the discriminant function associated with the input pattern. The steps of SOM are the following:

1. Determine the initial weight values randomly (0 to 1) in the k target dimensions;
2. Determine the learning rate and the number of iterations used in this study;
3. Calculate the distance between the input data ti and the weights using the Euclidean Distance formula;
4. Determine the winning neuron, namely the one whose weight values are closest to the input data;
5. Update the weight values;
6. Repeat the above steps with the data ti+1 until the specified number of iterations is reached.

In the SOM algorithm process, parameters have to be determined in the form of the Neighbourhood Radius (NR), the Learning Rate (LR), and the number of iterations. These parameters were tuned sequentially, in stages, to obtain the optimum value of each parameter according to the best DBI cluster evaluation value. The parameters used were NR = 50, LR = 0.4, and 4,000 iterations. A small illustrative sketch follows.
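A hedged Python sketch of the six SOM steps above is given next. The paper does not fully specify how the trained map yields the k reduced features; representing each document by its distances to the k map units is one common choice and is an assumption here, as are the function name som_reduce and the random stand-in data (the study's own code was written in Java).

```python
import numpy as np

def som_reduce(X, k=6, n_iter=4000, lr=0.4, radius=50, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((k, m))                        # step 1: random initial weights in [0, 1)
    for t in range(n_iter):                       # step 2: fixed number of iterations
        x = X[t % n]                              # present one input vector
        d = np.linalg.norm(W - x, axis=1)         # step 3: Euclidean distance to each unit
        winner = int(np.argmin(d))                # step 4: winning neuron
        alpha = lr * (1.0 - t / n_iter)           # decaying learning rate
        sigma = max(radius * (1.0 - t / n_iter), 1e-6)
        for j in range(k):                        # step 5: update winner and its neighbours
            h = np.exp(-abs(j - winner) ** 2 / (2.0 * sigma ** 2))
            W[j] += alpha * h * (x - W[j])
    # Reduced representation: distance of every document to every map unit.
    return np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)

if __name__ == "__main__":
    X = np.random.default_rng(1).random((100, 200))  # stand-in for the tf-idf matrix
    print(som_reduce(X).shape)                        # (100, 6)
```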


2.5 Dimension Reduction Using Isomap

Isomap is a dimension reduction method that works with geodesic distances in order to handle nonlinear data [6]. The first step is to identify the k nearest neighbours of each data point by measuring the distances between the data points. This is followed by computing the geodesic distances with the Floyd-Warshall algorithm [18–20], which looks for the shortest path between every pair of data points, the distance between non-adjacent points being initialized to infinity. Finally, the geodesic distance matrix is used as the input of Multidimensional Scaling (MDS) [21–23]. The calculation steps of the Isomap method are as follows [24]:

1. Build the neighbourhood graph G by connecting every data point to its k nearest neighbour points.
2. Calculate the shortest paths. Isomap estimates the geodesic distances using the Floyd-Warshall relaxation

$d_G(i, j) = \min\{\, d_G(i, j),\; d_G(i, k) + d_G(k, j) \,\}$   (4)

where the final matrix

$D_G = \left[\, d_G^2(x_i, x_j) \,\right]_{i,j=1}^{N}$   (5)

contains the shortest distances between all pairs of points of the graph G.
3. Build the d-dimensional embedding. Apply classical multidimensional scaling (MDS) to the matrix of shortest distances obtained in the previous step, as expressed by

$Y = \min \left\| \tau(D_G) - \tau(D_Y) \right\|^2$   (6)

where

$\tau = -\frac{HSH}{2}$   (7)

with

$H_{x_i x_j} = \delta_{ij} - \frac{1}{N}$   (8)

$S_{x_i x_j} = D^2_{x_i x_j}$   (9)

3 Results and Discussions This research was conducted using object-oriented software developed with the Java programming language and run on hardware with an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz Processor, 16.0 GB RAM, and 64-bit Operating System. All clustering processes with and without dimension reduction methods (PCA, SOM, and Isomap) were carried out using k = 6 for k-Means and conducted 30 times for each method to fulfill the statistical requirement for normal distribution result. The results were analyzed using SPSS version 22 software by Analysis of Variance (ANOVA) with a 0.95 confidence level. Table 1 shows the best clustering quality was carried out after the data had been reduced with SOM. Where k-Means has the ability to produce a DBI value of 13.013,

Dimension Reduction with Extraction Methods

7

Table 1. Clustering quality (in DBI) results before and after dimension reduction Algorithms

N

Mean

Std. deviation

Std. error

35.0166924758

19.5983395784

3.5781508922

k-Means

30

PCA + k-Means

30

31.4857691144

12.0762022431

2.2048027925

SOM + k-Means

30

13.0132456731

3.804739745

.6946472612

Isomap + k-Means

30

30.0506162361

12.4968669458

2.2816053081

Total

120

this means that SOM has the ability to improve cluster quality to 269.084% better than the original using the k-Means (high dimension data). Meanwhile, although others were able to reduce the number of DBI, statistically there were no significant differences. Table 2, shows dimension reduction with Isomap speeds up the k-Means clustering computing time by more than 190 times while PCA accelerate it by 3.64 times. However, SOM failed to provide a significant difference result. This computation time is relevant to the results data in Table 3, which shows that Isomap and PCA has the ability to significantly reduce the number of k-Means iterations in order to achieve the convergence condition. Table 2. Clustering time (in seconds) results before and after dimension reduction Algorithms

N

Mean

Std. deviation

Std. error

.6131333333

.17916602459

.03271109107

k-Means

30

PCA + k-Means

30

.1681333333

.03477289207

.00634863246

SOM + k-Means

30

.6938061867

.24181344821

.04414889343

Isomap + k-Means

30

.0032097533

.00057773859

.00010548015

Total

120

Table 3. Clustering iterations results before and after dimension reduction

Algorithms

N

Mean

Std. deviation

Std. error

k-Means

30

6.20

1.789

.327

PCA + k-Means

30

1.83

.379

.069

SOM + k-Means

30

6.70

2.184

.399

Isomap + k-Means

30

1.00

.000

.000

Total

120

8

M. I. Jambak et al.

When compared to the additional time required by the three reduction methods namely PCA, SOM, and Isomap, Table 4 shows that SOM method is the best with a computing time of 0.1314 s. It also explains that the SOM method requires more unaffected memory and processing. Table 4. Reduction time (in seconds) Reductor N Mean PCA

Std. deviation Std. error

30 20.0484333333 .42970083284 .07845227971

SOM

30

.1314666667 .02115579864 .00386250271

Isomap

30

.8784728133 .16573215240 .03025841279

Total

90

4 Conclusion

This research compared the clustering results obtained with the k-Means algorithm on high-dimensional Indonesian-language text documents whose dimensions had been reduced by PCA, SOM, and Isomap. The quality of the clustering results, the computing time, the number of iterations, and the computing time of the dimensional reduction itself were compared. The results of this study indicate that reducing high-dimensional data improves the quality of the clustering results, which supports the theory of the curse of dimensionality in clustering. Dimension reduction using a feature extraction method, in principle, preserves the information contained in the data, so that it is not partially lost, at the cost of a prolonged computation time. In conclusion, SOM and Isomap are viable alternatives to the widely used PCA: SOM provides better clustering quality, while Isomap provides better computing time.

References 1. Jun, S., Park, S.-S., Jang, D.-S.: Document clustering method using dimension reduction and support vector clustering to overcome sparseness. Exp. Syst. Appl. 41(7), 3204–3212 (2014) 2. Chen, T.C., et al. Neural network with K-means clustering via PCA for gene expression profile analysis. In: 2009 WRI World Congress on Computer Science and Information Engineering. IEEE (2009) 3. Jambak, M.I., et al. The impacts of singular value decomposition algorithm toward indonesian language text documents clustering. In: International Conference of Reliable Information and Communication Technology. Springer, Heidelberg (2018) 4. Keogh, E., Mueen, A.: Curse of dimensionality. In: Encyclopedia of Machine Learning, pp. 257–258 (2010) 5. Han, J., Pei, J., Kamber, M.: Data Mining: Concepts and Techniques. Elsevier, Amsterdam (2011)


6. Aréchiga, A., et al.: Comparison of dimensionality reduction techniques for clustering and visualization of load profiles. In: 2016 IEEE PES Transmission & Distribution Conference and Exposition-Latin America (PES T&D-LA). IEEE (2016) 7. Yang, X.-S., et al.: Information analysis of high-dimensional data and applications. Math. Prob. Eng. 2015, 2 (2015) 8. Hasanah, S.I.R., Jambak, M.I., Saputra, D.M.: Comparison of dimensional reduction using singular value decomposition and principal component analysis for clustering results of Indonesian language text documents. In: The 2nd International Conference of Applied Sciences, Mathematics, & Informatics (ICASMI) 2018. Universitas Lampung, Bandar Lampung (2018) 9. Adriani, M., et al.: Stemming Indonesian: a confix-stripping approach. ACM Trans. Asian Lang. Inf. Process. (TALIP) 6(4), 1–33 (2007) 10. Jain, D., Singh, V.: Feature selection and classification systems for chronic disease prediction: a review. Egypt. Inf. J. 19(3), 179–189 (2018) 11. Tan, P.-N., Steinbach, M., Kumar, V.: Cluster analysis: basic concepts and algorithms. Introduction Data Min. 8, 487–568 (2006) 12. Syakur, M., et al. Integration k-means clustering method and elbow method for identification of the best customer profile cluster. In: IOP Conference Series: Materials Science and Engineering. IOP Publishing (2018) 13. Ristevski, B., et al.: A comparison of validation indices for evaluation of clustering results of DNA microarray data. In: The 2nd International Conference on Bioinformatics and Biomedical Engineering, ICBBE 2008. IEEE (2008) 14. Abbas, M.I., Azis, A.I.S.: Integrasi algoritma singular value decomposition (SVD) dan principal component analysis (PCA) Untuk Pengurangan Dimensi Pada data rekam medis. In: Ilmu Komputer, UMI, pp. 99–111 (2014) 15. Santosa, B., Umam, A.: Data Mining dan Big Data Analytics: Teori dan Implementasi Menggunakan Python & Apache Spark. Penebar Media Pustaka, Yogyakarta (2018) 16. Kohonen, T.: The self-organizing map. Proc. IEEE 78(9), 1464–1480 (1990) 17. Haykin, S.: Multilayer perceptrons. Neural Netw. Compr. Found. 2, 156–255 (1999) 18. Qu, T., Cai, Z.: A fast isomap algorithm based on fibonacci heap. In: International Conference in Swarm Intelligence. Springer, Heidelberg (2015) 19. Weisstein, E.W., Floyd-Warshall Algorithm (2008) 20. Wu, Y., Chan, K.L.: An extended Isomap algorithm for learning multi-class manifold. In: Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No. 04EX826). IEEE (2004) 21. Steyvers, M.: Multidimensional scaling. In: Encyclopedia of Cognitive Science (2006) 22. De Leeuw, J., Mair, P.: Multidimensional scaling using majorization: SMACOF in R (2011) 23. Venna, J., Kaski, S.: Local multidimensional scaling. Neural Netw. 19(6–7), 889–899 (2006) 24. Baozhu, W., et al.: Dimensionality reduction based on isomap and mutual information maximization. In: 2010 The 2nd Conference on Environmental Science and Information Application Technology. IEEE (2010)

Reducing Data Volume in Instance Based Learning Maria Do Carmo Nicoletti(B) and Luis Andre Claudiano Centro Universitário Campo Limpo Paulista - PMCC, Rua Guatemala 167, C. Limpo Paulista, SP 13231-230, Brazil [email protected]

Abstract. During the training phase of the Nearest-Neighbor (NN) algorithm, considered the most popular Instance-Based Learning (IBL) algorithm, all training instances are stored as the description of the learned concept. IBL algorithms postpone the generalization process, which usually happens during the training phase, until the classification phase starts i.e., when an unclassified data instance needs to be classified. When training sets have a high volume of instances, to store them all becomes unfeasible mainly due to storage requirements. Several IBL algorithms overcome storage related problems by implementing data volume reduction i.e., by storing only a representative subset of the training set. The investigation described in this paper focuses on four IBL algorithms that implement data reduction, which have been empirically evaluated in data sets from the UCI Repository. Their performance, considering storage reduction and classification accuracy, are presented and discussed.

1 Introduction In the Machine Learning (ML) research area one of the several models of supervised learning is referred to as instance-based learning (IBL), also known as lazy learning, and is implemented by algorithms called instance-based learning algorithms. As discussed in [1, 2], the training phase of algorithms that implement IBL, as a rule, simply store the training instances. The ‘generalization’ process, which usually takes place during the training phase of machine learning algorithms, occurs during the classification phase of IBL algorithms, when a new instance, of unknown class, needs to be classified. An advantage of this type of learning is that instead of generalizing the concept, considering the whole set of available training instances, it estimates the concept locally, for each new instance to be classified. One of the disadvantages of IBL algorithms is the computational cost involved in running the classification process, when the training set is bulky and, also, data instances are described by a large number of attributes, since all the required processing task happens in the classification phase. Currently it can be identified in the literature several trends related to the use of IBL algorithms e.g., in a cooperative fashion with biologically inspired optimization methods, as in [3], being the algorithm of choice for the segmentation of images, as in [4] or then, as new proposals/refinements, as in [5] and [6]. The research work described in this © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 10–20, 2021. https://doi.org/10.1007/978-3-030-49336-3_2

Reducing Data Volume in Instance Based Learning

11

paper focuses on IBL algorithms and, particularly, on how some of IBL algorithms try to overcome the drawback related to storage requirements. When designing an algorithm that implements a reduction of the number of instances, an important choice must be made between modifying the instances, using a new representation or then, retaining a subset of the original set of instances. With focus on modifying the original representation, instances referred to as prototypes (or representatives or exemplars) may represent a group of instances, even if the prototype instance has been artificially created (i.e., there is no corresponding instance in the original set of instances). The EACH algorithm/system, which implements the NGE (Nested Generalized Exemplar Learning) theory [7–9], modifies and generalizes the training instances by representing them as hyperrectangles with faces parallel to the coordinate axes. The use of hyperrectangles for representing concepts has the advantage of easing the translation of the induced concepts into a set of rules, aiming at improving the understandability of the concepts [10, 11]. IBL algorithms however, solve the problem of reducing the number of instances by identifying an appropriate subset of the original set of instances. It is important to note, however, that a problem associated with this solution is the possibility of the nonexistence of a subset of precisely localized instances which reflects a precise and concise description of the concept represented by all instances. However, when using algorithms employing prototypes, such precision and conciseness can be obtained by using artificially created prototypes. As mentioned earlier, several IBL algorithms deal with high volume of training instances by using reduction techniques. The work described in this paper empirically investigates algorithms that implement reduction techniques in several data sets of the UCI Machine Learning Repository [12]. The main goal was to identify the real contribution of algorithms that aim at volume reduction while maintaining the classification performance stable. This paper is organized as follows. Section 2 covers details related to the process of reducing the volume of instances to be stored by IBL algorithms. Section 3 initially presents the NN algorithm [13], a typical representative of IBL, which has been largely used in several modified versions, followed by the four IBL algorithms that implement volume reduction namely, the IB2 [1, 2], the CNN [14], the RNN [15] and the Method2 [16] Sect. 4 presents a description of the main characteristics of the 10 data sets used as input data, followed by the ad-hoc methodology employed in the learning experiments and the results of the experiments. Finally Sect. 5 summarizes the work done and highlights its intended continuation.

2 Some Considerations About IBL Algorithms As briefly mentioned in Introduction, an IBL algorithm does not generalize the training data as usually ML algorithms do - it only stores it. The generalization process is carried out during the classification phase, when a class is assigned to the new instance x of unknown class. To classify x the algorithm scans the set of stored instances, trying to identify the instance whose description is more ‘similar’ to the description of x and then, its corresponding class is assigned to x. IBL algorithms use as a measure of ‘similarity’ the concept of proximity, usually implemented as the Euclidean distance. A concept

12

M. D. C. Nicoletti and L. A. Claudiano

description based on instances is represented by the set of stored instances. Among some of the difficulties IBL algorithms experience is related to the increase in classification time, when many training instances are added to the memory and, also, the possibility that such algorithms exceed the capacity of the available memory, due to the large number of stored instances. As mentioned earlier, one way of dealing with this problem is by implementing volume reduction techniques. Different criteria can be used to compare the performance of IBL algorithms that implement volume reduction techniques. Authors in [17] consider several of them. The experiments described in this paper used two of them, namely: (1) Reduction of storage: one of the most relevant criterion used to evaluate an IBL algorithm implementing a reduction technique; (2) Generalization accuracy: an IBL algorithm implementing volume reduction is successful if reduces the volume of stored instances, without a significant reduction in its classification accuracy.

3 IBL Algorithms with Data Reduction

Most algorithms that implement IBL have been inspired by the Nearest Neighbor (NN) algorithm [13]. This section briefly presents the NN, since the four IBL algorithms implementing volume reduction used in the experiments described in Sect. 4 are based on the NN. Figure 1 shows the pseudocode of the NN algorithm based on the description found in [15]. The algorithm assumes that all the N stored instances correspond to data instances in an M-dimensional space R^M (where R is the set of real numbers), each of them associated with one out of S classes; a distance function is then used to determine which data instance, among those stored, is the closest one to the new instance to be classified. Once the closest instance is determined, its class is assigned to the new instance and the algorithm ends.

Training algorithm:
• store the training set with N training instances, TNN = {(x1, θ1), (x2, θ2), ..., (xN, θN)}, where:
  (a) xi (1 ≤ i ≤ N) is an M-dimensional vector of attribute values: xi = (xi1, xi2, ..., xiM)
  (b) θi ∈ {1, 2, ..., S} is the correct class of xi (1 ≤ i ≤ N).
Classification algorithm:
• given an instance xq to be classified, the decision rule implemented by the algorithm decides that xq has class θj if d(xq, xj) ≤ d(xq, xi) for all 1 ≤ i ≤ N, where d is an M-dimensional distance metric.

Fig. 1. High level pseudocode of the NN [15].
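As an illustration of the decision rule in Fig. 1, the following minimal sketch implements the 1-NN rule with the Euclidean distance; the function name and array layout are our own choices, not part of the original paper.

```python
import numpy as np

def nn_classify(X_train, y_train, x_query):
    """Return the class of the stored instance closest to x_query (1-NN rule)."""
    # Euclidean distance from the query to every stored instance
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # the class of the nearest stored instance is assigned to the query
    return y_train[np.argmin(distances)]

# tiny usage example
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
y_train = np.array([1, 1, 2])
print(nn_classify(X_train, y_train, np.array([4.0, 4.5])))  # -> 2
```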

The NN does not implement any technique for reducing data volume and, as commented in [14], the NN is not the algorithm of choice in many applications due to


the storage requirements it imposes. The NN has been considered in the experiments described in Sect. 4 for comparison purposes only.

The IB2 is the second algorithm of the IBL family proposed by Aha [1, 2]. It can be described as an algorithm which, given a training set, starts by randomly choosing and storing a data instance; if the next randomly chosen instance is correctly classified by the instances stored so far, it is discarded, otherwise it is also stored. This process repeats itself until all instances from the given training set have been considered. As pointed out in [1, 2], the IB2 reduces considerably the need for storage; this fact, however, has the side effect of making the algorithm more sensitive to the presence of noise. As noisy instances are prone to be incorrectly classified, they end up being stored, and this becomes a serious disadvantage of the algorithm. The IB2 is quite similar to the CNN algorithm, described next, except that it does not go over the data twice, as the CNN does.

The NN variant named Condensed Nearest Neighbor (CNN), proposed in [14], seeks to reduce the volume of data stored by the NN (i.e., the set noted as TNN in the procedure described in Fig. 1), by choosing a representative subset TCNN ⊆ TNN, as described in its pseudocode, in Fig. 2.

procedure cnn_gates
Input: TNN       % training set
Output: TCNN     % reduced training set
1. TCNN ← first instance of TNN
2. TCNN is used to classify each instance of TNN, from the first. The process is repeated until one of two situations arises:
   (a) every instance of TNN is correctly classified and the algorithm ends.
   (b) one of the instances of TNN is incorrectly classified; then go to step 3.
3. Add the instance of TNN incorrectly classified to TCNN and go to step 2.
return TCNN
end procedure

Fig. 2. Pseudocode of the CNN [15].
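A compact sketch of the condensation loop of Fig. 2 is given below; it reuses the hypothetical nn_classify helper from the previous sketch and is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np

def cnn_condense(X, y):
    """Condensed Nearest Neighbor: keep a subset that classifies all of TNN correctly."""
    keep = [0]  # start with the first instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            pred = nn_classify(X[keep], y[keep], X[i])
            if pred != y[i]:          # misclassified by the current subset
                keep.append(i)        # add it and rescan the whole set
                changed = True
    return np.array(sorted(set(keep)))
```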

For constructing the TCNN set, the CNN algorithm uses the concept of a consistent subset of TNN, defined as a subset of TNN that correctly classifies all data instances in TNN. The minimal subset is the smallest, in number of elements, among all consistent subsets and, consequently, the one that will classify every instance in TNN most efficiently. The CNN algorithm, however, always finds a consistent subset that is not necessarily minimal. The difference between the NN and the CNN is the training set each of them stores; while the NN stores the entire original training set, the CNN seeks to reduce the original training set and, thus, the storage requirements.

The Reduced Nearest Neighbor (RNN) algorithm [15] can be approached as an extension of the CNN, considering that its first step is the CNN algorithm, as can be seen in its pseudocode presented in Fig. 3. Aiming at pre-processing the original set of instances in order to identify representative training instances to pass on to the CNN, the Method2 algorithm [16] implements a procedure that collects, from the given training set, pairs of training instances


that define the so-called Tomek's links. In a two-class training set, where class 1 is represented by instances {x1, …, xi} and class 2 is represented by instances {y1, …, yj}, a pair (xp, yq) defines a Tomek's link if there is no training instance xz of class 1 such that distance(xz, yq) < distance(xp, yq) and no training instance yk of class 2 such that distance(xp, yk) < distance(xp, yq).

procedure rnn
Input: training set TNN
Output: reduced training set TRNN
TCNN ← cnn(TNN)
TRNN ← TCNN
while there are instances in TRNN which have not been removed do
begin
  FP ← remove_next_instance(TRNN)   % the removal is sequentially conducted
  okay ← classify(TNN, TRNN)        % classifies the instances in TNN using TRNN
  if not okay then TRNN ← {FP} ∪ TRNN
end
return TRNN
end procedure

Fig. 3. A rewriting of the RNN algorithm as proposed in [15].
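The deletion loop of Fig. 3 can be sketched as below; it again reuses the hypothetical nn_classify and cnn_condense helpers from the earlier sketches, assumes a simple sequential removal order, and is not the authors' implementation.

```python
import numpy as np

def rnn_reduce(X, y):
    """Reduced Nearest Neighbor: start from the CNN subset and drop redundant instances."""
    kept = list(cnn_condense(X, y))
    for idx in list(kept):
        trial = [k for k in kept if k != idx]   # tentatively remove one instance
        if not trial:
            continue
        consistent = all(
            nn_classify(X[trial], y[trial], X[i]) == y[i] for i in range(len(X))
        )
        if consistent:      # TNN is still classified correctly without it
            kept = trial    # make the removal permanent
    return np.array(kept)
```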

If two data instances are connected by a Tomek's link, each of them is a boundary instance of the class it represents. In a two-class situation, if two data instances define a Tomek's link, both are each other's nearest neighbor of the opposite class.

Fig. 4. Training set containing 18 instances, 8 belonging to class (•) and 10 to class (*). Note that among the three lines, only the dashed line, marked with an arrow pointing down, is a Tomek's link.


For the sake of illustration, as shown in the diagram of Fig. 4, consider a training set with 18 instances, each belonging to one of two classes, grouped as two subsets: the eight instances with class (•), {X1, …, X8}, and the ten instances with class (*), {Y1, …, Y10}. Two instances, X and Y, belonging to different classes, define a Tomek's link if X is the nearest neighbor of Y and Y is the nearest neighbor of X. A Tomek's link is defined, therefore, by two border instances, with distinct classes, which are each other's nearest neighbor.
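Since Method2 feeds only the instances participating in Tomek's links to the CNN, the following sketch shows how such pairs could be found; the helper name and the brute-force search are our own illustrative choices, not the authors' code.

```python
import numpy as np

def tomek_links(X, y):
    """Return index pairs (i, j) of opposite-class instances that are mutual nearest
    neighbors of the opposite class, i.e. that define a Tomek's link."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    links = []
    for i in range(len(X)):
        opp_i = np.where(y != y[i])[0]            # candidates of the opposite class
        j = opp_i[np.argmin(dist[i, opp_i])]      # nearest opposite-class neighbor of i
        opp_j = np.where(y != y[j])[0]
        if opp_j[np.argmin(dist[j, opp_j])] == i and i < j:
            links.append((i, j))
    return links
```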

4 Data, Methodology, Experiments and Results

Table 1 describes a few characteristics of the 10 data sets employed in the experiments; nine of them have been downloaded from [12] and one (Mouse) has been artificially created, as a three-class data set whose instances compose a two-dimensional representation of Mickey Mouse's head. The methodology used to conduct the experiments can be approached as a two-phase process. The first phase, in charge of preprocessing the original data, is described by the flowchart in Fig. 5. In the flowchart, for each domain identified by i (i = 1, …, 10), the associated original data set downloaded from the UCI Repository (as well as the Mouse data set) is referred to as ODi. The second phase, described in Fig. 6, concerns the use of one of the four IBL algorithms that implement volume reduction, for reducing the volume of a particular data set, identified by i. Considering that IBL algorithms are sensitive to the order in which the instances are processed, as Fig. 5 shows, for each ODi, 50 versions of ODi have been created, where each version contains a shuffled version of the original data instances in ODi. The 50 shuffled versions of ODi have been named ODi1, ODi2, …, ODi50 - they all have the same instances as the ODi data set, but in different sequencing. Next, for each specific ODi (i = 1, …, 10), 50 pairs of sets, identified as Trij–Teij (j = 1, …, 50), containing respectively 85% and 15% of the instances of ODi, were created.

Table 1. i: numerical identification of ODi; ODi: original data domain downloaded from [12], except for (*); NODi: Number of data instances in ODi; NA: Number of Attributes that describe ODi; NC: Number of Classes in ODi.

i   ODi         NODi   NA  NC
1   Car         1,728   6   4
2   Cleveland     303  13   2
3   Hungarian     294  13   2
4   Iris          150   4   3
5   Liver         345   6   2
6   Mouse(*)    1,000   2   3
7   Seeds         210   7   3
8   Waveform      800  21   3
9   Vertebral     310   6   3
10  Voting        435  16   2



Fig. 5. Creating the 50 training-test pairs of data for each domain ODi , i = 1, …, 10.

Table 2 shows the number of training instances and corresponding test instances of each of the 50 Training(Tr)–Testing(Te) set pairs generated, associated with each of the 10 domains considered. Notice that, for a particular original data set, the number of training instances, as well as the number of testing instances, remains the same for all of its 50 versions.

The flowchart in Fig. 6 runs a loop with 50 iterations where, at each iteration, the algorithm receives as input a pair of data sets Trij–Teij (j = 1, …, 50), associated with data set i, created in the pre-processing phase (Fig. 5). For each such pair, the algorithm produces the subset of instances from Trij to be stored (as the induced concept), which is then evaluated using the corresponding Teij. The procedure ends when all the 50 pairs have been input to the algorithm and the induced concept evaluated; before the procedure ends, the average accuracy values and the number of stored instances, over the 50 executions, are calculated. The results for each data domain and each IBL algorithm with volume reduction are shown in Table 3. The flowchart in Fig. 6 displays the execution of a particular IBL algorithm implementing volume reduction, for a particular data set i (given by its numerical identification, which can vary from 1 to 10).
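The evaluation loop just described (50 shuffles, an 85%/15% split, and averaging accuracy and storage over the runs) could be reproduced along the following lines; the reducer argument stands for any of the hypothetical reduction functions sketched earlier, and the code is an illustration, not the authors' implementation.

```python
import numpy as np

def evaluate_reducer(X, y, reducer, n_runs=50, train_frac=0.85, seed=0):
    """Average accuracy and % of stored instances of an IBL reducer over shuffled splits."""
    rng = np.random.default_rng(seed)
    acc, stored = [], []
    for _ in range(n_runs):
        order = rng.permutation(len(X))                 # one shuffled version of ODi
        cut = int(round(train_frac * len(X)))
        tr, te = order[:cut], order[cut:]
        keep = reducer(X[tr], y[tr])                    # indices kept by IB2/CNN/RNN/Method2
        preds = [nn_classify(X[tr][keep], y[tr][keep], X[t]) for t in te]
        acc.append(np.mean(np.array(preds) == y[te]))
        stored.append(100.0 * len(keep) / len(tr))
    return float(np.mean(acc)), float(np.mean(stored))
```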



Fig. 6. Flowchart of the execution process of each of the four volume reduction algorithms used, in a particular data set identified by the numerical identifier i. In the flowchart each of them is generically referenced as IBL algorithm with Volume Reduction.

Table 2. OD(NA/NC): original data domain (No. attributes/No. classes); #NOD: No. of instances in OD; #NITr(%): No. of instances in the training set (%); #NITe(%): No. of instances in the testing set (%).

OD(NA/NC)         #NOD    #NITr(%)        #NITe(%)
Car(6/4)          1,728   1,469 (85.01)   259 (14.99)
Cleveland(13/2)     303     258 (85.15)    45 (14.85)
Hungarian(13/2)     294     250 (85.03)    44 (14.97)
Iris(4/3)           150     128 (85.33)    22 (14.97)
Liver(6/2)          345     293 (85.03)    52 (14.97)
Mouse(*) (2/3)    1,000     850 (85.00)   150 (15.00)
Seeds(7/3)          210     179 (85.24)    31 (14.76)
Waveform(21/3)      800     680 (85.00)   120 (15.00)
Vertebral(6/3)      310     264 (85.16)    46 (14.84)
Voting(16/2)        435     370 (85.06)    65 (14.94)


Table 3. OD: original data domain; %Acc: Average accuracy over 50 runs; %SI: Average number of stored data instances over 50 runs (%).

             IB1          IB2            CNN            RNN            Method2
OD           %Acc  %SI    %Acc  %SI      %Acc  %SI      %Acc  %SI      %Acc  %SI
Car          94.73 100    94.09 13.89    94.47 13.74    93.88 13.76    58.11  6.39
Cleveland    58.76 100    54.58 44.03    55.91 44.57    55.87 32.27    50.53 26.88
Hungarian    60.64 100    57.73 44.54    57.00 45.11    54.32 39.29    55.86 26.78
Iris         95.91 100    93.18 10.31    93.91  6.01    88.91  7.37    51.18  4.61
Liver        62.93 100    57.19 43.45    56.58 43.15    55.96 36.75    54.00 25.98
Mouse        98.68 100    98.20  3.56    97.84  3.45    97.07  2.08    75.44  1.45
Seeds        91.23 100    89.10 15.49    96.90 15.23    85.29 11.39    66.71  9.83
Waveform     73.87 100    69.07 31.86    69.47 32.09    68.43 24.14    64.35 12.90
Vertebral    82.48 100    75.22 26.63    76.35 26.01    74.87 20.16    58.48 11.52
Voting       93.05 100    89.75 11.77    89.66 12.07    89.05 11.25    80.49  6.04

For reference, Table 3 also presents the results obtained when using the IB1 algorithm [1, 2], which is the name given to the NN algorithm within the IBL family. The IB1 can be considered an incremental version of the NN algorithm which keeps, for each stored instance, its score when classifying the subsequent instances, during the learning phase of the algorithm, when all training instances are stored, one by one. As expected, the IB1, an algorithm that simply stores the whole training data, had the best accuracy over all domains. Considering that it does not implement any volume reduction technique, the whole training set remains as the induced concept. As far as accuracy is concerned, when comparing IB2, CNN and RNN, the differences in values are statistically irrelevant, despite the slight disadvantage of the RNN in the Hungarian, Iris and Seeds domains. When comparing the accuracy values of Method2 with those of IB2, CNN and RNN, only in the Hungarian domain did Method2 reach a better accuracy value than those obtained by the other three. In this domain, however, the four algorithms that implement volume reduction, as well as the IB1, had poor performance, a possible indication of the presence of noisy data.


As far as the number of stored instances is concerned, when comparing IB2, CNN and RNN, the results point out that the RNN produced the highest volume reduction among the three in all domains except Iris, where it loses to the CNN by a little more than 1%. Particularly in the Iris domain, the CNN performance, although very close to the other two, was the lowest. Method2 obtained the best results in volume reduction in all 10 domains, compared to the other three, IB2, CNN and RNN. Considering its accuracy values, however, it may be inferred that Method2 has prioritized reduction to the detriment of accuracy. As Method2 in its second phase uses the CNN, the preprocessing of the input data based on Tomek's links, conducted during its first phase, is perhaps a too strict preprocessing procedure for the data considered in the experiments.

5 Conclusions

The work described in this paper has focused on four IBL algorithms which implement volume reduction, where three of them belong to what Brighton and Mellish in [18] have identified as early approaches and one to the group identified as recent additions by the same authors. We consider that Method2, as presented in this paper, belongs to the first group, since it is basically the CNN with a pre-processing phase, where the original data is preprocessed to extract pairs of instances that define Tomek's links, to be input to the CNN. As reviewed in [17] and in [18], there are still several IBL algorithms that implement volume reduction which have not been addressed in this paper. Particularly, two of them, the RT3 [17] and the ICF [18], have been considered as the focus for continuing the work described in this paper.

Acknowledgment. The authors are grateful to UNIFACCAMP, C. Limpo Paulista, SP, Brazil, for the support received. The first author is also thankful to CNPq. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

References
1. Aha, D.W., Kibler, D., Albert, M.K.: Instance-based learning algorithms. Mach. Learn. 6, 37–66 (1991)
2. Aha, D.W. (ed.): Lazy Learning. Springer, Dordrecht (2013)
3. Salama, K.A., Abdelbar, A.M., Helal, A.M., Freitas, A.A.: Instance-based classification with ant colony optimization. Intell. Data Anal. 21(4), 913–944 (2017)
4. Arroyo, J., Guijarro, M., Pajares, G.: An instance-based learning approach for thresholding in crop images under different outdoor conditions. Comput. Electron. Agric. 127, 669–679 (2017)
5. Marchiori, E.: Hit miss networks with applications to instance selection. J. Mach. Learn. Res. 9, 997–1017 (2008)
6. Hamidzadeh, J., Monsefi, R., Sadoghi Yazdi, H.: Large symmetric margin instance selection algorithm. Int. J. Mach. Learn. Cybernet. 7(1), 25–45 (2014). https://doi.org/10.1007/s13042-014-0239-z


7. Salzberg, S.: A nearest hyperrectangle learning method. Mach. Learn. 6, 251–257 (1991)
8. Wettschereck, D.: A hybrid nearest-neighbor and nearest-hyperrectangle algorithm. In: Bergadano, F., Raedt, L. (eds.) Lecture Notes in Artificial Intelligence, vol. 784, pp. 323–335 (1994)
9. Wettschereck, D., Dietterich, T.G.: An experimental comparison of the nearest-neighbor and nearest-hyperrectangle algorithms. Mach. Learn. 19, 5–27 (1995)
10. Domingos, P.: Rule induction and instance-based learning: a unified approach. In: Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), pp. 1226–1232 (1995)
11. Domingos, P.: Unifying instance-based and rule-based induction. Mach. Learn. 24, 141–168 (1996)
12. Dua, D., Graff, C.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA (2019). http://archive.ics.uci.edu/ml/index.php
13. Cover, T.M., Hart, P.E.: Nearest neighbour pattern classification. IEEE Trans. Inf. Theory 13, 21–27 (1967)
14. Hart, P.E.: The condensed nearest neighbor rule. IEEE Trans. Inf. Theory 14, 515–516 (1968)
15. Gates, G.W.: The reduced nearest neighbor rule. IEEE Trans. Inf. Theory 18(3), 431–433 (1972)
16. Tomek, I.: Two modifications of CNN. IEEE Trans. Syst. Man Cybern. 6(11), 769–772 (1976)
17. Wilson, D.R., Martinez, T.R.: Reduction techniques for instance-based learning algorithms. Mach. Learn. 38, 257–286 (2000)
18. Brighton, H., Mellish, C.: Advances in instance selection for instance-based learning algorithms. Data Min. Knowl. Disc. 6, 153–172 (2002)

State Estimation of Moving Vehicle Using Extended Kalman Filter: A Cyber Physical Aspect

Ankur Jain(B) and Binoy Krishna Roy
NIT Silchar, Silchar, Assam, India
[email protected]

Abstract. Massive growth in computing and nano-electronic, large-scale integrated devices has tied physical systems to digital systems to perform advanced, optimized control and supervision. Formally, this is known as a cyber-physical system. When we interface physical devices to cyber devices, a lot of sensor data needs to be transformed into relevant information. On several occasions, it is very difficult to capture the physical world accurately. The environment and the sensors are also prone to noise. Because of that noise, the calculation of the control input becomes very difficult in a real-time scenario. In this paper, we design a filter to remove noise and modeling errors, using the Extended Kalman Filter (EKF). The filter output is used for control signal calculation.

Keywords: Cyber physical system · State estimation · Extended Kalman filter · Longitudinal and lateral control

1 Introduction

During the past decade, the research community has gained significant achievements in the areas of cyber-physical systems [1], autonomous driving [2], self-driving vehicles [3], cooperative adaptive cruise control (CACC), vehicle control [4], etc. To support the above broad projects, several stand-alone strategies have been developed by many researchers, such as automated highway systems [5], lane departure warning and control [6], anti-lock braking systems (ABS) [7], automatic cruise control [8], steering control systems [9], etc. In the physical world, computational resources and physical systems are interconnected. Embedded systems and computer networks govern actuators that operate in the real world. They receive inputs from various sensors and create a control loop. We can use it for smart environment adaptation and to improve efficiency. Such systems are commonly and broadly defined as cyber-physical systems (CPSs), as shown in Fig. 1a. Here, the interface could be an analog to digital (A/D) converter, a digital to analog (D/A) converter, computer networks, etc. (Table 1). There are several advantages of autonomous cars as compared to conventional cars with stand-alone systems (i.e. ABS [10]), for example, accident prevention due to lack of driver attention and skills [1], reduced traffic and increased


Table 1. (a) Controller design techniques used in vehicle control (b) Notations

(a)
Ref.       Control techniques
[27, 28]   PID control
[29, 30]   Gain scheduling
[31]       Fuzzy control
[32, 33]   H∞
[23, 34]   LQR
[35, 36]   SMC
[37, 38]   MPC
[19]       Back stepping

(b)
Symbol   Description
x        State vector
z        Sensor measurements
A        Linearised state transition matrix
C        Linearised measurement matrix
P        State uncertainty covariance matrix
Q        Process uncertainty covariance matrix
R        Measurement uncertainty covariance matrix

road throughput [11], fuel savings [12], etc. To achieve level 4 or 5 of the SAE automation levels [13], we need detailed motion planning and route mapping [14]. In this paper, we solve the longitudinal and lateral control problem in the presence of modelling errors and sensor noise. It is important to discuss the significant works done by several researchers. Ref. [15] discussed the cyber-physical view on road freight transport. Tiganasu et al. designed a cooperative controller from a CPS perspective in [16]. Fallah et al. designed a vehicle safety system [17]; they tightly coupled communication, vehicle dynamics and computation devices together. The list of controllers designed for autonomous vehicle control is tabulated in Table 1a. Longitudinal motion [18] and lateral motion [19] of vehicles have been studied separately. Longitudinal dynamics are used for high velocity, while lateral motion is usually described by the kinematics with a low-speed assumption. There are a few papers which study longitudinal and lateral control together [20, 21]. Such systems work well in ideal surrounding conditions and with precise sensor measurements [22]. Most of the literature assumes ideal surroundings and sensors. Ref. [2] takes care of lateral interruptions such as lane changes and cut-ins. The transient performance due to cut-ins, which changes the vehicle control mode, is improved in [23]. The Extended Kalman filter (EKF) is a widely known method used for state estimation of a nonlinear system. Other applications of the EKF can be found in [24–26]. We couple the longitudinal dynamics and the lateral kinematics to weaken the low curvature assumption. We have also taken care of noisy surroundings and sensors. The contribution of this paper is to study autonomous highway driving in a noisy environment with a nonlinear model. The rest of the paper is organised as follows. We define the problem formulation in Sect. 2. Section 3 describes the methodology used to solve the defined problem. Numerical values of parameters and simulation results are presented in Sect. 4. Section 5 concludes the research study.

2 Problem Formulation

For a given vehicle model ẋ = f(x, u), track the position and velocity of a moving car on the road. The ego vehicle is equipped with a RADAR sensor, which is somewhat noisy. The sensor gives the range and bearing to a preceding car. Using the given measurement model h(x), we estimate the range Rx (m) and the yaw angle (rad). Assume that the errors in both range and angle are distributed in a Gaussian manner. Find the expected value of the position for the given measurement.

2.1 Modelling of the Plant

Let us suppose we are able to fetch the position of the moving vehicle in x, y; mathematically, we can model it as below:

x_k = x_{k-1} + \dot{x}_{k-1}\,\Delta t, \qquad y_k = y_{k-1} + \dot{y}_{k-1}\,\Delta t

RADAR gives periodic readings in terms of range Rx (m) and bearing (rad), which are defined mathematically as below:

R_x = \sqrt{(x_{ac} - x_{radar})^2 + (y_{ac} - y_{radar})^2}   (1)
    = \sqrt{x^2 + y^2}   (2)
\varphi = \tan^{-1}\frac{y_{ac} - y_{radar}}{x_{ac} - x_{radar}} = \tan^{-1}\frac{y}{x}   (3)

As the above equations are nonlinear in nature, we can linearise the RADAR model using the Jacobian method, which gives

C_k = \begin{bmatrix} \frac{x}{\sqrt{x^2+y^2}} & 0 & \frac{y}{\sqrt{x^2+y^2}} & 0 \\[4pt] \frac{-y/x^2}{1+y^2/x^2} & 0 & \frac{1/x}{1+y^2/x^2} & 0 \end{bmatrix}   (4)

We generalize the above model as below:

x_k = f_{k-1}(x_{k-1}) + q_{k-1}   (5)
p(x_k \mid x_{k-1}) \sim N(x_k;\, A_{k-1}x_{k-1},\, Q_{k-1})   (6)
q_{k-1} \sim N(\bar{q}_{k-1} = 0,\, Q_{k-1})   (7)
x_0 \sim N(\bar{x}_0,\, P_{0|0})   (8)
y_k = h_k(x_k) + r_k   (9)
p(y_k \mid x_k) \sim N(y_k;\, C_k x_k,\, R_k)   (10)
r_k \sim N(\bar{r}_k = 0,\, R_k)   (11)
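As a numerical illustration of the measurement model in Eqs. (1)–(4), a small sketch is given below; the function names are ours, and the state ordering [x, ẋ, y, ẏ] is an assumption consistent with the matrices used later.

```python
import numpy as np

def radar_measurement(state):
    """Range and bearing of the target relative to the radar (state = [x, xdot, y, ydot])."""
    x, _, y, _ = state
    rng = np.hypot(x, y)            # Eq. (2)
    bearing = np.arctan2(y, x)      # Eq. (3)
    return np.array([rng, bearing])

def radar_jacobian(state):
    """Linearised measurement matrix C_k of Eq. (4), evaluated at the current state."""
    x, _, y, _ = state
    r2 = x**2 + y**2
    r = np.sqrt(r2)
    # second row is the simplified form of the bearing derivatives in Eq. (4)
    return np.array([[x / r, 0.0, y / r, 0.0],
                     [-y / r2, 0.0, x / r2, 0.0]])

print(radar_jacobian(np.array([30.0, 1.0, 40.0, 0.5])))
```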

2.2 Vehicular Network Model

Networked Cyber-Physical Systems (NCPSs) consist of computing devices acting synergically through a communication network and interacting with a complex physical process distributed over a given area. Here, we have modelled the network as a time delay τtotal. Suppose τsc is the sensor-to-controller delay, τca is the controller-to-actuator delay, and τc is the computation delay. We have assumed that the total delay τtotal = τsc + τca + τc is less than one sampling time. The vehicular arrangement is shown in Fig. 1b.


Fig. 1. (a) Cyber physical interfacing of vehicle (b) Vehicle arrangement

3 Methodology

Control System Design. Path planning, which is the crux of motion planning, aids in the construction of the speed profile and the lateral control. In order to execute the reference path or trajectory from the motion planning system, a feedback controller is used to select appropriate actuator inputs (throttle/braking and steering) to carry out the planned motion and correct tracking errors. These inputs are obtained from the longitudinal control and the lateral control, which are direct outcomes of the speed profile and the path planning, respectively. The tracking errors generated during the execution of a planned motion are due to the inaccuracies of the vehicle model. Thus, a great deal of emphasis is placed on the robustness and stability of the closed-loop system design. The architecture for vehicle control is shown in Fig. 2a.

Extended Kalman Filter. As we can see, our measurement model is nonlinear, as defined by Eq. 1 and Eq. 3. The Extended Kalman filter takes care of this nonlinearity by linearizing the model at each current estimate (x̂). It means that we are approximating the curve with a closely matching line. These linear equations at those points are then passed to the Kalman function. The flow chart of the algorithm is shown in Fig. 2b. The mathematical description of the method is as follows.


Fig. 2. (a) Vehicle control architecture, (b) Flow chart for EKF

From the perspective of control systems, a problem is nonlinear if either the process equations or the measurement equations are nonlinear. Let us suppose we have nonlinear functions f_{k-1}(x̂) and h(x̂). A first-order Taylor expansion of f_{k-1}(x, u) around x̂ gives

y \approx f_{k-1}(\hat{x}) + f'(\hat{x})(x - \hat{x})

A = \left.\frac{\partial f_{k-1}(x, u)}{\partial x}\right|_{\hat{x}_{k-1|k-1},\, u_{k-1}}, \qquad C = \left.\frac{\partial h(x)}{\partial x}\right|_{\hat{x}_{k|k-1}}, \qquad q \sim N(0, Q), \quad r \sim N(0, R)

Here, A and C are called Jacobians, which form nothing but the state transition and measurement matrices in discrete space. Here we have used a linear process model and a nonlinear measurement model; that is why we need to find only C, which is given in Eq. 4. q and r represent the process noise and measurement noise, respectively, and their covariances are represented by the Q and R matrices. The prior is defined by Eq. 12:

x_{k-1} \mid y_{1:k-1} \sim N(\hat{x}_{k-1|k-1},\, P_{k-1|k-1})   (12)

We predict the next state using the known model (Eq. 13) and the process covariance defined in Eq. 14:

\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1})   (13)

P_{k|k-1} = f'(\hat{x}_{k-1|k-1})\, P_{k-1|k-1}\, f'(\hat{x}_{k-1|k-1})^{T} + Q_{k-1}   (14)

On getting the actual sensor measurement, we find the innovation and then update the estimated state according to Eq. 16:

y = h(x_k) + r_k   (15)

\approx h(\hat{x}_{k|k-1}) + h'(\hat{x}_{k|k-1})(x - \hat{x}_{k|k-1}) + r_{k-1}   (16)
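A minimal sketch of the predict/update cycle of Eqs. (12)–(16) is given below, assuming a linear process matrix A and the hypothetical radar_measurement/radar_jacobian helpers sketched earlier; it illustrates the EKF recursion and is not the authors' implementation.

```python
import numpy as np

def ekf_step(x_est, P, z, A, Q, R):
    """One EKF iteration: predict with the linear process model, update with the
    nonlinear RADAR measurement linearised at the predicted state."""
    # prediction (Eqs. 13-14); the process model is linear, so f(x) = A x
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # update (innovation with the nonlinear measurement, Eqs. 15-16);
    # angle wrap-around of the bearing innovation is ignored for brevity
    C = radar_jacobian(x_pred)
    innovation = z - radar_measurement(x_pred)
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred
    return x_new, P_new
```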

4 Results

Numerical values for the system parameters and the EKF design parameters are tabulated in Table 2.

Table 2. Numerical parameters for simulation

A_{k-1} = \begin{bmatrix} 1 & \Delta t & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \Delta t \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad C_{k-1} = \begin{bmatrix} \frac{x}{\sqrt{x^2+y^2}} & 0 & \frac{y}{\sqrt{x^2+y^2}} & 0 \\[4pt] \frac{-y/x^2}{1+y^2/x^2} & 0 & \frac{1/x}{1+y^2/x^2} & 0 \end{bmatrix}

P = \mathrm{diag}(2500,\, 9,\, 2500,\, 9), \qquad Q = \begin{bmatrix} 0.024 & 0.055 & 0 & 0 \\ 0.05 & 0.1 & 0 & 0 \\ 0 & 0 & 0.024 & 0.055 \\ 0 & 0 & 0.05 & 0.1 \end{bmatrix}, \qquad R = \begin{bmatrix} 25 & 0 \\ 0 & 0 \end{bmatrix}
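Under the assumption that the state is [x, ẋ, y, ẏ] and Δt is the sampling time (its value is not stated explicitly in the paper), the Table 2 parameters could be set up as follows and fed to the hypothetical ekf_step routine sketched above:

```python
import numpy as np

dt = 0.1  # assumed sampling time, an illustrative value

A = np.array([[1.0, dt, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, dt],
              [0.0, 0.0, 0.0, 1.0]])            # linear process model
P = np.diag([2500.0, 9.0, 2500.0, 9.0])         # initial state uncertainty
Q = np.block([[np.array([[0.024, 0.055], [0.05, 0.1]]), np.zeros((2, 2))],
              [np.zeros((2, 2)), np.array([[0.024, 0.055], [0.05, 0.1]])]])
R = np.array([[25.0, 0.0], [0.0, 0.0]])         # range/bearing noise, as listed in Table 2

x_est = np.array([30.0, 1.0, 40.0, 0.5])        # illustrative initial relative state
z = np.array([50.0, 0.9])                       # one illustrative range/bearing reading
x_est, P = ekf_step(x_est, P, z, A, Q, R)
```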

Let us suppose the ego vehicle is equipped with radar. The radar measures the relative distance between the preceding and the ego vehicle. If the relative velocity of the preceding vehicle is more than 0, then the gap between them increases. Figure 3d shows the 2D position of the vehicle relative to the ego vehicle. To update the EKF, RADAR measurements are given in terms of range, shown in Fig. 3a, and bearing, shown in Fig. 3b. Figure 3c shows the estimated relative velocity. From the simulation results it is evident that the filter outputs tend to converge to the true values. Because the filtering is done in real time, fluctuations are seen in the results.


Fig. 3. (a) Range measurements, (b) Bearing measurements, (c) Estimated velocity, (d) Estimated 2D position of a vehicle

5 Conclusions

In this paper, we have discussed the architecture of a cyber-physical system for in-vehicle control. We designed a filter to remove outliers in the measurements and noise in the signals, and we estimated the preceding car's velocity. The simulation shows satisfactory results. Here, we linearised the nonlinear RADAR model and used it in the filter algorithm. A discrepancy between the linearised model and the nonlinear model was observed in a Monte-Carlo experiment, because here we use only the mean value. In the future, we will use more points in the sample space so that the mean value converges to the true value.


Acknowledgement. The authors would like to acknowledge the financial support given by TEQIP-III, NIT Silchar, Silchar - 788010, Assam, India.

References
1. Jia, D., Lu, K., Wang, J., Zhang, X., Shen, X.: A survey on platoon-based vehicular cyber-physical systems. IEEE Commun. Surv. Tutor. 18(1), 263–284 (2016)
2. Liu, K., Gong, J., Kurt, A., Chen, H., Ozguner, U.: A model predictive-based approach for longitudinal control in autonomous driving with lateral interruptions. In: 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 359–364 (2017)
3. Hyatt, K., Paukert, C.: Self-driving cars: a level-by-level explainer of autonomous vehicles (2018). https://www.cnet.com/roadshow/news/self-drivingcar-guide-autonomous-explanation/
4. Jain, A., Roy, B.K.: Control aspects of cooperative adaptive cruise control in the perspective of the cyber-physical system. Int. J. Innov. Technol. Exploring Eng. (IJITEE) (2019)
5. Wang, Z., Wu, G., Hao, P., Boriboonsomsin, K.: Developing a platoon-wide eco-cooperative adaptive cruise control (CACC) system. In: IEEE Intelligent Vehicles Symposium, Proceedings, vol. 4, pp. 1256–1261 (2017)
6. Mohapatra, A.G.: Computer vision based smart lane departure warning system for vehicle dynamics control. Sensors and Transducers 132(9), 122–135 (2011)
7. Tanelli, M., Savaresi, S.M., Cantoni, C.: Longitudinal vehicle speed estimation for traction and braking control systems. In: Proceedings of the IEEE International Conference on Control Applications, pp. 2790–2795 (2006)
8. Jain, A., Roy, B.K.: Tradeoff between quality of control (QoC) and quality of service (QoS) for networked vehicles cruise control. In: Proceedings of 3rd International Conference on Internet of Things and Connected Technologies (ICIoTCT) (2018)
9. Snider, J.M., et al.: Automatic steering methods for autonomous automobile path tracking. Master's thesis, Robotics Institute, Pittsburgh, PA, Technical Report CMU-RI-TR-09-08 (2009)
10. Ivanov, V., Savitski, D., Shyrokau, B.: A survey of traction control and antilock braking systems of full electric vehicles with individually controlled electric motors. IEEE Trans. Veh. Technol. 64(9), 3878–3896 (2015)
11. Dey, K.C., Yan, L., Wang, X., Wang, Y., Shen, H., Chowdhury, M., Yu, L., Qiu, C., Soundararaj, V.: A review of communication, driver characteristics, and controls aspects of cooperative adaptive cruise control (CACC). IEEE Trans. Intell. Transp. Syst. 17(2), 491–509 (2016)
12. Nemeth, B., Csikos, A., Varga, I., Gaspar, P.: Road inclinations and emissions in platoon control via multi-criteria optimization. In: 2012 20th Mediterranean Conference on Control and Automation, MED 2012 - Conference Proceedings, pp. 1524–1529 (2012)
13. SAE On-Road Automated Vehicle Standards Committee and others: Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems. SAE International (2014)
14. Paden, B., Čáp, M., Yong, S.Z., Yershov, D., Frazzoli, E.: A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Trans. Intell. Veh. 1(1), 33–55 (2016)


15. Besselink, B., Turri, V., Van De Hoef, S.H., Liang, K.-Y., Alam, J., Johansson, K.H.: Cyber-physical control of road freight transport. In: Proceedings of the IEEE, vol. 104, no. 5, pp. 1128–1141 (2016)
16. Tiganasu, A., Lazar, C., Caruntu, C.F.: Cyber physical systems - oriented design of cooperative control for vehicle platooning. In: 2017 21st International Conference on Control Systems and Computer Science (CSCS) (2017)
17. Fallah, Y.P., Huang, C., Sengupta, R., Krishnan, H.: Design of cooperative vehicle safety systems based on tight coupling of communication, computing and physical vehicle dynamics. In: Proceedings of the 1st ACM/IEEE International Conference on Cyber-Physical Systems, pp. 159–167. ACM (2010)
18. Li, S.E., Gao, F., Cao, D., Li, K.: Multiple-model switching control of vehicle longitudinal dynamics for platoon-level automation. IEEE Trans. Veh. Technol. 65(6), 4480–4492 (2016)
19. Jiang, J., Astolfi, A.: Lateral control of an autonomous vehicle. IEEE Trans. Intell. Veh. 3(2), 228–237 (2018)
20. Attia, R., Orjuela, R., Basset, M.: Combined longitudinal and lateral control for automated vehicle guidance. Veh. Syst. Dyn. 52(2), 261–279 (2014)
21. Song, P., Zong, C., Tomizuka, M.: Combined longitudinal and lateral control for automated lane guidance of full drive-by-wire vehicles. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 8(2015-01-0321), 419–424 (2015)
22. Ozguner, U., Acarman, T., Redmill, K.: Autonomous Ground Vehicles. Artech House, Norwood (2011)
23. Kim, S.G., Tomizuka, M., Cheng, K.-H.: Mode switching and smooth motion generation for adaptive cruise control systems by a virtual lead vehicle. IFAC Proc. Vol. 42(15), 490–496 (2009)
24. Hallouzi, R., Verdult, V., Hellendoorn, H., Morsink, P.L.J., Ploeg, J.: Communication based longitudinal vehicle control using an extended Kalman filter. In: Proceedings of the 1st IFAC Symposium on Advances in Automotive Control (AAC 2004), vol. 19, p. 23 (2004)
25. Jiang, K., Victorino, A.C., Charara, A.: Real-time estimation of vehicle's lateral dynamics at inclined road employing extended Kalman filter. In: Proceedings of the 2016 IEEE 11th Conference on Industrial Electronics and Applications, ICIEA 2016, pp. 2360–2365 (2016)
26. Kim, M.S., Kim, B.J., Kim, C.I., So, M.H., Lee, G.S., Lim, J.H.: Vehicle dynamics and road slope estimation based on cascade extended Kalman filter. In: 2018 International Conference on Information and Communication Technology Robotics (ICT-ROBOT), pp. 1–4. IEEE (2018)
27. Han, G., Fu, W., Wang, W., Wu, Z.: The lateral tracking control for the intelligent vehicle based on adaptive PID neural network. Sensors 17(6), 1244 (2017)
28. Rajamani, R.: Vehicle Dynamics and Control. Springer, Heidelberg (2011)
29. Zhang, H., Wang, J.: Vehicle lateral dynamics control through AFS/DYC and robust gain-scheduling approach. IEEE Trans. Veh. Technol. 65(1), 489–494 (2015)
30. Jain, A., Roy, B.K.: Gain-scheduling controller design for cooperative adaptive cruise control: towards automated driving. J. Adv. Res. Dyn. Control Syst. (JARDCS) (2019)
31. Yang, J., Zheng, N.: An expert fuzzy controller for vehicle lateral control. In: IECON 2007 - 33rd Annual Conference of the IEEE Industrial Electronics Society, pp. 880–885. IEEE (2007)
32. Huang, X., Zhang, H., Zhang, G., Wang, J.: Robust weighted gain-scheduling H∞ vehicle lateral motion control with considerations of steering system backlash-type hysteresis. IEEE Trans. Control Syst. Technol. 22(5), 1740–1753 (2014)


33. Latrach, C., Kchaou, M., El Hajjaji, A., Rabhi, A.: Robust H∞ fuzzy networked control for vehicle lateral dynamics. In: 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), pp. 905–910. IEEE (2013)
34. Rui, W., Yi-Ming, S., Mei-Tong, L., Hao, Z.: Research on bus roll stability control based on LQR. In: 2015 International Conference on Intelligent Transportation, Big Data and Smart City, pp. 622–625. IEEE (2015)
35. Lee, S.-H., Chung, C.C.: Predictive control with sliding mode for autonomous driving vehicle lateral maneuvering. In: American Control Conference (ACC) 2017, pp. 2998–3003. IEEE (2017)
36. Tagne, G., Talj, R., Charara, A.: Higher-order sliding mode control for lateral dynamics of autonomous vehicles, with experimental validation. In: IEEE Intelligent Vehicles Symposium (IV) 2013, pp. 678–683. IEEE (2013)
37. Gutjahr, B., Gröll, L., Werling, M.: Lateral vehicle trajectory optimization using constrained linear time-varying MPC. IEEE Trans. Intell. Transp. Syst. 18(6), 1586–1595 (2016)
38. Naus, G., Ploeg, J., Van de Molengraft, M., Heemels, W., Steinbuch, M.: Design and implementation of parameterized adaptive cruise control: an explicit model predictive control approach. Control Eng. Pract. 18(8), 882–892 (2010)

ADAL System: Aspect Detection for Arabic Language

Sana Trigui1(B), Ines Boujelben1,3(B), Salma Jamoussi1,2(B), and Yassine Ben Ayed1,2(B)
1 Miracl, University of Sfax, Sfax, Tunisia
[email protected], [email protected], [email protected], [email protected]
2 Higher Institute of Computer Science and Multimedia of Sfax, Sakiet Ezzit, Tunisia
3 Higher Institute of Computer Science and Multimedia of Gabes, Gabes, Tunisia

Abstract. Sentiment analysis can be done at different levels of granularity: document, sentence, and aspect. In our case, we are interested in the aspect, which is the finest level of granularity. This level is known as Aspect Based Sentiment Analysis. In fact, Aspect Based Sentiment Analysis (ABSA) requires two primordial steps: (i) extract entity aspects and (ii) determine the sentiments towards all the aspects. Aspect extraction is an important step of ABSA. It aims at detecting all the aspects existing in a sentence. The extraction of these aspects is complicated considering the presence of several challenges, especially when aspect extraction is done in the Arabic language. In this paper, we propose a supervised system, ADAL, for aspect detection in the Arabic language. The obtained results indicate that our proposed method outperforms previous works, achieving 96% in terms of f-measure when applied to the same dataset provided by the International Workshop on Semantic Evaluation 2016 (SemEval-2016).

Keywords: Aspect extraction · Supervised method · Arabic language

1 Introduction

With the rapid growth of user-generated content on the internet, people tend to share their sentiments and opinions online. Given that the analysis of these sentiments has become a key tool for making sense of that content, it presents a very important area for Natural Language Processing (NLP) applications such as question answering systems, automatic summarization, product evaluation, recommendation systems, and popularity analysis. Aspect Based Sentiment Analysis (ABSA) is an area of NLP that focuses on finding important aspects in sentences and identifying the polarity expressed towards those aspects. It is also known as feature- or attribute-based sentence analysis. Sentiment Analysis (SA) classifies the overall sentiment of a text into positive, negative or neutral, while ABSA associates specific sentiments with different aspects of an

1 Introduction With the rapid growth of user-generated content on the internet, people tend to share their sentiments and opinions online. Given that the analysis of these sentiments has become a key tool for making sense of that content, it presents a very important area for automatic Natural Language Processing (NLP) applications such as question answering systems, automatic summarization, product evaluation, recommendation systems, and popularity analysis. Aspect Based Sentiment Analysis (ABSA) is an area under NLP that focuses on finding important aspects from sentences and identifying polarity expressed in those aspects. It is also known as a feature or attribute-based sentence Analysis. The Sentiment Analysis (SA) classifies the overall sentiments of a text into positive, negative or neutral, while the ABSA associates specific sentiments with different aspects of an © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 31–40, 2021. https://doi.org/10.1007/978-3-030-49336-3_4

32

S. Trigui et al.

entity. Therefore, the results in ABSA are more detailed, interesting and accurate. For example in the following sentence: “The crew is excellent and the technique is bad”. At the sentence level the sentiment expressed is neutral (positive and negative at the same time) whereas, at the aspectual level, the sentiment is positive for the crew and negative for the technique. To detect sentiments towards aspects, it is necessary firstly to detect these aspects, so aspect detection is an important step for ABSA. Aspect detection aims at detecting the different aspects found in a sentence. Aspects represent the attributes or features that describe an entity or an object. For example, smartphones can have different features like camera, battery life, touch screen, etc. Here the entity is the smartphone and the aspects are its features. Most ABSA works focus on the English language and there is relatively less work on the Arabic language [2] despite Arabic is currently ranked as the fourth language used on the web, and there are about 226 million Arabic Internet users1 . This is explained by the lack of available resources of Arabic sentiment analysis like lexicons and datasets. For this reason, we are interested in Arabic aspects detection. The rest of the paper is organized as follows. We begin with enumerating the different problems faced by the aspect extraction task. Then, we survey previous works on aspect extraction. The next section is devoted to describing our proposed method. In the last section, we present the different experiments and we discuss the reported result.

2 Aspect Extraction Problems An aspect can be either introduced directly through words or expressed implicitly from a context or the mining of a sentence. Indeed, the explicit aspects are expressed through one or more words as shown in the following example:

Excellent staff and pleasant swimming pools (1) In (1) the aspects are staff ( ) and swimming pools ( ). For the implicit aspects, they are not expressed through any words. They are deduced by the semantics of the sentence as it is illustrated in the following example.

The car is expensive (2)

The aspect in (2) is the price. Additionally, a sentence can contain multiple aspects, and an aspect can be expressed through one word (simple aspect) or more than one word (compound aspect).

Great location, excellent staff, beach and pleasant swimming pools (3)

1 Top ten internet languages homepage. https://www.internetworldstats.com/stats7.htm.


In this sentence, we find simple aspects (location, staff) and a compound aspect (swimming pools). An aspect term in a sentence can be expressed through a noun, verb, adverb or adjective. [3] proved that 60%–70% of aspect terms are explicit nouns. In fact, aspects can be expressed by various part of speech (POS) types. Table 1 illustrates examples showing how an aspect can have various POS.

Table 1. Possible POS of an aspect

POS of an aspect   Example
Noun               Great location (1)
Verb               This hotel is in a good location (2)
Adverb             Locally, this hotel is one of the best hotels (3)
Adjective          This hotel is very expensive (4)

3 Related Work

In the literature, several methods have been proposed to extract aspects. These methods can be divided into two broad approaches: the rule-based approach and the machine learning approach (including supervised and unsupervised methods). Among the rule-based methods, we mention the work of Hai et al. [4], who adopted a rule-based approach to identify aspects and opinions in Chinese-language reviews. They proposed a novel method to identify opinion features from online reviews by exploiting the difference in opinion feature statistics across two corpora, one domain-specific corpus and one domain-independent corpus. Another study that adopted the rule-based method was performed by [5]. Firstly, the authors extracted a set of linguistic patterns from a training corpus based on the syntactic structure of a sentence and sentiment words. Their method consists of defining a set of rules for detecting the sentiment words and then using grammatical relations to construct the syntactic structure of a sentence to detect the aspects. A final refinement stage is proposed to remove the non-important aspects. Here, the lexical relationship between the sentiment words and the aspects is the key to this method, as it allows the authors to identify non-frequent aspects. The rule-based method offers significant analysis. However, the complexity of Arabic sentences and the high variability in the expressions used make it intricate to detect aspects in a sentence. Moreover, it is a very domain-dependent approach, and making such rules adaptable to other areas is expensive in terms of manual effort and time. To overcome these problems, aspect detection works have turned toward machine learning methods to automate the extraction task. We distinguish two learning methods: unsupervised and supervised methods. Unsupervised methods use great quantities of unlabeled text; they learn to classify without supervision.


Among the works coming under this approach, we cite [6], who proposed a simple method to detect explicit aspects by adopting a set of rules based on statistical observations: the most frequent nouns (those having occurrences above a certain threshold) are the aspects. Unsupervised methods have the advantage of being robust, since they can be adapted to any type of text, whatever the language and the field of study. Also, they do not rely on an annotated corpus, which saves time and effort. However, they can extract a large number of aspects, and these extracted aspects tend to be too generic. To overcome this problem, supervised methods make it possible to set the number of aspects to extract in advance. Among the supervised methods, we cite [7], who use Conditional Random Fields (CRF) to detect aspects. Their method is based on the BIO scheme: B-term indicates the beginning of an aspect, I-term indicates the continuation of an aspect and O-term indicates no aspect. They evaluated their method on the corpus provided by SemEval 2014. Based on the POS and the dependency tree, they obtained an F-measure of 82%. The same algorithm was used by [8] to detect aspects based on POS tagging, named entity detection, and the dependency tree; they obtained an F-measure of 61% using the corpus provided by SemEval 2016. Also, [9] uses CRF to detect aspects using the SemEval 2016 corpus and achieved an F-measure of 79%. Finally, [10] tested several classifiers on the SemEval 2016 corpus based on morphological, syntactic and semantic features. The classifiers used are IBK, SVM (SMO), J48 and NaiveBayes, and the obtained F-measure reaches 89%. In this paper, we choose to resolve the problem of aspect extraction with a supervised method, which is the goal of the next section.

4 Proposed Method

In this paper, we propose a supervised machine learning method for aspect extraction applied to the Arabic language. In our case, we focus on the characteristics of each word. That is to say, we assign to each word in a sentence a vector of features. These features are of different natures and are extracted from the annotated SemEval 2016 corpus. The different learning features used in our work are listed in Table 2. To extract these learning features, we used the MADAMIRA2 tool. This tool is widely used in NLP applications. It allows segmentation, lemmatization, morpho-syntactic categorization and named entity detection. As illustrated in Table 2, we compiled four types of features to describe the dataset (a sketch of how such a feature vector could be assembled is given after this list):
• Lexical features to identify the morphological category of the current word and of the words before and after it.
• Semantic features that are interested in identifying named entity types in the sentence.
• Syntactic features consisting in determining the sentence type, which can be verbal or nominal. Also, we used a suffix attribute for testing whether the current word is a suffix or not.
• Numeric features, including the number of words in the sentence, the number of words in each context (before and after the current word) and the current word position.
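The following minimal sketch illustrates how one feature vector per word could be assembled from a POS-tagged sentence; the dictionary keys mirror the attribute names of Table 2, while the pos_tags and named_entities inputs stand in for the MADAMIRA output and are assumptions of ours.

```python
def word_features(tokens, pos_tags, named_entities, sentence_type, i):
    """Build a Table 2 style feature vector for the i-th word of a sentence."""
    def cat(j):
        return pos_tags[j] if 0 <= j < len(tokens) else "NONE"
    return {
        "catM": cat(i),
        "catMav1": cat(i - 1), "catMav2": cat(i - 2), "catMav3": cat(i - 3),
        "catMap1": cat(i + 1), "catMap2": cat(i + 2), "catMap3": cat(i + 3),
        "typeNE": named_entities.get(i, "NONE"),   # Person / Location / Organization
        "Type-phrase": sentence_type,              # nominal or verbal
        "suffix": int(tokens[i].startswith("+")),  # illustrative suffix test
        "NbrM": len(tokens),
        "pos": i,
        "nbrMav": i,
        "nbrMap": len(tokens) - i - 1,
    }
```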

2 Madamira homepage, https://camel.abudhabi.nyu.edu/madamira/.


Table 2. Used features.

Type        Feature       Description
Lexical     catM          The part of speech of the current word
            catMav1       The part of speech of the first word before the current word
            catMav2       The part of speech of the second word before the current word
            catMav3       The part of speech of the third word before the current word
            catMap1       The part of speech of the first word after the current word
            catMap2       The part of speech of the second word after the current word
            catMap3       The part of speech of the third word after the current word
Semantic    typeNE        Type of the named entity (Person, Location, Organization)
Syntactic   Type-phrase   Type of sentence (nominal, verbal)
            suffix        = 1 if the current word is a suffix
Numeric     NbrM          Number of words in the sentence
            pos           Position of the current word
            nbrMav        Number of words before the current word
            nbrMap        Number of words after the current word

5 Experiments and Results

5.1 Data

The SemEval 2016 workshop introduced the task of aspect extraction for sentiment analysis in Arabic. Unfortunately, the task received no submission for Arabic. Regardless, it contributed a large Arabic hotel review dataset labeled for aspect extraction, aspect sentiment identification and also aspect categorization [1]. We are interested only in the aspect extraction task. SemEval-ABSA16 is a multi-lingual task for ABSA covering customer reviews in 8 different languages (i.e. Arabic, English, Chinese, Dutch, French, Russian, Spanish, and Turkish) and 7 different domains (i.e. restaurants, laptops, mobile phones, digital cameras, hotels, museums, and telecommunications) [1]. Table 3 summarizes the dataset size and distribution over the ABSA research tasks.

Table 3. Hotels' dataset description

                              Train     Test
Number of sentences            4802     1227
Number of simple aspects       8821     2149
Number of compound aspects     1688      450
Number of non-aspects        106054    27144


5.2 System's Outputs

The corpus of SemEval-ABSA16 contains both simple and compound aspects. For that, we use the NBE scheme to label each word in the sentence: N indicates no aspect, B indicates the beginning of an aspect and E indicates the end of an aspect. For example, in the following sentence:

Great location and pleasant swimming pools

The word ( /location) is annotated with 'B-Aspect', the word ( /Great) is annotated with 'N-Aspect', the word ( /pools) is annotated with 'B-Aspect', the word ( /swimming) is annotated with 'E-Aspect' and the word ( /Great) is annotated with 'N-Aspect'.

5.3 Experimentation

In this paper, all the reported experiments were performed on the SemEval 2016 test corpus using standard evaluation metrics: precision, recall and F-measure. A set of classifiers was evaluated, such as Naïve Bayes, Decision Tree (J48), REPTree and AdaBoost. These classifiers are available in WEKA3. Also, we use the CRF classifier, which is available in the Yet Another CRF toolkit4. The SemEval 2016 corpus has been used by many researchers. Some research studies treated only 2 classes (Aspect, No aspect) [9], others are based on three classes (B-Aspect, I-Aspect and No aspect) [10]. For this reason, we propose to treat both cases. Table 4 and Table 5 show the system's performance when applying the different classifiers. When based on only two classes (a word can be an aspect or not an aspect), the results show that AdaBoost achieves the best results (F-measure = 96.9%), ahead of REPTree (F-measure = 91.2%), J48 (F-measure = 90%) and SMO (F-measure = 94.49%). The remaining classifiers' results are Naïve Bayes (F-measure = 88.5%) and CRF (F-measure = 81.7%). AdaBoost also gives the best result when we treat the case of three classes (Table 5).

The next experiment aims at assessing the performance of the proposed features. We use four types of features: lexical, semantic, syntactic and numeric. We evaluate the performance of each combination of features when applying the AdaBoost classifier. We should remember that the F-measures obtained for the first corpus and the second corpus are respectively 96.9% and 96.2% (with AdaBoost) when applying all features. Table 6 represents the obtained results.

3 Weka homepage, https://www.cs.waikato.ac.nz/ml/weka/.
4 CRF homepage, http://wing.comp.nus.edu.sg/~forecite/services/parscit-100401/crfpp/CRF++-0.51/doc/.


Table 4. Results of different classifiers based on 2 classes

Classifiers     Precision (%)   Recall (%)   F-measure (%)
REPTree (RT)    91.1            91.8         91.2
NaiveBayes      88.5            88.5         88.5
J48             90.1            91           90
Adaboost (RT)   96.9            97           96.9
SMO             94.7            95           94.49
CRF             76.6            87.5         81.7

Table 5. Results of different classifiers based on 3 classes

Classifiers     Precision (%)   Recall (%)   F-measure (%)
REPTree         91.2            90.2         90.1
NaiveBayes      88.2            87.8         87.8
J48             90              89.2         88.6
Adaboost (RT)   96.2            96.2         96.2
SMO             92              90           90.98
CRF             87              75           80.55
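The classifier comparison of Tables 4 and 5 was run in WEKA; a rough scikit-learn equivalent, assuming the word feature dictionaries from the earlier sketch and NBE labels, could look like the following (the classifier configuration approximates WEKA's AdaBoost with REPTree and is not the authors' exact setup):

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def train_and_score(train_feats, train_labels, test_feats, test_labels):
    """Train a boosted-tree word classifier and report precision, recall and F-measure."""
    model = make_pipeline(
        DictVectorizer(sparse=True),                      # one-hot encodes the Table 2 features
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=100),
    )
    model.fit(train_feats, train_labels)
    p, r, f, _ = precision_recall_fscore_support(
        test_labels, model.predict(test_feats), average="weighted"
    )
    return p, r, f
```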

As indicated in Table 6, the reported results show that the use of numeric attributes alone gives 13% as f-measure, whereas with lexical attributes we get an f-measure equal to 70%. This is very logical, because the lexical attributes give us more information about the category of words, and the morphological category of words helps us to detect the aspects more easily, as we saw previously. We also notice that using these two types of attributes (numeric and lexical) together is better than using them separately. Indeed, using the combination of numeric, lexical, semantic and syntactic features improves the overall performance of our ADAL system. Hence, we intend to choose a small subset of features that is sufficient to correctly predict the class. Here, we attempt to learn which features are better for our system. For this, we used the wrapper methods introduced by [11]. Their principle is to use the misclassification rate as an evaluation criterion. These methods provide good performance. We used the AdaBoost algorithm for attribute selection and obtained a 1% improvement in f-measure; the selected attributes are catM, catMav1, catMav3, catMap1, catMap3, pos, nbrMav, nbrMap and typeNE. Therefore, these attributes are sufficient for our system.
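A wrapper-style selection of that kind can be sketched as a greedy forward search scored by the classifier's cross-validated accuracy (equivalently, its misclassification rate); the helper below is illustrative only and does not reproduce the exact search used by the authors.

```python
from sklearn.model_selection import cross_val_score

def greedy_wrapper_selection(model, X, y, feature_names):
    """Greedy forward wrapper: repeatedly add the feature that most improves accuracy."""
    selected, best_score = [], 0.0
    remaining = list(feature_names)
    while remaining:
        scored = []
        for f in remaining:
            cols = [feature_names.index(g) for g in selected + [f]]
            acc = cross_val_score(model, X[:, cols], y, cv=5).mean()
            scored.append((acc, f))
        acc, f = max(scored)
        if acc <= best_score:      # no remaining feature improves the wrapper criterion
            break
        best_score = acc
        selected.append(f)
        remaining.remove(f)
    return selected
```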


Table 6. Evaluation of learning features

Features                                     Corpus of 2 classes   Corpus of 3 classes
Numeric                                      13%                   9%
Lexical                                      70%                   59%
Semantic                                     31%                   22%
Syntactic                                    2%                    0%
Numeric + Lexical                            95%                   93%
Numeric + Syntactic                          12%                   9%
Numeric + Semantic                           38%                   29%
Lexical + Semantic                           85%                   80%
Lexical + Syntactic                          72%                   66%
Semantic + Syntactic                         45%                   37%
Numeric + Lexical + Semantic                 96%                   94%
Numeric + Lexical + Syntactic                89%                   87%
Lexical + Semantic + Syntactic               88%                   85%
Numeric + Semantic + Syntactic               54%                   56%
Numeric + Lexical + Semantic + Syntactic     96.9%                 96.2%

Finally, the purpose of the last experiment is to provide a performance comparison with previous studies based on supervised learning methods, [9] and [10], when applied to the same SemEval 2016 test corpus. The obtained results are reported in Table 7. As shown in this table, the baseline obtains an F-measure of 30.9%; hence, our proposed method outperforms the baseline approach with a clear difference of 60% in terms of f-measure. Using the CRF classifier, [9] obtains an F-measure of 79%, while our system outperforms it in the case of two classes. For the case of three classes, we also obtain the best result compared to the result of [10], which uses the SMO classifier.

Table 7. Comparative performance of our system and previous studies

Methods       Number of classes  Recall (%)  Precision (%)  F-measure (%)  Used classifiers
Baseline      2                  –           –              30.9           SVM
[9]           2                  75          84             79             CRF
ADAL system   2                  96.9        97             96.9           Adaboost
[10]          3                  90          89.9           89.9           SMO
ADAL system   3                  96.3        96.2           96.2           Adaboost


The results obtained are very encouraging and show the effectiveness of our method, which solves the main problems of aspect detection (compound and multiple aspects) by combining attributes of different natures. We added two further measures: the first counts the number of simple aspects detected and the second the number of compound aspects detected. In the test corpus there are 450 simple aspects and 2149 compound aspects. Applying our aspect detection system to this corpus, we detected 439 of the 450 simple aspects (about 97%) and 1727 of the 2149 compound aspects (about 80%). The proportion of simple aspects detected is higher because detecting a simple aspect is easier than detecting a compound one: a simple aspect is expressed through a single word (noun, verb, …), while a compound aspect can be expressed through several words. Indeed, inspecting the results shows that our system detects the beginning of a compound aspect well but often errs on its end.

6 Conclusion

In this paper, we described our supervised system ADAL for detecting Arabic aspects. Our main goal is to study various features of each word in a sentence in order to predict which terms can be aspects. Several classifiers were applied to the SemEval 2016 corpus, and the AdaBoost technique yields the best results in terms of both precision (97%) and recall (96.9%). For future work, we intend to build a complete system that tackles sentiment analysis at the aspect level. Additionally, we plan to evaluate our method on corpora in different languages and domains. Finally, we plan to combine the rule-based method with a machine learning method to enhance the overall performance of our system.

References
1. Pontiki, M., Galanis, D., Papageorgiou, H., Androutsopoulos, I., Manandhar, S., AL-Smadi, M., Al-Ayyoub, M., Zhao, Y., Qin, B., De Clercq, O., Hoste, V., Apidianaki, M., Tannier, X., Loukachevitch, N., Kotelnikov, E., Bel, N., Jimenez-Zafra, S., Eryigit, G.: SemEval-2016 task 5: aspect based sentiment analysis. In: Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval 2016. Association for Computational Linguistics, San Diego (2016)
2. AL-Smadi, M., Qawasmeh, O., Talafha, B., Quwaider, M.: Human annotated arabic dataset of book reviews for aspect based sentiment analysis. In: 3rd International Conference on Future Internet of Things and Cloud (FiCloud), pp. 726–730. IEEE (2015)
3. Naveen Kumar, L., Suresh Kumar, S.: Aspect based sentiment analysis survey. IOSR J. Comput. Eng. 18(2), 24–28 (2016). e-ISSN: 2278-0661, p-ISSN: 2278-8727
4. Hai, Z., Chang, K., Kim, J., Yang, C.: Identifying features in opinion mining via intrinsic and extrinsic domain relevance. IEEE Trans. Knowl. Data Eng. 26(3), 623–634 (2014)
5. Piryani, R., Gupta, V., Singh, V.K., Ghose, U.: A linguistic rule-based approach for aspect-level sentiment analysis of movie reviews. In: Advances in Computer and Computational Science (2017)


6. Hu, M., Liu, B.: Mining and summarizing customer reviews. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168–177. ACM (2004)
7. Zhiqiang, T., Wenting, W.: DLIREC: aspect term extraction and term polarity classification system. In: SemEval, pp. 235–240 (2014)
8. Brun, C., Perez, J., Roux, C.: XRCE at SemEval-2016 Task 5: feedbacked ensemble modelling on syntactico-semantic knowledge for aspect based sentiment analysis. In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016), pp. 282–286 (2018)
9. Alawami, A.: Aspect Extraction for Sentiment Analysis in Arabic Dialect. University of Pittsburgh, Diss (2017)
10. Al-Smadi, M., Al-Ayyoub, M., Jararweh, Y., Qawasmeh, O.: Enhancing aspect-based sentiment analysis of arabic hotels' reviews using morphological, syntactic and semantic features. Inf. Process. Manag. 56(2), 308–319 (2018)
11. John, G.H., Kohavi, R., Pfleger, K.: Irrelevant features and the subset selection problem. In: Proceedings of the Eleventh International Conference on Machine Learning, pp. 121–129 (1994)

Modelling, Analysis and Simulation of a Patient Admission Problem: A Social Network Approach

Veera Babu Ramakurthi1, Vijayakumar Manupati1(B), Suraj Panigrahi1, M. L. R. Varela2, Goran Putnik2, and P. S. C. Bose1

1 Department of Mechanical Engineering, NIT Warangal, Hanamkonda, Telangana, India
[email protected], [email protected]
2 Department of Production and Systems, School of Engineering, University of Minho, Guimarães, Portugal
{leonilde,putnikgd}@dps.uminho.pt

Abstract. Due to the pressing demand for quality of care and the need to maximize patient satisfaction, traditional scheduling may not cater to patients' accessibility needs or mitigate patient tardiness and its social effects. This paper addresses a patient appointment scheduling problem (PASP) in a radiology department in southern India as a case study. Owing to partial precedence constraints between different modalities, the problem is formulated as a static, multi-stage/multi-server system. We propose a novel social network analysis (SNA) based approach to examine the relationships between the identified modalities and their influence on the different examination types. To validate the results of the SNA model in a real-time environment, a simulation analysis is carried out using the FlexSim Healthcare software. Based on the empirical data collected from the radiology department, comparisons between the present condition of the department and the results achieved with the proposed approach are performed through a discrete event simulation model. The results indicate that the proposed approach improves system performance, reducing the average total completion time of the system by 5% and the patients' waiting time by 38%.

Keywords: Appointment scheduling problem · Social network analysis · Queuing models

1 Introduction

Recent healthcare services require a better quality of care, reduced cost and minimal patient length of stay through optimal allocation of resources, owing to the surge in patient volume and the increasing awareness of precautionary measures. It is therefore necessary to investigate patient flow, which is considered one of the key factors for assessing the performance of a healthcare system [1]. However, patient flow prediction is a difficult task because of its frequently uncertain nature, which prompts


the brisk increase in patient volume. A 10% increase in same-day appointments has been shown to lead to an 8% reduction in patient satisfaction [2]. To address these issues and improve patients' comfort, effective and efficient methodologies and new management techniques must be considered. In India, a noteworthy share of the population lives below the national poverty line and needs a better standard of living; catering to such a large number of people inevitably imposes a tremendous workload on the public sector. Hence, to improve patients' comfort through comprehensive healthcare modelling, contracts with private service providers have been made under the national health scheme. Despite this collaboration with private providers, patients still suffer from a lack of effective treatment. In this context, there is a strong need to reduce patients' length of stay irrespective of the type of demand, while simultaneously reducing costs and improving the quality of care. In the past decade the appointment scheduling problem was mainly considered for scheduling elective surgeries [3], but in recent years it has been widely applied to almost all types of diagnosis. Compared with traditional scheduling, it offers patients clear advantages: flexible timings, better reachability (appointments can be booked in advance by phone or other electronic means) and flexible choices (patients can choose their preferred physicians) [4–6]. Among these aspects, the factor that most affects patient care is the patient flow/delivery process in the hospital system. Patient flow analysis identifies how the rate of patient flow is affected by seasonal and local factors [7], and statistical models have been developed to predict patient flow based on admission parameters [8]. Waiting time is a perennial problem in hospitals, and its reduction is a constant concern; optimization approaches using queuing theory and Poisson modelling have effectively reduced waiting times in the emergency department [9]. Machine learning and data mining techniques have also been used to analyse patients' health records [10]. In this study, we propose a social network analysis based method for a healthcare appointment scheduling system to understand the processes of different modalities such as MRI, CT, RX, OT, PX and MG, and to identify the value-added processes, thereby removing the components that actually delay the process. The frequency of each modality and their interactions were investigated, and the modalities with high influence on the total system performance were identified. Thereafter, a single/multi-stage queuing system is designed with total patient length of stay and total completion time as performance measures for estimating the current performance of the system. A case study of a radiology department is described in detail within the presented two-dimensional framework.

2 Case Study Description

This paper centres on a patient admission scheduling problem in the radiology department of a hospital situated in a rural area of southern India. The department offers six different modalities, namely MRI, Computed Tomography (CT), Densitometry (PX), Mammography (MG), Ortho-pan tomography (OT) and RX, with a variety of exam types


to serve the scheduled patients during working hours over ten days. The department is made up of several human, physical and technical resources (administration, physicians, medical assistants, assistants, examination rooms, waiting room, changing room and control rooms) for serving the patients. However, owing to the department's location and its social environment, the initial stages of the survey revealed a non-negligible amount of delay (downtime); sometimes patients arrive much earlier (several hours before) than their scheduled time. The detailed workflow of the patient and their information is illustrated in Fig. 1. Appointments are pre-scheduled by the patients depending on the available slots in the hospital, and the processing time for each modality is defined accordingly. On the scheduled day, the patient reports to the hospital and waits until the registry, questionnaire and consent declaration are completed for admission. Following admission, the patient is directed to the respective ward to put on the hospital gown and remove ornaments, watches and any other metallic items. Meanwhile, the equipment is prepared for the examination and the necessary resources are made available. The examination process comprises patient positioning, image acquisition and contrast administration, followed by further image acquisition. In the post-examination process the reports are delivered and billing is done, terminating the process. The assistant calls the patients for examination and guides them to the dressing rooms, which are common to all modalities; to control overcrowding of appointments, different technicians provide assistance.

2.1 Data Collection

To better understand the functioning of the hospital and its resource utilization, the 6 modalities were identified and studied on a daily basis over a span of 4 months, during which 329 patients were examined. It was observed that the majority of appointments fall under the MRI category, so the remainder of the study is restricted to the MRI modality. A total of 40 different exam types were found in MRI, and a major share of 80% is contributed by 4 exam types, namely Brain, Joints, Cervical and Lumbar, as shown in Fig. 2; the study was therefore restricted to these major contributors. The next phase of the study involved recording observation times in order to obtain a statistically representative sample. Resource sharing among modalities and rescheduling of patients by the technicians were observed to be the major barriers to the normal workflow; longer patient waiting times and extended working hours are the key impacts of rescheduling. The final phase consisted of determining the throughput time of each task involved in the examination procedure. A statistical analysis of the examinations in the hospital is given in Table 1, and the average, median and standard deviation of the image acquisition task by examination type are shown in Table 2.

3 Model Formulation

In this research work, patients follow single-stage and multi-stage systems owing to their random arrivals. However, the registry, questionnaire, consent declaration filling, report


delivery and billing fall under multi-stage systems. As the relationship between doctor and patient is highly valued in the appointment scheduling problem, it can be used to predict the quality of the system and can be represented as a single independent queue model. Thus, the problem considers patients of different types, modelled as a static, multi-stage/multi-server system, with estimated processing times for the different types of clinical examinations defined as probability distributions.

Fig. 1. Flowchart of the proposed clinic workflow (patient arrival, registration, admission, equipment preparation, patient positioning, image acquisition, contrast administration, image processing and reporting, report delivery and billing)

3.1 Mathematical Model Representation

In this paper, the objective functions considered are the minimization of the patient length of stay and the minimization of the maximum completion time. Equations (1) and (2) stipulate these objectives, i.e. minimization of the makespan (total completion time of the patients) and minimization of the total waiting time of the patients. Constraint (3) states that the duration of every task in a patient's process must be positive, and constraint (4) requires the processing time of each task to lie within its three-sigma limits.

Decision variables: for task x of patient y, jo(x,y) and jf(x,y) denote the start and finish times, p(x,y) the processing time and wt(x,y) the waiting time. The waiting time is zero when a task starts exactly when the preceding task finishes, i.e. jo(x,y) = jf(x−1,y); otherwise it equals the idle gap before the next task starts, wt(x,y) = jo(x+1,y) − (jo(x,y) + p(x,y)).


Table 1. Statistical analysis of the workflow of the hospital

Attributes           Initial fabric change (min)  Prepare equipment (min)  Patient positioning (min)
Average              2.32                         0.64                     1.98
Median               1.95                         0.55                     1.81
Standard deviation   0.9                          0.31                     0.65

Attributes           Contrast (min)               Remove patient (min)     Final fabric change (min)
Average              2.09                         1.36                     1.97
Median               1.94                         1.19                     1.85
Standard deviation   0.74                         0.67                     0.9

Table 2. Statistical analysis of the image acquisition task by examination type (exam with contrast)

Attributes           Brain (min)  Joints (min)  Cervical (min)  Lumbar (min)
Average              20.42        26.51         26.72           25.92
Median               19.52        25.52         25.17           26.65
Standard deviation   4.48         6.06          8.25            4.91

Fig. 2. Frequencies of MRI exam types (bar chart of frequency versus exam type)

Minimization of the objective functions:

jf(x,y) = jo(x,y) + p(x,y) + wt(x,y)    (1)

wt(x,y) = jo(x+1,y) − (jo(x,y) + p(x,y))    (2)

subject to the constraints

p(x,y) > 0,  ∀x ∈ jy, y ∈ j    (3)

μ(x,y) − 3σ(x,y)² ≤ p(x,y) ≤ μ(x,y) + 3σ(x,y)²,  ∀x ∈ j, y ∈ j    (4)
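For concreteness, the short sketch below evaluates the two performance measures on a toy schedule: it computes per-task finish times as in Eq. (1), waiting times as the idle gaps of Eq. (2), and from them the makespan and the total waiting time. The task data are hypothetical, not taken from the case study.

# Minimal sketch (hypothetical data) of the makespan and total-waiting-time measures.
tasks = {  # patient -> list of (start_time, processing_time) in minutes
    "P1": [(0, 2.3), (5, 0.6), (6, 2.0), (9, 20.4)],
    "P2": [(1, 2.1), (4, 0.7), (7, 1.9), (10, 26.5)],
}

def patient_metrics(schedule):
    finishes, waits = [], []
    for x, (start, proc) in enumerate(schedule):
        finish = start + proc                      # jf = jo + p  (Eq. 1, zero in-task waiting)
        finishes.append(finish)
        if x + 1 < len(schedule):                  # idle gap before the next task (Eq. 2)
            waits.append(schedule[x + 1][0] - finish)
    return max(finishes), sum(waits)

makespan = max(patient_metrics(s)[0] for s in tasks.values())     # total completion time
total_wait = sum(patient_metrics(s)[1] for s in tasks.values())   # total waiting time
print(f"makespan = {makespan:.1f} min, total waiting time = {total_wait:.1f} min")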


4 Social Network Analysis Method and FlexSim Healthcare Simulation System

In earlier work on disease propagation, the average distance between patients proved to be a better predictor than social network properties such as betweenness and other centrality measures. To the best of our knowledge, no study to date has applied a social network analysis method to the appointment scheduling problem to predict patients' frequencies on different modalities and the interrelations among them in order to serve patients effectively. A detailed description of the social network analysis method and its application to the considered problem is given below.

4.1 Modelling of Collaborative Networks

In SNA, a network describes interactions among nodes linked by ties. In this research, the SNA approach extracts the hospital data in the form of a network whose nodes are the modalities and exam types, interconnected by ties to form a network structure. Using the hospital data as input, various attributes of the resulting network are examined. The SNA method (SNAM) comprises two stages, (a) modelling of the network and (b) network analysis, described in the following sections. The network construction begins by feeding the data collected from the survey into an affiliation matrix, which encodes the radiology department attributes with exam types and modalities as rows and columns: an entry of 1 means the attribute and exam type are related, and 0 means they are not. Here, the interactions correspond to the actual material flows on the different resources. The modelling algorithm is then applied to the matrix to obtain the collaborative network depicted in Fig. 3. The collaborative network is more informative than a simple network with respect to its size and characteristics, and the arrows represent interactions between attributes, which is not possible with a traditional representation. The approach is repeated for the remaining data to obtain the different collaboration networks. Figure 3 gives a detailed view of the influential exam types in the considered modalities and of the various exam types performed on them. The attributes of the proposed healthcare system are represented by the nodes of the network, distinguished by colour and label: the smaller blue nodes represent the exam types, while the larger nodes represent the modalities present in the system; different colours and names are used for the modality nodes and different shapes for the exam type nodes. A preliminary analysis is then conducted to explain the overall nature of the network; the following section details this analysis. A minimal illustration of the network construction step is sketched below.
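The fragment below is an illustrative sketch of this modelling step: a binary affiliation matrix (exam types by modalities) is turned into a graph whose ties follow the 1-entries. The matrix shown is hypothetical; the real one is built from the hospital survey data, and the library choice (networkx) is an assumption rather than the tool used by the authors.

# Hedged sketch: building a collaborative network from a binary affiliation matrix.
import networkx as nx

modalities = ["MRI", "CT", "RX", "OT", "PX", "MG"]
exam_types = ["Brain", "Joints", "Cervical", "Lumbar", "Breast"]
affiliation = [  # rows: exam types, columns: modalities (1 = exam performed on modality)
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
]

G = nx.Graph()
G.add_nodes_from(modalities, kind="modality")
G.add_nodes_from(exam_types, kind="exam")
for i, exam in enumerate(exam_types):
    for j, modality in enumerate(modalities):
        if affiliation[i][j] == 1:                 # a 1-entry creates a tie between the two nodes
            G.add_edge(exam, modality)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "ties")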


4.2 Social Network Analysis for Collaborative Networks

4.2.1 Network Features

The position of a node with respect to the centre of the network can be determined using centrality measures; in this research, betweenness and closeness centrality are considered. Closeness measures how closely a node is connected to the other nodes of the network. Equations (5) and (6) give the calculation of the considered closeness measures:

Closeness Centrality_i = (n − 1) / Σ_{b=1..n} k(a, b)    (5)

Closeness Centralization = Σ_{b=1..n} (D* − D_b) / [(n − 1)(n − 2)/(2n − 3)]    (6)

To obtain the values of the two centrality measures (closeness and betweenness) for each attribute, the input data were entered into the Ucinet software. Table 3 reports the two centrality measures for the attributes (modalities and exam types) of the considered work system.

Table 3. Centrality measures

Rank  Attribute  Closeness  Betweenness
1     MRI        91.667     44.979
2     CT         91.667     44.979
3     BREAST     50.575     4.548
4     JOINTS     50.575     4.548
5     NECK       50.575     3.7
6     SHOULDER   50.575     0.207
7     TEETH      50.575     0.207
8     CERVICAL   50.575     0.207
9     THIGH      50.575     0.207
10    FINGER     50.575     0.207
…     …          …          …
43    PX         43.137     0.003
44    MG         33.846     0
45    OT         33.846     0


From Table 3, a number of conclusions can be drawn about the obtained collaboration networks. The modalities with higher centrality are strongly connected, whereas those with lower centrality show far fewer connections. The modalities with higher centralities have therefore been singled out: they act as centre points and serve as key or focal elements of the network. These key modalities are shown as large square nodes in the network diagram of Fig. 3. Such hubs have a wide range of connection strengths, which makes them easy to identify in the healthcare system by inspecting the collaborative networks. A hedged sketch of how such a ranking can be computed is given below.
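The sketch below shows how the hub ranking of Table 3 can be reproduced in principle. The authors used Ucinet; here networkx is used as a stand-in, so the exact normalizations may differ from Eqs. (5)-(6). G is assumed to be the graph built in the earlier sketch.

# Hedged sketch: ranking nodes by closeness and betweenness centrality to identify hubs.
import networkx as nx

def rank_hubs(G, top=10):
    closeness = nx.closeness_centrality(G)
    betweenness = nx.betweenness_centrality(G, normalized=True)
    ranked = sorted(G.nodes, key=lambda n: (closeness[n], betweenness[n]), reverse=True)
    for node in ranked[:top]:
        print(f"{node:10s} closeness={closeness[node]:.3f} betweenness={betweenness[node]:.3f}")
    return ranked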

Fig. 3. Collaborative network diagram of the different modalities and exam type

5 Results and Discussion

The proposed SNAM was applied to the case study and the characteristic properties of the network were identified. The frequencies of the modalities and their interactions in the network were then identified with the help of the node features obtained from the analysis of the above data. From the survey data, the MRI modality obtained the highest closeness and betweenness centrality, and similarly the brain, joints, cervical and lumbar exam types have the highest centralities. The influence of these modalities and exam types on the whole network is much higher, as they act as the key hubs of the system. For the considered hospital problem, the makespan (total completion time) and the total waiting time are taken as the performance measures. Finally, to validate the results from the SNAM, a simulation is conducted using the FlexSim HC simulation tool.


5.1 Validation of SNAM by FlexSim Healthcare Simulation

In FlexSim we consider only MRI patients, as the percentage of patients for MRI was found to be the highest. The exam types considered are Brain, Joints, Cervical and Lumbar, since the SNAM showed that these examinations account for the largest share of patients.

Fig. 4. Throughput of the patients throughout the simulation

Figure 4 presents the simulation results for patient throughput; the graph plots the number of patients against the number of days of observation. It can be seen that the throughput for the brain examination is higher than for the other examinations: over the 4 months of observation there were around 277 brain patients, followed by around 189 joints, 126 cervical and 110 lumbar patients, which directly reflects the results from the SNAM. Figure 5 shows the relation between the average length of stay and the number of days for the four groups of patients, plotted over an observation period of 123 days (here, 14:21:20 denotes day 14, 21 h and 20 min). During the first 14–18 days there is considerable fluctuation, whereas the curves stabilize as the number of days increases. In the earlier period lumbar patients stay around 50 to 90 min, brain patients around 45 to 75 min, and joints and cervical patients between 55 and 68 min; as the number of days increases, the length of stay for all categories falls between 45 and 60 min.

Fig. 5. Average length of stay for the patients for examination


Fig. 6. Average state time for patients

Fig. 7. Average waiting times for patients at the following area

Figure 6 shows the average state times for the four major examinations. The minimum time was spent by the brain patients, followed by the lumbar, cervical and joints patients respectively, and most of the time joints patients receive direct care, followed by cervical, lumbar and brain patients. This trend mainly depends on two factors, the in-transit time and the direct care time. Figure 7 shows the average waiting times for the patients of the four examinations in three major areas, namely the registration area, the MRI area and the dressing room area. A clear difference was observed between the four exam types in terms of average waiting time: the MRI area had the highest waiting time (16.71 for Brain, 15.48 for Joints, 16.28 for Cervical and 1.96 for Lumbar), followed by the dressing room area and the registration area. Among the four exam types, brain patients had to wait the longest, especially at the MRI area, with cervical, joints and lumbar patients showing the second, third and fourth highest average waiting times respectively.

6 Conclusions and Future Work

In this paper, a social network analysis based method integrated with a multi-objective optimization approach is developed to solve the patient admission problem and improve patients' comfort. As a first step, a survey was conducted in a clinic that offers


appointment scheduling to its patients, and the different modalities and their exam types were identified. With the proposed social network analysis method, the interactions among the modalities and their frequencies with respect to patient tasks were then identified. From this analysis we observed that, out of 40 different exam types in the radiology department, four exam types, namely Brain, Joints, Cervical and Lumbar, contribute 80% of the total share. The patient flow pattern was then mapped onto queuing models, which is useful for formulating the mathematical model; examination of the results and the considered performance measures shows that the healthcare system corresponds to a mixed integer programming model. In this paper, the average total completion time of the system and the patients' waiting time are the objective functions. To address them, a FlexSim based simulation was carried out in which the results were established and validated on the patient admission tasks. The simulation results show that the average total completion time of the patient admission tasks was minimized, and the second objective, the waiting time, which indicates the quality of service, was decreased by 38%. In future work, it would be interesting to develop a web-based system for improving communication between patients and doctors.

References
1. Adeyemi, S., Demir, E., Chaussalet, T.: Towards an evidence-based decision making healthcare system management: modelling patient pathways to improve clinical outcomes. Decis. Support Syst. 55(1), 117–125 (2013)
2. Sampson, F., et al.: Impact of same-day appointments on patient satisfaction with general practice appointment systems. Br. J. Gen. Pract. 58(554), 641–643 (2008)
3. Gupta, D., Denton, B.: Appointment scheduling in health care: challenges and opportunities. IIE Trans. 40(9), 800–819 (2008)
4. Ryan, M., Farrar, S.: Using conjoint analysis to elicit preferences for health care. BMJ Br. Med. J. 320(7248), 1530–1533 (2000)
5. Rubin, G., et al.: Preferences for access to the GP: a discrete choice experiment. Br. J. Gen. Pract. 56(531), 743–748 (2006)
6. Gerard, K., et al.: Is fast access to general practice all that should matter? A discrete choice experiment of patients' preferences. J. Health Serv. Res. Policy 13, 3–10 (2008)
7. Bailey, N.T.J.: A study of queues and appointment systems in hospital out-patient departments, with special reference to waiting-times. J. Roy. Stat. Soc. Ser. B (Methodological) 14(2), 185–199 (1952)
8. Meadows, K., Gibbens, R., Gerrard, C., Vuylsteke, A.: Prediction of patient length of stay on the intensive care unit following cardiac surgery: a logistic regression analysis based on the cardiac operative mortality risk calculator, EuroSCORE. J. Cardiothorac. Vasc. Anesth. 32(6), 2676–2682 (2018)
9. Xavier, G., Crane, J., Follen, M., Wilcox, W., Pulitzer, S., Noon, C.: Using Poisson modeling and queuing theory to optimize staffing and decrease patient wait time in the emergency department. Open J. Emerg. Med. 6(03), 54 (2018)
10. Kovalchuk, S.V., Funkner, A.A., Metsker, O.G., Yakovlev, A.N.: Simulation of patient flow in multiple healthcare units using process and data mining techniques for model identification. J. Biomed. Inf. 82, 128–142 (2018)

Short-Term Load Forecasting: An Intelligent Approach Based on Recurrent Neural Network

Atul Patel1, Monidipa Das2(B), and Soumya K. Ghosh2

1 Department of Electrical Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
[email protected]
2 Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
[email protected], [email protected]

Abstract. With the evolution of smart grids in recent years, load forecasting has received more research focus than ever before. Several techniques, especially based on artificial neural network and support vector regression, have been proposed for this purpose. However, due to lack of appropriate modeling of external influences over the load data, the performance of these techniques remarkably deteriorates while making forecast for the peak load values, especially on short-term basis. In this paper, we present a strategy to forecast hourly peak load using Recurrent Neural Network with Long-Short-Term-Memory architecture. The novelty lies here in improving the forecast accuracy by an intelligent incorporation of available domain knowledge during the forecast process. Experimentation is carried out to forecast hourly peak load in five different zones in USA. The experimental results are found to be encouraging.

Keywords: Short-term load forecasting · RNN-LSTM · Peak load · Smart grid · Domain knowledge

1 Introduction

Load forecasting has been an active area of research since the 1960s. It provides insight into future power load consumption based on observed data as well as consumer behavior, and thereby greatly assists pricing, utility planning and the effective distribution of power [2]. Even a fractional increase in load forecast accuracy can have a significant effect on a country's economy. Consequently, power load forecasting remains a popular research area in the twenty-first century. One of the key factors in power load forecasting is the timescale, on the basis of which load forecasting can be divided into


three broad categories [7], namely short-term load forecast (STLF), mid-term load forecast (MTLF) and long-term load forecast (LTLF). STLF covers very short horizons of a few minutes, hours, a day or even a week; its primary aim is the planning of power exchange and optimal generator unit commitment, and it can also aid real-time control and security assessment of the plant. MTLF covers a month to a year or two and helps in scheduling maintenance, coordinating load dispatches and maintaining a balance between supply and demand. LTLF covers horizons from a few years (>1 year) to 10–20 years ahead, and major decisions regarding the generation, transmission and distribution of power are made based on its results.

Problem Statement and Challenges: In the present paper we focus on short-term forecasting of the peak load on an hourly basis. Given the time series of hourly peak load data y1, y2, · · · , yt over t time stamps (hours), the goal is to predict the peak load for the next m time stamps, i.e. y(t+1), y(t+2), · · · , y(t+m), on an hourly basis. The task is not as trivial as it seems: although extensive research efforts have been made to improve peak load forecasting, the area retains substantial research importance because of several prevailing challenges. Apart from the highly complex and non-linear nature of electric load data, further challenges arise from its dependency on seasonal and social factors. In most cases it is difficult to acquire the relevant data on influencing factors (changes in temperature, humidity, customer behavior, etc.) and to fit them accurately into a forecasting model. Hence, the current research thrust is to devise a complementary method that makes better use of the available data and improves model performance even when data on influencing factors are unavailable.

Our Contributions: In the present work, we attempt to address the above-mentioned issues by exploiting the power of computational intelligence and available domain knowledge. Our major contributions in this context are as follows:
– proposing an hourly peak load forecasting approach based on RNN with long-short-term-memory (LSTM) architecture;
– devising an intelligent way of improving RNN performance with incorporated domain knowledge;
– proposing a mechanism for dynamic updating of the rule base in a knowledge-based system;
– validating the effectiveness of the proposed approach on hourly peak load forecasting for five different zones in the USA.
The rest of the paper is organized as follows. Sect. 2 reviews existing work on short-term load forecasting. Sect. 3 discusses the fundamentals of the recurrent neural network with long-short-term memory architecture. Sect. 4 describes our proposed approach for hourly peak load forecasting. The details of the experimentation, along with the results of the hourly load forecasts, are presented in Sect. 5, and we conclude in Sect. 6.


2 Related Works

Short-term load forecasting (STLF) is a widely investigated research area. As per recent surveys [5], most existing STLF models are based either on classical auto-regressive models or on artificial neural network (ANN) based machine learning techniques. The classical models mostly suffer from the difficulty of modeling the non-linearity within the load time series, which is addressed by ANN models [1,8] through their ability to analyze non-linear problems efficiently. Unfortunately, due to over-fitting and the curse of dimensionality, ANN-based load forecasting models often show poor prediction performance in many load forecasting scenarios [2]. To tackle these issues, a number of support vector machine (SVM)-based models [10] have been proposed in recent years. Nevertheless, modeling the influence of external factors remains a challenge for accurate load forecasting, and, to the best of our knowledge, the issue of modeling external influences in the absence of the relevant data has not yet been addressed in any existing work.

3 An Overview of Recurrent Neural Networks

3.1 Recurrent Neural Networks (RNNs)

RNNs are a special case of feed-forward neural networks in which the hidden units are connected so as to form a directed cycle (recurrent connection), allowing the network to exhibit dynamic temporal behavior. A major benefit of such a recurrent connection is that the memory of previous inputs remains within the network's internal state, which makes RNNs applicable to various complex sequence-to-sequence learning problems. Typically, the current input xt is multiplied by a weight u and added to the product of the previous output yt−1 and the corresponding weight w; this value is passed through a tanh nonlinearity to generate the current output:

yt = tanh(w yt−1 + u xt)    (1)

The simplest RNN can be visualized by unrolling the time axis of a fully connected neural network (refer to Fig. 1).

Fig. 1. Neural network variants: (a) Feed-forward model, (b) Recurrent model
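The tiny numpy fragment below only illustrates the recurrence of Eq. (1); the weights and input sequence are arbitrary placeholders.

# Minimal illustration of Eq. (1): the output depends on the current input and the previous output.
import numpy as np

def rnn_step(x_t, y_prev, w, u):
    """One step of the simple recurrent unit: y_t = tanh(w*y_prev + u*x_t)."""
    return np.tanh(w * y_prev + u * x_t)

y = 0.0
for x in [0.2, 0.5, 0.1, 0.9]:          # a toy input sequence
    y = rnn_step(x, y, w=0.8, u=0.5)    # the internal state carries memory of past inputs
print(y)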

3.2 RNN with Long-Short-Term Memory (LSTM)

The Long-Short-Term Memory (LSTM) architecture [6] of RNN was primarily proposed to overcome the vanishing/exploding gradient problem and the inability of the standard RNN to capture long-term dependencies. LSTM can enforce constant error flow through constant error carousels within special units, bridging minimal time lags in excess of 1000 discrete time steps. Typically, the LSTM architecture consists of three gates, as illustrated next.

– Forget Gate: The forget gate concatenates the previous hidden state (ht−1) at t−1 and the current input (xt) at time t into a single tensor, applies a linear transformation and passes the result through a sigmoid function (σ). If the output of the forget gate is 1, the previous state is kept entirely; if it is 0, the previous internal state is completely forgotten. The return vector of the forget gate is

ft = σ(Wf [ht−1, xt] + bf)    (2)

where bf and Wf are the bias and weight vector of the forget gate.

– Input Gate: In the input gate, the current input (xt) and the previous hidden state (ht−1) are concatenated and passed through another sigmoid layer:

it = σ(Wi [ht−1, xt] + bi)    (3)

where bi and Wi are the bias and weight vector of the input gate. Once the input return vector is determined, the candidate layer applies a tanh nonlinearity to the current input and the previous output and generates a candidate vector

C̃t = tanh(Wc [ht−1, xt] + bc)    (4)

where bc is the bias and Wc the weight vector of the candidate layer. The candidate value is then added to the fraction of the old cell state Ct−1 allowed by the forget gate, producing the updated cell state Ct = ft ∗ Ct−1 + it ∗ C̃t.

– Output Gate: The output gate controls what fraction of the internal state is passed to the output. Its return vectors are given below, where bo and Wo are the corresponding bias and weight vector:

Ot = σ(Wo [ht−1, xt] + bo)    (5)

ht = Ot ∗ tanh(Ct)    (6)
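The numpy fragment below is an illustrative implementation of one LSTM step following Eqs. (2)-(6); the weight matrices are random placeholders and the dimensions are kept small for readability.

# Hedged sketch of a single LSTM cell step (Eqs. 2-6) with placeholder weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, bf, Wi, bi, Wc, bc, Wo, bo):
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f_t = sigmoid(Wf @ z + bf)                   # forget gate, Eq. (2)
    i_t = sigmoid(Wi @ z + bi)                   # input gate, Eq. (3)
    c_tilde = np.tanh(Wc @ z + bc)               # candidate vector, Eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde           # cell state update
    o_t = sigmoid(Wo @ z + bo)                   # output gate, Eq. (5)
    h_t = o_t * np.tanh(c_t)                     # hidden state, Eq. (6)
    return h_t, c_t

n_in, n_hidden = 1, 4
rng = np.random.default_rng(0)
Wf, Wi, Wc, Wo = (rng.normal(size=(n_hidden, n_hidden + n_in)) for _ in range(4))
bf, bi, bc, bo = (np.zeros(n_hidden) for _ in range(4))
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in [0.3, 0.7, 0.2]:                        # toy input sequence
    h, c = lstm_step(np.array([x]), h, c, Wf, bf, Wi, bi, Wc, bc, Wo, bo)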

4 Hourly Peak Load Forecasting: Proposed Approach

The overall flow of the proposed forecast model is depicted in Fig. 2. As shown in the figure, the approach is comprised of three major steps: 1) Data pre-processing, 2) RNN-LSTM analysis, and 3) Knowledge-driven tuning of forecast value.


Fig. 2. Flow of the proposed forecast approach

4.1 Data Pre-processing

The primary objective of this step is to convert the input dataset into the desired format <dd-mm-yyyy hh-mm, peakLoadValue> (refer to Fig. 2). The step also handles missing instances in the dataset: the whole dataset is re-sampled at a one-hour frequency and the missing values are filled using the backfill method [9] in order to keep the dataset consistent. Since RNN performance depends on the scale of the data, the pre-processing step also normalizes the dataset to values in the range 0 to 1. A hedged sketch of this step is given below.
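The following is a minimal pandas sketch of hourly resampling, backfilling and min-max scaling; the file name and column names are assumptions, not part of the original dataset description.

# Hedged pre-processing sketch: hourly resampling, backfill of gaps, scaling to [0, 1].
import pandas as pd

df = pd.read_csv("zone1_load.csv", parse_dates=["timestamp"])     # hypothetical input file
series = (df.set_index("timestamp")["peakLoadValue"]
            .resample("60min").mean()    # one reading per hour
            .bfill())                    # fill missing instances by backfill

scaled = (series - series.min()) / (series.max() - series.min())  # values in [0, 1]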

4.2 RNN-LSTM Analysis

The initial forecast for the hourly peak load is obtained using the RNN-LSTM described in Sect. 3.2. The forecast for the next time stamp (t + 1) is

yt+1 = softmax(Ot+1)    (7)

Ot+1 = σ(Wo [ht, xt+1] + bo)    (8)

where Ot+1 is the un-normalized output, which is normalized with the softmax function to obtain the forecast value y(t+1) of the hourly peak load; ht is determined by Eq. 6 as illustrated in Sect. 3. A minimal model sketch is given below.
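The sketch below is consistent with the configuration reported later in the paper (one hidden layer of 16 LSTM blocks, one output unit, 67%/33% split), but the window length, optimizer and training settings are assumptions rather than the authors' exact values.

# Hedged Keras sketch of the RNN-LSTM forecaster applied to the scaled hourly series.
import numpy as np
from tensorflow import keras

def make_windows(series, look_back=24):
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X)[..., np.newaxis], np.array(y)

X, y = make_windows(scaled.values)          # 'scaled' comes from the pre-processing sketch
split = int(0.67 * len(X))                  # 67% train / 33% test split

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1], 1)),
    keras.layers.LSTM(16),                  # one hidden layer with 16 LSTM blocks
    keras.layers.Dense(1),                  # one output: the next hourly peak load
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, batch_size=32, verbose=0)
forecast = model.predict(X[split:])         # raw forecasts, before knowledge-driven tuning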

4.3 Knowledge-Driven Tuning of Forecast Values

As established in literature, various factors, including the weather condition and customer behavior can have significant influence on short-term load forecasting. However, the relevant data are not always available in practice. In this context, we propose a novel technique of utilizing our generic domain knowledge to indirectly extract such influence pattern from the given load time series data.


Fig. 3. Total demand on (a) daily basis, (b) weekly basis, (c) hourly basis [11]

As per the domain knowledge, the load demand during Summer and Winter is higher than in Autumn and Spring (refer Fig. 3(a)). The load demand also differs across the hours of the day; for example, the demand during normal office working hours is higher than at other times (refer Fig. 3(c)). Moreover, it is evident from Fig. 3(b) that the load demand on weekends is lower than on weekdays, since most workplaces are closed on weekends. We use this knowledge to indirectly determine the effect of weather factors and customer behavior on the power load variation of any zone.

Feature Creation: In order to utilize the available domain knowledge, we create four new features to tune the forecast values and further improve the forecast accuracy:
– Difference = load value[i+1] − load value[i]
– Season (S): Winter ⇒ December, January, February; Summer ⇒ June, July, August; Autumn ⇒ September, October, November; Spring ⇒ March, April, May
– Time of the day (T) = Morning, Day, Evening, Night
– Weekend (W) = binary variable indicating whether the given day is a weekend: W = 1 if Weekend, 0 otherwise.


Fine-Tuning Forecast Values: In order to fine-tune the forecast values obtained from the RNN-LSTM analysis, we first calculate the average difference between consecutive pairs of observed load values for every possible combination of Season, Time of the day and Weekend flag. These average values are then added to the forecast values based on a dynamically changing rule base. Each rule has the following form:

If Season = S & Time = T & Weekend = W Then tuning-component = Dict(S,T,W)

where Dict(S, T, W) is a function of Difference (see feature creation). Even though it looks trivial, this process extracts and incorporates domain knowledge into our forecast model and thereby improves the forecast accuracy. In general, electricity demand is significantly affected by customer consumption behavior, which changes almost arbitrarily with weather conditions (temperature, humidity, etc.), randomly occurring social events (special game series, festivals, parties, etc.), changes in individual workload over a week, and so on. However, the lack of data availability often prevents direct modeling of the influence of these factors in a load forecast model. In contrast, the present process uses a data-driven technique for an indirect yet smart extraction of how the load demand changes with customer behavior according to seasonal factors and day-to-day social activities. Our approach for knowledge-driven fine-tuning of forecast values is presented in Algorithm 1.

Algorithm 1. Knowledge-driven Forecast Value Tuning
1: /* Initialization
2: Season := [Winter, Spring, Summer, Autumn]
3: Time := [Morning, Day, Evening, Night]
4: Weekend := [0, 1]
5: for S ∈ Season do
6:   for T ∈ Time do
7:     for W ∈ Weekend do
8:       Count = 0; Sum = 0;
9:       for i ∈ 1:nrow(TrainData) do
10:        if Season=S & Time=T & Weekend=W then
11:          Sum += Difference[i];
12:          Count++;
13:        end if
14:      end for
15:      Dict(S,T,W) = Sum/Count;   /* Dynamically changing rule-base
16:    end for
17:  end for
18: end for
19:
20: for S ∈ Season do
21:   for T ∈ Time do
22:     for W ∈ Weekend do
23:       for i ∈ 1:nrow(TestData) do
24:         if Season=S & Time=T & Weekend=W then
25:           Forecast[i] += Dict(S,T,W);   /* Fine-tuning forecast value
26:         end if
27:       end for
28:     end for
29:   end for
30: end for
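A hedged pandas translation of Algorithm 1 is sketched below: the rule base stores the average consecutive-load Difference per (Season, Time, Weekend) combination learned from the training split, and it is then added to the raw forecasts. Column names are assumptions.

# Hedged sketch of the knowledge-driven tuning step of Algorithm 1.
import pandas as pd

def build_rule_base(train):
    # train has columns: Difference, Season, Time, Weekend
    return train.groupby(["Season", "Time", "Weekend"])["Difference"].mean().to_dict()

def tune_forecasts(test, rule_base):
    # test has columns: Forecast, Season, Time, Weekend
    keys = zip(test["Season"], test["Time"], test["Weekend"])
    tuning = [rule_base.get(k, 0.0) for k in keys]          # tuning-component per rule
    return test["Forecast"] + pd.Series(tuning, index=test.index)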

5 Experimental Evaluation

5.1 Study Area and Dataset

The effectiveness of our proposed knowledge-driven RNN-LSTM model is validated on hourly load forecasting using the dataset from the Kaggle "Global Energy Forecasting Competition" held in 2012¹. This publicly available and well-known load forecasting dataset is used so that other researchers can easily compare their models with the proposed method. The dataset consists of the zone-wise load history of the USA, of which we consider the data of 5 zones (Zone-1 to Zone-5) to build and test our model.

5.2 Experimental Setup

The proposed model is evaluated in comparison with four baselines, namely statistical ARIMA, NARNET (non-linear autoregressive neural network), RNN and RNN-LSTM models. The proposed forecasting model (knowledge-driven RNN with LSTM architecture), along with the normal RNN, RNN-LSTM and ARIMA models, is implemented in Python (flavor: Anaconda Python; IDE: Jupyter notebook). For the deep learning part, Keras² is used, while for NARNET we use the library function of the MATLAB NN Toolbox³. Model building, training and testing are carried out on a 64-bit PC with Windows 10 and 4 GB RAM. The typical configuration of our model is as follows: one hidden layer with 16 LSTM blocks and one output layer predicting the hourly load. The dataset is split into a 67% training set and a 33% test set. For data pre-processing, the LSTM-based forecast and the knowledge-driven tuning, we follow the conventions described in the respective subsections of Sect. 4. All models are evaluated with respect to two popular statistical goodness-of-fit criteria, namely NRMSD (normalized root mean squared deviation) [3] and MAPE (mean absolute percentage error) [4]. Additionally, we perform a correlation study between the forecasts of the considered NN-based models and the actual load values.

5.3 Results and Discussions

The results of the comparative study are summarized in Table 1 and in Figs. 4–5. On analyzing the results, the following inferences can be drawn:
– As shown in Table 1, for all the considered study zones the proposed model produces small NRMSDs and MAPEs, which are even lower than those of the benchmark RNN-LSTM model. This indicates the superiority of our knowledge-driven LSTM variant over all other considered models.

1 https://www.kaggle.com/c/global-energy-forecasting-competition-2012-loadforecasting.
2 https://github.com/keras-team/keras.
3 https://se.mathworks.com/help/deeplearning/ref/narnet.html.

Table 1. Comparison of model performance for different zones in USA

Study zone  Metrics  RNN (LSTM)  RNN     NARNET  ARIMA   Proposed approach
Zone 1      NRMSD    19.574      32.604  21.325  26.71   7.112
            MAPE     25.321      31.469  23.680  15.916  6.815
Zone 2      NRMSD    19.246      39.220  31.077  29.898  18.058
            MAPE     4.043       15.871  23.598  17.418  3.728
Zone 3      NRMSD    18.842      26.875  32.331  29.898  18.066
            MAPE     4.035       16.432  23.925  17.418  3.718
Zone 4      NRMSD    14.415      37.842  36.839  39.688  12.297
            MAPE     6.474       15.079  37.839  14.635  5.997
Zone 5      NRMSD    21.172      30.188  30.659  20.818  18.477
            MAPE     9.075       20.918  41.121  20.443  8.392

61

Fig. 5. Correlation between actual and forecast values for the considered NN variants (Zone-1): (a) Proposed model, (b) RNN-LSTM, (c) RNN, (d) NARNET

– Finally, as depicted in Fig. 4, the average percentage improvement (in reducing error) for the proposed forecast model with respect to standard RNNLSTM, RNN, and NARNET models are 9%, 59%, and 63%, respectively. Overall, in comparison with the baselines, our proposed knowledge-driven RNN-LSTM variant is found to show improved performance with respect to all the considered metrics. This demonstrates that the factor contributing to the increased accuracy of our model is nothing but the intelligent incorporation of the domain knowledge, which is the main contribution of this work.

6

Conclusions

In this paper, we have proposed a novel variant of short-term load forecasting (STLF) strategy based on RNN with LSTM architecture. The uniqueness of the proposed method remains in embedding available domain information to tune the forecast values. The promising results of experimental study demonstrate significant improvement in forecast accuracy due to incorporation of domain knowledge in the forecast process. In future, we plan to upgrade this model with added feature for utilizing spatial auto-correlation among neighboring zones.

References 1. Baliyan, A., Gaurav, K., Mishra, S.K.: A review of short term load forecasting using artificial neural network models. Procedia Comput. Sci. 48, 121–125 (2015)

62

A. Patel et al.

2. Ceperic, E., Ceperic, V., Baric, A.: A strategy for short-term load forecasting by support vector regression machines. IEEE Trans. Power Syst. 28(4), 4356–4364 (2013) 3. Das, M., Ghosh, S.K.: Deep-STEP: a deep learning approach for spatiotemporal prediction of remote sensing data. IEEE Geosci. Remote Sens. Lett. 13(12), 1984– 1988 (2016) 4. Das, M., Ghosh, S.K.: Spatio-temporal prediction of meteorological time series data: an approach based on spatial Bayesian network (SpaBN). In: International Conference on Pattern Recognition and Machine Intelligence, pp. 615–622. Springer (2017) 5. Fallah, S.N., Ganjkhani, M., Shamshirband, S., Chau, K.W.: Computational intelligence on short-term load forecasting: a methodological overview. Energies 12(3), 393 (2019) 6. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT press, Cambridge (2016) 7. Khuntia, S.R., Rueda, J.L., van der Meijden, M.A.: Forecasting the load of electrical power systems in mid-and long-term horizons: a review. IET Gener. Transm. Distrib. 10(16), 3971–3977 (2016) 8. Khwaja, A., Zhang, X., Anpalagan, A., Venkatesh, B.: Boosted neural networks for improved short-term electric load forecasting. Electr. Power Syst. Res. 143, 431–437 (2017) 9. Koubli, E., Palmer, D., Rowley, P., Gottschalg, R.: Inference of missing data in photovoltaic monitoring datasets. IET Renew. Power Gener. 10(4), 434–439 (2016) 10. Mitchell, G., Bahadoorsingh, S., Ramsamooj, N., Sharma, C.: A comparison of artificial neural networks and support vector machines for short-term load forecasting using various load types. In: Manchester PowerTech, pp. 1–4. IEEE (2017) 11. Taieb, S.B., Hyndman, R.J.: A gradient boosting approach to the kaggle load forecasting competition. Int. J. Forecast. 30(2), 382–394 (2014)

Design and Analysis of Anti-windup Techniques for Anti-lock Braking System Prangshu Saikia and Ankur Jain(B) NIT Sichar, Sichar, Assam, India [email protected], [email protected]

Abstract. Anti-lock braking system techniques are used to prevent wheels from locking during aggressive braking to prevent skidding. This technique also reduces stopping distance. It is used to provide underlying vehicle safety. On the other hand, Anti-windup techniques are used to prevent the violation of threshold during braking. Therefore this is implemented to prevent the sudden change in speed between vehicle and wheels. In these schenerios, the controller is designed with anti-windup techniques, such as back-calculation, conditional integration, to generate desired torque to regulate ideal slip ratio. Slip ratio is defined in terms of vehicle angular motion and wheel rotation. In this particular paper we designed a controller with three techniques and compared them in terms of braking distance, control input requirement and relative slip ratio. Keywords: Anti-lock braking system · Safety · Slip factor · Advance braking system · Vehicle stability control · Anti-windup controller

1

Introduction

With an increase in vehicles in the modern world, there is a steep rise in accidents. Most of these accidents can be avoided by simple measures, while some require manual measures some can be reduced by using intelligent techniques [1]. ABS is designed to manipulate the wheel slip in order to obtain maximum friction while maintaining the steering ability at the same time. In other words the primary goal of ABS is to stop the vehicle in the shortest possible distance keeping the vehicle from skidding or attaining directional control [2]. In cases where the PID controller is used output response faces the danger of windup phenomenon which can be quite disastrous in cases where locking of wheels can pose danger to the life of a driver. This paper especially focuses on such dire situations and we design controllers with the anti-windup scheme in order to obtain fast or desired response even when the input is beyond saturation limits of the mechanical actuator of the vehicle. ABS module generally includes vehicle’s physical brakes, wheel speed sensors (up to 4), an electronic control unit (ECU), brake master cylinder, a hydraulic modulator unit with pump and valves. Over the years c The Editor(s) (if applicable) and The Author(s), under exclusive license  to Springer Nature Switzerland AG 2021 A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 63–71, 2021. https://doi.org/10.1007/978-3-030-49336-3_7

64

P. Saikia and A. Jain

research has been done in order to advance the ABS technology in terms of better performance. The history begins from here, the earliest of the ABS was designed in aerospace industry in 1930 [3,4]. This was the first initiative taken by vehicle experts in paving the way for development of this area. The first set of ABS was installed in Boeing B-47 and later in the 1950s, following its successful testing it was installed in most of the aeroplanes. In the early 1960s this technology was only implemented in high-end auto mobiles but in the years that followed, there was a technological boom and the further development in electronics and microcomputer made it possible to commercialise the ABS system for mass production in the1980s. Various ABS techniques exist today, some of which are- classical control methods based on PID control, nonlinear control based on back stepping, optimal control methods based on Lyapunov approach, adaptive control based on gain scheduling control method, intelligent control based on fuzzy logic, robust control based on sliding mode control method etc. Traction control system (TCS) [5] and vehicle dynamic stability control [6] make use of the technological framework of ABS. [7]. Today, ABS has been incorporated in sophisticated and even in some motor cycle. ABS has been well received over the years and it have proved to be a boon in the field of road safety. It is a common knowledge that during heavy braking the wheels will lock up resulting in slip. This is a common reason behind many accidents. Slippery (wet, icy, etc.) roads also adds to this. Slip results in Long stopping distances and in some cases destabilise the direction control of the vehicle [8–10]. An ABS comes with many benefits. It prevents skidding of vehicle in slippery roads while preserving the steering control. ABS is also highly valued as the reliability and its track record is so impressive that insurance companies are willing to offer discounts for the vehicles with ABS control. ABS is such a reliable technology that even its presence increases the resale value of the vehicle. Other advanced system such as traction control share a similar set-up as ABS therefore it makes it quite easy for the manufacturers to install both these systems together. ABS is still a developing field therefore naturally it is not complete or perfect. Problems in ABS occur when sensors used in it malfunctions due to the effect of rubble or metal chips contaminating its sensing elements. Erratic or an output with no continuity is obtained when the damage occurs in the sensor wires. Hydraulic control unit may also fail if the system is exposed to a very harsh environment or the system is a subject to severe neglect. From a designer’s view the brake controllers of ABS pose unique challenges as an ideal performance demands a controller to operate at an unstable equilibrium point. In a vehicle, depending on road conditions the maximum braking torque may vary significantly. The parameters used in developing the ABS system also poses numerous challenges as the variation in the conditions of road introduces ambiguity in the standard parameters under measurement such as tire slippage measurement signal. Bumpy roads and changing tire pad friction introduces new problems while the system bandwidth gets limited due to transportation delays.


In spite of these challenges, ABS has already been successfully fitted to high-end cars. Although ABS has gone through decades of research, the technology is still considered cutting edge. Recent objectives in this field include research on efficient ways to enhance pedestrian safety and accident avoidance. Various studies [11] have also shown the significance of ABS in reducing vehicle crashes. The fact that such a mature technology still provides room for further development motivated us to work on it. The main purpose of this paper is to propose an ABS technique for cruise control in a vehicle that incorporates the idea of anti-windup. An anti-windup scheme is used in the controller, thereby improving the system performance even when the system input saturates the actuator limits. The rest of the paper is organised as follows. Section 2 defines the problem formulation. Section 3 describes the methodology used to solve the defined problem. Numerical values of the parameters and the simulation results are presented in Sect. 4. Section 5 concludes the research study.

2 Problem Formulation

ABS certainly reduces the risk of accidents due to the locking of wheels. It reduces the stopping distance compared to vehicles with locked wheels and, furthermore, prevents skidding, providing steadiness and better steering capability. Varying road conditions change the input parameters constantly, demanding that the system function dynamically at each point in time within the limits of the allowed error. The friction on which the slip ratio depends changes with the tire brand, road condition, vehicle speed, etc. Studies have found that a slip ratio of approximately 0.2 gives the optimal friction for stopping the vehicle; therefore, it is the role of the ABS controller to achieve the desired slip of 0.2. In this paper, actuator saturation is taken into account and the controller designed with anti-windup is compared with a simple PID controller. The closed-loop control system for ABS is shown in Fig. 1.

[Figure 1 shows the closed-loop control structure: the reference is compared with the measured output from the sensor, the error e drives the controller, and the controller produces the system input u applied to the ABS plant.]
Fig. 1. Closed-loop control system for ABS

2.1 Modelling of the Plant [12]

The proposed mathematical model was derived from the laws of physics, and the non-linear dynamics can be described by the following equations:


Force balance in the longitudinal direction, Eq. (1):

m a_x = \mu_r F_N \qquad (1)

The slip ratio is given by Eq. (2):

\lambda = \frac{V_x - \omega R}{V_x} \qquad (2)

Torque summation about the wheel centre, Eq. (3):

J_w \alpha_w = -u + \mu_r R F_N \qquad (3)

Combining the first and second equations with Eq. (3) and rearranging for the slip derivative yields Eq. (4):

\dot{\lambda} = -\frac{\mu_r F_N}{V_x}\left(\frac{1-\lambda}{m} + \frac{R^2}{J_w}\right) + \frac{R}{J_w V_x}\, u \qquad (4)

Equations (1)-(4) were used to develop a Simulink model of the ABS in MATLAB. The value of each input was kept within the limits of the physical system. Three controllers, namely PID, PID with back calculation and PID with conditional integration, were incorporated in the model. The effects of the three controllers were observed in a comparative study of outputs such as wheel speed vs. vehicle speed, stopping distance, control input, relative slip and generated tire torque. While modelling the system we assumed that the vehicular model is linear, and the complex dynamics of road conditions were not included in the formulation.
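As a quick illustration of how Eqs. (1)-(4) translate into a discrete-time plant, the following minimal Python sketch integrates the wheel and vehicle states with a simple Euler step. The friction-slip curve mu(), the normal force F_N = m·g and the step size are assumptions made for the sketch only; the paper itself builds the model in Simulink.

```python
import numpy as np

# Nominal parameters from Table 1
R, m, Jw, g = 0.31, 342.0, 1.13, 9.81
FN = m * g                      # normal force (assumed as m*g for the quarter-vehicle)
dt = 1e-3                       # integration step [s] (assumed)

def mu(slip):
    """Illustrative friction-slip curve peaking near slip ~ 0.2 (Burckhardt-type, assumed)."""
    return 1.28 * (1.0 - np.exp(-22.0 * slip)) - 0.52 * slip

def plant_step(Vx, omega, u):
    """One Euler step of the quarter-car ABS dynamics, Eqs. (1)-(3)."""
    lam = max((Vx - omega * R) / max(Vx, 1e-3), 0.0)   # slip ratio, Eq. (2)
    ax = mu(lam) * FN / m                              # longitudinal deceleration, Eq. (1)
    alpha = (-u + mu(lam) * R * FN) / Jw               # wheel angular acceleration, Eq. (3)
    Vx_next = max(Vx - ax * dt, 0.0)
    omega_next = max(omega + alpha * dt, 0.0)
    return Vx_next, omega_next, lam
```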

3 Methodology

PID with Conditional Integration (CI) [13]. In this technique the integrating unit of the PID controller is automatically clamped when the output y reaches the saturation value. This ensures that, whenever the controller saturates, there is no further increase in the output y. If, after some time, the error fed to the controller drops below a value at which the corresponding output brings the actuator out of saturation, then the integrator starts working again. With this method the controller output never goes beyond the saturation limits. After reaching saturation, the value at which the integrator output settles depends only on the proportional constant and the magnitude of the input error. A further modification of this technique is to force the integrator to a certain predetermined value.
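A minimal discrete-time sketch of this clamping logic is given below; the gains follow the values quoted later in Sect. 4, while the torque saturation limits, the state layout and the discretisation are assumptions of the sketch.

```python
def pid_ci_step(error, state, kp=7.0, ki=1.4, kd=7.0, dt=1e-3, u_min=0.0, u_max=1200.0):
    """One step of PID with conditional-integration (clamping) anti-windup."""
    i_term, prev_error = state
    derivative = (error - prev_error) / dt
    u_raw = kp * error + i_term + kd * derivative
    u_sat = min(max(u_raw, u_min), u_max)      # actuator saturation
    if u_min < u_raw < u_max:                  # integrate only while the actuator is unsaturated
        i_term += ki * error * dt
    return u_sat, (i_term, error)
```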


PID with Back Calculation (BC) [14]. This scheme is based on the idea of recomputing the integral to an optimised value that keeps the output at the saturation limit. It is beneficial to reset the integrator dynamically, with a time constant, rather than instantaneously. In this scheme an extra feedback path is created by forming an error signal E_s as the difference between the actual actuator output and the controller output. This error signal is multiplied by a gain 1/T_t and fed back to the integrator. In the absence of saturation this signal is zero, so there is no interference with the normal operation of the actuator when it is unsaturated. When saturation occurs, however, E_s deviates from zero. Because the process input is then constant, the normal feedback path around the process is effectively out of operation, but the alternate feedback path around the integral component forces the output towards a value at which the integrator input again becomes zero.
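The corresponding back-calculation update can be sketched as follows; as before, the gains and the tracking constant T_t = 0.5 come from Sect. 4, while the saturation limits and the discretisation are assumptions.

```python
def pid_bc_step(error, state, kp=7.0, ki=1.4, kd=7.0, tt=0.5, dt=1e-3, u_min=0.0, u_max=1200.0):
    """One step of PID with back-calculation anti-windup."""
    i_term, prev_error = state
    derivative = (error - prev_error) / dt
    u_raw = kp * error + i_term + kd * derivative
    u_sat = min(max(u_raw, u_min), u_max)
    # The saturation error (u_sat - u_raw), scaled by 1/Tt, dynamically resets the integrator;
    # it is zero whenever the actuator is unsaturated.
    i_term += (ki * error + (u_sat - u_raw) / tt) * dt
    return u_sat, (i_term, error)
```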

4 Results

It is evident from the obtained results that braking performance is significantly enhanced by ABS. All three controllers gave approximately the same stopping distance; however, to achieve that distance they bring the wheel speed and the vehicle velocity to zero with slight differences. The error signal, obtained as the difference between the desired and the relative speed, is used to control the brake pressure and obtain the required control input at each point in time. Here it was found that the controller with BC gives a smooth response and approaches the desired slip with minimal fluctuations, which reduces the need for a large braking pressure. It was also seen that the tire torque generated with the BC method showed bumpless behaviour compared to the simple and the CI PID controllers.

Table 1. Nominal parameter values

ABS parameter                       Value
Radius of the wheel (R)             0.31 m
Mass of the vehicle (m)             342 kg
Moment of inertia (J_w)             1.13 kg m^2
Gravitational constant (g)          9.81 m/s^2
Maximum braking torque (u)          1200 Nm
Linear velocity of vehicle (V_x)    27.78 m/s
Rotational speed of wheel (ω)       89.18 rad/s
Desired slip (λ_d)                  0.2

All in all, it was seen that, compared with the other two


controllers, the response of the PID controller with back-calculation fell within the optimal range in every case. These results were obtained theoretically; applying them in a real-world scenario is more involved, since our study considered only a linear model that does not account for continuously varying nonlinear conditions. The nominal parameter values of the ABS system defined in Subsect. 2.1 are listed in Table 1 and taken from Ref. [15]. The model was designed with the following controller parameters for the various schemes: proportional gain K_p = 7, derivative gain K_d = 7, integral gain K_i = 1.4 and, for back-calculation, tracking constant T_t = 0.5.
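Putting the earlier sketches together, one closed-loop run of the slip controller could look like the following; plant_step, pid_ci_step and pid_bc_step are the illustrative helpers sketched above, and the loop structure itself is an assumption about how the Simulink model is organised rather than a transcription of it.

```python
def simulate(controller_step, Vx0=27.78, omega0=89.18, lam_d=0.2, t_end=12.0, dt=1e-3):
    """Run the closed slip-control loop until the vehicle (nearly) stops or t_end is reached."""
    Vx, omega, state, t = Vx0, omega0, (0.0, 0.0), 0.0
    history = []
    while t < t_end and Vx > 0.1:
        lam = max((Vx - omega * R) / max(Vx, 1e-3), 0.0)
        u, state = controller_step(lam_d - lam, state)   # slip error drives the braking torque
        Vx, omega, lam = plant_step(Vx, omega, u)
        history.append((t, Vx, omega, lam, u))
        t += dt
    return history

# e.g. results_bc = simulate(pid_bc_step); results_ci = simulate(pid_ci_step)
```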

[Figure 2 plots the wheel speed (rad/s) over time (s) for the back-calculation, conditional-integration and simple PID schemes, together with the vehicle speed.]
Fig. 2. Wheel speed relative to vehicle speed

From the results shown in Fig. 2 we observed that the simple PID and the PID with conditional integration gave approximately the same response, whereas the BC-based PID controller gave a smoother, more desirable wheel speed response. All schemes manage to bring the wheel speed to zero, but at different times: the simple PID and CI schemes require more time than the BC scheme and, because of the constantly varying gradient in their responses, the BC scheme is preferable. The brake pressure defines the control input. A constant, low brake pressure is always preferable, as the system then undergoes less strain. In this context, the results in Fig. 3a show that the applied brake pressure is quite low in the BC scheme and reaches zero within a short time, whereas in the simple and CI schemes the brake pressure varies constantly and never truly reaches zero within the desired time limit. The latter is undesirable, as the system experiences considerable strain while continuously varying the hydraulics to produce the required pressure. Relative slip is the variation of the wheel slip with respect to the surface speed. The results in Fig. 3b clearly show that skidding is minimised in all schemes. The desired slip was taken to be 0.2.

[Figure 3 plots (a) the control input (brake pressure) and (b) the relative slip over time (s) for the BC, simple PID and CI schemes.]
Fig. 3. (a) Brake pressure vs. time, (b) Relative slip vs. time

The response of the BC scheme approaches the desired slip smoothly, whereas the simple PID and CI schemes show fluctuations and take longer to reach the desired slip.

[Figure 4 plots (a) the generated tire torque and (b) the stopping distance over time (s) for the simple PID, BC and CI schemes.]
Fig. 4. (a) Generated torque vs. time, (b) Stopping distance vs. time

The generated torque is the essence of a successfully and optimally operating braking system: the faster and more accurate the braking torque, the better the performance. In Fig. 4a we observe that the simple PID and CI schemes respond faster, with a shorter rise time than the BC scheme. However, as time progresses, the latter settles to a faster and steadier response, whereas the CI and simple PID schemes keep deviating from the required torque and never actually reach a steady state.


As evident from Fig. 4b, the stopping distance was observed to be the same for all three controllers.

5 Conclusions

In physical systems like ABS, where actuator limits pose the danger of windup, it is advantageous to have a controller that can counter this effect while delivering the required results with the desired accuracy and precision. From the results obtained with our designed controllers it can be clearly stated that, compared with the other schemes, back calculation gives more efficient and accurate results. We arrived at these conclusions under certain assumptions, the most significant being that the vehicular model is linear. Because of the high non-linearity of the ABS control problem, research on improved control methods is still in progress. Most of the proposed schemes require system models; however, since real-life conditions are unpredictable and vary dynamically, many of these models give unsatisfactory results. Future prospects of this research lie in incorporating evolutionary algorithms, such as the brain storm optimization algorithm or the algae algorithm, into the controller. These algorithms, if implemented effectively, hold the potential to increase the efficiency of the system in obtaining optimized results even under dynamically varying conditions.

Acknowledgement. The authors would like to acknowledge the financial support given by TEQIP-III, NIT Silchar, Silchar - 788010, Assam, India.

References

1. Jain, A., Roy, B.K.: Gain-scheduling controller design for cooperative adaptive cruise control: towards automated driving. J. Adv. Res. Dyn. Control Syst. (JARDCS) (2019)
2. Ivanov, V., Savitski, D., Shyrokau, B.: A survey of traction control and antilock braking systems of full electric vehicles with individually controlled electric motors. IEEE Trans. Veh. Technol. 64(9), 3878-3896 (2015)
3. Maier, M.: The new and compact ABS5 unit for passenger cars. 950757 (1996)
4. Wellstead, P., Pettit, N.: Analysis and redesign of an antilock brake system controller. IEE Proc. Control Theory Appl. 144(5), 413-426 (1997)
5. Tanelli, M., Savaresi, S.M., Cantoni, C.: Longitudinal vehicle speed estimation for traction and braking control systems. In: Proceedings of the IEEE International Conference on Control Applications, pp. 2790-2795 (2006)
6. Poussot-Vassal, C., Sename, O., Dugard, L., Savaresi, S.: Vehicle dynamic stability improvements through gain-scheduled steering and braking control. Veh. Syst. Dyn. 49(10), 1597-1621 (2011)
7. Aly, A.A., Zeidan, E.-S., Hamed, A., Salem, F.: An antilock-braking systems (ABS) control: a technical review. Intell. Control Autom. 2(03), 186 (2011)
8. Mauer, G.F.: A fuzzy logic controller for an ABS braking system. IEEE Trans. Fuzzy Syst. 3(4), 381-388 (1995)


9. Lennon, W.K., Passino, K.M.: Intelligent control for brake systems. IEEE Trans. Control Syst. Technol. 7(2), 188-202 (1999)
10. Lojko, B., Fuchs, P.: The control of ASR system in a car based on the TMS320F243 DSP. Diploma Thesis, Dept. of Radio & Electronics, Bratislava (2002)
11. Broughton, J., Baughan, C.: A survey of the effectiveness of ABS in reducing accidents. Transport Research Laboratory (2000)
12. Nouillant, C., Assadian, F., Moreau, X., Oustaloup, A.: A cooperative control for car suspension and brake systems. Int. J. Automot. Technol. 3(4), 147-155 (2002)
13. Anti-windup Strategies, pp. 35-60. Springer, London (2006). https://doi.org/10.1007/1-84628-586-0_3
14. Åström, K.J., Hägglund, T.: PID Controllers: Theory, Design, and Tuning, vol. 2. Instrument Society of America, Research Triangle Park, NC (1995)
15. Rohilla, P., Dhingra, A.: Design and analysis of controller for antilock braking system in matlab/simulation. Int. J. Eng. Res. Technol. (IJERT) 5, 583-589 (2016)

Wind-Power Intra-day Statistical Predictions Using Sum PDE Models of Polynomial Networks Combining the PDE Decomposition with Operational Calculus Transforms

Ladislav Zjavka1(B), Václav Snášel1, and Ajith Abraham2

1 Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VŠB-Technical University of Ostrava, Ostrava, Czech Republic
{ladislav.zjavka,vaclav.snasel}@vsb.cz
2 Machine Intelligence Research Labs (MIR Labs), Auburn, WA 98071, USA
[email protected]

Abstract. Chaotic processes in complex atmospheric circulation and fluctuation waves in local conditions cause difficulties in wind power prediction. Physical models of Numerical Weather Prediction (NWP) systems produce only coarse 24-48-h prognoses of wind speed, which are not entirely assimilated to local specifics and are usually produced only every 6 h, with a delay. Artificial Intelligence (AI) techniques can process daily forecasts or calculate independent statistical predictions from historical time-series over a few-hour horizon. The presented unconventional neuro-computing method elicits Polynomial Neural Network (PNN) structures to decompose the n-variable Partial Differential Equation (PDE) into a set of node-converted sub-PDEs. The inverse Laplace transformation is applied to the node-produced rational terms, using Operational Calculus (OC), to obtain the originals of unknown node functions. The complete composite PDE model comprises the sum of the selected sub-PDE solutions, which allows detailed representation of complex weather patterns. Self-adapting statistical models are developed using a specific, increased inputs→output time-shift to represent the current local near-ground conditions for predictions in the trained time-horizon of 1-12 h. The presented multi-step procedure for forming statistical AI models allows more accurate intra-day wind power predictions than processed middle-scale numerical forecasts.

Keywords: Polynomial Neural Network · Partial Differential Equation · Operational Calculus · Polynomial PDE conversion · Inverse Laplace transformation

1 Introduction

Atmospheric pressure and temperature distribution on global scales are the major factors which mainly determine the overall weather character. Local wind waves, gusts, unstable direction or surface temperature affect the induced power on smaller scales, but these


fluctuations must also be considered as plant side-effects whose relevance may increase over a short time horizon. Wind power prediction methods can predict wind power directly, or predict wind speed first and then convert it in a 2-stage procedure. They are usually based on 2 main approaches [4]:

• Physical, using numerical solutions of PDEs
• Statistical, using data observation series to form prediction models

Regional NWP systems solve sets of primitive PDEs to simulate particle behaviour of the ideal gas flow at a defined resolution. They can additionally solve PDEs describing surface wind factors to particularize the local wind speed forecasts [5]. Regression or AI methods do not aim at representing weather phenomena through physical considerations; they process or analyze time-series observations to model the statistical relationships between the relevant input and output quantities [3]. They can post-process local NWP output data to better account for local specifics and the surface character, and the refined wind speed forecast is easy to convert into the power series by statistical models. Standard adaptive AI techniques do not need to wait for NWP data, which are usually provided with a delay of several hours; however, the independent predictions of their statistical models are usually valuable only up to a 6-h horizon. On the other hand, AI processing of NWP data correlates highly with the forecast accuracy, and in the case of incoming frontal zones, events and their disturbances, NWP data allow representation of changed, unknown weather patterns which are not included in the training data set [6]. The main objective is to choose the appropriate NWP or statistical approach according to the current situation and data analysis, in order to develop the optimal prediction model with minimal errors or failures.

The complexity of the composite sum PDE models is adequate to the patterns in local weather. Standard AI computing techniques require data pre-processing which usually significantly reduces the number of input variables and results in simplification of the models. Polynomial Neural Networks (PNN) use regression where the number of parameters grows exponentially with the number of input variables. PNN decompose the general inputs→outputs connections expressed by the Kolmogorov-Gabor polynomial (1):

Y = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} a_{ijk} x_i x_j x_k + \ldots \qquad (1)

n - number of input variables x_i
a_i, a_ij, a_ijk, ... - polynomial parameters

The Group Method of Data Handling (GMDH) gradually evolves multi-layer PNN structures, adding layer by layer and calculating the polynomial parameters of the chosen nodes which best approximate the target function. PNN decompose the system complexity into a number of simple relationships, each described by the low-order polynomial node function (2) for every pair of input variables x_i, x_j [1]:

y = a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2 \qquad (2)

x_i, x_j - input variables of polynomial neuron nodes
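For illustration, the parameters of one such 2-input node polynomial can be estimated by ordinary least squares; the sketch below is a plain reading of Eq. (2) and is not taken from the GMDH Shell software used later.

```python
import numpy as np

def fit_gmdh_node(xi, xj, y):
    """Least-squares fit of the quadratic GMDH node polynomial of Eq. (2).
    xi, xj and y are 1-D arrays of training samples."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs                                  # a0 ... a5

def eval_gmdh_node(coeffs, xi, xj):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * xi + a2 * xj + a3 * xi * xj + a4 * xi**2 + a5 * xj**2
```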


The Differential Polynomial Neural Network (D-PNN) is a recent neuro-computing technique using adapted 2-stage procedures of Operational Calculus (OC). It decomposes the n-variable linear PDE into particular node sub-PDEs, whose various combinations allow complex representation of unknown dynamic processes. D-PNN combines the principles of self-organizing multi-layer structures with mathematical methods of PDE solution. It selects the best 2-input combinations in PNN nodes to produce applicable PDE components. In the 1st step, the OC polynomial PDE conversion leads to rational terms which are the Laplace images of the unknown node functions. In the 2nd step, the inverse L-transform is applied to them to obtain the node originals, whose sum is used in the complete PDE model of the searched separable output function. D-PNN uses the External Complement in its training and testing, which usually allows the optimal representation of a problem [1]. Statistical models are developed for each 1-12-h inputs→output time-shift of the spatial data observations to predict wind power at particular hours [7]. The D-PNN intra-day predictions are more accurate than those based on adapted middle-term NWP forecasts or standard statistical approaches using only a few input variables in simple AI or regression models (Sect. 5) [9].

2 Intra-day Multi-step Wind Power Prediction

The proposed multi-step procedure, based on the statistical approach, first pre-estimates the optimal number of recent days whose data are used to elicit the prediction models. The optimal daily training periods are determined initially using assistant test models according to their best approximation of the desired output in the last 6 h. The development of a test model is analogous to that of a prediction model, but its output is continually tested against the reserved latest power measurements; the lowest testing errors indicate the optimal training parameters. The estimated numbers of daily data samples, with an increased inputs→output time-shift, are then used to elicit the regression or AI statistical models. These are applied to the latest morning input data to predict wind power in the trained horizon of 1-12 h ahead. Separate intra-day models are developed to represent the optimal inputs→output data relations for the particular time-shift of each hourly prediction (Fig. 1) [29], as sketched below. A similarity between the training and testing data patterns, characterized by a sort of settled weather over few-day periods, allows developing prediction models applicable to unseen data. Incoming frontal breaks or disturbances result in various different conditions, which are difficult to model using only the latest data [7].
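The way one separate model's training pairs could be assembled for a given horizon is illustrated below; the array names, the 6-h continual-test split and the hourly sampling are assumptions of the sketch, not details prescribed by the paper.

```python
import numpy as np

def make_shifted_samples(obs, power, shift):
    """Pair the spatial observations at hour t with the measured wind power at t + shift.

    obs   : array (T, n_features) of hourly inputs (temperature, humidity, pressure, wind ...)
    power : array (T,) of measured wind power
    shift : prediction horizon in hours (1..12); one model is formed per shift
    """
    X, y = obs[:-shift], power[shift:]
    return X, y

def split_with_continual_test(X, y, test_hours=6):
    """Reserve the latest 6 h for the continual test that controls (stops) the training."""
    return (X[:-test_hours], y[:-test_hours]), (X[-test_hours:], y[-test_hours:])
```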

3 A Polynomial PDE Substitution Using Operational Calculus

D-PNN defines and substitutes for the general linear PDE (3), which can describe unknown complex dynamic systems. It decomposes the n-variable PDE into 2-variable specific sub-PDEs in the PNN nodes. These can be solved using OC to model the unknown node functions u_k, whose sum is the searched n-variable function u (3):

a + bu + \sum_{i=1}^{n} c_i \frac{\partial u}{\partial x_i} + \sum_{i=1}^{n}\sum_{j=1}^{n} d_{ij} \frac{\partial^2 u}{\partial x_i \partial x_j} + \ldots = 0, \qquad u = \sum_{k=1}^{\infty} u_k \qquad (3)

[Figure 1 is a data-flow diagram: input time-series of temperature, relative humidity, pressure, wind speed and direction from the estimated last days (training over 2-x days, with the last 6 h as test) are paired with the desired wind power using a 1-12-h time-shift to train the D-PNN models 1...12; the latest data are then fed to these models to predict wind power 1-12 hours ahead.]
Fig. 1. D-PNN is trained with spatial data observations from the estimated period of the last few days for each inputs→output time-shift 1-12 h (blue, left) to develop PDE models which apply the latest data inputs to predict wind power in the trained time-horizon (red, right)

u(x_1, x_2, ..., x_n) - unknown separable function of n input variables
a, b, c_i, d_ij, ... - weights of terms
u_i - partial functions

Particular 2-variable linear 1st or 2nd order PDEs, formed in the PNN nodes, can be expressed with 8 equality variables (4):

F\left(x_1, x_2, u, \frac{\partial u}{\partial x_1}, \frac{\partial u}{\partial x_2}, \frac{\partial^2 u}{\partial x_1^2}, \frac{\partial^2 u}{\partial x_1 \partial x_2}, \frac{\partial^2 u}{\partial x_2^2}\right) = 0 \qquad (4)

u_k - node partial sum functions of the unknown separable function u

The OC conversion of the specific PDEs (4) is based on the proposition for the L-transforms of a function's nth derivatives, taking the initial and boundary conditions into consideration (5):

L\{f^{(n)}(t)\} = p^n F(p) - \sum_{i=1}^{n} p^{\,n-i} f^{(i-1)}(0^{+}), \qquad L\{f(t)\} = F(p) \qquad (5)

f(t), f'(t), ..., f^(n)(t) - originals, continuous in t
p, t - complex and real variables

This polynomial substitution for the nth derivatives of the function f(t) in an Ordinary Differential Equation (ODE) results in algebraic equations from which the L-transform F(p) of the searched function f(t) can be separated as a pure rational function (6). It is expressed in complex form with the complex number p, so the inverse L-transformation is necessary to obtain the original function f(t) of a real variable t described by the ODE [2]:

F(p) = \frac{P(p)}{Q(p)} = \sum_{k=1}^{n} \frac{P(\alpha_k)}{Q_k(\alpha_k)} \cdot \frac{1}{p - \alpha_k}, \qquad f(t) = \sum_{k=1}^{n} \frac{P(\alpha_k)}{Q_k(\alpha_k)} e^{\alpha_k \cdot t} \qquad (6)

α_k - simple real roots of the multinomial Q(p)
F(p) - L-transform image

Pure rational terms (6), whose polynomial degrees correspond to the specific 2-variable sub-PDEs (4), are produced in the D-PNN node blocks (Fig. 2), using the OC-based conversion (5).


The inverse L-transformation is analogously applied to the corresponding L-images (6) to obtain the originals of the unknown u_k node functions, which are summed in the output model of the separable n-variable function u (3). Each block node calculates its GMDH polynomial (2) output, which is applied to the node inputs of the next layer. Blocks contain 2 vectors of adaptable polynomial parameters a, b to form rational-function neurons, i.e. specific sub-PDE converts (7). One of the inverse L-transformed neurons of a block can be selected for inclusion in the model output sum [7].

[Figure 2 sketches one D-PNN node block: the two input variables feed a GMDH polynomial and several rational terms (neurons, 2nd-order sub-PDE solutions) as well as composite terms (CT); one of them is passed on as the block output.]
Fig. 2. Blocks form derivative neurons – PNN-node sub-PDE solutions

y_i = w_i \cdot \frac{b_0 + b_1 x_1 + b_2\, sig(x_1^2) + b_3 x_2 + b_4\, sig(x_2^2)}{a_0 + a_1 x_1 + a_2 x_2 + a_3 x_1 x_2 + a_4\, sig(x_1^2) + a_5\, sig(x_2^2)} \cdot e^{\varphi} \qquad (7)

φ = arctg(x_1/x_2) - phase representation of the 2 input variables x_1, x_2
a_i, b_i - polynomial parameters
w_i - weights
sig - sigmoidal transformation

Euler's notation of complex variables (8) defines the phase, which can replace the inverse L-transformation e^φ (7) of the converted sub-PDEs expressed in complex form with p (6). The pure rational term corresponds to the radius (amplitude) r (8):

p = x_1 + i \cdot x_2 = \sqrt{x_1^2 + x_2^2} \cdot e^{\,i \cdot \arctan(x_2/x_1)} = r \cdot e^{i\varphi} = r \cdot (\cos\varphi + i\sin\varphi) \qquad (8)

where x_1 and x_2 are the real (Re) and imaginary (Im) parts.
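A literal reading of Eq. (7) with the phase of Eq. (8) can be coded as below; the exact sigmoidal form sig(·) is not specified in the paper, so the logistic function used here is an assumption, as is the layout of the parameter vectors.

```python
import numpy as np

def sig(z):
    """Sigmoidal squashing of the squared inputs (logistic form assumed)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x1, x2, a, b, w):
    """Inverse-L-transformed neuron of one node block, Eq. (7) with the phase of Eq. (8)."""
    num = b[0] + b[1] * x1 + b[2] * sig(x1 ** 2) + b[3] * x2 + b[4] * sig(x2 ** 2)
    den = a[0] + a[1] * x1 + a[2] * x2 + a[3] * x1 * x2 + a[4] * sig(x1 ** 2) + a[5] * sig(x2 ** 2)
    phi = np.arctan2(x1, x2)          # phase term; the text writes arctg(x1/x2)
    return w * (num / den) * np.exp(phi)
```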

4 PDE Decomposition Using Backward Differential Network

Multi-layer PNN form composite polynomials (9). Blocks in nodes of the 2nd and subsequent layers can additionally produce Composite Terms (CT), which are equivalent to the simple neurons in the calculation of the D-PNN sum output. CTs substitute for the sub-PDEs with respect to the input variables of back-connected node blocks of the previous layers (Fig. 3), using the product of their Laplace images according to the partial-derivation rules for composite functions (10):

F(x_1, x_2, \ldots, x_n) = f(z_1, z_2, \ldots, z_m) = f(\varphi_1(X), \varphi_2(X), \ldots, \varphi_m(X)) \qquad (9)

[Figure 3 sketches the multi-layer D-PNN structure: the input variables x_1 ... x_n feed node blocks formed for 2-variable combinations in each layer (B = max. number of blocks in all layers); backward connections to previous layers supply the inverse-L-transformed rational sum terms (node sub-PDE solutions), and the network output is the sum Y = y_1 + y_2 + ... of the selected node solutions y_i.]
Fig. 3. D-PNN selects from possible 2-variable combination node blocks to produce applicable sum PDE components (neurons)

\frac{\partial F}{\partial x_k} = \sum_{i=1}^{m} \frac{\partial f(z_1, z_2, \ldots, z_m)}{\partial z_i} \cdot \frac{\partial \varphi_i(X)}{\partial x_k}, \qquad k = 1, \ldots, n \qquad (10)

The 3rd-layer blocks, for example, can select from additional CTs using products of the sub-PDE converts, i.e. the neuron L-images, of the 2 and 4 back-connected blocks in the previous 2nd and 1st layers (11). The number of possible CT combinations in the blocks doubles with each back-joined preceding layer (Fig. 3):

y_{31} = w_{31} \cdot \frac{b_0 + b_1 x_{21} + b_2 x_{21}^2 + b_3 x_{22} + b_4 x_{22}^2}{a_0 + a_1 x_{21} + a_2 x_{22} + a_3 x_{21} x_{22} + a_4 x_{21}^2 + a_5 x_{22}^2} \cdot \frac{b_0 + b_1 x_{12} + b_2 x_{12}^2}{a_0 + a_1 x_{11} + a_2 x_{12} + a_3 x_{11} x_{12} + a_4 x_{11}^2 + a_5 x_{12}^2} \cdot \frac{P_{12}(x_1, x_2)}{Q_{12}(x_1, x_2)} \cdot e^{\varphi_{31}} \qquad (11)

Q_ij, P_ij - GMDH output polynomial and reduced polynomial of nth and (n-1)th degree
y_kp - pth Composite Term (CT) output
φ_21 = arctg(x_11/x_13), φ_31 = arctg(x_21/x_22)
c_kl - complex representation of the lth block inputs x_i, x_j in the kth layer

The CTs are the products of the sub-PDE solution of the external function (i.e. the inverse-L-transformed image) in the starting node block and the selected neurons (i.e. the internal function images) of the back-connected blocks in the previous layers (11). The D-PNN output Y is the arithmetic mean of the outputs of the selected active neurons + CTs in the node blocks, which simplifies and speeds up the parameter adaptation (12):

Y = \frac{1}{k} \sum_{i=1}^{k} y_i \qquad (12)


k - the number of active neurons + CTs (node PDE solutions)

Multi-objective algorithms can perform the formation and "back-production" of neurons and CTs in the tree-like PNN structure nodes (Fig. 3). D-PNN selects the best 2-input combinations in each layer node (analogous to GMDH) to produce applicable sum PDE model components. Their polynomial parameters and weights are pre-optimized using the Gradient method [10]. This iterative algorithm skips from the actual to the next (or a random) node block, one by one, to select and adapt one of its neurons or CTs. The D-PNN training error is minimized in consideration of a continual test using the External Complement of GMDH. A convergent combination of selected node sub-PDE solutions can form the optimal sum model:

RMSE = \sqrt{\frac{\sum_{i=1}^{M} \left(Y_i - Y_i^{d}\right)^2}{M}} \rightarrow \min \qquad (13)

Y_i - produced and Y_i^d - desired D-PNN output for the ith training vector of the M data samples

The Root Mean Squared Error (RMSE) is calculated in each iteration step of the training and testing, to be gradually minimized (13).
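Equations (12) and (13) reduce to two one-line helpers; the sketch below is only a direct transcription of those formulas, not of the adaptation algorithm itself.

```python
import numpy as np

def dpnn_output(active_terms):
    """Arithmetic mean of the k selected neurons + CTs, Eq. (12)."""
    return np.mean(active_terms, axis=0)

def rmse(y_produced, y_desired):
    """Root mean squared error of Eq. (13), minimised over the M training/testing samples."""
    y_produced, y_desired = np.asarray(y_produced), np.asarray(y_desired)
    return np.sqrt(np.mean((y_produced - y_desired) ** 2))
```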

5 Prediction Experiments Using the Estimated Daily Data Periods

D-PNN applied time-series of 20 data inputs to predict wind power 1-12 h ahead for the central wind farm at Drahany, Czech Republic. The last 6 h of data were reserved for the continual test, i.e. these samples were not used to adapt the model parameters, but only to calculate the testing error and control the training. Additional spatial historical data (wind speed and direction) of 3 surrounding wind farms and meteorological observations (average temperature, relative humidity, sea level pressure, wind speed and azimuth) from 2 nearby airports extended the input vector¹ (Fig. 4).

[Figure 4 is a map of the observation sites: airports (temperature, humidity, pressure, wind speed and direction), wind farms (wind power, speed and direction) and the forecasted locality.]
Fig. 4. Spatial location denotation of the observation data

¹ Weather Underground historical data series: www.wunderground.com/history/airport/LKTB/2016/7/22/DailyHistory.html.


Standard regression SVM with the dot kernel and the GMDH Shell for Data Science, a professional self-optimizing forecasting software, were used to compare the performance of the models. Their training and testing were analogous to the D-PNN multi-step procedure (Fig. 1), using data from the estimated daily periods. GMDH searches for the most valuable 2-input combinations in PNN nodes [1], analogously to D-PNN. This feature selection improves the accuracy of the prediction models (Fig. 5 and Fig. 6).

Fig. 5. Drahany 15.5.2011 - RMSE: D-PNN=314.1, SVM=525.4, GMDH=495.0, Smooth=1021.0, Regress=897.9

Fig. 6. Drahany 17.5.2011 - RMSE: D-PNN=140.3, SVM=358.9, GMDH=284.6, Smooth=561.6, Regress=467.7

The performance of the presented D-PNN and standard AI models was compared with the regression of 2 conventional methods, Exponential Smoothing (ES) and Linear Regression (LR), in 2-week 12-h intra-day predictions from May 12 to 25, 2011 (Fig. 7 and Fig. 8). ES and LR process only the historical time-series of wind power and apply their previous time-step predictions as input data in the next time steps.


Fig. 7. 2-week daily 12-h wind power prediction average MAE: D-PNN=158.7, SVM=257.7, GMDH=204.1, Smooth=268.2, Regress=312.7

The AI models can predict the real power course in most cases. Their formation is problematic if training data patterns from windy days do not correspond to the latest changed conditions in the predicted calm days with an intermittent or stable low power output (Fig. 6). The wind speed values vary below or around the power generation limit (about 400 kW), which causes difficulties and failures in the predictions. NWP data can be analyzed to detect such days in order to extend the training periods or extract days with similar data patterns. The AI predictions in changeable weather (Fig. 5) usually succeed, as the training data better characterize the wind variations in the predicted hours. The SVM output can alternate in subsequent time steps (Fig. 5 and Fig. 6), which increases prediction errors.

Fig. 8. 2-week daily 12-h wind power prediction average adjusted MAPE: D-PNN=71.9, SVM=101.2, GMDH=84.8, Smooth=87.2, Regress=93.4


SVM needs a more precise estimation of the optimal training periods than D-PNN or GMDH. These selective methods can apply different numbers of the last data samples and still produce models with similar predictions. The ES and LR predictions mostly represent a simple course or linear trend in the power progress; ES can only rarely predict a rounded course of the power series. Both statistical methods can succeed on calm days with gentle wind speed alterations (Fig. 6) which follow some windy periods. This is caused by the simplicity of the models and their uncomplicated calculation, resulting in a flat output. ES and LR also require estimations of the optimal periods whose data samples they use to calculate their parameters [9].

6 Conclusions

D-PNN is a novel neuro-computing method combining self-organizing PNN structures with adapted mathematical techniques to decompose and solve n-variable PDEs. Its selective sum PDE solutions can model the local weather dynamics. D-PNN can predict real wind power alterations on windy days; the predictions are less valuable if calm wind days follow a break change in the weather. The compared AI and conventional regression techniques are not able to model the complexity of local weather patterns on most of the predicted days. D-PNN can analogously predict the intra-day production of photovoltaic (PV) energy using additional input data such as the clear sky index, cloud cover or sky conditions [8]. Statistical models need to apply additional NWP data in the middle-term 24-48-h prediction horizon². The presented intra-day wind power predictions are more precise than AI-converted wind speed forecasts of meso-scale NWP systems, which cannot fully consider local specifics.

Acknowledgements. This work was supported by the European Regional Development Fund (ERDF) "A Research Platform focused on Industry 4.0 and Robotics in Ostrava", under Grant No. CZ.02.1.01/0.0/0.0/17 049/0008425.

² Weather Underground tabular forecasts: www.wunderground.com/cgi-bin/findweather/getForecast?query=LKMT.

References

1. Anastasakis, L., Mort, N.: The development of self-organization techniques in modelling: a review of the group method of data handling (GMDH). The University of Sheffield (2001)
2. Berg, L.: Introduction to the Operational Calculus. North-Holland Series on Applied Mathematics and Mechanics, vol. 2. North-Holland, New York (1967)
3. Liu, H., Tian, H.-Q., Chen, C., Fei Li, Y.: A hybrid statistical method to predict wind speed and wind power. Renewable Energy 35, 1857-1861 (2010)
4. Monteiro, C., Bessa, R., Miranda, V., Botterud, A., Wang, J., Conzelmann, G.: Wind power forecasting: state of the art 2009. Report No.: ANL/DIS-10-1. Argonne National Laboratory, Argonne, Illinois (2009)
5. Wang, J., Song, Y., Liu, F., Hou, R.: Analysis and application of forecasting models in wind power integration: a review of multi-step-ahead wind speed forecasting models. Renew. Sustain. Energy Rev. 60, 960-981 (2016)


6. Yan, J., Liu, Y., Han, S., Wang, Y., Feng, S.: Reviews on uncertainty analysis of wind power forecasting. Renew. Sustain. Energy Rev. 52, 1322-1330 (2015)
7. Zjavka, L.: Wind speed forecast correction models using polynomial neural networks. Renewable Energy 83, 998-1006 (2015)
8. Zjavka, L., Krömer, P., Mišák, S., Snášel, V.: Modeling the photovoltaic output power using the differential polynomial network and evolutional fuzzy rules. Math. Model. Anal. 22, 78-94 (2017)
9. Zjavka, L., Mišák, S.: Direct wind power forecasting using a polynomial decomposition of the general differential equation. IEEE Trans. Sustain. Energy 9, 1529-1539 (2018)
10. Zjavka, L., Snášel, V.: Constructing ordinary sum differential equations using polynomial networks. Inf. Sci. 281, 462-477 (2014)

Heterogeneous Engineering in Intelligent Logistics

Yury Iskanderov1(B) and Mikhail Pautov2

1 The St. Petersburg Institute for Informatics and Automation of RAS, 39, 14-th Line, St. Petersburg, Russia
[email protected]
2 Foscote Group, 23A Spetson Street, 102A, 4000 Mesa Geitonia, Limassol, Cyprus
[email protected]

Abstract. This paper introduces the application of some core principles of the actor-network theory (ANT) and ANT-based heterogeneous engineering to intelligent logistics, presented as a class of concepts including autonomous logistics, product intelligence, the Physical Internet, intelligent transportation systems and self-organizing logistics. It is demonstrated that the discussed method has a high potential in applied studies of intelligent logistics systems and, more generally, socio-technological systems, through its dialogue and potential integration with multi-agent systems studies and other relevant methods of agent- and network-oriented research and engineering. The paper aims at bringing the attention of the broad research community to the actor-network paradigm and its applications.

Keywords: Actor-network theory · Heterogeneous engineering · Intelligent logistics

1 Introduction

This paper is based on the actor-network paradigm introduced in our recently published paper [28] as a descriptor for self-organizing global logistics networks. Here we use the same ANT-driven approach to reflect on intelligent logistics architectures. Given the relative novelty of the actor-network discourse in the context of multi-agent systems and hybrid intelligent systems studies, we find it practical to provide a brief introduction to the fundamentals of the actor-network theory and heterogeneous engineering, given in more detail in [28]. The concept of the actor-network first appeared at the end of the 20th century in texts on science and technology studies (STS) authored by Latour, Callon, Law and some other creators of the method. ANT evolved from the idea that the actions of any human or nonhuman actor are mediated by the actions of a set of other human or nonhuman actors. Once these heterogeneous actors start the continuous process of forming an actor-network AN, their agencies (abilities to act) lose their original distinction and all actors start acting as equal elements in their interplay within AN.


Agency here should be understood as a mere effect of the interaction of heterogeneous actors [10]. The heterogeneity of actors was traditionally considered in the framework of actor-network theory as (a) their belonging to one of two opposite worlds, human or nonhuman (the latter including, with no discrimination or prejudice, natural, technological, textual, ideal and other elements), and (b) their ability to form stable networks of socio-technological hybrids (or quasi-objects). With the recent, sudden advent of artificial intelligence, the world of heterogeneous actors acquires a third pole and a third dimension: nonhuman intelligent objects-subjects in their interactions with humans and non-intelligent nonhuman actors (hence the emerging demand for integration of ANT-based approaches with hybrid intelligent systems and multi-agent systems research, knowledge engineering, ergonomics and other related fields). We foresee further evolution of the actor-network paradigm from a descriptive theory, through its formalization and integration with other relevant network-oriented and agent-based methods, towards its eventual conversion into a full-fledged applied tool for modelling and simulation of socio-technological systems. The ANT-based method of heterogeneous engineering (HE) was defined by the author who coined the term as a "function of the interaction of heterogeneous elements as these are shaped and assimilated into a network" [11]. In these terms, socio-technological systems (of which logistics systems are a significant subclass) are viewed as assemblages of heterogeneous elements associated by means of networks [11]. The socio-technological world is an effect of a combination of these heterogeneous elements in their permanent interaction [10]. In heterogeneous engineering the social, the technical, the conceptual and the textual elements are fitted together and further converted (translated) into a set of equally heterogeneous products [9]. The following section gives a brief overview of the basic principles of the actor-network theory and their relation to multi-agent systems, represented, inter alia, by intelligent logistics systems.

2 ANT and Multi-agent Systems

In [28] we give a definition and description of some core ANT concepts (generalized symmetry, actor-network dualism and revised nebular oppositions). Other ANT fundamentals are connected with the central process in actor-networks known as "translation".

Translation. The operation of translation in actor-networks can be defined as a delegation of the powers of representation from a set of actors (actor-networks) to any particular [black-boxed] actor or actor-network in a particular program of actions: A = T(A1, ..., An), where T is the translation of actors A1, ..., An to A [28]. "A translates B" means A defines B, regardless of whether B is human or non-human, a collectivity or an individual [25]. The operation of translation equalizes actor-network actions in various space-time areas and various meta-levels of presentation (e.g. when the behavior of an actor-network is translated through texts: graphs, diagrams, algorithms, formulae, etc.). Any association is possible if it is encoded as heterogeneous connections established through the operation of translation [1].


Obligatory Passage Points (OPP). An OPP is the subject and driver of the translation (i.e. the Translator): an entity intending to speak on behalf of the other actors of the actor-network, translating in a way that "fits together their interests and behaviours" [29].

Prescription. Prescription (or inscription) determines the "translatability" of actors. The prescription index P(A)∈[0,1] of actor A is a fuzzy estimate of the possible actions of actor A from the viewpoint of the other actors in actor-network AN. More formally, the more complete and determined the knowledge the actors AN\A of actor-network AN have of actor A, the higher the value of the index P_AN\A(A). Less prescribed actors are more easily translatable in the interest of others than more rigidly prescribed ones [14]: for any actors A1 and A2, P(A1) < P(A2) implies τ(A1) > τ(A2), where τ(Ai)∈[0,1] is a quantitative metric measuring the ability of actor Ai to be translated [28]. The process of translation can be presented as a tuple of consequent stages T = ⟨P, I, E, M⟩, where P is problematisation, I is interessement, E is enrolement and M is mobilization. These are discussed in detail in [28], along with the corresponding phenomena in MAS; here we give a brief overview. These translation stages are referred to in the following sections where we discuss intelligent logistics architectures.

Problematisation: Problematisation is the first step of translation, where one or more key actors try to define the exact nature of the problem [26] as well as the roles of other actors that could fit with the proposed solution [3]. Problematisation embraces what MAS theory defines as "commitments" [4].

Interessement: Synonymous with interposition, interessement can be defined as the way allies are locked in place. It corresponds to the processes that try to confirm the identities and roles defined at the first (problematisation) stage [3]. This stage tries to break all competitive liaisons and build a system of alliances within the actor-network; at the interessement stage the socio-technological communities are formed and fixed. This stage of the translation process embraces the "conventions" discussed in MAS studies [4], but interessement goes further: in terms of MAS it is driven by the actor's intention to weaken or break other actors' commitments/conventions and thus create new conventions with them to achieve a particular goal.

Enrolement: The core function of enrolement is the determination and coordination of the roles of actors, aiming at the creation of a steady network of alliances [3]. This stage of translation is also interconnected with MAS theory, where one of the forms of actor obligations is the role accepted by or assigned to an actor [4].

Mobilization: Through the step-by-step appointment of representatives and the establishment of a series of equivalences, heterogeneous actors are moved and then "reassembled" at a new place/time. This stage completes translation, and certain actors start acting as representatives (delegates) of other actors [2]. The mobilization concept suggests new semantics for the translation-like effects discussed in the framework of MAS theory, where the collection of actors needed to accomplish a task frequently includes humans who have delegated tasks to nonhuman and/or human actors to do some work, and hence it is essential that the actor communication functions be common across the language(s) of communication between the heterogeneous actors in the actor-network [27].


The new vision of agency as a mere effect of the interaction of heterogeneous actors, regardless of their nature and inherent intelligence (or absence thereof), introduced in ANT (the generalized symmetry principle), may thus help reconsider and enrich the coordination scenarios discussed in MAS theory [28].
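Purely as an illustration of the prescription-translatability relation above, the toy sketch below models actors with a fuzzy prescription index; the concrete mapping τ = 1 − P is an assumption made for the example, not part of the theory, which only states the monotone relation between P and τ.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Toy ANT actor with a fuzzy prescription index P(A) in [0, 1]."""
    name: str
    prescription: float   # P(A): how completely the other actors' knowledge prescribes A

    @property
    def translatability(self) -> float:
        # Illustrative choice consistent with P(A1) < P(A2) => tau(A1) > tau(A2)
        return 1.0 - self.prescription

def translation_order(actors):
    """Rank actors from the most to the least easily translatable."""
    return sorted(actors, key=lambda a: a.translatability, reverse=True)
```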

3 Heterogeneous Engineering as an Assembly Tool in Intelligent Logistics

3.1 Intelligent Logistics Systems as Networks of Heterogeneous Actors

The terms "intelligent logistics" or "smart logistics" refer to various logistics operations (e.g. inventory, transport or order management) programmed or controlled in a more intelligent way than in the traditional solutions. Intelligent logistics encompasses a variety of methods and applications (e.g. online product or transport tracing, problem identification, automatic decision making and execution) [5]. Nowadays a number of novel approaches are emerging to make logistics systems more intelligent [5], e.g. autonomous logistics [6], product intelligence [7], intelligent transportation systems, the Physical Internet [8] and self-organizing logistics [12]. Among those, intelligent transportation systems, which focus on transport and traffic management using novel information, automation and communication technologies [15], have gained special interest, since they expand the domain of the logistics of products to global material distribution [5] and also include the mobility of humans in their scope. For the purposes of this paper we adopt the heterogeneous engineering concept exposed in [11], which accentuates: (1) the heterogeneity of elements indispensable for intelligent logistics assemblages; (2) the complexity and contingency of the links between these elements; (3) decision-making methods in conflict situations. Although intelligent logistics systems are conceived by humans and start as quasi-societies, the social does not play a privileged or most influential role in the system-building process. The nonhuman factors (natural, technological) may resist any attempts of the system builders to manipulate them and thus inform the understanding of the social structure of the systems being built [11]. In logistics systems these factors are accountable for an increasing number of challenges calling for changes to the existing operating practices [15], e.g. an uncertain operating environment where congested transportation routes or transport network nodes cause uncertainties in collection and delivery times [15]. A combination of the intelligent logistics concepts provides a new vision of supply chains as assembly lines where the output of the final product is understood as its delivery to the final consumer [5]. Undoubtedly, such a complex and heterogeneous assembly line is a technological object. In ANT, technology is viewed as a family of associating and channelizing methods applied to objects and forces, both human and nonhuman [11]. Technology thus understood is used to achieve the basic objective of heterogeneous engineering: to build a relatively stable system of interrelated parts and fragments with emergent properties in an unfavorable (adverse or indifferent) environment [11]. There is always a risk that the associated elements constituting a technological system may dissociate if challenged by a competing adverse system [11].


This problem is paradigmatic in system assembly: to network the social, technological, natural or mixed elements so that they stay steady in their places without being dissociated by the competing heterogeneous actors acting in the same environment. This example also demonstrates the applicability of the generalized symmetry principle of ANT to explain the system-building process. An intelligent logistics system associates virtually everything, from humans to weather conditions. The system builders strive to create a network of reciprocally supporting heterogeneous elements, to dissociate the adversely acting forces and, through transforming these forces, associate them with their project. But every particular form of association/dissociation depends on the balance of forces: some forces are invincible, while others can be manipulated or even totally controlled [11]. The network structure reflects the nature and power of all forces, those available for use and those counteracting in the network. Saying that an artefact is well adapted to its environment should mean that it is part of a network able to assimilate or resist potentially adverse external forces (i.e. a quasi-stable network) [11]. The author of [11] gives a vivid historical example of an ambitious logistics project of the XV-XVI centuries: the Portuguese maritime expansion aiming to control the India-Europe strategic trade lane. The heterogeneous engineering method was used to demonstrate how human (ship crews, engineers), technological (navigation devices, novel vessel designs) and natural (streams, winds, capes, stars) actors were step by step mobilized to create a stable system and thus achieve the project objective: total control over the India-Europe trade lane by Portugal [11]. In the actor-network concept neither society, nor technologies, nor nature are able to play any role before they get in touch with the system builder (see the definitions of the "interessement" and "enrolement" stages of the translation process above). Technologies can associate and dissociate; that is why, in the approach towards the historical Portuguese maritime expansion, the formerly adverse capes and streams were "interested", passed through "enrolement" and were eventually "mobilized", associated with the network to act along with the ships and sailors to achieve stability of the whole system. As soon as we adopt the actor-network principle of generalized symmetry, all those actors become intrinsic and non-excludable elements of the actor-network. There are thus two closely related methodological principles for the study of intelligent logistics systems as heterogeneous networks. The first (generalized symmetry) states that the same type of analysis should apply to all components in a system, regardless of whether these components are human or not, intelligent or not. The second (reciprocal definition) states that actors represent "the entities that exert detectable influence on others" [11].

3.2 Intelligent Logistics Architectures

This subsection provides a summary of the intelligent logistics architectures [5] viewed through the lens of the heterogeneous engineering approach.

Autonomous logistics: This architecture is based on the processes of decentralized decision making and coordination of actions in heterarchical structures. Autonomous control in logistics systems requires the ability of actors to process information and to make and execute decisions independently [6]. Actors in an autonomous logistics system can use peer-to-peer (P2P) technologies to support their interaction. These technologies are totally decentralized and do not require any centralized/hierarchical structure to control interactions.
A remarkable example of a temporary alliance between IBM and Maersk, established to tackle shipping-document processing inefficiencies, is given in [13].


The alliance resulted in a Blockchain solution allowing for the assemblage of a vast global network of shippers, carriers, ports and customs, where every document or approval was shadowed on the Blockchain [13]. In the actor-network approach, the "problematisation" and "interessement" phases of the translation macro-process explain the formation of P2P alliances within autonomous logistics systems.

Physical Internet: The Physical Internet (PI, π) was declared an "open global logistics system based on physical, digital and operational interconnectivity of its elements through encapsulation, interfaces and protocols" [17]. This approach is aligned with the "material turn" of the actor-network theory inasmuch as it exploits the digital Internet metaphor to develop the Physical Internet concept of an open, global, efficient and sustainable logistics web defined as a set of interconnected networks of physical, digital, human, organizational and social actors [8, 17]. To create a stable functional system the Physical Internet builders must associate the following five networks with a myriad of heterogeneous components which cannot independently form, or even resist, an efficient and sustainable logistics web (only through their well-designed relationships and interconnections can the system as a whole achieve its purpose completely) [17]: (1) mobility network; (2) distribution network; (3) realization network; (4) supply network; (5) service network. A key protocol set was suggested to monitor the performance of the π-actors based on measurements of critical factors such as speed, service level, reliability, safety and security [17].

Product intelligence: An intelligent product as an actor-network associates information and rules governing the way it is intended to be prepared, stored, handled or transported, which enables the product to support these operations [7, 18]. The Physical Internet architecture relies heavily on informational and communicational encapsulation (e.g. smart π-containers embedding the intelligent products). In the Internet of Things operational ecosystem, π-containers can communicate with their embedded intelligent physical objects [17]. Intelligent products act as the system builders, involving other heterogeneous actors in their actor-network and aiming to create a sustainable system allowing for their smooth passage through a supply chain.

Intelligent transportation systems: Intelligent Transport Systems (ITS) are created to provide innovative multimodal transportation and traffic control solutions for the coordinated, efficient and safe use of transport networks [15]. In his signature opus [19], Latour reflects on the project named Aramis, an early experimental personal rapid transit (PRT) system developed in France for deployment in the Paris area, and the reasons behind its failure. At the time of its conception Aramis was a unique PRT system in which the vehicles were supposed to be electronically trained in platoons and controlled to a separation of about 30 cm using ultrasonic and optical sensing [20]. However, when the vehicle capacity was increased to the economically reasonable number of 10 passengers, the project turned out to be ill suited to network operation and was eventually abandoned. In [19] Latour demonstrates the trajectory of the evolution of Aramis from a textual project towards a technological object [22].
In the beginning there was no distinction between the project and the object: both circulated between the offices in the form of texts, documents, plans, reports, models and irregular synopses. It was the semiotic realm of signs, language, texts [19]. In the process of translation the object tends to detach itself from the project and start its own existence in the form of technology. When translation fails, a


potential object contained in the project does not change the textual form of its existence [23]. Translation of a project into an object (or a set of objects) is a potentially reversible function. The life cycle of an object may bring it back to the textual form of the project. Aramis did not materialize itself as an object. Even if it could exist as a transport system, it would be an institute or a corporate body rather than an object [21]. However, the texts the project was based on gave birth to new ambitious projects with a higher potential of translation into material objects. Self-organizing logistics: A logistics system is self-organizing when it can function without significant intervention of managers, engineers, or software control [12]. In the actor-network approach towards the formation and self-organization of logistics systems, the critical role is played by the fundamental refusal of ANT to view actor-networks as preexisting (ready-made) structures [28]. A logistics system as an organization may be seen as a set of cooperating strategies aiming to achieve the network durability, spatial mobility, representation and calculability characteristic of most formal organizations. The structure of a self-organizing logistics system reflects not only the common will of its actors to find a workable solution, but also the balance between the forces they can mobilize and the forces mobilized by their opponents/competitors [11].

4 Conclusions The ANT-based method of heterogeneous engineering was introduced as a tool to assemble intelligent logistics architectures. The paper gives a necessary insight into the fundamentals of the actor-network theory underlying the discussed method and demonstrates the emerging demand for integration of the actor-network paradigm with intelligent systems research, multi-agent systems studies, knowledge engineering, ergonomics and other related fields of research and applications. We foresee further evolution of the actor-network approach from the descriptive theory created (and further revised) by Latour, Callon, Law and other ANT creators and protagonists, through its formalization and integration with other relevant methods of agent-based and network-oriented research, towards its eventual conversion into a full-fledged applied tool for modelling and simulation of socio-technological systems. To start paving this way, we have demonstrated that the actor-network theory provides new semantics for some core concepts of the multi-agent systems theory [16]. Also, in ongoing (yet unpublished) research we suggest using the elements of applied semiotics and logics of action (TI, SAL) to formalize basic concepts of the actor-network theory. The paper aims at bringing the attention of the broader research community to the actor-network paradigm and its potential in MAS and HIS studies.

References 1. Latour, B.: On actor-network theory: a few clarifications plus more than a few complications. Soziale Welt 47, 369–381 (1996) 2. Callon, M.: Some elements of a sociology of translation: domestication of the scallops and the fishermen of St. Brieuc Bay. In: Law, J. (ed.) Power, Action and Belief, pp. 196–223. Routledge, London (1986)


3. Silic, M.: Using ANT to understand dark side of computing – computer underground impact on eSecurity in the dual use context. University of St Gallen. Institute of Information Management, January 2015 4. Gorodetskii, V.I., Karsayev, O.V., Samoylov, V.V., Serebryakov, S.V.: Applied multiagent systems of group control. Sci. Tech. Inf. Process. 37(5), 301–317 (2010) 5. McFarlane, D., Giannikas, V., Lu, W.: Intelligent logistics: involving the customer. Comput. Ind (2016). https://doi.org/10.1016/j.compind.2015.10.002 6. Hülsmann, M., Windt, K.: Understanding Autonomous Cooperation and Control in Logistics: The Impact of Autonomy on Management, Information, Communication and Material Flow. Springer (2007) 7. McFarlane, D., Giannikas, V., Wong, A.C., Harrison, M.: Product intelligence in industrial control: theory and practice. Ann. Rev. Control 37(1), 69–88 (2013) 8. Montreuil, B.: Toward a physical internet: meeting the global logistics sustainability grand challenge. Logistics Res. 3(2–3), 71–87 (2011) 9. Law, J.: Notes on the theory of the actor-network: ordering strategy and heterogeneity. Syst. Pract. 5, 379–393 (1992) 10. Erofeeva, M.: On the possibility of actor-network theory of action. Sociol. Power 27(4), 51–71 (2015) 11. Law, J.: Technology and heterogeneous engineering: the case of Portuguese expansion. In: Bijker, W.E., et al. (eds.) The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, pp. 105–127. MIT Press, Cambridge (2012) 12. Bartholdi III, J.J., Eisenstein, D.D., Lim, Y.F.: Self-organizing logistics systems. Ann. Rev. Control 34(1), 111–117 (2010) 13. Hackius, N., Petersen, M.: Blockchain in logistics and supply chain: trick or treat? In: Proceedings of the Hamburg International Conference of Logistics (HICL), vol. 23 (2017) 14. Cordella, A., Shaikh, M.: Actor-network theory and after: what’s new for IS research. In: European Conference on Information Systems, 19 June 2003–21 June 2003 (2003) 15. Sanchez-Rodrigues, V., Potter, A., Naim, M.M.: Evaluating the causes of uncertainty in logistics operations. Int. J. Logistics Manag. 21(1), 45–64 (2010) 16. Iskanderov, Y., Pautov, M.: Agents and multi-agent systems as actor-networks. In: Rocha, A., Steels, L., van den Herik, J. (eds.) Proceedings of the 12th International Conference on Agents and Artificial Intelligence ICAART 2020, 22–24 February 2020, vol.1, pp. 179–184 (2020) 17. Montreuil, B., Meller, R.D., Ballot, E.: Physical internet foundations. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi Agent Manufacturing and Robotics. Studies in Computational Intelligence, vol. 472, pp. 151–166. Springer, Heidelberg (2013) 18. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: a survey. Comput. Ind. 60(3), 137–148 (2009) 19. Latour, B.: Aramis, or The Love of Technology. Harvard University Press, Cambridge (1996) 20. Anderson, J.E.: Some lessons from the history of personal rapid transit (PRT). http://staff. washington.edu/jbs/itrans/history.htm 21. Hansen, M.: Embodying Technesis. University of Michigan Press, Ann Arbor (2000) 22. Erofeeva, M.: Actor-network theory: an object-oriented sociology without objects? Logos 27(3), 83–112 (2017). ISSN 0869-5377 23. Latour, B.: On technical mediation. Common Knowl. 3(2), 29–64 (1994) 24. Iskanderov, Y.; Pautov, M.: Security of information processes in supply chains. In: Abraham, A., Kovalev, S., Tarassov, V., Snasel, V., Sukhanov, A. (eds.) 
Proceedings of the Third International Scientific Conference “Intelligent Information Technologies for Industry” (IITI 2018). Advances in Intelligent Systems and Computing, vol. 875. Springer, Cham (2019)


25. Callon, M.: Techno-economic networks and irreversibility. Sociol. Rev. 38(1), 132–161 (1990) 26. Fischer, E.: Socio-technical innovations in urban logistics: new attempts for a diffusion strategy. In: 16th Conference on Reliability and Statistics in Transportation and Communication, RelStat 2016, 19–22 October 2016, Riga, Latvia. Procedia Engineering 178, 534–542 (2017) 27. Cohen, P., Levesque, H.: Communicative actions for artificial agents. In: Proceedings of ICMAS 1995, pp. 65–72 (1995) 28. Iskanderov, Y., Pautov, M.: Actor-network approach to self-organisation in global logistics networks. In: Kotenko, I., Badica, C., Desnitsky, V., El Baz, D., Ivanovic, M. (eds.) Intelligent Distributed Computing XIII. IDC 2019. Studies in Computational Intelligence, vol. 868. Springer, Cham (2020) 29. Gonçalves, F.A., Figueiredo, J.: How to recognize an immutable mobile when you find one: translations on innovation and design. Int. J. Actor-Netw. Theory Technol. Innov. 2, 39–53 (2012)

Extracting Unknown Repeated Pattern in Tiled Images Prasanga Neupane(B) , Archana Tuladhar, Shreeniwas Sharma, and Ravi Tamang Alternative Technology, Kathmandu, Nepal [email protected]

Abstract. Humans can easily recognize specific patterns and their repetition in an image, but it is very difficult for machines to do so. However, machines can create a plethora of repeated patterns and tiled images. This research paper proposes some methods for recognizing an unknown repeated motif in tiled images in computer-generated raster graphics. Three approaches, autocorrelation of an image, comparison with a template strip and cyclic bitwise XOR-ing of the image, are compared in this paper. Finally, the third algorithm is proposed, as it detects locally repeating unknown motifs in a tiled image, outperforms the former two methods in robustness and provides a reliable result. Unlike other traditional approaches, the proposed method does not require any feature extraction or clustering of features or patches, and it is unsupervised. Keywords: Repeated pattern · Tiled image · Computer generated · Autocorrelation · Template strip · Cyclic bitwise XOR

1 Introduction Images with repeating motifs can be found abundantly. Usually, in computer-generated graphics, repeating a small structure can create a wide variety of images. For instance, most traditional designs use basic geometric shapes like circles, squares, polygons and repeat them in different patterns to get complex intricate patterns. These kinds of images find use mainly in textiles, fabrics, wallpapers, architectural designs, etc. In today’s world, machines are challenging human performance in many aspects but they lag when it comes to intuition. Similarly, it is an innate behavior of humans to recognize and analyze visual repeating patterns but it is a very difficult task for machines. In this research, we try to solve the problem of automatically detecting a repeated pattern in images without any human intervention. For this research, we focus on computer-generated graphics and tiled images rather than photographic images. We use the term “tiled image” for those images, which are created by the repetition of some unknown motifs. This paper proposes a new algorithm to detect a repeated pattern in tiled images and compares it with two other approaches. The work [1] of wallpaper groups classifies different possible symmetries of motifs into 17 groups. This work has been widely adopted in the field of detecting repeated


motifs in fabrics and textiles. The research [2–6] uses signal processing approaches like autocorrelation and the Fourier transform to calculate the local maxima (discussed in Sect. 3.1) in order to locate the repeating motif. But these approaches require a generalization of those peaks into one of the 17 symmetry groups defined in [1], which becomes a hassle for complex patterns. The paper [11] proposes a method to detect the possible repeated pattern using a Convolutional Neural Network (CNN) as a filter to detect the peak points and then a voting algorithm to get the displacement vector. Although the results of this method look promising on photographic real-world images, the problem domain we are dealing with (i.e. computer-generated tiled images) does not require such a computationally demanding CNN architecture; moreover, the proposed algorithm requires a pre-labeled dataset and is also highly sensitive to the flat prior and vector precision parameters, which require manual adjustment. The paper [12] proposes a method to find the periodicity of a repeated element in a textured image by computing the energy in HSV color space. However, it fails to detect whether the image is composed of repeating elements or not, and its result on a grayscale image is not satisfactory. Similarly, the method proposed in [19] to detect the repeated unit requires color quantization in HSI color space and, further, manual selection of a template image.

Fig. 1. Overview of proposed method

For detection of an unknown repeating motif in an image, we propose a new method called Cyclic XOR-ing of image. It is based on two facts:


1. When an image made up of a repeating motif is slid (Fig. 2) by a pixel distance equal to the size of the repeating element, the slid image is exactly similar to the original image. 2. Two identical values applied to the inputs of an XOR gate produce a LOW output. We exploit these facts to realize our system as shown in Fig. 1. We feed the original image to one input of the XOR gate; to the other input, a copy of the original image slid by one pixel at a time is fed until the image has been slid completely along its width and height. We store a key-value pair of the average value of the XOR gate's output image for each slid position along the two directions: width and height. Later, the two key-value pairs of the horizontal and vertical directions are analyzed to get the horizontal and vertical distance vectors of the repeating element. Our contribution: • Decision of whether the image is composed of a repeating motif or not • Completely unsupervised approach for detecting the repeated motif in a tiled image in real time • Independent of the number of colors, with insights into horizontal or vertical repeated patterns The organization of the rest of the paper is as follows: Sect. 2 discusses related past works on detecting repeated patterns and our contribution. Section 3 presents a detailed comparative study of different approaches and the proposed algorithm for locating the repeating motif of a tiled image. Section 4 shows the results of our algorithm on varieties of patterns and in Sect. 5 we conclude.
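To make the two facts above concrete, the following minimal sketch (illustrative only, not from the paper; the synthetic checkerboard, the 40-pixel motif size and the use of NumPy are assumptions) checks them: cyclically sliding a tiled image by the motif period reproduces the image exactly, so the bitwise XOR of the original with the slid copy is a completely dark image.

```python
import numpy as np

# Synthetic tiled image: a 40-px checkerboard motif repeated over a 200x200 canvas.
motif = np.kron(np.array([[0, 255], [255, 0]], dtype=np.uint8),
                np.ones((20, 20), dtype=np.uint8))
image = np.tile(motif, (5, 5))

# Fact 1: cyclic sliding by the motif size reproduces the image exactly.
slid = np.roll(image, 40, axis=1)            # slide horizontally by one motif width
print(np.array_equal(slid, image))           # True

# Fact 2: XOR of two identical images is all zeros (a completely dark image).
xor_image = np.bitwise_xor(image, slid)
print(xor_image.mean())                      # 0.0

# Sliding by a distance that is not a multiple of the motif size gives a non-zero XOR average.
print(np.bitwise_xor(image, np.roll(image, 15, axis=1)).mean())  # > 0
```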

2 Related Works Template Matching [7] is one of the traditional and widely used approaches for locating a similar pattern in an image, and it uses cross-correlation of the image. Template matching requires a template image to locate a similar portion in the image; a similar kind of method is discussed in Sect. 3.2, but here we are dealing with an unknown repeating motif. The research [13] applies the Scale Invariant Feature Transform (SIFT) to locate the reused portion of an image, but it cannot detect a reused portion where SIFT does not return any features. Thus, this approach does not guarantee successful detection of repeating motifs in tiled images. Defect detection by machine inspection in patterned fabric has been a popular research topic among researchers in the field of computer vision. The papers [14–16] are all related to locating defects in patterned fabric. The research [14] employs Fourier analysis, [15] applies wavelet analysis and [16] uses the energy of moving subtraction to locate the defective region in repeated-pattern fabrics. The article [8] is related to detecting deformed lattices, [9] to detecting repetitive units in façade images, [10] tries to group similar elements in images and [18] matches the regular patterns of urban environment scenes for automatic geotagging of photographs. These papers [8–10, 18] all consider real-world scenes and photographic images, which are out of the scope of this research. The work [17] proposes a method of using an XOR gate on a reference image and a sample image for the automated inspection of circuit boards, and our proposed method is partly inspired by this approach.


3 Methodology 3.1 Auto-Correlation of Image Autocorrelation is a mathematical tool to find the repeating pattern in signals. We experimented with autocorrelation of the image to find the repeating motif of the image using the Wiener–Khinchin theorem [21] and we were able to obtain local maxima as proposed by [2–6]. But, to identify the repeating motif, we need another generalization approach, so we moved towards other methods. 3.2 Sliding a Template Strip and Comparing In this method, we chose a template strip from the image itself and then compared it with the remaining portion of the image to find the repeating distance. In order to determine the horizontal repeating distance, we took the template strip from the leftmost part of the image, slid it horizontally to the right and compared. The horizontal distance vector is then taken as the pixel position where the template strip matches the image while sliding. A similar approach was adopted for the vertical repeating distance. But there were some images where this algorithm would give false results. The top-left images of Figs. 4 and 5 show the horizontal and vertical lining patterns. For these images, this algorithm reported a horizontal and vertical distance vector of one, which is not true. Sticking with this algorithm and using a variable length and width for the template strip might solve the problem of false detection, but it again poses another generalization question of selecting an appropriate strip width and height for different kinds of images. Hence, this method created a foundation for our completely unsupervised and more robust algorithm of Cyclic XOR-ing. 3.3 Proposed Method: Cyclic Bitwise XOR-ing First, a copy of the original image is created. It is then cyclic-rotated by one pixel each time and used as one input of the bitwise-XOR gate, and the original image is fed to the other input of the bitwise-XOR gate. The aforementioned terms ‘cyclic-rotation’ and ‘sliding/slid’ are explained using a colored checker board in Fig. 2. Each time, the average

Fig. 2. Figure showing the cyclic-rotation or sliding process. When the rightmost original image is slid by 15 pixels horizontally towards the right, we obtain the image in the middle. And when the original image is further slid by 85 pixels (which is equal to the size of two squares in the checker), we obtain the image on the left, which is a near duplicate of the original image. This sliding process can be realized in a computer program by simply removing a row/column from one end and copying it to the other end.


pixel value of the resultant XOR-ed image is calculated and stored in a key-value pair data structure consisting of the slid pixel-position and its corresponding average value of the XOR-ed image. This process is repeated along the width and the height of the image to get two key-value pairs, horizontal and vertical. During the horizontal iteration, a minimum average value is obtained as the output of the bitwise-XOR function when the sliding input image is slid horizontally by a multiple of the width of the repeating motif, which we refer to as the horizontal distance vector in this paper. Similarly, a minimum average value is obtained when the sliding input image is cyclic-rotated vertically by a multiple of the height of the repeating motif, referred to as the vertical distance vector. Finally, the minimum average value and its corresponding pixel-position are analyzed from the key-value pair data structures of the horizontal iteration and the vertical iteration to get the exact width and height of the repeating motif. This algorithm is presented in detail below:

Algorithm.
1. For the horizontal distance vector:
• initialize an empty Dictionary where key = average value of xor_image and value = pixel_position
• cyclic_image = main_image.copy()
• for i = 1 to i < width of main_image:
  - slide cyclic_image horizontally by one pixel
  - xor_image = bitwise_XOR(main_image, cyclic_image)
  - store the average value of xor_image with the pixel-position i in the Dictionary
2. Let min_avg be the minimum average value stored in the Dictionary; if (count of entries with min_avg > 1):
  x_min_1 = 1st pair of min_avg; x_min_2 = 2nd pair of min_avg
  distance = Dictionary[x_min_2] – Dictionary[x_min_1]
  if distance == 1: return “Continuous Horizontal Repeating Pattern”
  else: return distance as the horizontal distance vector
3. Similarly repeat steps 1 and 2 for the vertical distance vector.
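The sketch below is an illustrative Python reimplementation of the algorithm for the horizontal direction (not the authors' code; the function names, the tolerance value and the use of NumPy's roll as the cyclic slide are assumptions). It records the average of the XOR-ed image for every cyclic shift and reads the repeating distance off the positions where that average is minimal.

```python
import numpy as np

def xor_averages(main_image, axis=1):
    """Average value of the XOR-ed image for every cyclic slide position (axis=1: horizontal)."""
    size = main_image.shape[axis]
    avg_by_position = {}                                   # slid pixel-position -> mean of XOR-ed image
    for i in range(1, size):
        cyclic_image = np.roll(main_image, i, axis=axis)   # cyclic slide of the copy by i pixels
        avg_by_position[i] = np.bitwise_xor(main_image, cyclic_image).mean()
    return avg_by_position

def distance_vector(avg_by_position, tol=1e-6):
    """Distance between the first two minima, or 'continuous' for a lining pattern."""
    min_avg = min(avg_by_position.values())
    minima = sorted(p for p, a in avg_by_position.items() if a - min_avg <= tol)
    if len(minima) > 1:
        distance = minima[1] - minima[0]
        return "continuous" if distance == 1 else distance
    return minima[0]
```

The same two calls with axis=0 yield the vertical distance vector.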

Rule of Rejection. This algorithm not only extracts the repeating portion from a tiled image but can also decide whether the given image is composed of a single repeating motif or not. From our observation on a mixed dataset of tiled and non-tiled images, we found an interesting phenomenon. In the case of a tiled image, the minimum average value of the XOR-ed image occurs at pixel-positions that are multiples of the distance vector. But in the case of non-tiled images, the minimum average value of the XOR-ed image occurs at the two edges of the image (i.e. while sliding horizontally, the minimum average value of the XOR-ed image occurs at sliding values of one and the image width minus one). The reason behind this phenomenon is that images without a repeating pattern are most nearly similar to the original image when they are slid by only one pixel. But, in the case of the


tiled image, slid images are completely similar to the original image when they are slid by a multiple of the distance vector. This fact serves as our acceptance and rejection criterion. Reason for Considering the Minimum Value Rather than Zero. A completely dark image is obtained from the bitwise-XOR function if two exactly similar images are fed as the input. But, if the repeating motif has some noise or if the motifs are not in exact symmetry, then the XOR-ed image will not be completely dark. So, to make this algorithm resilient against minor noise artifacts that occur during image creation and rendering, the minimum average value of the XOR-ed image is used.
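A hedged sketch of the acceptance/rejection decision described above, reusing the xor_averages helper from the previous listing (the tolerance stands in for the "minimum value rather than zero" rule and is an assumption):

```python
def is_tiled(main_image, tol=1e-6):
    """Accept the image as tiled unless the minima of the XOR average sit only at the two edges."""
    avgs = xor_averages(main_image, axis=1)                 # horizontal iteration
    width = main_image.shape[1]
    min_avg = min(avgs.values())
    minima = sorted(p for p, a in avgs.items() if a - min_avg <= tol)
    # Non-tiled images: the minimum average occurs at slide positions 1 and width - 1 only.
    return not (set(minima) <= {1, width - 1})
```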

4 Results and Analysis 4.1 Tiled and Non Tiled Image Classification Using the proposed algorithm of cyclic XOR-ing, a bounding rectangular region is obtained for the repeating motif as shown in Fig. 3. Furthermore, Fig. 4 shows an image which is classified as non-repetitive by our algorithm. The distance vector graph in Fig. 4 shows the minimum value at the two extreme pixel positions, which is our rejection criterion. 4.2 Horizontal Lining Pattern The horizontal lining pattern can also be identified using the proposed method, as shown in Fig. 5. An image with horizontal lines as the repeating motif shows the minimum value at every sliding position during the horizontal iteration. 4.3 Vertical Lining Pattern Similarly, a vertical lining pattern can also be identified by the proposed algorithm, as shown in Fig. 6. An image with vertical lines as the repeating motif shows the minimum value at every sliding position during the vertical iteration.

Fig. 3. Subplot showing the selection of repeating motif. Top-Left (original image), Top-Right (plot of horizontal vectors), Bottom-Left (plot of vertical vectors), Bottom-Right (white rectangular box showing the repeating motif of the image)


Fig. 4. Subplot showing rejection criteria and no repeated motif

Fig. 5. Subplot showing detection of horizontal lining pattern

Fig. 6. Subplot showing detection of vertical lining pattern


4.4 Validation of Algorithm In order to validate the correctness of our algorithm we created 1500 tiled images of varied sizes, ranging from 153 × 169 to 1500 × 1800 pixels (px), considering all five kinds of lattices (i.e. square, rectangle, parallelogram, rhombus and hexagonal) as defined by the wallpaper groups [22]. The motifs, belonging to one of the aforementioned lattice shapes, were also varied in size, ranging from 13 × 13 to 500 × 500 pixels with different aspect ratios. We further resized 300 random tiled images to variable sizes using bilinear interpolation to introduce some noise into the dataset. The performance of our algorithm for each lattice class is shown in Table 1. The overall success percentage of the proposed algorithm is 95.4%, as seen in the table below, and the algorithm performs well over the different lattice shapes, as its success rate stays within the tight range of 94 to 96%.

Table 1. Performance of algorithm in different lattice shapes

Lattice class  | Total images | Correctly detected | Incorrectly detected | Success rate
Square         | 337          | 325                | 12                   | 96.43%
Rectangle      | 378          | 359                | 19                   | 94.97%
Parallelogram  | 248          | 236                | 12                   | 95.61%
Rhombic        | 318          | 300                | 18                   | 94.34%
Hexagonal      | 219          | 211                | 8                    | 96.35%

The graph in Fig. 7 shows the accuracy with different image sizes. The dataset images are categorized into 5 bins with an interval of 200 pixels based on their larger dimension (i.e. the maximum of height and width), and the success rate is calculated for each bin. The plot shows a fairly uniform accuracy over all the bins, with the minimum accuracy for the bin with images below 200 px in dimension.

Fig. 7. Performance of proposed method with different image sizes

The graph in Fig. 8 shows the accuracy of the method with different sizes of motifs. For this, the motifs in the dataset were divided into 5 bins according to their maximum dimension. There seems to be a sudden drop in accuracy for the larger motifs. This observation leads to the conclusion that our algorithm performs better if the repetition number of


motifs in the tiled images is higher. In our dataset the largest dimension of a tiled image was 1500 × 1800, so the tiled images with larger motifs had a smaller number of repetitions. This claim might be interesting to validate in further research by creating different-sized tiled images with larger motifs, but we limit the scope of this research to this finding.

Fig. 8. Performance of proposed method with different motif sizes

Figure 9 shows a sample result for each lattice class along with one false detection of a repeated motif.

Fig. 9. Samples of output where white bounding box represents the detected repeating motif.

5 Conclusion In this research, our contribution is a new algorithm that can detect the repeating motif in a tiled image. We use cyclic sliding of the image and an XOR gate to realize our proposed algorithm. This algorithm performs very well in locating an unknown repeating motif, and it can also decide whether the image is composed of a repeating motif or not. It also provides


some insights on horizontal and vertical repeating patterns in an image. Furthermore, this method does not require the image to be in grayscale and works in any color space. There are some limitations to our proposed method. Using the minimum value of the XOR-ed image, we have made this algorithm somewhat resilient towards noise, but this method is still quite sensitive to the image noise that occurs during resizing of an image. Although this method gives a rectangular bounding box for the repeating motif, it cannot classify the detected motif into the 17 symmetry groups defined by [1]. Moreover, the same repeating structure in different colors will be detected as different motifs by this method. To resolve this issue, we would need to use the outline of an image before applying this algorithm, so we leave this for further research. This method can be easily extended to detect defective motifs in fabric and textile images. For defective motifs, the distance vector graph shows an unusual deviation from equally spaced minimum values, which can also mark their location in the image. This method also finds application in resizing images that contain repeating motifs: it can be employed to add or remove the repeating motif to increase or decrease the size of an image. Acknowledgement. The authors of this paper would like to thank the Galaincha software for making the arduous work of creating the dataset of tiled images easier and faster. Furthermore, the authors are thankful to the whole Galaincha Team for the continuous support during this research.

References 1. Schwarzenberger, R.L.E.: The 17 plane symmetry groups. Math. Gaz. 58, 123–131 (1974) 2. Lin, H.-C., Wang, L.-L., Yang, S.-N.: Extracting periodicity of a regular texture based on autocorrelation functions. Pattern Recogn. Lett. 18(5), 433–443 (1997) 3. Matsuyama, T., Miura, S., Nagao, M.: A structural analysis of natural textures by Fourier transformation. CVGIP 24(3), 347–362 (1983) 4. Liu, Y., Collins, R., Tsin, Y.: A computational model for periodic pattern perception based on frieze and wallpaper groups. IEEE Trans. Pattern Anal. Mach. Intell. 26(3), 354–371 (2004) 5. Wood, E.J.: Applying Fourier and associated transforms to pattern characterization in textiles. Text. Res. J. 60, 212–220 (1990) 6. Nasri, A., Benslimana, R., Ouaazizi, A.: A genetic based algorithm for automatic motif detection of periodic patterns. In: Tenth International Conference on Signal-Image Technology & Internet-Based Systems (2014) 7. Brunelli, R.: Template Matching Techniques in Computer Vision: Theory and Practice. Wiley, Hoboken (2009). ISBN 978-0-470-51706-2 8. Park, M., Brocklehurst, K., Collins, R., Liu, Y.: Deformed lattice detection in real-world images using mean-shift belief propagation. IEEE Trans. Pattern Anal. Mach. Intell. 31, 1804–1816 (2009) 9. Recheis, M.: Automatic Recognition of Repeating Patterns in Rectified Facade Images (2009) 10. Leung, T., Malik, J.: Detecting localizing and grouping repeated scene elements from image. In: Fourth European Conference on Computer Vision (1996) 11. Louis, L., Michal, M., Kenneth, K., Luc, V.: Repeated pattern detection using CNN activations. In: IEEE Winter Conference on Applications of Computer Vision (WACV) (2017) 12. Li, L., Qi, F., Wang, J.: Periodicity estimation of regular textile fabrics based on energy function. In: Joint International Conference on Service Science, Management and Engineering and International Conference on Information Science and Technology (2016)


13. Pinho, A., Ferreira, P.: Finding unknown repeated patterns in images. In: European Signal Processing Conference (2011) 14. Chan, C., Pang, G.: Fabric defect detection by Fourier analysis. IEEE Trans. Ind. Appl. 36(5), 1267–1276 (2000) 15. Ngan, H., Pang, G., Yung, S., Ng, M.: Wavelet based methods on patterned fabric defect detection. Pattern Recogn. 38(4), 559–576 (2005) 16. Ngan, H., Pang, G., Yung, N.: Motif-based defect detection for patterned fabric. Pattern Recogn. 41(6), 1878–1894 (2008) 17. Chin, R., Harlow, C.: Automated visual inspection: a survey. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-4(6), 557–573 (1982) 18. Schindler, G., Krishnamurthy, P., Lublinerman, R., Yanxi, L., Dellaert, F.: Detecting and matching repeated patterns for automatic geo-tagging in urban environments. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, pp. 1–7 (2008) 19. Kuo, C.-F., Shih, C.-Y., Lee, J.-Y.: Separating color and identifying repeat pattern through the automatic computerized analysis system for printed fabrics. J. Inf. Sci. Eng. 24, 453–467 (2008) 20. Lowe, D.: Object recognition from local scale-invariant features. In: Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999) 21. Champeney, D.: Power spectra and Wiener’s theorems. In: A Handbook of Fourier Theorems, p. 102. Cambridge University Press, Cambridge (1987) 22. Wallpaper Groups-Lattices. http://www2.clarku.edu/faculty/djoyce/wallpaper/lattices.html. Accessed 12 Aug 2019

Convolutional Deep Learning Network for Handwritten Arabic Script Recognition Mohamed Elleuch1,2(B) and Monji Kherallah2 1 National School of Computer Science (ENSI), University of Manouba, Manouba, Tunisia 2 Faculty of Sciences, University of Sfax, Sfax, Tunisia

[email protected]

Abstract. During the last years, deep convolution networks have emerged to become widespread, resulting in substantial gains in various benchmarks. In this paper, the Convolutional Deep Belief Network (CDBN) is applied to automatically learn the finest discriminative features from textual image data consisting of Arabic Handwritten Script. This architecture is able to combine the advantages of the Deep Belief Network and the Convolutional Neural Network. We add regularization methods to our CDBN model so that we can address the issue of over-fitting. We evaluated our proposed model on high-dimensional Arabic textual images. The outcomes obtained from the experiments show that our model is more effective compared to state-of-the-art results in handwritten script recognition using the IFN/ENIT data sets. Keywords: Arabic Handwritten Script · Deep convolution networks · Convolutional Deep Belief Networks · Regularization · Over-fitting

1 Introduction and Related Works Information processing techniques are at present undergoing rapid progress together with data processing, with an increasing potential in the domain of human-computer interaction. Furthermore, in recent years, the machine simulation of human reading has been intensively studied. The recognition of writing is part of the larger domain of pattern recognition. It aims at developing a system able to come as close as possible to the human ability of reading. Arabic-handwriting languages are lagging behind mainly because of their complexity and their cursive nature. Consequently, automatic recognition of handwritten script represents a burdensome task. Since the late 1960s, owing to its broad applicability in several engineering and technological areas, Arabic handwritten script (AHS) recognition has been the subject of in-depth studies [1]. A lot of studies have been carried out to recognize Arabic handwritten characters using unsupervised feature learning and hand-designed features [2, 3]. Deriving suitable characteristics from the image is a difficult and complex chore. It requires not only a skilled but also an experienced specialist in the domain


of feature extraction methods like MFCC features in the speech area, and Gabor and HOG features in computer vision. The choice and goodness of these hand-designed features determines the efficiency of the frameworks utilized for classification and recognition, like the Multi-layer Perceptron (MLP), Hidden Markov Model (HMM), Support Vector Machine (SVM), etc. However, the majority of classifiers meet a major problem which lies in the variability of the feature vector size. Thereby, many researchers have targeted the use of raw or untagged data in training developed handwriting systems, as it is the easiest way to handle large data. The ability to automatically extract features and model high-level abstraction in various signals, namely image and text, has made deep learning (DL) algorithms widespread in the world of Artificial Intelligence research. Therefore, our first ongoing study is to implement a system for automatic feature extraction that is richer than the one obtained by employing heuristic signal processing based on domain knowledge. This approach depends on the notion of in-depth learning of a representation of Arabic script from the image signal. To carry this out, the use of unsupervised and supervised learning methods has shown some potential. Learning such representations is likely to be applicable to various handwriting recognition tasks. Recent research has shown that DL methods have made it possible to make decisive progress in solving tasks such as object recognition [4, 5], computer vision [6], speech recognition [7, 8] and Arabic handwriting recognition [9]. Elaborated by LeCun et al. [10], the Convolutional Neural Network (CNN) is a specialized type of Neural Network (NN) that automatically learns favorable features at every layer of the architecture based on the given dataset; a layer can be a convolution layer, a pooling layer or a fully connected layer. Then Ranzato et al. [11] improved performance by using unsupervised pre-training on a CNN. Another classifier which is employed extensively is the Deep Belief Network (DBN) [12]. DBN is one of the most classical deep learning models, composed of several Restricted Boltzmann Machines (RBM) in cascade. This model learns representations of high-level features from unlabeled data using unsupervised learning algorithms. In comparison to shallow learning, the pros of DL are that deep structures can be designed to learn internal representations and more abstract details of the input data. However, the high number of parameters can also lead to another problem: over-fitting. Thus, improving or developing novel effective regularization techniques is an unavoidable necessity. In recent years, various regularization techniques have been suggested, such as batch normalization, Dropout and Dropconnect. The contribution of this paper is to leverage the DL approach to solve the problem of recognizing handwritten text in Arabic. To fulfill our target, we study the potential benefits of our suggested hybrid CDBN/SVM structure [13]; this model uses CDBN as an automatic feature extractor and lets SVM be the output predictor. On the other hand, to enhance the efficiency of the CDBN/SVM model, regularization methods such as Dropout and Dropconnect can help to guard against over-fitting. This paper is organized as follows: Sect. 2 gives an overview of the basic components of the Convolutional Deep Belief Network model and regularization techniques.
Then, our target architectures are explored and discussed to recognize Arabic handwriting text.


Section 3 describes the experimental study, and Sect. 4 discusses the results. The last section concludes this work with some remarks.

2 Deep Models for Handwritten Recognition In this section, the DBN model based on the RBM is firstly presented and after that, the CDBN model is reviewed. Then, the effect of the Dropout and Dropconnect techniques is analyzed in our CDBN architectures. 2.1 Restricted Boltzmann Machine (RBM) DBN is a hierarchical generative model [12] involving several RBM layers [14, 15], consisting of a layer of observed units and multiple layers of hidden units. The link between the two upper layers of a DBN is not oriented, the other links are oriented, and there is no connection between the units of the same layer. To initialize the weights of the network, Deep Belief Networks utilize a greedy layer-by-layer pre-training algorithm. An RBM is a non-oriented graphical model consisting of two layers, in which the visible units ‘v’ are connected to the hidden units ‘h’. The energy function and the joint probability distribution are computed as:

E(v, h) = -\sum_{i,j} v_i w_{ij} h_j - \sum_{j} b_j h_j - \sum_{i} c_i v_i   (1)

P(v, h) = \frac{1}{Z} e^{-E(v, h)}   (2)

Where w_{ij} is the weight between visible unit i and hidden unit j, b_j is the bias term for hidden units, c_i is the bias term for visible units and Z represents the partition function. 2.2 Convolutional Restricted Boltzmann Machine (CRBM) The construction of hierarchical feature structures is a challenge, and the Convolutional Deep Belief Network is one of the famous feature extractors often used in the last decade in the field of pattern recognition. In this subsection, we clarify the basic notion of this approach. As a hierarchical generative model [16], the Convolutional Deep Belief Network reinforces the efficiency of bottom-up and top-down probabilistic inference. Similar to the standard Deep Belief Network, this model is made up of several layers of probabilistic max-pooling CRBMs stacked on top of each other, and the training is carried out by the greedy layer-by-layer algorithm [12, 17]. This probabilistically shrinks the representation of the detection layers. Shrinking the representation with max-pooling allows representations of the upper layer to remain invariant to local translations of the input data, reduces the computational load [18] and is useful for vision recognition problems [19]. Building a Convolutional Deep Belief Network, the algorithm learns high-level features using end-to-end training. In our experiments, we trained a CDBN architecture


with a couple of CRBM layers to automatically learn hierarchical features in an unsupervised/supervised manner. Figure 1 clarifies the architecture of CRBM made up of two layers: a visible layer V and a hidden layer H, both joined by sets of local and common parameters. A detailed technical report is available at [20]. By using visible inputs with real values, the probabilistic max-pooling CRBM is fixed by the following equation:

E(v, h) = \frac{1}{2}\sum_{i,j=1}^{N_V} v_{i,j}^{2} - \sum_{k=1}^{K}\sum_{i,j=1}^{N_H}\sum_{r,s=1}^{N_W} h_{i,j}^{k} w_{r,s}^{k} v_{i+r-1,\,j+s-1} - \sum_{k=1}^{K} b_k \sum_{i,j=1}^{N_H} h_{i,j}^{k} - c \sum_{i,j=1}^{N_V} v_{i,j}   (3)

Fig. 1. Representation of a probabilistic max-pooling CRBM. NV and NH refer to the dimension of visible and hidden layer, and NW to the dimension of convolution filter.
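As an illustration of Eq. (3), the following minimal NumPy/SciPy sketch (an exposition aid, not code from the paper; the function name and argument shapes are assumptions) evaluates the energy of a real-valued visible layer with binary hidden detector units; the valid cross-correlation realizes the inner sum over the filter window.

```python
import numpy as np
from scipy.signal import correlate2d

def crbm_energy(v, h, W, b, c):
    """Energy of a real-valued CRBM following Eq. (3).

    v : (N_V, N_V) visible layer, h : (K, N_H, N_H) hidden detector units,
    W : (K, N_W, N_W) convolution filters, b : (K,) hidden biases, c : scalar visible bias,
    with N_H = N_V - N_W + 1.
    """
    energy = 0.5 * np.sum(v ** 2)
    for k in range(W.shape[0]):
        # sum_{i,j} h^k_{i,j} * sum_{r,s} w^k_{r,s} v_{i+r-1, j+s-1}  (valid cross-correlation)
        energy -= np.sum(h[k] * correlate2d(v, W[k], mode="valid"))
    energy -= np.sum(b * h.sum(axis=(1, 2)))   # - sum_k b_k sum_{i,j} h^k_{i,j}
    energy -= c * v.sum()                      # - c sum_{i,j} v_{i,j}
    return energy
```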

2.3 Regularization Methods The utilization of Deep Network models for cursive handwriting recognition has made significant progress over the past decade. Nevertheless, for these architectures to be used effectively, a large amount of data needs to be collected. Consequently, over-fitting is a serious problem in such networks due to the large number of parameters, which grows as the network gets deeper. To overcome this problem, many regularization and data augmentation procedures have been developed [21–23].


In this sub-section, two regularization techniques that may affect the training performance will be shortly introduced. Dropout and Dropconnect are both methods for preventing over-fitting in a neural network. To apply Dropout, a subset of units is randomly selected and their outputs are set to zero, without paying attention to the input. This effectively removes these units from the model. A varied subset of units is selected randomly each time we present a training example. Dropconnect operates in the same way, except that we deactivate individual weights (i.e., fix them to zero) rather than nodes, so a node may stay partly active. In addition, Dropconnect is a generalization of Dropout as it generates yet more possible models, since there are practically always more links than units. 2.4 Model Settings To extend our study [13] so that we can discover the power of the deep convolutional neural network classifier on the problem of AHS recognition, we present in this work an itemized study of CDBN with the Dropout/Dropconnect techniques. In this subsection, we identify the tuning parameters of the chosen convolutional DBN structure. As noted above, our CDBN architecture is composed of two layers of CRBM (see Fig. 2). The efficiency of this architecture on the IFN/ENIT handwritten text recognition task was evaluated. The description of the CDBN architecture exploited in the experiments conducted on the IFN/ENIT database is given as follows: 1 × 300 × 100 − 12W 24G − MP2 − 10W 40G − MP2. This architecture corresponds to a network with input images of dimension 300 × 100; the initial layer consists of 24 groups of 12 × 12 pixel filters, and the pooling ratio C for each layer is 2. The second layer includes 40 maps, each 10 × 10. We define a sparseness parameter of 0.03. The first-layer bases learned strokes composing the characters, while the second-layer bases learned character parts formed by groups of strokes. By integrating the activations of the first and second layers, we constructed feature vectors; Support Vector Machines are used to classify these features. In order to regularize and make the most effective use of these architectures, units or weights have been removed: Dropout was used at the input layer with a probability of 20% and at each hidden layer with a probability of 50%, while Dropconnect was only applied at the input layer with a probability of 20%.
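To make the distinction between the two techniques concrete, here is a small NumPy sketch (illustrative only; the layer sizes, ReLU activation and drop probabilities are assumptions loosely mirroring the settings above, not the paper's implementation): Dropout zeroes a random subset of unit activations, whereas DropConnect zeroes a random subset of individual weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, W, p_drop=0.5):
    """Fully connected layer with Dropout: entire unit activations are zeroed."""
    mask = rng.random(x.shape) >= p_drop                    # keep a unit with probability 1 - p_drop
    x_dropped = np.where(mask, x, 0.0) / (1.0 - p_drop)     # inverted-dropout scaling
    return np.maximum(0.0, x_dropped @ W)                   # ReLU activation (an assumption)

def dropconnect_forward(x, W, p_drop=0.2):
    """Fully connected layer with DropConnect: individual weights are zeroed instead."""
    mask = rng.random(W.shape) >= p_drop
    return np.maximum(0.0, x @ (np.where(mask, W, 0.0) / (1.0 - p_drop)))

# Example: an input of 300 * 100 = 30000 pixels feeding 256 hidden units.
x = rng.random(30000)
W = rng.normal(scale=0.01, size=(30000, 256))
print(dropout_forward(x, W).shape, dropconnect_forward(x, W).shape)   # (256,) (256,)
```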

3 Experiments with Proposed Model This section illustrates a test to evaluate the suggested approach’s performance on the IFN/ENIT benchmark database [24]. In our experiments, each IFN/ENIT dataset image was normalized to the same input dimension of 300 × 100 pixels for the visible layer. These textual images are at the gray level and the resizing is not necessarily square. Generally, a script handwriting recognition system consists of three principal steps: pre-processing, automatic feature extraction and classification. • Pre-processing: This phase consists in generating a normalized and uniform text image.


Fig. 2. Representation of the suggested CDBN structure with dropout.

• Feature extraction: Consists in determining different feature vectors. • Training: The training phase consists of finding the models most appropriate to the inputs of the problem. • Parameters setting: For configuration, it is necessary to identify the number and size of filters, the sparsity of the hidden units and the max-pooling region size in each layer of the Convolutional DBN model. Referring to the size of the images used (high-dimensional data), we specify a hyper-parameter setting for the configuration of the Convolutional DBN structure. So, to get the most out of this architecture, two regularization methods, called Dropout and DropConnect, have been put into practice separately for the Convolutional DBN structure. 3.1 Dataset Description and Experimental Setting To measure the effectiveness of the system we propose for high-dimensional input images, the IFN/ENIT database [24] is employed. Indeed, the IFN/ENIT database comprises 26459 handwritten Arabic words developed with contributions from 411 volunteers, making a total of around 115420 parts of Arabic words (PAWs) and around 212167 letters. The words written are 946 Tunisian town and village names with the postal code of each. Data processing consists of offline handwritten Arabic words. Datasets ‘a’ and ‘b’ are employed for the training phase whereas the test set was chosen from set ‘c’. Figure 3 illustrates samples of a village name written by 5 different writers. 3.2 Experimental Results and Comparison


Fig. 3. Samples from the IFN/ENIT data set.

structure without Dropout, which is not excellently contrasted to the classic approaches [26, 27]. It is thanks to the Convolutional DBN architecture that is able to be overcompleted. On an experimental basis, a model that is too complete or too adjusted may be prone to learn inconsiderable solutions, such as pixel detectors. In our present work to find a suitable solution to this issue, we utilize two regularization techniques, namely Dropout and Dropconnect for Convolutional DBN. As a result, the acquired outcomes prove an amelioration rate of approximately 6.54% with Dropout and 2.21% with Dropconnect. Table 1. Comparison of word recognition performances utilizing the IFN/ENIT database. Authors

Used techniques

Present work

Convolutional DBN with Dropout

WER 9.76%

Convolutional DBN with Dropconnect

14.09%

Elleuch et al., 2015 [13]

Convolutional DBN (without Dropout)

16.3%

Maalej and Kherallah, 2016 [25]

Recurrent Neural Network (MDLSTM with Dropout)

11.62%

AlKhateeb et al., 2011 [26]

Hidden Markov Model

13.27%

Saabni and El-Sana, 2013 [27]

Dynamic Time Warping

21.79%

In general, it is evident that the proposed DL architecture, Convolutional Deep Belief Network with Dropout, provides satisfactory performance, specially against over others approaches such as the Dynamic Time Warping (DTW) and the Hidden Markov Model applied to the IFN/ENIT database.

4 Discussion As mentioned above, our suggestion depicts a DL approach for Arabic Handwriting Script recognition, in particular the Convolutional DBN. To confirm the efficiency of


the proposed framework, we introduced experimental outcomes utilizing an Arabic handwritten words database, the IFN/ENIT database. We are able to observe that our Convolutional DBN architecture with Dropconnect has reached a promising error rate of 14.09% when used with large-dimension data. In addition, we have rebuilt our proposed Convolutional DBN setting with Dropout. The effectiveness is then raised to achieve a WER of 9.76%, which corresponds to a gain of 4.33%. The results obtained, regardless of their size, are sufficiently important compared to scientific research using other classification methods, in particular those obtained with raw pixels without a feature extraction phase (see Fig. 4). This contribution portrays an interesting challenge in the field of computer vision and pattern recognition, as it will be a real incentive to motivate the use of deep machine learning with Big Data analysis.

Fig. 4. WER comparison utilizing IFN/ENIT Database.

5 Conclusion With the development of DL techniques, deep hierarchical neural networks have drawn great attention for handwriting recognition. In this article, we first introduced a baseline of the DL approach to Arabic Handwriting Script recognition, primarily the Convolutional Deep Belief Network. Our aim was to leverage the power of these Deep Networks that can process large-dimension input images, permitting the usage of raw data inputs rather than extracting a feature vector, and studying the complex decision boundary between classes. Secondly, we investigated the efficiency of two regularization methods employed separately in the Convolutional DBN structure to recognize Arabic words using the IFN/ENIT database. As we can observe, Dropout is a very efficient regularization technique compared to Dropconnect and the unregularized basic method. In addition, as a perspective of our studies, we will evaluate the performance of our system on various image processing applications, such as biometric and medical image analysis.


References 1. Mota, R., Scott, D.: Education for innovation and independent learning (2014) 2. Porwal, U., Shi, Z., Setlur, S.: Machine learning in handwritten Arabic text recognition. In: Handbook of Statistics, vol. 31, pp. 443–469. Elsevier (2013) 3. Elleuch, M., Hani, A., Kherallah, M.: Arabic handwritten script recognition system based on HOG and gabor features. Int. Arab J. Inf. Technol. 14(4A), 639–646 (2017) 4. Boureau, Y. L., Cun, Y.L.: Sparse feature learning for deep belief networks. In: Advances in Neural Information Processing Systems, pp. 1185–1192 (2008) 5. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11, 3371–3408 (2010) 6. Huang, G.B., Zhou, H., Ding, X., Zhang, R.: Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 42(2), 513–529 (2011) 7. Mohamed, A.R., Dahl, G.E., Hinton, G.: Acoustic modeling using deep belief networks. IEEE Trans. Audio Speech Lang. Process. 20(1), 14–22 (2011) 8. Dahl, G., Mohamed, A.R., Hinton, G.E.: Phone recognition with the mean-covariance restricted Boltzmann machine. In: Advances in Neural Information Processing Systems, pp. 469–477 (2010) 9. Al-Ayyoub, M., Nuseir, A., Alsmearat, K., Jararweh, Y., Gupta, B.: Deep learning for Arabic NLP: a survey. J. Comput. Sci. 26, 522–531 (2018) 10. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998) 11. Marc’Aurelio Ranzato, F.J.H., Boureau, Y.L., LeCun, Y.: Unsupervised learning of invariant feature hierarchies with applications to object recognition. In: Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR 2007), vol. 127. IEEE Press, June 2007 12. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006) 13. Elleuch, M., Tagougui, N., Kherallah, M.: Deep learning for feature extraction of Arabic handwritten script. In: International Conference on Computer Analysis of Images and Patterns, pp. 371–382. Springer, Cham, September 2015 14. Mohamed, A.R., Sainath, T.N., Dahl, G.E., Ramabhadran, B., Hinton, G.E., Picheny, M.A.: Deep belief networks using discriminative features for phone recognition. In: ICASSP, pp. 5060–5063, May 2011 15. Hinton, G.E.: Training products of experts by minimizing contrastive divergence. Neural Comput. 14(8), 1771–1800 (2002) 16. Lee, H., Grosse, R., Ranganath, R., Ng, A.Y.: Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM 54(10), 95–103 (2011) 17. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: Advances in Neural Information Processing Systems, pp. 153–160 (2007) 18. Lee, H., Grosse, R., Ranganath, R., Ng, A.Y.: Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609–616. ACM, June 2009 19. Jarrett, K., Kavukcuoglu, K., LeCun, Y.: What is the best multi-stage architecture for object recognition? In: 2009 IEEE 12th International Conference on Computer Vision, pp. 2146– 2153. IEEE, September 2009 20. Elleuch, M., Kherallah, M.: Boosting of deep convolutional architectures for Arabic handwriting recognition. Int. J. Multimed. Data Eng. 
Manag. (IJMDEM) 10(4), 26–45 (2019)


21. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012) 22. Wan, L., Zeiler, M., Zhang, S., Le Cun, Y., Fergus, R.: Regularization of neural networks using dropconnect. In: International Conference on Machine Learning, pp. 1058–1066, February 2013 23. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207. 0580 (2012) 24. Pechwitz, M., Maddouri, S.S., Märgner, V., Ellouze, N., Amiri, H.: IFN/ENIT database of handwritten Arabic words. In: Colloque International Francophone sur l’Ecrit et le Document (CIFED), pp. 127–136 (2002) 25. Maalej, R., Kherallah, M.: Improving MDLSTM for offline Arabic handwriting recognition using dropout at different positions. In: International Conference on Artificial Neural Networks, pp. 431–438. Springer, Cham, September 2016 26. AlKhateeb, J.H., Ren, J., Jiang, J., Al-Muhtaseb, H.: Offline handwritten Arabic cursive text recognition using Hidden Markov Models and re-ranking. Pattern Recogn. Lett. 32(8), 1081–1088 (2011) 27. Saabni, R.M., El-Sana, J.A.: Comprehensive synthetic Arabic database for on/off-line script recognition research. Int. J. Doc. Anal. Recogn. (IJDAR) 16(3), 285–294 (2013)

Diversity in Recommendation System: A Cluster Based Approach Naina Yadav(B) , Rajesh Kumar Mundotiya, Anil Kumar Singh, and Sukomal Pal Indian Institute of Technology (BHU), Varanasi, India {nainayadav.rs.cse18,rajeshkm.rs.cse16,aksingh.cse,spal.cse}@iitbhu.ac.in

Abstract. The recommendation system is used to process a large amount of data to recommend new items to users, which is achieved using many developed algorithms. Hence, it is a challenging task for lots of online applications to establish an efficient algorithm for a recommendation system that strikes a good trade-off between accuracy and diversity. Diversity in recommendation systems is used to avoid the over-fitting problem and to provide recommendations that increase the quality of user experiences. In this paper, we propose a methodology of recommendation to the user with diversity. The impact of diversity on the system improves the user experience for new items. The aim of this paper is to provide a brief overview of diversification with the state of the art. Further, a similarity measure based on the heuristic similarity measure “proximity impact popularity” is used to provide a new model with better-personalized recommendation. The proposed approach gives profitability to many applications through better user experience and diverse item recommendations.

Keywords: Recommender system · Proximity impact popularity · Diversity · Accuracy

1 Introduction

A recommender system is a software tool designed to analyze users' past experiences and give a list of suggestions drawn from a large amount of information. Better suggestions lead to a more efficient system and a better user experience [1]. Many recommendation algorithms are developed to learn the user's past behavior, after which recommendations are generated according to the user's preference history [2]. The recommendation system is a technique used to provide suggestions to the user for the selection of items. These suggestions support various decision-making processes, i.e., the choice of items to buy, or screening a movie from a set of


movies, and similar choices in other online applications [3]. Different types of recommendation algorithms work in their respective domains with the knowledge available about users, and in the end different prediction algorithms are used to generate recommendations. The main families of recommendation algorithms are collaborative filtering, content-based recommendation, and hybrid recommendation. Collaborative filtering is based on information filtering, i.e., finding co-related patterns using techniques that involve collaboration among diverse users and items [4,27]. Apart from collaborative filtering, content-based algorithms try to recommend items to a user that are similar to that user's past preferences [5]. The similarity between items and users is calculated using different similarity metrics, e.g., the cosine similarity measure or the Pearson correlation coefficient, based on the content information provided by the user. Both collaborative filtering and the content-based approach have their pros and cons. Collaborative filtering suffers from the cold-start problem: for a new item or a new user, recommendation generation is impossible. Similarly, in content-based recommendation, specifying a precise content description is difficult. Collaborative filtering also suffers from sparsity, which means the number of existing items far exceeds the amount a person can explore, and content-based filtering sometimes has difficulty distinguishing the personal information of users [7]. In the past, evaluation of recommendation systems depended only on accuracy, i.e., how many relevant items are recommended to the user [6]. Nowadays, other evaluation measures such as diversity, serendipity and novelty are also used to characterize a sound recommendation system. In our proposed algorithm, we use diversity as a performance measure for our recommendation model, which gives users a diverse recommendation.
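As a hedged illustration of the similarity metrics mentioned above, the following minimal Python sketch computes the cosine similarity and the Pearson correlation coefficient between two users' rating vectors; the toy rating values are made up for illustration and are not from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two rating vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson_correlation(a, b):
    # Pearson correlation: cosine similarity of mean-centred vectors.
    a_c, b_c = a - a.mean(), b - b.mean()
    return np.dot(a_c, b_c) / (np.linalg.norm(a_c) * np.linalg.norm(b_c))

# Hypothetical ratings of two users over the same five items.
u1 = np.array([5.0, 3.0, 4.0, 1.0, 2.0])
u2 = np.array([4.0, 2.0, 5.0, 1.0, 3.0])
print(cosine_similarity(u1, u2), pearson_correlation(u1, u2))
```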

2 Related Work

Most recommendation systems follow the same steps for recommendation generation. They start with an information analysis of items and users, followed by user model generation, which stores the information processed during the analysis; these models are then used for recommendation generation [8]. When recommending items to users, it is essential to consider several performance metrics and not just the accuracy of a recommendation prediction. There are a variety of metrics for recommendation evaluation [6].

– Diversity - Diversity is the inclusion in the recommendation of different types of items that differ from the user's past preferences. Diversity is calculated using the intra-list similarity measure (see the sketch after this list):

  Diversity = \frac{1}{2} \sum_{i_j \in u} \sum_{i_k \in u} \mathrm{sim}(i_j, i_k)   (1)


sim(i_j, i_k) is the similarity between two items i_j and i_k commonly rated by the user u.
– Serendipity - Serendipity measures how surprising yet relevant the generated recommendations are for the user. It is calculated from the difference between the probability that an item i is recommended for a user u and the probability that item i is recommended for any other user:

  Serendipity = \sum_{u} \frac{RS_u \cup E_u}{|E_u|}   (2)

where RS_u is the recommendation generated for user u, E_u is the item set of user u, and |N| denotes the complete item set of the user.
– Novelty - Novelty is a fundamental quality of a recommendation system describing how effectively new items are added to the recommendation list, which also supports good accuracy:

  Novelty = \frac{U_x}{U_i}   (3)

U_x is the item set unknown to the user and U_i is the item set liked by the user U [9].
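A minimal sketch of how the intra-list measure in Eq. (1) and the novelty ratio in Eq. (3) could be computed for one user's recommendation list. The similarity function, genre vectors and item sets below are placeholders for illustration; they are not taken from the paper, and self-pairs are skipped in the double sum as a common convention.

```python
import numpy as np

def intra_list_diversity(items, sim):
    # Eq. (1): half the double sum of pairwise similarities over the list
    # (self-pairs skipped here); lower values indicate a more diverse list.
    return 0.5 * sum(sim(i, j) for i in items for j in items if i != j)

def novelty(recommended, known_items):
    # Eq. (3): fraction of recommended items the user has not seen before.
    unknown = [i for i in recommended if i not in known_items]
    return len(unknown) / len(recommended)

# Hypothetical genre vectors per movie; similarity = cosine of genre vectors.
genres = {"A": np.array([1.0, 0.0, 1.0]),
          "B": np.array([1.0, 0.0, 0.0]),
          "C": np.array([0.0, 1.0, 0.0])}

def sim(i, j):
    return float(genres[i] @ genres[j] /
                 (np.linalg.norm(genres[i]) * np.linalg.norm(genres[j])))

print(intra_list_diversity(["A", "B", "C"], sim))
print(novelty(["A", "B", "C"], known_items={"A"}))
```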

2.1 Diversity in Recommendation System: State of the Art

Diversity in recommendation systems was introduced to address the overfitting problem and has, in the past few years, become a topic discussed by many researchers in informative publications. Diversity in recommendation has a twofold purpose: the first, as mentioned above, is to mitigate overfitting, and the second is user satisfaction with recommendations through diversification. Bradley and Smyth described a new diversity-preserving retrieval algorithm based on a similarity measure, which delivers substantial improvements in recommendation diversity without compromising recommendation similarity [10]. They proposed three main strategies, including retrieval of a k-item set from the complete item set using bounded random selection, and introduced a diversity measure based on intra-list similarity. Lathia, Hailes et al. describe diversity with a time constraint: recommendations grow over time as new users and items are introduced to the system. The authors calculate diversity using the collaborative filtering (CF) approach by giving the user a list of top-n recommendations. The formula for diversity calculation is

  Diversity(L_1, L_2, N) = \frac{L_2 \setminus L_1}{N}   (4)

where L_1 and L_2 are ranked lists generated by the CF algorithm and N is the total number of items in the set. Fleder et al. examine the effect of recommender systems on the diversity of sales. To measure sales diversity, they


adopt the Gini coefficient, which is evaluated in a simulated environment for user purchase tracking. The Gini coefficient for sales is defined in terms of recommendation and diversity bias [12]:

  \mathrm{diversity\ bias}(G) = 1 - 2 \int_{0}^{1} L(u)\,du   (5)

Clarke et al. present an evaluation methodology that comprehensively rewards novelty and diversity. They define diversity as part of an nDCG measure to avoid ambiguity problems [13]:

  G(k) = \sum_{i=1}^{m} j(d_k, i)(1 - \alpha)^{r_{i,k-1}}   (6)

On the other end, Hu, Rong, et al. propose an approach based on a user study that compared an organization interface, which groups recommendations into categories, with a standard list interface in terms of perceived categorical diversity. They measure diversity through a survey conducted among 20 participants [14]. Vargas, Saúl, et al. proposed a methodology based on a binomial framework for genre diversity in recommender systems, along with an efficient greedy optimization technique to optimize binomial diversity [15]:

  BinomDiv(R) = Coverage(R) \times NonRed(R)   (7)

Hu, Liang, et al. stated that recommendation generation can be diversified by using session-context information with personalized user profiles. The authors use session-based wide-in-wide-out networks that are intended to efficiently learn session profiles across a large number of users and items [17]. Karakaya et al. proposed diversification using reranking algorithms that increase aggregate diversity using the ranked list of recommendations [19]. Wilhelm, Mark, et al. proposed a diversified recommendation approach for the live YouTube user feed page, using a statistical model of diversity based on determinantal point processes with set-wise optimization of recommendations [18]. Möller et al. study topic diversity in news recommendations using different diversity metrics from social science and democracy research [20]. Many researchers also focus on other aspects of diversity in recommendation, including serendipity, accuracy and their effects on diversity. Kotkov et al. proposed a serendipity-oriented greedy reranking algorithm which improves the serendipity of recommendations using feature diversification [21]. Apart from the trade-off between diversity and other recommendation metrics, Matt, Christian et al. described different types of diversity in recommendation: algorithmic recommendation diversity, perceived recommendation diversity, and sales diversity, and identified the effects of different recommendation algorithms and user perception on sales [23]. Bag, Sujoy et al. proposed a model for online companies offering personalized assistance to their consumers, suggesting a prediction model for the profitability of online companies by recommending various items to users [24]. Recently, Antikacioglu et al. gave two different system-wide diversity metrics; their proposed approach is formulated as subgraph selection on a bipartite graph representing users and items [25].

3 Proposed Approach

The recommendation system is a beneficial decision-support tool, and nowadays recommendation systems are an inevitable part of users' daily lives and of web services. The basic building blocks of a recommendation system are users and items. Numerous recommendation algorithms are based either on the feedback a user provides for an item in terms of reviews, tags and ratings, or on tracking user behavior in terms of likes and dislikes for items; based on this information, the algorithms make predictions. This article aims to provide recommendations to the user with item diversification. Diversity in the recommendation system means delivering items to the user that differ from the user's usual preferences. Suppose a user's preferred movie genres are action and science fiction, but, for some variety, he wishes to watch a movie belonging to the family genre; algorithms like collaborative filtering will keep recommending action and science-fiction movies, which is not helpful in terms of recommendation generation. Since recommendation systems use a large amount of data, several machine learning algorithms are employed for recommendation generation. In the proposed approach we use the K-means clustering algorithm to develop a personalized movie recommendation system with the MovieLens dataset. K-means is an unsupervised learning technique used for categorizing data; it depends on the hyperparameter k, which denotes the number of clusters used for data classification.

3.1 Diversification Algorithm

Diversification in our approach is achieved using the k-means algorithm together with the similarity measure "Proximity-Impact-Popularity" (PIP). The PIP similarity measure was described by Ahn, Hyung Jun in 2008 [16]. The steps of the diversification algorithm are as follows.

K-Means Clustering Algorithm is an unsupervised learning algorithm used to classify unlabelled data into clusters. Clusters in the k-means algorithm share the same set of properties. The algorithm works iteratively over each data point to cluster it using the provided features. The K-means algorithm works as follows (a sketch is given after this list):
– Define the number of clusters as k.
– Randomly select k data points and calculate the centroids without data shuffling.
– Keep iterating until the centroid values no longer change, i.e., the data values assigned to the clusters do not change.
– Compute the Euclidean distance from each data point to the centroids, assign each data point to the cluster with minimum distance, and then recompute the centroids.

As per the algorithm we define the cluster size as 20; it depends on the genre information, since there are twenty distinct genres present in the MovieLens dataset.
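A minimal single-machine sketch of this clustering step, assuming movies are represented by binary genre vectors (as in MovieLens) and using scikit-learn's KMeans with k = 20. The random genre matrix is a placeholder; this illustrates the idea rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical binary genre matrix: rows = movies, columns = 20 genre flags.
rng = np.random.default_rng(0)
genre_matrix = rng.integers(0, 2, size=(1000, 20))

# One cluster per genre, as described above (k = 20).
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
labels = kmeans.fit_predict(genre_matrix)

# labels[i] gives the cluster of movie i; the centroids are later used to
# pick candidate items from clusters other than the target user's own.
print(labels[:10], kmeans.cluster_centers_.shape)
```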


Proximity Impact Popularity (PIP) is a heuristic measure based on domain-specific data. PIP is more effective than other similarity calculations because it alleviates the cold-start problem, an important issue in recommendation systems. The PIP similarity is based on three different factors, i.e., Proximity, Impact, and Popularity:

  SIM(u_i, u_j) = \sum_{p \in C_{i,j}} PIP(r_{ip}, r_{jp})   (8)

r_{ip}, r_{jp} are the ratings of item p given by users i and j, and C_{i,j} is the set of items co-rated by both users.
– Agreement - The Boolean function Agreement(r_i, r_j) is based on the value R_m:

  R_m = \frac{MaximumRating + MinimumRating}{2}   (9)

  Agreement(r_i, r_j) = \begin{cases} False, & \text{if } (r_i > R_m \text{ and } r_j < R_m) \text{ or } (r_i < R_m \text{ and } r_j > R_m) \\ True, & \text{otherwise} \end{cases}   (10)

– Proximity - The absolute distance between two ratings is defined as

  D(r_i, r_j) = \begin{cases} |r_i - r_j|, & \text{if } Agreement(r_i, r_j) = True \\ 2 \times |r_i - r_j|, & \text{if } Agreement(r_i, r_j) = False \end{cases}   (11)

  Proximity(r_i, r_j) = \left( 2 \times (R_{max} - R_{min}) + 1 - D(r_i, r_j) \right)^2   (12)

– Impact - Impact(r_i, r_j) is defined as

  Impact(r_i, r_j) = \begin{cases} (|r_i - R_m| + 1)(|r_j - R_m| + 1), & \text{if } Agreement(r_i, r_j) = True \\ \dfrac{1}{(|r_i - R_m| + 1)(|r_j - R_m| + 1)}, & \text{if } Agreement(r_i, r_j) = False \end{cases}   (13)

– Popularity - Let α be the average rating of item p given by all users:

  Popularity(r_i, r_j) = \begin{cases} 1 + \left( \dfrac{r_i + r_j}{2} - \alpha \right)^2, & \text{if } (r_i > \alpha, r_j > \alpha) \text{ or } (r_i < \alpha, r_j < \alpha) \\ 1, & \text{otherwise} \end{cases}   (14)
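The PIP factors defined in Eqs. (9)-(14) can be computed with a short function. The sketch below follows those equations for a single pair of co-rated items, assuming a 1-5 rating scale; it is an illustration of the formulas, not the authors' implementation.

```python
def pip(ri, rj, r_min=1.0, r_max=5.0, alpha=3.0):
    """Proximity-Impact-Popularity score for one co-rated item (Eqs. 9-14).
    alpha is the item's average rating over all users (assumed known)."""
    r_m = (r_max + r_min) / 2.0                                  # Eq. (9)
    agreement = not ((ri > r_m and rj < r_m) or                  # Eq. (10)
                     (ri < r_m and rj > r_m))
    d = abs(ri - rj) if agreement else 2 * abs(ri - rj)          # Eq. (11)
    proximity = (2 * (r_max - r_min) + 1 - d) ** 2               # Eq. (12)
    impact = (abs(ri - r_m) + 1) * (abs(rj - r_m) + 1)           # Eq. (13)
    if not agreement:
        impact = 1.0 / impact
    if (ri > alpha and rj > alpha) or (ri < alpha and rj < alpha):
        popularity = 1 + ((ri + rj) / 2.0 - alpha) ** 2          # Eq. (14)
    else:
        popularity = 1.0
    return proximity * impact * popularity

def pip_similarity(ratings_i, ratings_j, item_means):
    # Eq. (8): sum of PIP scores over items co-rated by users i and j.
    common = set(ratings_i) & set(ratings_j)
    return sum(pip(ratings_i[p], ratings_j[p], alpha=item_means[p])
               for p in common)
```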

Recommendation Generation. The final step of our approach is recommendation generation, which is achieved using the two algorithms discussed in Sect. 3.1. From the k-means algorithm we get the cluster information of users, items, and ratings. Clustering is based on the genre information of the movies. We use the MovieLens dataset, which contains 20 distinct genres, so we obtain 20 different clusters. We calculate similarities between a user in one cluster and users in another cluster and thereby obtain an item set from the other cluster. We then calculate the predicted ratings for that item set using the PIP similarity and recommend the top-k items to the target user (Fig. 1); a sketch of this step follows.
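The paper does not spell out the rating-prediction formula, so the sketch below uses a standard neighbourhood-style weighted average with the PIP similarity as the weight; this formula is an assumption made for illustration, with the top-k selection at the end.

```python
def predict_rating(target_user, item, ratings, similarity):
    """Weighted-average prediction (an assumed formula, not stated in the
    paper): neighbours are users who rated `item`, weighted by similarity."""
    num = den = 0.0
    for user, user_ratings in ratings.items():
        if user != target_user and item in user_ratings:
            s = similarity(target_user, user)
            num += s * user_ratings[item]
            den += abs(s)
    return num / den if den else 0.0

def top_k(target_user, candidate_items, ratings, similarity, k=5):
    # Score every candidate item from the other cluster and keep the best k.
    scored = [(predict_rating(target_user, i, ratings, similarity), i)
              for i in candidate_items]
    return [item for _, item in sorted(scored, reverse=True)[:k]]
```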


Fig. 1. Flow-diagram for proposed approach

Table 1. Recommendation generation table

UserId | Recommendation generation | Genre
12 | Star Kid (1997) | Adventure—Children—Sci-Fi
12 | They Made Me a Criminal (1939) | Crime—Drama
12 | Someone Else's America (1995) | Comedy—Drama
12 | Saint of Fort Washington, The (1993) | Drama
12 | Prefontaine (1997) | Drama
31 | Marlene Dietrich: Shadow and Light (1996) | Documentary
31 | They Made Me a Criminal (1939) | Crime—Drama
31 | Star Kid (1997) | Adventure—Children—Sci-Fi
31 | Someone Else's America (1995) | Comedy—Drama
31 | Saint of Fort Washington, The (1993) | Drama

4 Experiment and Results

In this section, we present the results achieved using the proposed methodology discussed in Sect. 3.1. We present our top-5 recommendations for user ids 12 and 31 using the MovieLens 100k dataset. These users belong to different clusters, which were obtained using the k-means algorithm over genre information: user id 12 belongs to clusterId 0, and user id 31 belongs to clusterId 12. To find the optimal number of clusters, we also applied the so-called elbow method. It requires running the algorithm multiple times in a loop with a growing number of clusters and then plotting a clustering score as a function of the number of clusters; a short sketch is given after this paragraph. The optimal cluster values for different pairs of genres are 7, 17, 22, and 27. Some of the cluster analyses of the genre pairs Drama


Fig. 2. Cluster analysis of genre- Drama & Action

Fig. 3. Cluster analysis of genre- Drama & Comedy

& Action and Drama & Comedy are shown in Figs. 2 and 3, respectively. The final recommendations for the two different users with their respective genres are described in Table 1.
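The elbow method mentioned above can be reproduced with a short loop over candidate cluster counts, plotting the within-cluster sum of squares (inertia) against k; the feature matrix below is a placeholder and the range of k is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genre_matrix = rng.integers(0, 2, size=(1000, 20))   # placeholder features

ks = range(2, 31)
inertia = [KMeans(n_clusters=k, n_init=10, random_state=0)
           .fit(genre_matrix).inertia_ for k in ks]

# The "elbow" is the k after which the curve flattens out.
plt.plot(list(ks), inertia, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()
```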

5 Conclusion

The primary concern of any recommendation system is to provide accurate recommendations, but sometimes recommendations from the same domain make the user lose interest. Considering this limitation, we proposed an algorithm for diversification. In the future, we plan to address serendipity and stability issues in the recommendation system with the accuracy trade-off. Further, we want to examine the combination of diversification methods with various deep learning approaches so that we can learn a suitable plan for diversification.

References 1. Salton, G., McGill, M.J.: Introduction to Modern Information Retrieval. McGraw-Hill, New York (1983) 2. Kunaver, M., et al.: Increasing Top-20 diversity through recommendation postprocessing. In: Presutti, V., et al. (eds.) Semantic Web Evaluation Challenge. Springer, Cham (2014)


3. Ricci, F., Rokach, L., Shapira, B.: Introduction to recommender systems handbook. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P. (eds.) Recommender Systems Handbook, pp. 1–35. Springer, Boston (2011) 4. Terveen, L., Hill, W.: Beyond recommender systems: helping people help each other. HCI New Millenn. 1(2001), 487–509 (2001) 5. Su, X., Khoshgoftaar, T.M.: A survey of collaborative filtering techniques. Adv. Artif. Intell. 2009 (2009) 6. Gunawardana, A., Shani, G.: A survey of accuracy evaluation metrics of recommendation tasks. J. Mach. Learn. Res. 10, 2935–2962 (2009) 7. Çano, E., Morisio, M.: Hybrid recommender systems: a systematic literature review. Intell. Data Anal. 21(6), 1487–1524 (2017) 8. Kurapati, K., et al.: A multi-agent TV recommender. In: Proceedings of the UM 2001 Workshop “Personalization in Future TV” (2001) 9. Zhang, L.: The definition of novelty in recommendation system. J. Eng. Sci. Technol. Rev. 6(3), 141–145 (2013) 10. Bradley, K., Smyth, B.: Improving recommendation diversity. In: Proceedings of the Twelfth Irish Conference on Artificial Intelligence and Cognitive Science, Maynooth, Ireland (2001) 11. Lathia, N., et al.: Temporal diversity in recommender systems. In: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM (2010) 12. Fleder, D.M., Hosanagar, K.: Recommender systems and their impact on sales diversity. In: Proceedings of the 8th ACM Conference on Electronic Commerce. ACM (2007) 13. Clarke, C.L.A., et al.: Novelty and diversity in information retrieval evaluation. In: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM (2008) 14. Hu, R., Pu, P.: Helping users perceive recommendation diversity. In: DiveRS@RecSys (2011) 15. Vargas, S., et al.: Coverage, redundancy and size-awareness in genre diversity for recommender systems. In: Proceedings of the 8th ACM Conference on Recommender Systems. ACM (2014) 16. Ahn, H.J.: A new similarity measure for collaborative filtering to alleviate the new user cold-starting problem. Inf. Sci. 178(1), 37–51 (2008) 17. Hu, L., et al.: Diversifying personalized recommendation with user-session context. In: IJCAI (2017) 18. Wilhelm, M., et al.: Practical diversified recommendations on YouTube with determinantal point processes. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management. ACM (2018) 19. Karakaya, M.Ö., Aytekin, T.: Effective methods for increasing aggregate diversity in recommender systems. Knowl. Inf. Syst. 56(2), 355–372 (2018) 20. Möller, J., et al.: Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Inf. Commun. Soc. 21(7), 959–977 (2018) 21. Kotkov, D., Veijalainen, J., Wang, S.: How does serendipity affect diversity in recommender systems? A serendipity-oriented greedy algorithm. Computing, 1–19 (2018) 22. Wu, Q., et al.: Recent advances in diversified recommendation. arXiv preprint arXiv:1905.06589 (2019)


23. Matt, C., Hess, T., Weiß, C.: A factual and perceptional framework for assessing diversity effects of online recommender systems. Internet Res. 29(6), 1526–1550 (2019) 24. Bag, S., Ghadge, A., Tiwari, M.K.: An integrated recommender system for improved accuracy and aggregate diversity. Comput. Ind. Eng. 130, 187–197 (2019) 25. Antikacioglu, A., Bajpai, T., Ravi, R.: A new system-wide diversity measure for recommendations with efficient algorithms. arXiv preprint arXiv:1812.03030 (2018) 26. Yuan, B., et al.: One-class field-aware factorization machines for recommender systems with implicit feedbacks. Technical report. National Taiwan University (2019) 27. Tewari, A.S., Yadav, N., Barman, A.G.: Efficient tag based personalised collaborative movie reccommendation system. In: 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I). IEEE (2016)

Contribution on Arabic Handwriting Recognition Using Deep Neural Network
Zouhaira Noubigh1(B), Anis Mezghani2, and Monji Kherallah3
1 Higher Institute of Computer Science and Communication Technologies, University of Sousse, Sousse, Tunisia, [email protected]
2 Higher Institute of Industrial Management, University of Sfax, Sfax, Tunisia, [email protected]
3 Faculty of Sciences of Sfax, University of Sfax, Sfax, Tunisia, [email protected]

Abstract. Arabic handwriting recognition is considered among the most important and challenging recognition research subjects due to the cursive nature of the writing and the similarities between different character shapes. In this paper, we investigate the problem of handwritten Arabic recognition. We propose a new architecture combining CNN and BLSTM based on a character-model approach with a CTC decoder. The handwritten Arabic database KHATT is used for the experiments. The results demonstrate a clear performance advantage of the combined CNN-BLSTM approach compared to the approaches used in the literature. Keywords: Deep learning · CNN · LSTM · Arabic database · Handwriting recognition

1 Introduction
In recent years, offline handwriting recognition has been considered a very important research area for several pattern recognition applications. Various handwriting recognition systems have been proposed, and their difficulty depends on the writing style of the units to be recognized [1]. In fact, recognizing characters or digits is significantly easier than recognizing cursive words or text lines. Therefore, earlier handwriting recognition systems were usually only able to recognize single characters with very small vocabularies [2]. Nowadays, the accelerating progress and availability of low-cost computer hardware and the growing difficulty of the tackled problems encourage the use of computationally expensive techniques. Recent recognizers are therefore developed to deal with continuous sequences in order to recognize isolated words and text lines extracted from handwritten documents, and recent research focuses on open-vocabulary recognition with less constrained types of documents [3]. In the last few decades, Arabic handwriting recognition (AHR) has attracted considerable attention and has become one of the challenging areas of research in the field


of document image processing. The cursive nature of the Arabic script, the similarity between many Arabic character shapes and the unlimited variation in human handwriting make AHR a complicated task and present some specific challenges [4]. Therefore, fewer efforts have been devoted to the recognition of Arabic text compared to the recognition of text in other scripts like Latin and Chinese. Most of the recent approaches for Arabic handwritten text/word recognition have used HMM-based techniques or shallow Artificial Neural Networks (ANN) [5]. Recently, Deep Learning (DL), a subfield of machine learning [6, 7], has shown great performance improvements for robust classification, recognition, and segmentation. The most famous deep learning techniques are the Convolutional Neural Network (CNN) and different variations of the Recurrent Neural Network (RNN) such as Long Short-Term Memory (LSTM), Bidirectional LSTM, and Multidimensional LSTM [7]. Deep convolutional neural networks have provided an efficient solution for handwritten character and digit recognition [8, 9]. LSTM has shown promising performance and, combined with an output CTC layer, has proved to be an efficient model for sequence labeling compared to Hidden Markov Models (HMM) and other models [10–12]. In this paper, a new contribution for Arabic handwriting recognition based on the combination of the two famous deep learning techniques cited above, CNN and BLSTM, is presented. The paper is organized as follows: Sect. 2 reports related works based on the deep learning approach for text recognition. Section 3 details the proposed CNN-BLSTM based method for Arabic character recognition. Experimental results obtained on the KHATT database are presented in Sect. 4 with a comparison study. Finally, Sect. 5 presents the conclusion of the paper.

2 Related Works
Recent research works investigate combining deep learning technologies to improve recognition results. Shi et al. [13] were the first to propose the combination of a deep CNN and an RNN with a CTC decoder for image-based sequence recognition. Afterwards, many approaches for handwritten text recognition were inspired by this deep architecture. In this section, we present the important works based on this approach proposed for Arabic handwriting recognition. The same architecture combining CNN and BLSTM was used by Suryani et al. [14] with a hybrid HMM decoder instead of CTC. CNNs were applied on both isolated characters and text lines processed with a sliding-window technique, and the proposed approach was tested on offline Chinese handwriting datasets. Rawls et al. [15] published a CNN-LSTM model where the CNN is used for feature extraction and bidirectional LSTMs for sequence modeling. In this work, the authors presented a comparison between the types of features provided and proved that the CNN model is better than both existing handcrafted features and a simpler neural model consisting entirely of fully connected layers. Results are presented on English and Arabic handwritten data, and on English machine-printed data. For Arabic handwriting recognition, AL-Saffar et al. proposed a review that presented deep learning algorithms for Arabic handwriting recognition [2]. The authors showed that the first successful DL-based systems proposed for Arabic characters were based on the Convolutional Neural Network (CNN) [16, 17] and Deep Belief Networks [18, 19].


BenZeghiba proposed a comparative study based on four different optical modeling units for offline Arabic text recognition [20]. These units are the isolated characters, extended isolated characters with the different shapes of Lam-Alef, the character shapes within their contexts and the recently proposed sub-character units that allow sharing similar patterns across different character shapes. Ahmad et al. [21] proposed an MDLSTM-based Arabic character recognition system in which Connectionist Temporal Classification (CTC) is used as a final layer to align the predicted labels according to the most probable path. Jemni et al. [22] proposed an Arabic handwriting recognition system based on multiple BLSTM-CTC combinations. The paper presented a comparative study of different combination levels of BLSTM-CTC recognition systems trained on different feature sets: low-level fusion, mid-level combination methods and high-level fusion. The experiments were conducted on the Arabic KHATT dataset.

3 Proposed Method
In this section, we describe the architecture of the proposed system. It is a hybrid approach based on combining CNN and BLSTM for Arabic handwritten text-line recognition. It consists of three main steps, as presented in Fig. 1. The first step is the preprocessing of the input image. The preprocessing stage is necessary in order to reduce the generated noise and eliminate any source of variability introduced during the image scanning phase, especially given the challenging issues of the text-line KHATT dataset. In this work, we applied the same preprocessing used in [30], including discarding any additional white regions, binarization, and skew detection and correction.

Fig. 1. Proposed approach steps

The two principal steps in the proposed recognition system are feature extraction with CNN and sequence modeling based on BLSTM and CTC.


3.1 Feature Extraction
In handwriting recognition, the purpose of the feature extraction step is to capture the essential characteristics of a character or a word that make it different from another. Feature extraction techniques differ from one application to another depending on the complexity of the studied script and the image quality. Therefore, the selection of the feature extraction method remains the most important step in the recognition process. The feature extraction techniques used for handwritten texts can be classified into two global categories: handcrafted feature methods and non-handcrafted, or learned, feature methods. Recent deep learning networks, especially Convolutional Neural Networks (CNNs), provide efficient solutions for feature extraction, where deep layers act as a set of feature extractors. They extract non-handcrafted, learned features which are generic and independent of any specific classification task. The convolution operation generates many maps that present different features extracted from the original image. The idea behind this approach is to discover multiple levels of representation so that higher-level features can represent the semantics of the data, which in turn can provide greater robustness to intra-class variability. In this paper, we use learned features for our handwriting recognition system. The input is a grayscale image of size 64 × 1024. The first layer in the CNN is the convolution layer, in which a sliding matrix called a filter is used to find features everywhere in the image. The CNN multiplies each pixel in the image with each value in the filter, for each filter, and the output of this layer is a set of filtered images. The architecture of our system consists of 6 convolution layers with filters of size 3 × 3. The second layer is a nonlinearity layer: the Rectified Linear Unit (ReLU) activation function is applied to produce an output after each convolution. The ReLU objective is to introduce non-linearity into the network, since convolution is a linear operation. A max pooling layer is used to summarize image regions and outputs a downsized version of the previous layer. The CNN is applied over a sequence of images of size 64 × 64 obtained from the text-line image using a horizontal sliding window scanning the image from right to left. This results in a multi-channel output of dimension 1 × 16 × 256, where 256 is the number of filter maps in the last convolution layer and the two other dimensions depend on the amount of pooling in the CNN. The architecture details are illustrated in Table 1 and Fig. 2.

Table 1. CNN layers configuration and dropout

Type | Configuration
Input | 64 × 64 gray scale image
Conv1 | #maps: 32, k: 3 × 3
Max pooling | Window: 2 × 2, s: 2
Conv2 | #maps: 64, k: 3 × 3
Max pooling | Window: 2 × 2, s: 2
Conv3 | #maps: 128, k: 3 × 3
Max pooling | Window: 2 × 1, s: 2
Conv4 | #maps: 128, k: 3 × 3
Max pooling | Window: 2 × 1, s: 2
Conv5 | #maps: 256, k: 3 × 3
Max pooling | Window: 2 × 1, s: 2
Conv6 | #maps: 256, k: 3 × 3
Max pooling | Window: 2 × 1
Output | 1 × 16 × 256
Dropout | Dropout ratio = 0.7
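A minimal PyTorch sketch of a convolutional stack matching the configuration in Table 1 (six 3 × 3 convolution layers with the pooling windows listed). The padding and the exact pooling strides are assumptions where the table is ambiguous; this is an illustration, not the authors' code.

```python
import torch
import torch.nn as nn

# Feature extractor following Table 1: input 64x64 grayscale window,
# output 256 maps of size 1x16 (assuming padding=1 so convolutions keep size).
cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 2)),            # 64x64 -> 32x32
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 2)),            # 32x32 -> 16x16
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 1)),            # 16x16 -> 8x16
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 1)),            # 8x16 -> 4x16
    nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 1)),            # 4x16 -> 2x16
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=(2, 1)),            # 2x16 -> 1x16
    nn.Dropout(p=0.7),
)

window = torch.randn(1, 1, 64, 64)               # one sliding-window image
features = cnn(window)
print(features.shape)                            # torch.Size([1, 256, 1, 16])
```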

3.2 Sequence Modeling
Few works on Arabic handwriting recognition are based on BLSTM, although this model has proved its performance for other scripts. The successful results of deep BLSTM networks in several applications motivated us to use them for Arabic text recognition. Deep BLSTM networks for text recognition are usually combined with the Connectionist Temporal Classification (CTC) function. This loss function is a variant of the forward-backward algorithm that enables training of LSTM networks by inferring the ground truth at the frame level from the word-level transcription. CTC allows the network to predict the sequence of output labels directly without the need to segment the input. The first and simplest approximation for decoding the RNN output is best path decoding, presented in [5]; a short sketch is given below. Another decoding algorithm, called beam search, is described in the paper by Hwang and Sung [3]. Multiple candidates for the final labeling, called beams, are iteratively calculated. At each time step, each beam labeling is extended by all possible characters, and the original beam is also copied to the next time step. The beam width W defines the number of (best) beams to keep and determines the complexity and accuracy of the algorithm: if W is large enough, the probability of keeping the correct labeling approaches one, but the algorithm becomes too complex. In our system, we use the beam search decoding algorithm with BLSTM, and W is fixed experimentally to 30. The proposed system presents three BLSTM layers with 512 neurons in each layer and direction. The first layer gets its input from the preceding CNN feature extraction stage. The RNN output is a matrix of size T × (C + 1), where T denotes the time-step length and C is the number of characters, with a pseudo-character called blank added to the RNN output. This matrix is fed into the CTC beam search decoding algorithm. The probability of a path is defined as the product of all character probabilities on this path. A single character in a labeling is encoded by one or multiple adjacent occurrences of this character on the path, possibly followed by a sequence of blanks.
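As a concrete illustration of the simplest decoding strategy mentioned above (best path decoding), the following sketch takes the most likely label per time step, collapses repeated labels and removes blanks. The character set and the T × (C + 1) matrix are placeholders; the beam search variant actually used in the paper is more involved.

```python
import numpy as np

def best_path_decode(probs, charset, blank=0):
    """Greedy CTC decoding: pick the most likely label per time step,
    collapse consecutive repeats, then drop the blank label."""
    best = probs.argmax(axis=1)                 # shape (T,) of label indices
    decoded, prev = [], blank
    for idx in best:
        if idx != prev and idx != blank:
            decoded.append(charset[idx - 1])    # index 0 is reserved for blank
        prev = idx
    return "".join(decoded)

# Toy output matrix of shape T x (C + 1): 5 time steps, 2 characters + blank.
T_by_C = np.array([[0.1, 0.8, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.1, 0.1, 0.8]])
print(best_path_decode(T_by_C, charset="ab"))   # -> "ab"
```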


Fig. 2. CNN-BLSTM architecture

4 Experiments and Discussion
4.1 KHATT Database
In this approach, we used the offline handwritten Arabic text database KHATT, which was created by King Fahd University of Petroleum & Minerals, Technical University of Dortmund and Braunschweig University of Technology [23]. The KHATT database contains 4000 grayscale paragraph images and their ground truth as described in [29]. It consists of Arabic handwriting scanned at different resolutions (200, 300 and 600 dpi) from 1,000 distinct male and female writers representing diverse countries, age groups, handedness and education levels. 2000 of these images contain similar text, each covering all Arabic characters and shapes, whereas the remaining 2000 images contain free texts written by the writers on any topic of their choice in an unrestricted style.
4.2 System Settings
Sliding Windows. A horizontal sliding window is used to scan the image from right to left, the direction in which Arabic text is written. The height of the window is equal to the height of the text-line image, which has been normalized to 64 pixels. In the last convolutional layer, 16 feature vectors are extracted from the feature maps for each window of size 64 × 64. Those features are the inputs of the BLSTM layer.


Dropout. Dropout is a regularization approach used in neural networks. It prevents overfitting and helps reduce interdependent learning between the neurons. The term "dropout" refers to dropping out (shutting down) units in a neural network. A dropped-out unit is temporarily removed from the network, and the choice of this unit is random; all the incoming and outgoing connections of this unit are ignored. Applying dropout to a neural network amounts to sampling a "thinned" network from it, consisting of all the units that are not dropped [24]. Dropout has been reported to improve the performance of neural networks on supervised learning tasks on several benchmark databases. For our recognition system, a first dropout layer is applied after the CNN layers with dropout ratio 0.5 and a second layer is applied to the RNN cells with dropout ratio 0.8.
4.3 Results and Discussion
In these experiments, the inputs were preprocessed text-line images extracted from the KHATT database. These images were passed through the CNN layers followed by three BLSTM layers. The experiments are carried out on full text-line images of the KHATT dataset. The training set has 8505 text-lines, the test set has 1867 text-lines, and the validation set contains 1584 text-line images. In this work, we report the Word Error Rate (WER) and Character Error Rate (CER), as presented in Table 2. Furthermore, Fig. 3 shows the plot of errors corresponding to each epoch during training and validation.

Table 2. Performance regarding CER and WER

Dataset | CER % | WER %
Training set | 8.53 | 19.24
Test set | 8.63 | 20.17
Validation set | 15.13 | 39.53

Fig. 3. Character and Word Error rates corresponding to each epoch during training and testing on KHATT database


Table 3 presents a comparison between our recognition system and the best results reported so far on the KHATT dataset. In this study, we report for each system the part of the KHATT dataset used, the feature extraction technique, the vocabulary, the language model and the results. It appears clearly from these results that using a language model improves system performance. Our contribution for Arabic handwritten text-line recognition is the first CNN-BLSTM based approach that uses the KHATT dataset for both training and testing. The proposed approach is based on a CNN-BLSTM architecture with a CTC beam search decoder and uses only the KHATT training set for training. The results demonstrate a clear performance advantage for the combined CNN-BLSTM approach. Furthermore, using CNN-BLSTM instead of MDLSTM, which is the most widely used model for Arabic handwriting recognition, is less expensive in memory and computing time and achieves better performance.

5 Conclusion
A new contribution for Arabic handwriting recognition is presented in this paper. The proposed architecture is a deep CNN-BLSTM combination based on a character-model approach. The KHATT dataset, one of the challenging datasets containing Arabic handwritten text lines, is used in the experiments for training and testing. The obtained results are very promising and encouraging. In fact, other preprocessing steps could further improve the quality of the input images. Furthermore, it will be interesting to improve the performance of the proposed approach with language models and to test it on a large-vocabulary Arabic corpus.

Table 3. Comparison study

Paper reference: BenZeghiba et al. [25]; BenZeghiba [20]; Jemni et al. [22]; Our approach
Model: MDLSTM + CTC; MDLSTM + CTC; BLSTM + CTC; CNN + BLSTM + CTC
Database: Unique text-lines of KHATT dataset (4,428 for train, 959 for test, 876 for validation); Full KHATT dataset; KHATT database (train: 9475, validation: 1901, test: 2007); Full KHATT dataset
Features: CNN for features extraction; Segment Based Feature extraction + Distribution-Concavity (DC) based features; Raw pixels; Raw pixels
Vocabulary: 18K running words in the 3-gram LM training corpus, 23K distinct words (extracted from the KHATT Corpus); 23K distinct words (extracted from the KHATT corpus); About 20K–30K words (a hybrid vocabulary that incorporates both the most frequent words and the resulting PAWs); About 20K–30K words (a hybrid vocabulary that incorporates both the most frequent words and the resulting PAWs)
Language model: LM based hybrid word/PAW 3-grams; No; LM based Part-Of-Arabic-Word 3-grams; LM based hybrid word/PAW 3-grams; LM based word 3-grams; 3-gram LM
Character error rate: 8%; 16.27%; 7.85%
Word error rate: 20.1%; 29.13%; 13.52%; 24.1%; 31.3%; 30.9%; 37.8%



References 1. Alginahi, Y.M.: A survey on Arabic character segmentation. Int. J. Doc. Anal. Recogn., 1–22 (2002) 2. Al-saffar, A., Awang, S., Al-saiagh, W., Tiun, S., Al-khaleefa, A.S.: Deep learning algorithms for Arabic handwriting recognition. Int. J. Eng. Technol. 7(3.20), 344–353 (2018) 3. Wigington, C., Stewart, S., Davis, B., Barrett, B., Price, B., Cohen, S.: Data augmentation for recognition of handwritten words and lines using a CNN-LSTM network. In: ICDAR (2017) 4. Al-hadhrami, A.A.N., Allen, M., Moffatt, C., Jones, A.E.: National characteristics and variation in Arabic handwriting. Forensic Sci. Int. 247, 89–96 (2015) 5. Parvez, M.T., Mahmoud, S.A.: Offline Arabic handwritten text recognition. ACM Comput. Surv. 45(2), 1–35 (2013) 6. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015) 7. Lecun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015) 8. Ly, N.: Deep convolutional recurrent network for segmentation-free offline handwritten Japanese text recognition. In: ICDAR (2017) 9. Mudhsh, M.A., Almodfer, R.: Arabic handwritten alphanumeric character recognition using very deep neural network. Information 8(3), 105 (2017) 10. Messina, R., Louradour, J.: Segmentation-free handwritten Chinese text recognition with LSTM-RNN. In: ICDAR, pp. 171–175 (2015) 11. Sabir, E., Del Rey, M., Rawls, S., Del Rey, M., Del Rey, M.: Implicit language model in LSTM for OCR. In: ICDAR (2017) 12. Wu, Y., Yin, F., Chen, Z., Liu, C.: Handwritten Chinese text recognition using separable multi-dimensional recurrent neural network. In: ICDAR (2017) 13. Shi, B., Bai, X., Yao, C.: An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Anal. Mach. Intell. 39(11), 2298–2304 (2017) 14. Suryani, D., Doetsch, P., Ney, H.: On the benefits of convolutional neural network combinations in offline handwriting recognition. In: International Conference on Frontiers in Handwriting Recognition, ICFHR, pp. 193–198 (2017) 15. Rawls, S., Cao, H., Kumar, S., Natarajan, P.: Combining convolutional neural networks and LSTMs for segmentation-free OCR. In: 2017 14th IAPR International Conference on Document Analysis and Recognition, pp. 155–160 (2017) 16. Elleuch, M., Mokni, R., Kherallah, M.: Offline Arabic handwritten recognition system with dropout applied in deep networks based-SVMs. IEEE (2016) 17. Amrouch, M., Rabi, M.: Deep neural networks features for Arabic handwriting recognition. In: International Conference on Advanced Information Technology, Services and Systems, pp. 138–149 (2017) 18. Porwal, U., Zhou, Y, Govindaraju, V.: Handwritten Arabic text recognition using deep belief networks. In: 21st International Conference on Pattern Recognition, November, pp. 302–305 (2012) 19. Alkhateeb, J.H.: DBN – based learning for Arabic handwritten digit recognition using DCT features. In: 6th International Conference on CSIT, pp. 222–226 (2014) 20. Benzeghiba, M.F.: A comparative study on optical modeling units for off-line Arabic text recognition. In: ICDAR (2017) 21. Ahmad, R., Naz, S., Afzal, M.Z., Rashid, S.F., Liwicki, M., Dengel, A.: The impact of visual similarities of Arabic-like scripts regarding learning in an OCR system. In: ICDAR (2017) 22. Jemni, S.K., Kessentini, Y., Kanoun, S., Ogier, J.M.: Offline Arabic handwriting recognition using BLSTMs combination. In: Proceedings - 13th IAPR International Workshop on Document Analysis Systems, DAS 2018, pp. 31–36 (2018)


23. Alshayeb, M., et al.: KHATT: an open Arabic offline handwritten text database. Pattern Recognit. 47(3), 1096–1112 (2013) 24. Cicuttin, A., et al.: A programmable System-on-Chip based digital pulse processing for high resolution X-ray spectroscopy. In: 2016 International Conference on Advances in Electrical, Electronic and Systems Engineering, ICAEES 2016, vol. 15, pp. 520–525 (2017) 25. Benzeghiba, M.F., Louradour, J., Kermorvant, C.: Hybrid word/Part-of-Arabic-word language models for Arabic text document recognition. In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR 2015, vol. 2015, pp. 671–675, November 2015

Analyzing and Enhancing Processing Speed of K-Medoid Algorithm Using Efficient Large Scale Processing Frameworks
Ayshwarya Jaiswal, Vijay Kumar Dwivedi(B), and Om. Prakash Yadav
UCER, Prayagraj, Uttar Pradesh, India
[email protected]

Abstract. The k-medoid algorithm has recently become a highly active and much-discussed topic. It is better than k-means as it is more robust and less sensitive to outliers, but it has its own drawbacks: the number of medoids has to be given in advance, which is hard to determine, and the initial k clustering centers are chosen at random. This article focuses on the new modified k-medoid++ algorithm, which is proposed to increase the processing speed and efficiency of the k-medoid algorithm. However, not only does modifying the algorithm increase the processing speed; selecting an appropriate framework to run the algorithm efficiently has its own benefits. Apache Hadoop and Spark provide effective open-source solutions for big data. Many researchers make incorrect interpretations about these frameworks regarding their performance and efficiency. In this paper, the performance of both frameworks is compared by implementing the simple k-medoid algorithm and then selecting the appropriate tool for the modified k-medoid++ algorithm. It was also observed, on implementing the k-medoid algorithm, that selecting the initial medoids randomly gave inconsistent results. Keywords: K-medoid algorithm · Hadoop · Spark · MLlib · DBSCAN-KD

1 Introduction
With the rapid increase in digital data generated from different sources, both in size and in complexity, it has become impossible to analyze such sheer volumes of unstructured data with traditional tools. Traditional data storage and processing systems are not efficient enough to handle such large data. This requirement prompted the development of many big-data processing technologies. Among the technologies used to handle, process and analyze big data, the most effective and popular in the field of distributed and parallel processing are Apache Hadoop and Spark. Both are open-source platforms and have been by far the most in-demand technologies associated with big data storage and processing of different types


of data. Spark, which is an alternative to the traditional batch map/reduce model, has been designed to also run on top of Hadoop (as one of its deployment options). Its model is discussed in detail further in this article. There are many clustering algorithms, among which k-medoid is one of the best known [5]. Much research is going on to decrease its computation time and increase its efficiency. In this article, the new modified k-medoid++ algorithm [4] is proposed to be implemented on Spark; it is an enhanced version of the simple k-medoid algorithm, as it removes the drawbacks of the traditional k-medoid algorithm. The k-medoid algorithm is computationally expensive and cannot be efficiently applied to large datasets. It faces issues such as the number of medoids needing to be given in advance, which is hard to determine, and being chosen at random. Choosing medoids at random increases the number of iterations, as some of the clustering centers lie in the same clusters. This issue can be resolved by using a density-based clustering algorithm to find the initial medoids, i.e., the initial seeds, and then proceeding with the k-medoid algorithm. This paper proposes an appropriate solution for the k-medoid algorithm. It also performs an experimental analysis on Spark and Hadoop using the simple k-medoid algorithm, which yields interesting results. The paper structure is as follows. Section 2 is a brief description of previous work. Section 3 describes the Spark architecture and working. Section 4 discusses the benefits of Spark over Hadoop. Section 5 proposes the new modified k-medoid++ algorithm. The experimental analysis is done in Sect. 6, followed by the conclusion in Sect. 7.

2 Related Work
Research paper [12] discussed and analyzed various frameworks for machine learning. Research paper [13] discussed Spark features and mentioned that Spark is 100x faster in memory and 10x faster when running on disk than Hadoop; it described working with Spark using Hadoop. Research paper [11] analyzed Spark's primary framework and core technologies and ran a machine learning instance on it. As per previous papers, not much research work has been done on Spark. Research paper [1] performed experiments on Weka and Spark MLlib and proposed future work focusing on more practical experiments with Spark MLlib, utilizing a variety of bigger datasets. Research paper [3] implemented and tested their approach for the k-medoid algorithm on Spark. Paper [6] implemented their k-medoid algorithm on MapReduce using an approach which can achieve parallelism independent of the number of k clusters to be formed. Research paper [8] proposed HK-medoid, a parallel k-medoid algorithm based on Hadoop to break the big data limits; they experimentally showed that it has linear speed-up and good clustering results for big data. Much research has been done on the optimal search for initial medoids. Research paper [19] proposes an algorithm which uses CLARA and a triangular geometry method for the optimal search of medoids. Both these methods use random samples of the datasets and are not suitable for large multidimensional datasets. Research paper [7] attempts to further refine the solution by using the DBSCAN method. Paper [20] implemented the DBSCAN method for the optimal search of medoids on MapReduce.


Research paper [2] uses the KD-tree method in the DBSCAN algorithm for faster implementations. Research paper [9] compared DBSCAN-KD with other density-based clustering algorithms.

3 Spark Architecture
Apache Spark is among the most in-demand big data frameworks for parallel computing. It provides a combination of many features such as in-memory computation, speed, polyglot support, scalability and fault tolerance. It provides additional APIs for different languages to offer higher-level support in various contexts. It stores datasets in memory, which makes it up to 100x faster than Hadoop. Another feature is that, in a cluster, it allows data to be loaded by the user into the memory of the cluster and queried repeatedly. It has a well-defined layered architecture. The layers and components of Spark are loosely coupled, and its various extensions and libraries are integrated with it. It is designed around the two main abstractions defined in Sect. 3.1 and 3.2.
3.1 Resilient Distributed Dataset (RDD)
The RDD is the building block of any Spark application. It is an immutable (read-only), fundamental collection of elements/items that are processed in parallel, i.e., the data items are partitioned and stored in memory on the worker nodes of the Spark cluster. It has the ability to rebuild data on failure. RDDs are of two types: first, files stored on HDFS (Hadoop datasets) and second, parallelized collections built from existing Scala collections.
3.2 Directed Acyclic Graph (DAG)
The DAG is the scheduling layer which implements stage-oriented scheduling. Spark can construct DAGs that contain many stages, while MapReduce constructs a graph in two stages, Map and Reduce. The DAG abstraction helps remove the Hadoop MapReduce multi-stage [16] execution model, which provides a performance enhancement over Hadoop. The DAG operations are created by default in any program.

Fig. 1. Spark basic architecture


Figure 1 shows the basic architecture of Spark. This framework uses a master-slave architecture that accommodates a driver (acting as the master on the master node) and many executors (running across the worker nodes). The executors are distributed agents responsible for task execution. They interact with the storage system and store the computed result data in memory, in cache or on hard disk drives. Each executor of a Spark application stays alive for the entire life cycle of that application.
Working. When the user application code is submitted, the driver implicitly converts the code containing transformations and actions into a logical Directed Acyclic Graph (DAG). The DAG is converted into a physical execution plan with a set of stages after certain optimizations, such as pipelining transformations. Spark then creates small physical execution units, referred to as tasks, under each stage; these are bundled and sent to the Spark context, which was created when the main program of the application was called by the driver program. The execution of jobs within the cluster is collectively controlled by the Spark driver and the Spark context. The driver program coordinates with the cluster manager (either Spark's own standalone cluster manager, YARN or Mesos) to oversee many other jobs, such as negotiating for resources, allocating resources and splitting a job into multiple smaller jobs which are then distributed to the worker nodes. The RDD created within the Spark context is distributed and cached across several worker nodes. The commencement of executors on the worker nodes is done by the cluster manager on behalf of the driver. The executors on the worker nodes start executing the various tasks assigned by the driver program. The lifetime of an executor is the same as that of the Spark application. All executors are terminated when the driver program's main() method exits, i.e., when the stop() method of the Spark context is called, and the resources are released from the cluster manager.
MLlib. The Spark MLlib library, which is tightly integrated on top of Spark, eases the development of efficient large-scale learning algorithms. Spark MLlib's scalability, simplicity and language compatibility help solve iterative data problems faster. Spark MLlib provides an ultimate performance gain and is about ten times faster than Hadoop's Apache Mahout due to the distributed memory-based Spark architecture. It leverages the high-level libraries packaged with the Spark framework: Spark Core, Spark SQL, Spark Streaming and GraphX. It implements a variety of machine learning components, ranging from ensemble learning and principal component analysis (PCA) to optimization and clustering analysis.
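A small PySpark sketch illustrating the RDD abstraction described above: data is parallelised across the cluster, cached in memory, and re-queried without re-reading from disk. The rating tuples are made-up values used only for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
sc = spark.sparkContext

# Hypothetical (userId, movieId, rating) tuples distributed as an RDD.
ratings = sc.parallelize([(12, 101, 4.0), (12, 102, 3.5),
                          (31, 101, 5.0), (31, 103, 2.0)])
ratings.cache()                       # keep the partitions in executor memory

# Transformations build the DAG lazily; the actions below trigger execution.
per_user_avg = (ratings.map(lambda r: (r[0], (r[2], 1)))
                        .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                        .mapValues(lambda s: s[0] / s[1]))
print(per_user_avg.collect())         # repeated queries reuse the cached RDD
print(ratings.count())

spark.stop()
```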

4 Benefits of Spark over Hadoop
Spark has gained importance in recent years because of its significant advantages over Hadoop MapReduce, though both are open source and support cross-platform deployment. MapReduce is extensively embraced for processing large, complex datasets with a parallel distributed algorithm on a cluster, but MapReduce has major drawbacks which make it less popular.


Firstly, because of data replication, serialization and disk input/output operations, data sharing is slow in MapReduce. Interactive and iterative applications both require fast data sharing across parallel jobs, but Hadoop MapReduce spends most of its time doing HDFS read and write operations. Addressing this problem, a specialized framework called Apache Spark was developed. It removes the drawbacks of MapReduce in terms of speed and more. Its main feature is the Resilient Distributed Dataset (RDD), which supports in-memory computation, i.e., it stores the memory state as an object across jobs. The object is sharable among those jobs, which makes data sharing 10x to 100x faster than via disk and network. Hadoop MapReduce persists results back to disk after each map or reduce action, while Spark can cache data in memory for further iterations. As a result, it enhances system performance and outperforms Hadoop MapReduce. Spark is platform independent: it can run on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud, and it can access diverse data sources. It is easier to program and includes an interactive mode, graph processing and machine learning capabilities. Hadoop MapReduce is more difficult to program, but many tools are available to make it easier. Another important fact is that Hadoop is built only for batch processing, whereas Spark handles batch as well as real-time streaming data processing. Despite these advantages, Spark lacks in one aspect: if a process crashes in the middle of execution, Spark has to start processing from the beginning, whereas MapReduce continues from where it left off; hence Hadoop is slightly more fault tolerant in this respect. Spark is cost effective, since less hardware can perform the same tasks much faster, as is proved further in the experimental analysis.

5 Proposed Modified K-Medoid++ Algorithm

The proposed modified k-medoid++ algorithm uses the DBSCAN algorithm with the KD-tree method, since plain DBSCAN does not scale well to high-dimensional datasets and forms very few clusters even for large datasets. Experiments conducted in MATLAB also show that the silhouette measure of DBSCAN is lower than that of DBSCAN-KD, which makes DBSCAN less efficient than DBSCAN-KD. KD stands for k-dimensional: when multidimensional keys are needed, the KD-tree plays its role. DBSCAN-KD works efficiently on multidimensional large datasets; it uses the KD-tree indexing structure for an effective search of neighboring points, which reduces the time complexity. The two parameters EPS (the threshold value, epsilon) and MinPts (the minimum number of points in any cluster) are set as follows: MinPts is set to 4, and the 4-dist value is calculated and used as EPS, since k = 4 gives the most promising results, while values of k > 4 require more calculations without the values differing significantly.
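As a concrete illustration of this parameter choice, the sketch below picks eps from the sorted 4-dist curve and runs DBSCAN with KD-tree neighbor search using scikit-learn; the percentile used as a stand-in for the knee of the 4-dist curve is an assumption, not a value from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def dbscan_kd(X, min_pts=4, knee_percentile=90):
    # distance of every point to its 4th nearest neighbor (the "4-dist");
    # min_pts + 1 because each query point is returned as its own nearest neighbor
    nn = NearestNeighbors(n_neighbors=min_pts + 1, algorithm="kd_tree").fit(X)
    dist, _ = nn.kneighbors(X)
    four_dist = np.sort(dist[:, -1])
    eps = np.percentile(four_dist, knee_percentile)   # stand-in for the knee point

    labels = DBSCAN(eps=eps, min_samples=min_pts, algorithm="kd_tree").fit_predict(X)
    return labels, eps

X = np.random.rand(500, 2)                            # toy data; the paper uses UCI datasets
labels, eps = dbscan_kd(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("eps =", round(float(eps), 4), "clusters =", n_clusters)
```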


Proposed Algorithm
Input: dataset // original dataset
Output: a set of clusters // k clusters generated by the modified k-medoid++ algorithm

Step 1. Use the DBSCAN-KD method to find the initial set of clusters:
• Read an input file from HDFS and generate RDDs from the read data.
• Transform the existing RDDs into appropriate RDDs of Point type.
• Distribute those RDDs to the executors.
• Find the core objects p.
• For each point p, check its eps-neighborhood using the KD-tree technique.
• If the eps-neighborhood size is larger than MinPts, generate a new cluster C, then retrieve all density-reachable points from p in the dataset and add them to C, forming the local clusters.
• Else, if the eps-neighborhood contains fewer than MinPts points, mark p as noise.
• Merge the clusters with common core objects to form the global clusters.

Step 2. For the set of clusters obtained, find their clustering centers. These clustering centers are the initial medoids for the k-medoid algorithm; set maxIterations to the number of clustering centers.

Step 3. Using the above values, run the k-medoid algorithm on Spark and generate k clusters. The k clusters obtained are the final output of the modified k-medoid++ algorithm.

The time complexity of the DBSCAN algorithm is O(n log n), where n is the size of the dataset. Using a KD-tree reduces the complexity of each neighborhood search [9] to O(log n), so Step 1 is dominated by these O(log n) queries, while the time complexity of Steps 2 and 3 is linear.

5.1 Benefits of the New Modified K-Medoid++ Algorithm over the K-Medoid Algorithm
The modified k-medoid++ algorithm is specifically designed to remove the drawbacks of the k-medoid algorithm. In the k-medoid algorithm, the number of iterations has to be chosen at random, which is hard to determine and also increases the time complexity, and the algorithm is not suitable for large multidimensional datasets. The new modified k-medoid++ algorithm avoids initial clustering centers appearing in the same cluster, hence it needs fewer iterations and its clustering result is also improved. It is fully automatic, i.e. it requires no input from the user except the dataset; it is suitable for large multi-dimensional datasets; and it is more efficient, with a lower time complexity.

5.2 Flow Diagram of the New Modified K-Medoid++ Algorithm
The proposed algorithm is illustrated by the flow diagram in Fig. 2. The flowchart of the modified k-medoid++ algorithm is based on Spark, and each step of the algorithm is elaborated in it. A non-distributed sketch of the same pipeline is given below.
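The following non-distributed Python sketch mirrors the three steps above on a single machine: DBSCAN-KD (the dbscan_kd() helper sketched earlier) supplies the initial clusters, one representative per cluster becomes an initial medoid, and a simple alternating k-medoid refinement runs until the total cost S stops changing. The Spark/RDD distribution layer is deliberately omitted, so this is only an illustration of the logic, not the distributed implementation.

```python
import numpy as np

def initial_medoids(X, labels):
    # one representative per DBSCAN cluster: the member closest to the cluster mean
    medoids = []
    for c in set(labels) - {-1}:                       # -1 marks noise points
        members = X[labels == c]
        center = members.mean(axis=0)
        medoids.append(members[np.argmin(np.linalg.norm(members - center, axis=1))])
    return np.array(medoids)

def k_medoids(X, medoids, max_iter):
    cost = np.inf
    assign = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2)
        assign = d.argmin(axis=1)                      # assign objects to the nearest medoid
        new_cost = d.min(axis=1).sum()                 # total cost S
        if new_cost >= cost:                           # S unchanged (or worse): stop
            break
        cost = new_cost
        for j in range(len(medoids)):                  # update each medoid within its cluster
            members = X[assign == j]
            if len(members):
                intra = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2).sum(axis=1)
                medoids[j] = members[intra.argmin()]
    return medoids, assign, cost

X = np.random.rand(400, 2)                             # toy data standing in for an HDFS input
labels, _ = dbscan_kd(X)                               # Step 1: initial clusters
meds = initial_medoids(X, labels)                      # Step 2: initial medoids
if len(meds) == 0:                                     # fallback if DBSCAN found no cluster
    meds = X[np.random.choice(len(X), 1)]
meds, assign, cost = k_medoids(X, meds, max_iter=max(len(meds), 1))   # Step 3: refinement
print("k =", len(meds), "final cost =", round(float(cost), 3))
```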


Fig. 2. Flowchart of the algorithm (on Spark): read the input file from HDFS and generate RDDs; transform them into Point-type RDDs and distribute them to the executors; find all core objects p; for each core object, check whether its eps-neighborhood size is below MinPts using the KD-tree technique, either marking p as noise or generating a new cluster C from the density-reachable points (the local cluster); once all core objects are covered, merge local clusters with common core objects into the global clusters; find the clustering centers of the global clusters, take them as the initial medoids and set maxIterations to the number of centers; then assign objects to the corresponding cluster, update its center and recompute the total cost S, repeating until S no longer changes and the k clustering results are obtained.


6 Experimental Analysis

In order to compare the performances of Hadoop and Spark, three datasets have been considered. All the datasets are freely available from the UCI Machine Learning Repository (Lichman 2013), and their major characteristics are summarized in Table 1.

Table 1. Description of datasets.
Dataset name         No. of patterns   No. of features
Cov type dataset     581012            54
Poker hand dataset   1025010           11
Wine dataset         178               13

The experiments are conducted on four VMware virtual nodes with a Linux (64-bit) operating system. The versions of the software used are listed in Table 2.

Table 2. Software versions
Software name    Version
Apache Hadoop    2.9.1
Apache Spark     2.4.3
SSH              OpenSSH_7.4p1
JRE              Java(TM) SE Runtime Environment (build 1.8.0_222-b10)

In the Hadoop architecture, there is one master node which acts as master as well as slave, while there are three other slave nodes. The Spark architecture is in standalone mode, with one node acting as both master and worker. In Hadoop HDFS, the RPC port for the namenode is 8020 and the HTTP port is 50070. The comparison of Spark with Hadoop on various numbers of nodes is shown in Table 3.

Table 3. Comparison of Spark and Hadoop
Dataset      Spark's MLlib                       Hadoop MapReduce                    Speed-up ratio (time taken by Hadoop / time taken by Spark)
Covtype      Time: 115762 ms, no. of nodes: 1    Time: 935256 ms, no. of nodes: 1    1 node: 8.08
                                                 Time: 580813 ms, no. of nodes: 2    2 nodes: 5.02
                                                 Time: 546378 ms, no. of nodes: 3    3 nodes: 4.72
                                                 Time: 450314 ms, no. of nodes: 4    4 nodes: 3.89
Poker-hand   Time: 34042 ms, no. of nodes: 1     Time: 627457 ms, no. of nodes: 1    1 node: 18.43
                                                 Time: 412500 ms, no. of nodes: 2    2 nodes: 12.12
                                                 Time: 380929 ms, no. of nodes: 3    3 nodes: 11.19
                                                 Time: 332930 ms, no. of nodes: 4    4 nodes: 9.78
Wine         Time: 19114 ms, no. of nodes: 1     Time: 390838 ms, no. of nodes: 1    1 node: 20.45
                                                 Time: 288887 ms, no. of nodes: 2    2 nodes: 15.11
                                                 Time: 258959 ms, no. of nodes: 3    3 nodes: 13.55
                                                 Time: 235417 ms, no. of nodes: 4    4 nodes: 12.31

The values from Table 3 are plotted in the graph in Fig. 3.

Fig. 3. Comparison between a Spark single node and a Hadoop MapReduce multi-node cluster based on the k-medoid algorithm program execution.


The x-axis in Fig. 3 represents the number of nodes in the Hadoop cluster. The y-axis represents the speed-up ratio (i.e. time taken by Hadoop / time taken by Spark).

7 Conclusion and Future Work

The results reveal that Spark outperforms a Hadoop multi-node cluster of up to 4 nodes: the execution time on a single Spark node is 8 to 20 times faster than the execution time on a single Hadoop node. The curve is decreasing, which means a point will come at which the ratio equals 1, i.e. the execution time on a single Spark node equals that of the Hadoop multi-node cluster. It can therefore be said that Spark is cost effective, since its single node performs better than four Hadoop nodes. This paper also proposes the new modified k-medoid++ algorithm for implementation on a Spark cluster. The issue of communication overhead, which grows with large datasets in Hadoop, can be further addressed in future work.

References 1. Assefi, M., Behravesh, E., Liu, G., Tafti, A.P.: Big data machine learning using apache spark MLlib. In: 2017 IEEE International Conference on Big Data (2017) 2. Han, D., Agrawal, A., Liao, W.-K., Choudhary, A.: A novel scalable DBSCAN algorithm with spark. In: IEEE Conference Publication, 04 August 2016 3. Martino, A., Rizzi, A., Mascioli, F.M.: Efficient approaches for solving the large scale kmedoids problem. In: 9th IJCCI (2017) 4. Jaiswal, A., Yadav, O.P.: Analyzing and enhancing processing speed for knowledge discovery from Big Data using Hadoop Framework. In: National Conference on Information Technology & Security Applications(NCITSA 2019) (2019). ISBN No. 9781-940543-0-6 5. Song, H., Lee, J.-G., Han, W.-S.: PAMAE: parallel k-medoids clustering with high accuracy and efficiency. In: KDD 2017, 13–17 August 2017, Halifax, NS, Canada (2017) 6. Omair Shafiq, M., Torunski, E: A parallel k-medoids algorithm for clustering based on MapReduce. In: 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA) (2016) 7. Yue, J., Mao, S., Li, M., et al.: An efficient PAM spatial clustering algorithm based on MapReduce. In: 2014 22nd International Conference on IEEE (2014) 8. Jiang, Y., Zhang, J.: Parallel K-Medoids clustering algorithm based on Hadoop. In: 2014 IEEE 5th International Conference on Software Engineering and Service Science (2014) 9. Vijayalaksmi, S., Punithavalli, M.: A fast approach to clustering datasets using DBSCAN and pruning algorithms. Int. J. Comput. Appl. (0975 – 8887) 60(14), 1–7 (2012) 10. Verma, J.P., Patel, A.: Comparison of MapReduce and Spark programming frameworks for big data analytics on HDFS. IJCSC 7(2), 180–184 (2016) 11. Fu, J., Sun, J., Wang, K.: Spark – a big data processing platform for machine learning. In: 2016 IEEE, International Conference on Industrial Informatics - Computing Technology, Intelligent Technology, Industrial Information integration (ICIICII) (2016) 12. Richter, A.N., Khoshgoftaar, T.M., Landset, S., Hasanin, T.: A multi-dimensional comparison of toolkits for machine learning with Big data. In: 2015 IEEE 16th International Conference on Information Reuse and Integration (2015) 13. Srinivas Jonnalagadda, V., Srikanth, P., Thumati, K.: A review study of apache spark in big data processing. Int. J. Comput. Sci. Trends Technol. (IJCST) 4(3), 93–98 (2016)


14. UCI Machine learning repository 15. Nandakumar, A.N., Yambem, N.: A survey on data mining algorithms on Apache Hadoop Platform. Int. J. Emerg. Technol. Adv. Eng. 4(1), 563–565 (2014) 16. https://www.dezyre.com/article/apache-spark-architecture-explained-in-detail/338 17. https://www.edureka.co/blog/spark-architecture/ 18. https://medium.com/better-programming/high-level-overview-of-apache-spark-c225a0 a162e9 19. Zhu, Y., Wang, F., Sang, X., Lv, X.: K-medoids clustering based on MapReduce and optimal search of medoids. In: The 9th International Conference on Computer Science and Education (ICCSE 2014), Vancouver, Canada, 24 August (2014) 20. Liu, A., Zuo, S., Qui, T., Bai, X.: Research on K-medoids clustering algorithm based on data density and its parallel processing based on MapReduce. J. Residuals Sci. Technol. 13, e4015 (2016)

Multiple Criteria Fake Reviews Detection Based on Spammers' Indicators Within the Belief Function Theory

Malika Ben Khalifa1,2(B), Zied Elouedi1, and Eric Lefèvre2

1 Université de Tunis, Institut Supérieur de Gestion de Tunis, LARODEC, Tunis, Tunisia
[email protected], [email protected]
2 Université d'Artois, EA 3926, Laboratoire de Génie Informatique et d'Automatique de l'Artois (LGI2A), 62400 Béthune, France
[email protected]

Abstract. E-reputation has become one of the most important keys to success for companies and brands. It is mainly based on online reviews, which significantly influence consumer purchase decisions. Therefore, in order to mislead and artificially manipulate customers' perceptions of products or services, some dealers rely on spammers who post fake reviews to exaggerate the advantages of their products and defame rivals' reputations. Hence, fake reviews detection becomes an essential task to protect online reviews, maintain readers' confidence and ensure fair competition between companies. In this way, we propose a new method based on both the reviews given to multiple evaluation criteria and the reviewers' behaviors to spot spam reviews. This approach deals with uncertainty in the different inputs thanks to the belief function theory. Our method shows its performance in fake reviews detection when tested on two large real-world review datasets from Yelp.com.

Keywords: Online reviews · Fake reviews · Spammers · Multi-criteria evaluation · Uncertainty · Belief function theory

1 Introduction

Nowadays, online reviews are one of the most valuable sources of information for customers. Moreover, they are considered the pillars on which companies' reputations are built. Most consumers believe in checking the reviews given to a product or service before deciding to purchase it. Therefore, companies with a high number of positive reviews are likely to attract a huge number of new consumers and consequently achieve significant financial gains, whereas negative reviews or lowest-rating reviews cause financial losses. Driven by profit, some companies and brands pay spammers to post fake positive reviews on their own products in order to enhance their e-reputation; not only that, but they also try to damage competitors' reputations by posting negative reviews on their products.


Consequently, the detection of opinion spam has become a bigger and bigger concern in order to protect online opinions, gain consumer trust and maintain fair competition between companies. For this reason, several methods have been proposed in recent years that try to distinguish between truthful and deceptive reviews. The first studies rely on the review content, using linguistic aspects and sentiment as well as readability and subjectivity [5]. Moreover, other techniques are based on the individual words extracted from the review text as features [9], while some others are based on syntactic and lexical features. It is important to mention that most of the methods based only on the review content cannot successfully detect fake reviews because of the lack of any distinguishing words that could give a definitive clue for classifying reviews as real or fake. Accordingly, detecting spammers can improve the spotting of spam reviews, since spammers generally share the same profile history and activity patterns. Hence, various spammer detection methods exist in which graph theory has been used, and most of them have shown promising results [18]. Moreover, other methods [7,11,13] are based on different features extracted from the reviewers' characteristics and behaviors. Furthermore, relying on both spam review detection and spammer detection, while analyzing their behaviors, is a more effective solution for detecting review spam than either approach alone. In this way, we mention the works in [8,15], which exploit both relational data and metadata of reviewers and reviews; results prove that this kind of method outperforms all others. Although fake reviews detection is an uncertain problem, none of these previous works is able to manage uncertainty in the reviews. We have proposed some preliminary related works dealing with uncertainty [1,2], but these approaches rely only on the review information. In this paper, we propose a novel approach that distinguishes between fake and genuine reviews while dealing with uncertainty in both the review and the reviewer information. As some reviewers prefer to judge services or products through different evaluation criteria, our method deals with the different review rating criteria and analyzes the reviewers' behaviors under the belief function framework, chosen thanks to its flexibility in representing and managing different types of imperfection. The rest of this paper is organized as follows: in Sect. 2, we recall the basic concepts of the belief function theory. Then, we elucidate our proposed approach in Sect. 3. Section 4 discusses the experimental study. Finally, we conclude in Sect. 5.

2 Belief Function Theory

The belief function theory is one of the useful theories for handling uncertain knowledge. It was introduced by Shafer [16] as a model to manage beliefs. The frame of discernment Ω is a finite and exhaustive set of the different events associated with a given problem, and 2^Ω is the power set of Ω that contains all possible hypotheses. A basic belief assignment (bba), or belief mass, is defined as a function from 2^Ω to [0, 1] that represents the degree of belief given to an element A, such that $\sum_{A \subseteq \Omega} m^{\Omega}(A) = 1$.


A focal element A is a set of hypotheses with positive mass value m^Ω(A) > 0. Moreover, we underline some special cases of bba's:

– The certain bba represents the state of total certainty and is defined as follows: m^Ω({ω_i}) = 1 with ω_i ∈ Ω.
– Simple support function: in this case, the focal elements of the bba are {A, Ω}. A simple support function is defined by the following equation:

$$ m^{\Omega}(X) = \begin{cases} w & \text{if } X = \Omega \\ 1 - w & \text{if } X = A \text{ for some } A \subset \Omega \\ 0 & \text{otherwise} \end{cases} \qquad (1) $$

where A is the focus and w ∈ [0, 1].

Moreover, the discounting operation [12] allows us to update the experts' beliefs by taking into consideration their reliability through the degree of trust (1 − α) given to each expert, where α ∈ [0, 1] is the discount rate. Accordingly, the discounted bba, noted ${}^{\alpha}m^{\Omega}$, becomes:

$$ \begin{cases} {}^{\alpha}m^{\Omega}(A) = (1-\alpha)\, m^{\Omega}(A) & \forall A \subset \Omega \\ {}^{\alpha}m^{\Omega}(\Omega) = \alpha + (1-\alpha)\, m^{\Omega}(\Omega) \end{cases} \qquad (2) $$

Several combination rules have been proposed in the framework of belief functions to aggregate a set of bba's provided by pieces of evidence from different experts. Let $m_1^{\Omega}$ and $m_2^{\Omega}$ be two bba's modeling two distinct sources of information defined on the same frame of discernment Ω. In what follows, we elucidate the combination rules related to our approach.

1. Conjunctive rule: it was settled in [17], denoted by ∩ and defined as:

$$ m^{\Omega}_{\cap}(A) = (m_1^{\Omega} \cap m_2^{\Omega})(A) = \sum_{B \cap C = A} m_1^{\Omega}(B)\, m_2^{\Omega}(C), \qquad \forall B, C \subseteq \Omega $$

2. Dempster's rule of combination: this combination rule is a normalized version of the conjunctive rule [4]. It is denoted by ⊕ and defined as:

$$ m^{\Omega}_{\oplus}(A) = (m_1^{\Omega} \oplus m_2^{\Omega})(A) = \begin{cases} \dfrac{m^{\Omega}_{\cap}(A)}{1 - m^{\Omega}_{\cap}(\emptyset)} & \text{if } A \neq \emptyset,\ \forall A \subseteq \Omega \\ 0 & \text{otherwise} \end{cases} \qquad (3) $$

3. The combination with adapted conflict rule (CWAC): this combination [6] is an adaptive weighting between the two previous combination rules, acting like the conjunctive rule if the bba's are opposite and like Dempster's rule otherwise. It uses a notion of dissimilarity, obtained through a distance measure, to ensure this adaptation between all sources. The CWAC is formulated as follows:

$$ m^{\Omega}_{\updownarrow}(A) = D_{max}\, m^{\Omega}_{\cap}(A) + (1 - D_{max})\, m^{\Omega}_{\oplus}(A) \qquad (4) $$


where D_{max} is the maximal value of all the distances; it can be used to find out whether at least one of the sources is opposite to the others, and it is defined by $D_{max} = \max[d(m_i^{\Omega}, m_j^{\Omega})]$ with i ∈ [1, M] and j ∈ [1, M], where M is the total number of mass functions and $d(m_i^{\Omega}, m_j^{\Omega})$ is the distance measure proposed by Jousselme [10]:

$$ d(m_1^{\Omega}, m_2^{\Omega}) = \sqrt{\tfrac{1}{2}\,(m_1^{\Omega} - m_2^{\Omega})^{t}\, D\,(m_1^{\Omega} - m_2^{\Omega})} $$

where D is the Jaccard index matrix defined by:

$$ D(E, F) = \begin{cases} 1 & \text{if } E = F = \emptyset \\ \dfrac{|E \cap F|}{|E \cup F|} & \forall E, F \in 2^{\Omega} \setminus \{\emptyset\} \end{cases} $$

Frequently, we need to fuse two bba's $m_1^{\Omega_1}$ and $m_2^{\Omega_2}$ that are not defined on the same frame of discernment. In that case, we apply the vacuous extension of the belief functions, which extends the frames of discernment Ω_1 and Ω_2, corresponding to the mass functions $m_1^{\Omega_1}$ and $m_2^{\Omega_2}$, to the product space Ω = Ω_1 × Ω_2. The vacuous extension operation, denoted by ↑, is defined such that:

$$ m^{\Omega_1 \uparrow \Omega_1 \times \Omega_2}(B) = m^{\Omega_1}(A) \quad \text{if } B = A \times \Omega_2, \text{ where } A \subseteq \Omega_1,\ B \subseteq \Omega_1 \times \Omega_2 $$

It transforms each mass to the cylindrical extension of B on Ω_1 × Ω_2. To determine the relation between two disjoint frames of discernment Ω_1 and Ω_2, a multi-valued mapping may be used. This operation, denoted τ, allows us to join together two different frames of discernment: the subsets B ⊆ Ω_2 that match through τ with a subset A ⊆ Ω_1 contribute their mass to A:

$$ m^{\Omega_1}_{\tau}(A) = \sum_{\tau(B) = A} m^{\Omega_2}(B) $$

The belief function framework offers various solutions to ensure decision making. We present the pignistic probability, used in our work, denoted BetP and defined as:

$$ BetP(B) = \sum_{A \subseteq \Omega} \frac{|A \cap B|}{|A|}\, \frac{m^{\Omega}(A)}{1 - m^{\Omega}(\emptyset)} \qquad \forall B \in \Omega $$
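The small Python sketch below makes these definitions concrete for a two-element frame, representing a bba as a dictionary keyed by frozensets and implementing the discounting operation, the conjunctive and Dempster combinations and the pignistic transform; the frame and the numeric discount rates are illustrative only.

```python
OMEGA = frozenset({"t", "not_t"})          # example frame standing in for {t, t_bar}

def discount(m, alpha, omega=OMEGA):
    """Discounting (Eq. 2): scale masses by (1 - alpha), move the rest to omega."""
    out = {A: (1 - alpha) * v for A, v in m.items() if A != omega}
    out[omega] = alpha + (1 - alpha) * m.get(omega, 0.0)
    return out

def conjunctive(m1, m2):
    """Conjunctive rule: mass of A gathers products of masses whose intersection is A."""
    out = {}
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            out[C] = out.get(C, 0.0) + v1 * v2
    return out

def dempster(m1, m2):
    """Dempster's rule (Eq. 3): conjunctive combination with the conflict renormalized."""
    m = conjunctive(m1, m2)
    k = m.pop(frozenset(), 0.0)            # mass on the empty set = conflict
    return {A: v / (1 - k) for A, v in m.items()}

def betp(m, omega=OMEGA):
    """Pignistic probability BetP of each singleton of the frame."""
    empty = m.get(frozenset(), 0.0)
    return {w: sum(v / len(A) for A, v in m.items() if A and w in A) / (1 - empty)
            for w in omega}

m1 = discount({frozenset({"t"}): 1.0}, alpha=0.3)       # simple support focused on "trustful"
m2 = discount({frozenset({"not_t"}): 1.0}, alpha=0.6)
print(betp(dempster(m1, m2)))                           # e.g. {'t': 0.708..., 'not_t': 0.291...}
```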

3 Multiple Criteria Fake Reviews Detection Based on Spammers' Indicators Within the Belief Function Theory

In this section, we elucidate our novel approach, which aims to detect fake reviews. Our method relies on both the review and the reviewer information, since gathering behavioral evidence about spammers is more efficient than identifying spam reviews alone. The proposed approach is divided into three parts. Firstly, we deal with the ratings given to various evaluation criteria used to judge a service or a product, in order to determine the trustfulness of each reviewer's opinion through its degree of compatibility with all the other opinions; this part is based on our previous work in [2]. Secondly, to obtain a better detection performance, we rely on another previous work in which we model the reviewers' trustworthiness by analyzing the reviewers' behaviors [3]. For that, we adopt the belief function theory to handle the uncertainty in the imprecise reviews and the imperfect reviewer information. Finally, to enhance the detection performance, we combine in the third part both the reviewer opinion and the reviewer trustworthiness, modeled by mass functions. These three parts are detailed in the following subsections, in which we consider a dataset of N reviewers and q evaluation criteria. Each reviewer R_i evaluates a product or a service by giving a rating between 1 and 5 stars to each criterion C_j.

3.1 Modeling the Reviewer's Opinion Trustworthiness

We consider as inputs the rating reviews given to the different evaluation criteria, where each vote V_{ij} is provided to a criterion C_j, with j ∈ {1, ..., q} and q the number of evaluation criteria. We propose to model the uncertainty in these rating reviews by representing each vote as a mass function $m_{ik}^{\Omega_j}$ with Ω_j = {1, 2, 3, 4, 5}, where each element represents a rating value that can be given by reviewer R_i.

Measure the Compatibility Between the Reviewer Opinion and All the Others’ One In order to evaluate the opinion provided by each reviewer Ri through various criteria Cj , we compare it with all others reviewers’ opinions. For that, firstly we aggregate all the others review rating given to the same criterion using the CWAC combination rule to obtain one bba, modeling the whole rating reviews given each criterion except the current one. As a consequence, we obtain q (numΩ ber of evaluation criterion) bba s micj . Thus, we combine them to model all

150

M. B. Khalifa et al.

reviewers’ opinions except the current one in one joint bba. To achieve this, we Ω ↑Ω firstly extend them to the global frame of criteria Ωc to get micj c . After that, we aggregate the extend bba s through the Dempster rule of combination. Subsequently, for each reviewer we measure the distance between his provided opinion Ωc c modeled by mΩ i and all the others reviewers’ opinions represented by mic using the distance of Jousselme. 3.1.3 Modeling the Reviewer Opinion into Trustful or Not Trustful Since the calculated distance elucidates the average opinion rating deviation from the other reviewers’ opinion which one of the most important spam indicator. That’s why, more the distance decreases more the given opinion is considerate as trustful. Thus, we propose to transform each distance into new bba under Θ = {t, t¯) (t for trustful and t¯ for not trustful). In this part, we successfully model the whole reviewers’ opinion trustworthi¯ ness by mass function mΘ i under the frame of discernment Θ = {t, t}. 3.2

Modeling the Reviewer Spamicity

The average rating deviation from the others rating is considerate as an important indicators in spam review detection field. Despite that, spammers try to mislead readers and the usually post a lot of despite reviews to dominate the majority of the given opinions. Accordingly, it is essential to have recourse to the reviewers’ spamicity in order to reinforce the spam reviews detection. In this way, we propose to model uncertainty in the different reviewers information while relying on the spammers indicators. We represent each reviewer Ri by two S mass functions namely; the reviewer reputation mΩ RRi and the second one is to ΩS model the reviewer helpfulness mRHi with ΩS = {S, S} where S is spammer and S is not spammer. 3.2.1 Modeling the Reviewer Reputation Usually, the innocent reviewers post their opinion when they have already bought new products or used new services. Therefore, their reviews are generally dispersed over time interval and depend on the number of used products or services. However, the spammers post enormous reviews to some particular products in short interval, two or three days (more used in the spammer review detection field), to overturn the majority of the given reviews. Consequently, we construct the reviewer reputation through these two spammers’ indicators. Hence, we propose to check the reviewing history of each reviewer HistRi contained all past reviews given by each reviewer Ri to n discrete products or services. Each Hist reviewer average proliferation is calculated as follows: AvgP (Ri ) = n Ri We assume that if AvgP (Ri ) > 3, the reviewer is considered as a potential spammer since generally ordinary reviewers do not give more than three reviews per product [13]. Accordingly, the reviewer reputation is then represented by a certain bba as follows:

$$ m_{RR_i}^{\Omega_S}(\{S\}) = 1, \quad \text{else } m_{RR_i}^{\Omega_S}(\{\bar{S}\}) = 1 $$

Moreover, we check whether the reviews are given in a short interval of time or are distributed all along the reviewing history. For that, we measure the burst spamicity degree, denoted δ_i, and weaken each reviewer reputation bba by its corresponding reliability degree using the discounting operation. Consequently, we obtain the discounted bba ${}^{\delta_i}m_{RR_i}^{\Omega_S}$, which represents the reviewer reputation based on both the reviewer's average proliferation and the burst spamicity.

3.2.2 Modeling the Reviewer Helpfulness
The reviewer helpfulness is considered an important spammer indicator. Thus, we extract the Number of Helpful Reviews (NHR) associated with each reviewer in order to check whether the reviewer posts helpful reviews or unhelpful ones that mislead readers. Hence, if the reviewer is suspected to be a spammer (NHR_i = 0), we model the reviewer helpfulness by a certain bba as follows:

$$ m_{RH_i}^{\Omega_S}(\{S\}) = 1, \quad \text{else } m_{RH_i}^{\Omega_S}(\{\bar{S}\}) = 1 $$

We weaken the reviewer helpfulness mass by the non-helpfulness degree of each reviewer R_i, denoted by λ_i, in order not to treat all the reviewers who give helpful reviews in the same way. Thus, we apply the discounting operation to transform the bba into a simple support function ${}^{\lambda_i}m_{RH_i}^{\Omega_S}$ that takes the helpfulness degree into consideration. Moreover, spammers usually post extreme ratings [13], either the highest (5 stars) or the lowest (1 star), in order to achieve their objective and dominate the average rating score of products or services. Nonetheless, innocent reviewers are rarely fully satisfied or fully dissatisfied by the tried products and services, so they do not usually post extreme ratings. Hence, even if a reviewer has helpful reviews, if they are crowded with extreme ratings his probability of being a genuine reviewer will assuredly decrease. In order to take this indicator into consideration, we calculate the extreme rating degree, denoted γ_i, corresponding to each reviewer R_i, which is considered as a discounting factor. Then, we weaken the reviewer helpfulness ${}^{\lambda_i}m_{RH_i}^{\Omega_S}$ one more time by its relative reliability degree using the discounting operation. The obtained discounted bba ${}^{\lambda_i \gamma_i}m_{RH_i}^{\Omega_S}$ models the reviewer helpfulness based on both the reviewer helpfulness degree and the extreme rating, which are important spammer indicators.

3.2.3 Combining the Reviewer Reputation and Helpfulness
With the purpose of representing the whole reviewer trustworthiness, we combine the reviewer reputation bba ${}^{\delta_i}m_{RR_i}^{\Omega_S}$ with the helpfulness bba ${}^{\lambda_i \gamma_i}m_{RH_i}^{\Omega_S}$ using the Dempster rule of combination under the frame of discernment Ω_S. The joint resultant bba $m_{RT_i}^{\Omega_S}$ illustrates each reviewer's trustworthiness degree.
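A compact sketch of this spamicity modeling is given below; it reuses discount() and dempster() from the sketch in Sect. 2 and works on the frame {S, not_S}. The burst-spamicity, non-helpfulness and extreme-rating degrees are passed in as already computed numbers (the values shown are invented for illustration), since their derivation from the reviewing history is only summarized in the text.

```python
FRAME_S = frozenset({"S", "not_S"})        # spammer / not spammer

def reputation_bba(n_reviews, n_products, burst_degree):
    # average proliferation above 3 reviews per product flags a potential spammer
    avg_prolif = n_reviews / n_products
    focal = frozenset({"S"}) if avg_prolif > 3 else frozenset({"not_S"})
    return discount({focal: 1.0}, burst_degree, omega=FRAME_S)

def helpfulness_bba(n_helpful, non_help_degree, extreme_degree):
    # no helpful reviews -> suspected spammer; then two successive weakenings
    focal = frozenset({"S"}) if n_helpful == 0 else frozenset({"not_S"})
    m = discount({focal: 1.0}, non_help_degree, omega=FRAME_S)   # helpfulness degree
    return discount(m, extreme_degree, omega=FRAME_S)            # extreme-rating degree

m_RR = reputation_bba(n_reviews=12, n_products=3, burst_degree=0.2)
m_RH = helpfulness_bba(n_helpful=0, non_help_degree=0.1, extreme_degree=0.3)
m_RT = dempster(m_RR, m_RH)        # joint reviewer-trustworthiness bba on {S, not_S}
print(m_RT)
```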

3.3 Distinguishing Between the Fake and the Genuine Reviews

As highlighted before, relying on both spam review detection and spammer detection has become the most effective way to spot deceptive reviews. For this reason, we combine the reviewer's opinion trustworthiness, modeled by $m_i^{\Theta}$, with the reviewer spamicity, represented by $m_{RT_i}^{\Omega_S}$, in order to make a powerful decision. For doing so, we apply the following steps.

3.3.1 Modeling Both the Reviewer and His Given Opinion Trustworthiness
In order to model both the reviewer and his opinion trustworthiness in one joint bba, we proceed as follows. First of all, we define the global frame of discernment relative to the reviewer and his opinion trustworthiness; it is the cross product of the two frames Θ and Ω_S, denoted by Ω_{RR} = Θ × Ω_S. Then, we extend the mass functions $m_i^{\Theta}$ and $m_{RT_i}^{\Omega_S}$ to the global frame of discernment Ω_{RR} using the vacuous extension, in order to get the new bba's $m_i^{\Theta \uparrow \Omega_{RR}}$ and $m_{RT_i}^{\Omega_S \uparrow \Omega_{RR}}$. Finally, we combine these extended bba's using the Dempster combination rule to get the joint bba $m_i^{\Omega_{RR}} = m_i^{\Theta \uparrow \Omega_{RR}} \oplus m_{RT_i}^{\Omega_S \uparrow \Omega_{RR}}$, which represents both the reviewer and his given opinion trustworthiness.

3.3.2 Reviewer and Opinion Trustworthiness Transfer
In the following step, we transfer $m_i^{\Omega_{RR}}$, defined on the product space Ω_{RR}, to the frame of discernment Θ_D = {f, f̄} in order to make a decision by modeling the reviewer opinion as fake or not fake. In the spam review detection field, all the reviews given by spammers are considered fake opinion reviews, because spammers are not real consumers and sometimes try to post reviews compatible with the provided ones to avoid being caught by spam detection methods. For that, a multi-valued operation, denoted τ, is applied. The function τ: Ω_{RR} → 2^{Θ_D} gathers the event pairs as follows:

– Masses of pairs committed to the element S (spammer) are transferred to fake, f ⊆ Θ_D:
$$ m_{\tau}(\{f\}) = \sum_{\tau(SR_i) = f} m_i^{\Omega_{RR}}(SR_i), \qquad (SR_i = A \times S) \subseteq \Omega_{RR} $$
– Masses of pairs committed to the element S̄ (not spammer) are transferred to not fake, f̄ ⊆ Θ_D:
$$ m_{\tau}(\{\bar{f}\}) = \sum_{\tau(SR_i) = \bar{f}} m_i^{\Omega_{RR}}(SR_i), \qquad (SR_i = A \times \bar{S}) \subseteq \Omega_{RR} $$
– Masses of event couples committed to no single element of {S, S̄} are transferred to Θ_D:
$$ m_{\tau}(\Theta_D) = \sum_{\tau(SR_i) = \Theta_D} m_i^{\Omega_{RR}}(SR_i), \qquad (SR_i = A \times \Omega_S) \subseteq \Omega_{RR} $$

3.3.3 Decision Making
Finally, we apply the pignistic probability BetP in order to distinguish between fake and genuine opinions: the hypothesis with the greater BetP value is taken as the final decision.
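The sketch below ties the pieces together under the assumptions of the earlier sketches (it reuses dempster(), betp() and the m_RT computed above): the two bbas are vacuously extended to the product space Θ × Ω_S, combined, transferred to {f, not_f} through the multi-valued mapping, and the decision is the singleton with the larger BetP. The opinion-trustworthiness bba m_i used here is an invented example value.

```python
THETA = frozenset({"t", "not_t"})

def extend_theta(m, spam_frame):
    # vacuous extension of a bba on Theta: A becomes A x Omega_S (pairs (t-tag, s-tag))
    return {frozenset((a, b) for a in A for b in spam_frame): v for A, v in m.items()}

def extend_spam(m, theta_frame):
    # vacuous extension of a bba on Omega_S: A becomes Theta x A
    return {frozenset((b, a) for a in A for b in theta_frame): v for A, v in m.items()}

def to_fake_frame(m_joint):
    # multi-valued mapping tau: A x {S} -> {f}, A x {not_S} -> {not_f}, A x Omega_S -> Theta_D
    out = {}
    for pairs, v in m_joint.items():
        spam_tags = {s for (_, s) in pairs}
        if spam_tags == {"S"}:
            key = frozenset({"f"})
        elif spam_tags == {"not_S"}:
            key = frozenset({"not_f"})
        else:
            key = frozenset({"f", "not_f"})
        out[key] = out.get(key, 0.0) + v
    return out

m_i = {frozenset({"t"}): 0.6, THETA: 0.4}                  # opinion trustworthiness (illustrative)
m_joint = dempster(extend_theta(m_i, FRAME_S), extend_spam(m_RT, THETA))
decision = betp(to_fake_frame(m_joint), omega=frozenset({"f", "not_f"}))
print(max(decision, key=decision.get), decision)
```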

4 Experimentation and Results

4.1 Experimentation Tools

In our method, we use two real datasets from yelp.com, collected and used in [14,15]. These datasets are considered the largest, richest, most complete and only labeled datasets in spam review research. They offer a near ground truth since they are labeled through the Yelp filter classifier, which has been used in many previous works [8,14,15] as a ground truth thanks to its efficient detection method based on several behavioral features: recommended (not filtered) reviews correspond to genuine reviews, and not recommended (filtered) reviews correspond to fake ones. Due to the huge number of reviews, we randomly sample 10% of the total number of reviews given to three different evaluation criteria (service, cleanliness and food quality) from each of the two datasets. Table 1 introduces the datasets and indicates the ratio of (filtered) fake reviews (and consequently spammers). Furthermore, we evaluate our method through the three following criteria: accuracy, precision and recall.

Table 1. Datasets description
Datasets   Reviews (filtered %)   Reviewers (spammer %)   Services (restaurant or hotel)
YelpZip    608,598 (13.22%)       260,277 (23.91%)        5,044
YelpNYC    359,052 (10.27%)       160,225 (17.79%)        923

4.2 Experimental Results

Our method is a specific classifier able to differentiate between fake and genuine reviews given to overall or multiple evaluation criteria under an uncertain context. We therefore compare it with the state-of-the-art baseline classifiers, the Support Vector Machine (SVM) and Naive Bayes (NB), used by most spam detection methods [7,11,13,14]. In order to maintain a fair comparison when applying the SVM and NB classifiers, we construct balanced data (50% fake reviews and 50% genuine ones) extracted from our datasets (YelpZip and YelpNYC) to avoid over-fitting, divide them into a 70% training set and a 30% testing set, and use the features considered in our proposed method: the rating deviation, the reviewer's average proliferation, the burst spamicity degree, the reviews' helpfulness and the extreme ratings provided by each reviewer. In addition, the final estimate of each evaluation criterion is obtained by averaging ten trial values using the 10-fold cross-validation technique. Furthermore, we compare our method with the proposed uncertain classifier Multiple Criteria Belief Fake Reviews Detection (MC-BFRD) [2], which relies only on the review rating information. The results are reported in Table 2.
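For readers who want to reproduce the baseline protocol, the following scikit-learn sketch shows the balanced 70/30 split and 10-fold cross-validation with SVM and NB on a five-column feature matrix; the feature values and labels here are synthetic placeholders, since the actual indicators have to be extracted from the Yelp data.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((1000, 5))          # placeholder for the five behavioral indicators per review
y = rng.integers(0, 2, 1000)       # placeholder labels (1 = fake, 0 = genuine), roughly balanced

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [("SVM", SVC()), ("NB", GaussianNB())]:
    scores = cross_val_score(clf, X_train, y_train, cv=10, scoring="accuracy")
    test_acc = clf.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: 10-fold CV accuracy = {scores.mean():.3f}, test accuracy = {test_acc:.3f}")
```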

Table 2. Comparative results

Dataset   Evaluation criteria   NB     SVM     MC-BFRD   Our method
YelpZip   Accuracy              64%    78%     70%       91.5%
          Precision             62%    80%     74%       95%
          Recall                65%    86%     83%       96%
YelpNYC   Accuracy              65%    82.5%   75%       92.77%
          Precision             64%    75%     72.78%    90.1%
          Recall                70%    84%     80%       91.2%

Our method attains the highest detection performance according to accuracy, precision and recall, surpassing the baseline classifiers. It reaches an accuracy improvement of up to 14% on the YelpZip dataset and up to 10% on the YelpNYC dataset compared to SVM. Moreover, the improvement recorded between the two uncertain classifiers (over 20%) confirms the importance of combining both the review and the reviewer features while considering the spammers' indicators in this field. Despite the fact that our approach is based on fewer indicators than Yelp's filter classifier, we obtain competitive results (over 92%) thanks to our method's ability to handle uncertainty in the different inputs. These encouraging results push us to integrate more behavioral features in our future work, so that we can improve our results and obtain identical or even better performance than the Yelp filter.

5 Conclusion

Spam reviews are a real and growing issue threatening online reviews. To tackle this problem, we have proposed a specific classifier able to deal with uncertainty in the multi-criteria rating reviews and in the reviewer information while analyzing them through the spammers' indicators. Our proposed method shows good performance in classifying fake and innocent reviews when tested on two real datasets from yelp.com.

References 1. Ben Khalifa, M., Elouedi, Z., Lef`evre, E.: Fake reviews detection under belief function framework. In: Proceedings of AISI, vol. 395–404 (2018) 2. Ben Khalifa, M., Elouedi, Z., Lef`evre, E.: Multiple criteria fake reviews detection using belief function theory. In: Proceedings of ISDA, vol. 315–324 (2018) 3. Ben Khalifa, M., Elouedi, Z., Lef`evre, E.: Spammers detection based on reviewers’ behaviors under belief function theory. In: Proceedings of IEA/AIE, vol. 642–653 (2019) 4. Dempster, A.P.: Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Stat. 38, 325–339 (1967) 5. Deng, X., Chen, R.: Sentiment analysis based online restaurants fake reviews hype detection. In: Web Technologies and Applications, vol. 1–10 (2014) 6. Lef`evre, E., Elouedi, Z.: How to preserve the confict as an alarm in the combination of belief functions? Decis. Support Syst. 56, 326–333 (2013)


7. Fei, G., Mukherjee, A., Liu, B., Hsu, M., Castellanos, M., Ghosh, R.: Exploiting burstiness in reviews for review spammer detection. In: Proceedings of ICWSM, vol. 13, pp. 175–184 (2013) 8. Fontanarava, J., Pasi, G., Viviani, M.: Feature analysis for fake review detection through supervised classification. In: Proceedings of DSAA, vol. 658–666 (2017) 9. Jindal, N., Liu, B.: Opinion spam and analysis. In: Proceedings of ACM, pp. 219– 230 (2008) ´ A new distance between two bodies of 10. Jousselme, A.-L., Grenier, D., Boss´e, E.: evidence. Inf. Fusion 2(2), 91–101 (2001) 11. Lim, P., Nguyen, V., Jindal, N., Liu, B., Lauw, H.: Detecting product review spammers using rating behaviors. In: Proceedings of CIKM, pp. 939–948 (2010) 12. Ling, X., Rudd, W.: Combining opinions from several experts. Appl. Artif. Intell. Int. J. 3(4), 439–452 (1989) 13. Mukherjee, A., Kumar, A., Liu, B., Wang, J., Hsu, M., Castellanos, M.: Spotting opinion spammers using behavioral footprints. In: Proceedings of ACM SIGKDD, pp. 632–640 (2013) 14. Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.: What yelp fake review filter might be doing. In: Proceedings of ICWSM, pp. 409–418 (2013) 15. Rayana, S., Akoglu, L.: Collective opinion spam detection: bridging review networks and metadata. In: Proceedings of ACM SIGKDD, pp. 985–994 (2015) 16. Shafer, G.: A Mathematical Theory of Evidence, vol. 1. Princeton University Press (1976) 17. Smets, P.: The transferable belief model for quantified belief representation. In: Quantified Representation of Uncertainty and Imprecision, pp. 267–301. Springer, Dordrecht (1998) 18. Wang, G., Xie, S., Liu, B., Yu, P.S.: Review graph based online store review spammer detection. In: Proceedings of ICDM, pp. 1242–1247 (2011)

Data Clustering Using Environmental Adaptation Method

Tribhuvan Singh(B), Krishn Kumar Mishra, and Ranvijay

CSED, MNNIT Allahabad, Prayagraj, India
{2015rcs09,kkm,ranvijay}@mnnit.ac.in

Abstract. Extracting useful information from a large-scale dataset and transforming it into the required structure for further use is termed data mining. One of the most important techniques of data mining is data clustering, which is responsible for grouping the data into meaningful groups (clusters). The Environmental Adaptation Method (EAM) is an optimization algorithm that has already proved its efficacy in solving global optimization problems. In this paper, an approach based on a new version of EAM has been suggested for solving the data clustering problem. To validate the utility of the suggested approach, four recently developed metaheuristics have been implemented and compared on six standard benchmark datasets. Various comparative performance analyses based on the experimental values justify the competitiveness and effectiveness of the suggested approach.

Keywords: Data clustering · Evolutionary algorithms · Environmental adaptation method · Optimization

1 Introduction

Many real-world applications, such as machine learning [1], pattern recognition [2] and data mining [3], use the concept of data clustering to achieve a desired objective. Data clustering algorithms are based on either deterministic or heuristic approaches. Clustering algorithms based on the deterministic approach always follow the same execution path and produce the same output for a given input. The advantages of deterministic data clustering algorithms are their speed and the consistency of the final outcome; the algorithms in this category are k-means, fuzzy c-means, etc. The performance of these algorithms is highly dependent on the selection of the initial cluster centroids. The beauty of the k-means clustering algorithm lies in its simplicity, ease of use, and effectiveness in dealing with a large amount of data at linear computational complexity [4]. This algorithm divides the N data objects into K clusters based on some distance metric. The fitness function of the k-means algorithm is designed in such a way that it minimizes the sum of the intra-cluster distances between each data object and its nearest centroid [5]. The performance of the fuzzy c-means algorithm is


better than that of k-means, but it is more expensive in terms of computational complexity. In general, deterministic data clustering algorithms have various shortcomings, such as sensitivity to the initial cluster centroids, easy local entrapment and slow convergence towards the optimal points. To resolve the aforementioned problems, heuristic approaches came into existence. The advantage of the algorithms in this category is that they do not depend on the initial cluster centroids, and they resolve the problem of local entrapment more efficiently than the deterministic approaches. Moreover, due to the involvement of a population-based search technique, the convergence speed of these algorithms is very high. The effectiveness of these algorithms has been utilized in solving data clustering problems in three different ways. In the first case, an existing algorithm is directly applied to solve the data clustering problem [6]. In the second case, some modifications are made to an existing algorithm to improve its performance when solving data clustering problems [7]. In the third case, algorithms of different classes are hybridized to achieve the desired goal [8]. Recently, the Improved Environmental Adaptation Method with Real Parameters (IEAM-RP) [9] was proposed to solve complex problems. The overall performance of IEAM-RP is quite satisfactory in solving optimization problems. This algorithm is easy to implement and uses a small number of tuning parameters, which motivated us to apply it to solving the data clustering problem. Section 2 gives the basic idea of and the objective function used in the data clustering problem. Section 3 describes the proposed work. Section 4 describes the experimental setup and benchmark datasets. Result analysis and conclusions are given in Sects. 5 and 6, respectively.

2 Preliminaries

2.1 Clustering

Clustering is an unsupervised task that groups N data objects into K clusters or groups. The data objects within a cluster must have high similarity, whereas the data objects of different clusters must have high dissimilarity [10]. The similarity and dissimilarity are measured based on the sum of squared Euclidean distances between each data object and the centroid of the respective cluster. In other words, the objective of data clustering is to minimize the intra-cluster distance within a cluster and maximize the inter-cluster distances among the clusters. The fitness function used in this study to achieve this objective is given below:

$$ F(O, Z) = \sum_{i=1}^{N} \sum_{j=1}^{K} w_{ij}\, \| O_i - Z_j \|^2 \qquad (1) $$

where F(O, Z) is the fitness value that needs to be minimized, and w_{ij} = 1 if data object i is assigned to cluster j, otherwise w_{ij} = 0.
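A direct numpy transcription of Eq. (1) is shown below: each object is assigned to its nearest centroid (which sets w_ij = 1 for exactly one j) and the squared Euclidean distances are summed. The toy data are random placeholders.

```python
import numpy as np

def clustering_cost(O, Z):
    """O: (N, d) data objects, Z: (K, d) cluster centroids; returns F(O, Z) of Eq. (1)."""
    d2 = ((O[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)   # squared distances, shape (N, K)
    nearest = d2.argmin(axis=1)                               # w_ij = 1 only for the nearest centroid
    return d2[np.arange(len(O)), nearest].sum()

O = np.random.rand(300, 2)        # toy dataset
Z = np.random.rand(3, 2)          # three random centroids
print(clustering_cost(O, Z))
```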

3 Description of Proposed Approach

The problem of data clustering is an optimization problem in which the objective is to minimize the sum of intra-cluster distances. In this paper, IEAM-RP [9] has been used to achieve the desired goal. The proposed approach starts with the random initialization of candidate solutions as follows:

$$ P = lb + \alpha \times (ub - lb) \qquad (2) $$

Here, P is the population of candidate solutions, α is a random number between 0 and 1, and lb and ub are the lower and upper bounds of a feature in the dataset under consideration. lb, ub and α are represented in the form of matrices of size C × F, where C and F are the number of centroids and the number of features in a dataset. After random initialization, the objective value is calculated using Eq. 1. Based on the objective values, two solutions, one with maximum fitness (called the worst solution) and one with minimum fitness (called the best solution), are identified. The positional difference between these two solutions is calculated, and it is utilized by the other N − 1 solutions during the optimization process. The best solution updates its position vector as follows:

$$ P_{i+1} = P_i \times F(P_i) / F_{avg} \qquad (3) $$

Here, F(P_i) and F_{avg} are the fitness of P_i and the average fitness of the population, respectively. The remaining N − 1 solutions update their position vectors as follows:

$$ P_{i+1} = P_i + \alpha \times (B - W) \qquad (4) $$

Here, B and W are the position vectors of the best and the worst solutions, respectively. The updated position vectors of Eqs. 3 and 4 are combined to get the offspring. In IEAM-RP, the adaptation operator is responsible for creating the offspring, and the selection operator is used to select the best N solutions based on their objective values. The selected solutions further participate in the creation of offspring. This process continues until the maximum number of iterations is reached.
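The following numpy sketch condenses these update rules into one generation and a small driver loop, using clustering_cost() from the previous sketch as the objective and flattening the C centroids into a single position vector; the population size, iteration count and toy data are illustrative, and details such as boundary clamping follow the wording of Algorithm 1 below rather than the authors' MATLAB code.

```python
import numpy as np

def ieam_rp_step(P, fitness, lb, ub):
    f = np.array([fitness(p) for p in P])
    best, worst = P[f.argmin()].copy(), P[f.argmax()].copy()
    alpha = np.random.rand(*P.shape)

    offspring = P + alpha * (best - worst)                 # Eq. (4) for the N - 1 solutions
    offspring[f.argmin()] = best * (f.min() / f.mean())    # Eq. (3) for the best solution
    offspring = np.clip(offspring, lb, ub)                 # clamp positions that cross the boundary

    union = np.vstack([P, offspring])                      # selection: keep the best N solutions
    uf = np.array([fitness(p) for p in union])
    return union[np.argsort(uf)[: len(P)]]

O = np.random.rand(200, 2)                                 # toy dataset, clustered into K = 3 groups
K, N = 3, 30
lb, ub = np.tile(O.min(axis=0), K), np.tile(O.max(axis=0), K)
P = lb + np.random.rand(N, K * O.shape[1]) * (ub - lb)     # Eq. (2): random initialization
cost = lambda p: clustering_cost(O, p.reshape(K, -1))
for _ in range(50):
    P = ieam_rp_step(P, cost, lb, ub)
print("best cost:", round(float(cost(P[0])), 4))
```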

4 Experimental Setup and Datasets

In addition to IEAM-RP, four other algorithms have been used to solve the data clustering problem: the Multi-Verse Optimizer (MVO) [11], the Salp Swarm Algorithm (SSA) [12], the Grey Wolf Optimizer (GWO) [13] and the Butterfly Optimization Algorithm (BOA) [14]. The parameters of MVO, SSA, GWO and BOA are set according to their corresponding references [11–14], respectively. The common parameter settings for all algorithms are given below:
– population size = 100
– maximum number of iterations, MaxIter = 200
– number of independent runs = 40.


Six benchmark datasets, given in Table 1, are used to check the quality of the solutions of the proposed approach. The detailed description of these datasets is available at [15]. The number of instances represents the number of data objects in the respective dataset, whereas the number of features can be considered as the number of attributes. During the optimization process, the dimension is the same as the number of features in the respective dataset. All algorithms have been implemented in MATLAB R2017a on a machine with a 64-bit Windows 7 operating system, 4 GB of RAM and a Core-i5 processor.

Algorithm 1. Proposed Approach
1: Input: Objective function 1, population size, initial population, number of independent runs, MaxIter.
2: Output: Standard deviation, mean, and best values of the objective value.
3: Initialize population size N = 100, MaxIter = 200, number of independent runs R = 40.
4: repeat
5:   for r = 1 to R do
6:     Initialize the population according to equation 2.
7:     Evaluate each solution according to equation 1.
8:     for iter = 1 to MaxIter do
9:       Update the position of the best solution according to equation 3.
10:      Update the position of the remaining solutions according to equation 4.
11:      Combine the updated positions to get the offspring.
12:      Clamp the positions if they cross the boundary.
13:      Evaluate each solution of the offspring.
14:      Select the best N solutions using the selection operator.
15:    end for
16:    Store the best and mean values of the objective value.
17:  end for
18:  Compute and store the standard deviation, overall mean, and best values of the objective value.
19: until the termination condition.

Table 1. Description of datasets
Name          #Instances   #Features   #Clusters
Aggregation   788          2           7
Compound      399          2           6
Iris          150          3           4
Pathbased     300          2           3
Spiral        312          2           3
Yeast         1484         8           10


5 Result Analysis

The performance of the algorithms has been compared based on various parameters. Table 2 shows the best, mean and standard deviation of the sum of intra-cluster distances between each data object and its nearest centroid. Based on Table 2, the ranks of the algorithms have been evaluated by considering the mean value of the sum of the intra-cluster distances for the respective dataset, and they are given in Table 3. From Table 3, it is clear that IEAM-RP is the best on all datasets considered in this study. Apart from this, three statistical tests (the Friedman test, the Iman-Davenport test and the Holm test) have been conducted to check whether there is a significant difference among the performances of the algorithms. These tests have been performed at the 5% (α = 0.05) significance level. The rejection of the null hypothesis reported in Table 4 shows that there is a significant difference in the performances of the algorithms. The result of the Holm test is shown in Table 5: the control algorithm (IEAM-RP) is statistically better than MVO, SSA and BOA regarding the sum of intra-cluster distances, while there is no significant difference between GWO and IEAM-RP according to Holm's method. Tables 6, 7, 8, 9, 10 and 11 show the best centroids obtained by the suggested approach. Figures 1, 2 and 3 show the convergence curves, drawn between the best value of the sum of intra-cluster distances found over 40 runs and the number of iterations. All convergence curves justify the effectiveness of the proposed approach. The competitive behavior of the suggested approach for data clustering can be attributed to its fast convergence rate and population diversity. At the start of the program execution, B − W is large, and it decreases with respect to the iterations. Hence, initially, IEAM-RP makes the individuals capture promising solutions from different regions of the search space; in other words, IEAM-RP initially pays more attention to exploration, whereas in later generations it performs exploitation to target globally optimal solutions.
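As an aside for reproducibility, the Friedman statistic reported in Table 4 can be recomputed from the mean values in Table 2 with scipy, as in the sketch below (each list holds one algorithm's mean intra-cluster distance over the six datasets); the Iman-Davenport and Holm procedures are not part of scipy and are omitted.

```python
from scipy.stats import friedmanchisquare

# mean sum of intra-cluster distances per dataset (order: Aggregation, Compound,
# Iris, Pathbased, Spiral, Yeast), taken from Table 2
means = {
    "BOA":     [3431.8201, 1353.6063, 156.5371, 1570.6518, 1857.8585, 438.1627],
    "SSA":     [3370.1267, 1400.3753, 162.9212, 1532.7950, 1828.4559, 567.8223],
    "GWO":     [3105.3061, 1239.2458, 124.6085, 1483.0788, 1822.1418, 393.5267],
    "MVO":     [3487.4141, 1449.8194, 169.1354, 1631.3121, 1934.9132, 557.2306],
    "IEAM-RP": [2920.0311, 1156.8289, 102.4985, 1435.5742, 1813.3840, 336.8691],
}
stat, p = friedmanchisquare(*means.values())
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.6f}")   # ~21.733, p ~ 0.000226
```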

Fig. 1. Convergence curves of the algorithms (IEAM-RP, BOA, SSA, GWO, MVO) for (A): Aggregation dataset, (B): Compound dataset; objective value versus number of iterations.


Table 2. Experimental values of algorithms for different datasets
Dataset       Criteria   BOA         SSA         GWO         MVO         IEAM-RP
Aggregation   Best       3016.2606   3021.5795   2861.1612   3193.3058   2712.0074
              Mean       3431.8201   3370.1267   3105.3061   3487.4141   2920.0311
              Std        115.7823    182.0493    143.4811    156.0463    92.4011
Compound      Best       1220.4659   1226.4716   1135.5401   1235.9539   1059.9693
              Mean       1353.6063   1400.3753   1239.2458   1449.8194   1156.8289
              Std        50.7289     53.7393     47.6109     64.9182     36.7378
Iris          Best       132.6838    133.5985    99.8506     120.7583    96.6554
              Mean       156.5371    162.9212    124.6085    169.1354    102.4985
              Std        9.5721      10.4877     11.4278     13.8657     1.4485
Pathbased     Best       1464.4799   1456.0782   1429.1363   1478.3134   1424.7146
              Mean       1570.6518   1532.7950   1483.0788   1631.3121   1435.5742
              Std        47.3356     34.2839     39.9193     68.5842     2.9260
Spiral        Best       1811.1415   1808.5225   1808.2743   1846.3969   1807.5107
              Mean       1857.8585   1828.4559   1822.1418   1934.9132   1813.3840
              Std        20.3835     13.3307     10.1070     53.2782     1.3896
Yeast         Best       380.7447    479.4718    360.4609    395.9828    269.0910
              Mean       438.1627    567.8223    393.5267    557.2306    336.8691
              Std        20.9977     37.2512     16.1074     58.5927     9.9029

Table 3. Average rank of algorithms in different datasets
Datasets       BOA    SSA    GWO   MVO    IEAM-RP
Aggregation    4      3      2     5      1
Compound       3      4      2     5      1
Iris           3      4      2     5      1
Pathbased      4      3      2     5      1
Spiral         4      3      2     5      1
Yeast          3      5      2     4      1
Average rank   3.5    3.67   2     4.83   1

Table 4. Experimental results of statistical tests
Test             Statistical value   p-value    Null hypothesis
Friedman         21.733              0.000226   Rejected
Iman-Davenport   47.933

Site 1 > Site 3 > Site 4 > Site 5

TOP 20 by number of visits (January 2018)

Site 3 > Site 2 > Site 1 > Site 4 > Site 5

TOP 20 by number of visits (November 2017)

Site 2 > Site 3 > Site 1 > Site 4 > Site 5

TOP 20 by number of visits (January 2017)

Site 1 > Site 3 > Site 2 > Site 4 > Site 5

6 Conclusion

In this work, the criteria weights affect the final ranking of the alternatives. The integrated fuzzy ANP-TOPSIS approach is an ideal solution for taking expert judgments into account in the evaluation process; it is a qualitative approach. The results highlight the strengths and weaknesses of each web site, and they are more convincing than the rankings by number of visits. The approach is intended for web managers, who can apply it to monitor their sites. The ranking results can be updated and advertised regularly, in parallel with the number-of-visits results, in order to help visitors choose the best shopping site. In future work, the approach can be extended to consumer judgments, to obtain the ranking according to them and to study their behavior, which is a relevant topic [1] in the marketing discipline.

Acknowledgements. The author acknowledges the support of Prof. Ilhem Kallel, the helpful discussions with Prof. Jorge Casillas and the participation of the expert Nadhem Khanfir in answering the questionnaire.

Ethical Approval. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.


Informed Consent Statement. Informed consent was obtained from all individual participants included in the study.

Appendix

In this Appendix, the constructed questionnaire in Table 9 gives the expert a better comprehension of the identified criteria. It was submitted online.

Table 9. The questionnaire
Criterion          Questions

Purchase intention   1. How do you assess your intention to purchase? 2. Does all the provided information about a product influence your intention/decision to purchase? 3. Does the use of symbols such as 'Recommended product', 'Top sales' and 'Special offer' influence your choice to buy a specific product? 4. Will you return to this web site?

1. How do you assess the price of a product reasonable to buy it? 2. Did the information displayed reveal the popularity of a product?

Satisfaction

1. How do you assess satisfaction and enjoyment after visiting the web site? 2. Would you recommend this web site to a friend?

Service

1. 2. 3. 4.

Content

1. How do you find the information describing a product?

Design

1. How do you find the arrangement of visual elements on the user interface? 2. How do you assess the structure of the page?

Aesthetics

1. How do you assess the visual appearance of the web site? 2. How do you assess the used colors? 3. How do you assess the used fonts?

Security

1. How do you see the authentication service and privacy in order to protect your personal data? 2. Do you feel secure when making a transaction? 3. Did the purchase go through any security policies? (e.g. security certification) 4. How do you assess the accessibility and clarity of the policy statements?

How do you assess customer support? How do you assess order tracking on time delivery? How do you assess information about product availability? Did you find the product useful as you expected to be?


References 1. Casillas, J., Martínez-López, F.J.: Mining uncertain data with multiobjective genetic fuzzy systems to be applied in consumer behaviour modelling. Expert Syst. Appl. 36(2 PART 1), 1645–1659 (2009). https://doi.org/10.1016/j.eswa.2007.11.035 2. Chen, S., et al.: Group-buying website evaluation based on combination of TOPSIS, entropy weight and FAHP. J. Convergence Inf. Technol. 7(7), 130–139 (2012). https://doi.org/10. 4156/jcit.vol7.issue7.17 3. Chiou, W.-C., et al.: A strategic framework for website evaluation based on a review of the literature from 1995–2006. Inf. Manag. 47(5–6), 282–290 (2010). https://doi.org/10.1016/j. im.2010.06.002 4. Gupta, M., Narain, R.: A fuzzy ANP based approach in the selection of the best E-Business strategy and to assess the impact of E-Procurement on organizational performance. Inf. Technol. Manage. 16(4), 339–349 (2014). https://doi.org/10.1007/s10799-014-0208-y 5. Gurrea, R., et al.: The role of symbols signalling the product status on online users’ information processing. Online Inf. Rev. 37(1), 8–27 (2013). https://doi.org/10.1108/14684521311311603 6. Hsu, C.-L., et al.: The impact of website quality on customer satisfaction and purchase intention: perceived playfulness and perceived flow as mediators. IseB 10(4), 549–570 (2012). https://doi.org/10.1007/s10257-011-0181-5 7. Hsu, T.-H., et al.: A hybrid ANP evaluation model for electronic service quality. Appl. Soft Comput. J. 12(1), 72–81 (2012). https://doi.org/10.1016/j.asoc.2011.09.008 8. Kumar, A., et al.: Using entropy and AHP-TOPSIS for comprehensive evaluation of internet shopping malls and solution optimality. Int. J. Bus. Excellence 11(4), 487–504 (2017). https:// doi.org/10.1504/IJBEX.2017.082575 9. Liang, R., et al.: Evaluation of e-commerce websites: an integrated approach under a singlevalued trapezoidal neutrosophic environment. Knowl. Based Syst. 135, 44–59 (2017). https:// doi.org/10.1016/j.knosys.2017.08.002 10. Luo, J., et al.: The effectiveness of online shopping characteristics and well-designed websites on satisfaction. MIS Q. Manag. Inf. Syst. 36(4), 1131–1144.A9 (2012) 11. Mavlanova, T., et al.: Website signal perceptions and seller quality identification. In: 17th Americas Conference on Information Systems 2011, AMCIS 2011, pp. 1272–1280 (2011) 12. Nirmala, G., Uthra, G.: Quality of online shopping websites in india: a study using intuitionistic fuzzy AHP. J. Adv. Res. Dyn. Control Syst. 9(4), 117–124 (2017) 13. Polites, G.L., et al.: A theoretical framework for consumer e-satisfaction and site stickiness: an evaluation in the context of online hotel reservations. J. Organ. Comput. Electron. Commer. 22(1), 1–37 (2012). https://doi.org/10.1080/10919392.2012.642242 14. Rekik, R., et al.: Assessing web sites quality: a systematic literature review by text and association rules mining. Int. J. Inf. Manage. 38(1), 201–216 (2018). https://doi.org/10.1016/ j.ijinfomgt.2017.06.007 15. Rekik, R., et al.: Extraction of association rules used for assessing web sites’ quality from a set of criteria. In: 2014 14th International Conference on Hybrid Intelligent Systems (HIS), pp. 291–296 (2014). https://doi.org/10.1109/HIS.2014.7086164 16. Rekik, R., et al.: Ranking criteria based on fuzzy ANP for assessing E-commerce web sites. In: 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 3469– 3474 (2016) 17. Rekik, R., et al.: Using multiple criteria decision making approaches to assess the quality of web sites. Int. J. Comput. Sci. Inf. Secur. 
14(7), 747 (2016)


18. Sun, S.-Y., et al.: Social cognition and the effect of product quality on online repurchase intention. In: Pacific Asia Conference on Information Systems, PACIS 2014 (2014) 19. Zaim, H., et al.: Multi-criteria analysis approach based on consumer satisfaction to rank B2C E-commerce websites. Presented at the SITA 2016 - 11th International Conference on Intelligent Systems: Theories and Applications (2016). https://doi.org/10.1109/SITA.2016. 7772260

Implementation of Block Chain Technology in Public Distribution System
Pratik Thakare(B), Nitin Dighore(B), Ankit Chopkar, Aakash Chauhan(B), Diksha Bhagat(B), and Milind Tote(B)
Department of Computer Science and Engineering, JD College of Engineering and Management, Nagpur, India
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. The Public Distribution System (PDS) is a scheme started by the government for distributing essential goods to people who cannot afford them or who do not have sufficient income to buy them. Various frauds and malicious activities are carried out while distributing goods to individuals, such as making goods artificially unavailable and creating fraudulent ration cards; to reduce such frauds, the government has introduced several schemes to digitize the system. We propose a blockchain-based process to govern the PDS, consisting of a network of farmers, the Central Government, the State Governments, fair price shops and customers. The blockchain stores the logs of all food transfers: the supply of food stock from farmers to the Central Government, from the Central Government to the State Governments, the distribution of food to fair price shops by the State Governments, and the supply of food to customers by the fair price shops. Implementing blockchain technology in this system makes it decentralized and provides transparency to every individual who works in or is connected to the system. Since blockchain is a highly secure technology, it is well suited to freeing this domain from malicious activities. The implementation helps customers avoid activities that are not in their favour; they also gain the right to see what was transferred to the fair price shop by the government, what other customers have taken, and what stock remains with the shop owner. The whole process becomes decentralized, with transparency for every person connected to the system.

Keywords: Block chain · Decentralized · Transparency · Malicious activities

1 Introduction

At its core, a blockchain is a distributed database that permits direct exchange between two parties without the need for a central authority. The blocks in the chain are connected together through hash values. A hash value is a cryptographic value
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 210–219, 2021. https://doi.org/10.1007/978-3-030-49336-3_21


It is generated by the system using an algorithm, and each hash value depends on the previous hash value, which it references. The blockchain grows as data records are added with each transaction, and every block's data depends on the data of the previous block. The data in the blocks can be in any digital format. Blockchain may well be the dark horse among emerging technologies, since it is influencing most recent technologies, such as the cloud. In the Public Distribution System, for many decades there has been no transparency between the government and the customer regarding the transfer of goods. The PDS was set up for people who cannot afford the prices of quality goods, and this subsidized scheme was launched in the year 1994. There are many frauds regarding stock availability, and fake ration cards are created by fair price shop owners to earn extra money at the expense of the customers' rights. In the usual procedure, the customer goes to the shop, shows the ration card and takes the goods, the retailer makes the corresponding entry, and the customer leaves; if the retailer says there is no stock, the customer can do nothing about it. To avoid such problems, we are developing a system that gives every customer the right to access the records of stocks supplied by the government. In our system, every transfer of goods is open for review but not for modification. The primary goal of our project is to decentralize the system.

2 Literature Survey

In their system, the authors use a model based on an ATM machine. The Aadhaar card contains all related information, such as name, contact number, address, bank account details, biometric information and demographic data. In the automated system, they replace the conventional ration card with a digital (RFID-based) card, which contains the unique Aadhaar identification numbers of all the family members and the card holder type (APL or BPL), used for customer verification when buying commodities. Verification sometimes fails due to technical problems in the chip or server errors, and human interaction is still required to place the card and obtain the desired items [1].
An automated Public Distribution System was proposed for efficient, accurate and computerized ration distribution. The current ration distribution system has drawbacks such as inaccurate quantities of goods, long waiting times, low processing speed and material theft in ration shops. The main goal of the designed system is to replace manual work with automation of the ration shop and bring transparency to the Public Distribution System. Such systems are expensive due to their complexity and are only compatible with some card readers; if a memory failure occurs and transactions are not recorded, a large loss can result. GSM technology can also suffer bandwidth lag when many users share the same bandwidth [2].
Blockchain can be used in many applications; some applications that adopt blockchain technology are banking applications, e-voting applications and


digital forensic applications. Typically, applications that use blockchain technology focus only on developing the blockchain features that suit their needs. A feasibility study on storing digital evidence using blockchain is yet to be implemented; it would help with cost reduction and would also preserve the integrity of the evidence. The use of blockchain technology is not restricted to financial aspects but extends to a wide range of applications and implementations [3].
Blockchain-based applications are springing up, covering numerous fields including financial services, reputation systems and the Internet of Things (IoT). However, there are still many challenges of blockchain technology, such as scalability and security issues, waiting to be overcome. Each node needs to store all the transactions in order to validate them on the blockchain, because it must check whether the source of the current transaction is unspent or not. The blockchain can only process about 7 transactions per second, so it is difficult to process large amounts of data. The authors present a comprehensive survey on blockchain; they first give an overview of blockchain technologies, including the blockchain architecture and the key characteristics of blockchain [5].
MedRec is a novel, decentralized record management system for Electronic Medical Records that uses blockchain technology. The system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Using blockchain technology, MedRec has shown how principles of decentralization can be applied to large-scale data management in an Electronic Medical Records system. MedRec enables patient data sharing and provides incentives for medical researchers to sustain the system [6].
A minimal introduction to blockchain technology is followed by a medical application (MedBlocks). Blockchain technology can potentially transform medicine and other fields through decentralization. It would be valuable in the future to use such technology in support not only of patients but also of doctors and researchers who need access to patients' medical records [7].
The present Public Distribution System has several well-documented problems, such as a lack of transparency and accountability, poor administration and a poor service delivery mechanism. This work proposes an improvised method of implementing a smart ration card system that keeps track of individual accounts. The system demonstrates effective management of the ration distribution system: the system administrator can check the availability of the ration for the beneficiary on one side, and the customers can see the transactions at their end [8].
Blockchain is one of the most heavily invested-in technologies in recent years. Due to its tamper-resistant and decentralization properties, blockchain has become an ideal utility for data storage that is applicable in many real-world industrial scenarios. One significant scenario is web logs, which are treated as sources of technical significance and business revenue in major web companies. Building a blockchain-powered storage system that is fast and scalable with security is their long-term goal; their final goal is to get as close as possible to a blockchain-powered log system like Elasticsearch that works for audit logs as well as web server logs [9].


The results show that the adoption of blockchain-based applications in e-Government is still limited and that there is a lack of empirical evidence. The main challenges faced in blockchain adoption are predominantly technological aspects, such as security, scalability and flexibility. Some of the articles discuss current issues, the potential benefits, the significance and a general vision of adopting blockchain technology to improve public service delivery and e-voting [10].

3 Methodology

3.1 Working of Block Chain
There are several steps involved in the formation of the blockchain. Creating a block is not a simple task: a few things must be kept in mind, and the blocks and the data in them must be validated using appropriate algorithms.
1. Inserting the data into the block: for every new transaction, the related information must be inserted into a block. This is done by the person making the transaction in the system, and users must enter the transaction correctly.
2. Validating the data of the blocks: the data entered in the blocks needs to be validated as unique and correct. This is done by the miners who are in the network for the validation process; validation is necessary to avoid frauds and inconsistency.
3. Generation of the data hash: the data hash is a system-generated value over the data that the individual enters while making the transaction. Once the data hash is generated, the values cannot be altered.
4. Generation of the block hash: in the same way, a block hash is generated for the purpose of uniqueness. The hash values are generated with the SHA-256 algorithm, and each generated hash value references the previous hash value.
5. Adding the blocks to the chain: once the transaction has been made and validated by the miners, the block is added to the chain according to its hash value.

3.2 Existing System Working
In the current system, the central and the state governments take responsibility for managing the distribution of the stocks. In general, the storage division, the transportation division and the distribution offices are controlled by central government authorities. Allocation is done in bulk to each state; the state government is then responsible for allocating the food to the many registered fair price shops, and the shop owners are responsible for distributing it among the customers who are below the poverty line. This is supposed to be done in a fair manner, as expected by the administration.


In the current system, no record of the availability of goods is shown to the customers. Fair price shop owners also sell the goods to private businessmen to earn extra money from them, and customers do not know the exact quantity that has been allocated for their localities. There is no transparency regarding the data, and so these problems arise. Nowadays Aadhaar-based allocation is in use, but the transactions are still not transparent; it only solves the problem of fake ration cards (Fig. 1).

Fig. 1. Existing system

3.3 Proposed System
The web site will be accessed by all the beneficiaries. The first page is the home page, which contains some information about public distribution and what kinds of items are distributed in the Public Distribution System. It also contains some general information of the kind found on all Indian government sites. One tab lets the user create a ration card account; here the user must enter general information and all the details of the ration card. Another tab leads to the transfer page, where only a higher-authority account can transfer ration stock to a lower-authority account.
First of all, to create a block there must be a transaction between two parties. Moreover, the transaction is verified not only by the two participating users but also by other authorities who are attached to the PDS network for validating transactions. The transaction-balancing group performs all the numerical


calculations. They verify how much stock the account holder has and how much is being transferred. If a user attempts to transfer more ration stock than the account actually holds, the transaction is invalid and fails. When a transaction is confirmed by the transaction-balancing group, it is stored in a block. All the data, such as the amount of stock, the date and the time, is stored in the block, ready to be appended to the other blocks. When all of the above steps have been executed successfully, the block is given its own hash to identify itself; the last item added to the block is the hash of the previous block. After every successful transaction the block is added to the chain, and the chain continues to grow.
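As a rough illustration of the block-creation and validation steps of Sects. 3.1 and 3.3, the sketch below shows how a ration-transfer record could be validated, hashed with SHA-256 and linked to the previous block. The field names (sender, receiver, quantity_kg) and the in-memory chain are illustrative assumptions, not the authors' implementation.

```python
import hashlib
import json
import time


def sha256_hex(payload: str) -> str:
    """Return the SHA-256 digest of a string as a hex value."""
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class Block:
    """A single block holding one validated ration-transfer record."""

    def __init__(self, transfer: dict, prev_hash: str):
        self.transfer = transfer            # e.g. sender, receiver, quantity in kg
        self.timestamp = time.time()
        self.prev_hash = prev_hash          # hash of the previous block
        self.data_hash = sha256_hex(json.dumps(transfer, sort_keys=True))
        self.block_hash = sha256_hex(self.data_hash + self.prev_hash + str(self.timestamp))


def validate_transfer(balances: dict, transfer: dict) -> bool:
    """Reject a transfer of more ration than the sender actually holds."""
    return balances.get(transfer["sender"], 0) >= transfer["quantity_kg"]


def append_block(chain: list, balances: dict, transfer: dict) -> bool:
    """Validate the transfer, then hash the record and append the block to the chain."""
    if not validate_transfer(balances, transfer):
        return False
    prev_hash = chain[-1].block_hash if chain else "0" * 64
    chain.append(Block(transfer, prev_hash))
    balances[transfer["sender"]] -= transfer["quantity_kg"]
    balances[transfer["receiver"]] = balances.get(transfer["receiver"], 0) + transfer["quantity_kg"]
    return True


# Example: the central government transfers 500 kg of rice to a state government.
chain, balances = [], {"central_govt": 1000}
ok = append_block(chain, balances, {"sender": "central_govt", "receiver": "state_govt",
                                    "commodity": "rice", "quantity_kg": 500})
print(ok, chain[-1].block_hash[:16], balances)
```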

4 Working

As we know, farmers produce large quantities of food grains and sell them to the central government in bulk. The government sells these grains as ration to BPL families through fair price shops. In this system, farmers have credit accounts; a credit account holds the entries for the amount of food grain sold to the government. While creating a credit account, farmers must give their personal details and biometric details for verification. A card is issued to each farmer containing their basic information. The government transfers the corresponding amount of money directly to the farmer's bank account, and these entries are updated by the central government.
The Central Government, the State Governments and the fair price shop owners have administrator accounts. Administrator account holders can create or delete any account and have the right to modify account details, but administrator accounts have no rights when it comes to transferring ration from one account to another. The central government credits the specified amount of ration to each state government. These transactions are in units of kilograms and are credited from one account to another. Each state government has its own separate account. The central government transfers its ration (in kg) to the state governments; the state government in turn transfers it to each district and then to each fair price shop, and the fair price shop owner finally distributes this ration to the consumers. The central government keeps monitoring the transfer of ration to the state governments (Fig. 2).
All the data is processed with the SHA-256 algorithm so that hash values are generated, and then, with the help of DCT-based data hiding schemes, it is distributed among the servers. With the help of blockchain APIs we retrieve this data, and the user is able to see it. An E-POS (Electronic Point of Sale) scanner is used to read the fingerprint of the retailer.
As we have seen, ration is transferred from one account to another. The fair price shop owner creates the accounts for the consumers. A consumer account is a shared account for all the family members; it stores information about each family member along with their biometric details.


Fig. 2. Block diagram of stock exchange

This is so that any of the family members can collect the ration from the fair price shop. The consumer is also given a card similar to the farmers' card, containing their basic information: the account number, the card holder's name and some essential details. When a consumer comes to collect the ration, they first present the card; the shop owner then enters the account number and asks the consumer to verify their fingerprint. After successful validation, a page opens showing the information of the consumer and their family members, as well as how much ration is allocated to the family. The allocated ration is calculated on the basis of the number of family members: 2 kg rice, 3 kg wheat and 400 g sugar per family member. The consumer then states the amount of ration they want to buy. The owner enters all the data and clicks on verify, which checks each food grain separately: the owner selects one food grain at a time, and its weight is verified with the input of a weighing machine. When everything is verified, the system redirects to the transaction page, where the amount due is shown and the money is debited directly from the consumer's bank account. The owner monitors the stock and the bank manages the payments (Fig. 3).
In this system we avoid the usual means of storing data in rows and columns, and instead use blockchain to store and hide the data. Previously, data was stored in databases, which are easily vulnerable to third parties. To avoid this, we protect all the data that the user enters and scatter it with hiding algorithms all over the memory or the cloud. We use the SHA-256 algorithm to hash the data and a DCT-based data hiding scheme to hide it.
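The per-family allocation described above is simple arithmetic; the following small sketch (the helper name is hypothetical) computes it from the per-member quotas stated in this section.

```python
# Per-member quotas stated above: 2 kg rice, 3 kg wheat, 400 g sugar.
QUOTA_PER_MEMBER = {"rice_kg": 2.0, "wheat_kg": 3.0, "sugar_kg": 0.4}


def family_allocation(members: int) -> dict:
    """Total ration allocated to a consumer account with the given family size."""
    return {item: qty * members for item, qty in QUOTA_PER_MEMBER.items()}


# A family of 4 is entitled to 8 kg rice, 12 kg wheat and 1.6 kg sugar.
print(family_allocation(4))
```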


Fig. 3. Distribution to customer

5 Conclusion

By developing this project, we conclude that frauds related to the goods will no longer occur and that there will be transparency between the customers and the government regarding stock distribution. There will be no frauds by retailers when data entries are made, and shop owners will not be able to sell the goods to private shops for extra profit. This system will keep track of the activities performed by the government and the other organizations involved in distribution, so frauds of the kind currently happening in the system should not occur in the future. Once the system becomes decentralized, there is no involvement of a third party in between the transactions. As blockchain is one of the fastest-growing technologies in the world and has proven secure in practice, it could also be used in voting systems, for counting votes and providing transparency about which party is receiving more votes. It could likewise be used in the distribution of money given by the government to people for building toilets and houses for the homeless.

Acknowledgement. We are thankful to our guide Prof. Milind Tote, Assistant Professor in the Department of Computer Science and Engineering, JD College of Engineering and Management, for his valuable support on every concept and for giving us the opportunity to work on new technologies.


References 1. Padmavathi, R., Mohammed Azeezulla, K.M., Venkatesh, P., Mahato, K.K., Nithin, G.: Digitalized aadhar enabled ration distribution using smart card. ECE, RGIT, Bangalore, India, 978-1-5090-3704-9/17© 2017 IEEE (2017) 2. Gaikwad Priya, B., Nikumbh, S.: E – public distribution system using SMART card and GSM technology. Department of Electronics Yadavrao Tasgaonkar Institute of Engineering and Technology, Karjat, India, 978-1-53861959-9/17 ©2017 IEEE (2017) 3. Andrian, H.R., Kurniawan, N.B., Suhardi: Blockchain technology and implementation: a systematic literature review. School of Electritical Engineering and Informatics, Institute technology Bandung (ITB) Bandung, Indonesia 978-1-5386-5693-8/18/ ©2018 IEEE (2018) 4. Anil, Tumkuru Mallikarjun, B.S., Mala, S.: IOT based smart public distribution system. Department of ECE, Siddaganga Institute of Technology, Tumkuru. Int. J. Comput. Appl. (0975 – 8887) National Conference on Electronics, Signals and Communication (2017) 5. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An overview of blockchain technology: architecture, consensus, and future trends, School of Data and Computer Science, Sun Yat-sen University Guangzhou, China, 978-1-5386-1996-4/17 © 2017 IEEE (2017) 6. Azaria, A., Ekblaw, A., Vieira, T., Lippman, A.: MedRec: using blockchain for medical data access and permission management, Media Lab Massachusetts Institute of Technology Cambridge, MA, 02139, USA, 978-15090-4054-4/16 © 2016 IEEE (2016) 7. Cirstea, A., Enescu, F.M., Stirbu, N.B.C., Ionescu, V.M.: Block chain technology applied in health. University of Pitesti, România, 978-1-5386-4901-5/18 ©2018 IEEE (2018) 8. Chandankhede, C., Mukhopadhyay, D.: A proposed architecture for automating public distribution system, Department of Information Technology Maharashtra Institute of Technology Pune, India, 978-1-5090-6471-7/17 ©2017 IEEE (2017) 9. Yang, D., Duan, N., Guo, Y., Zhang, L.: Medusa: blockchain powered log storage system. Humsen, Beijing, China, 978-1-5386-65657/18/2018 © IEEE (2018) 10. Carter, L., Ubacht, J.: Challenges of blockchain technology adoption for E-governance. https:// www.researchgate.net/publication/325497149©ResearchGate 11. Hitaswi, N., Chandrasekaran, K.: Agent based social simulation model and unique identification based empirical model for public distribution system. In: 2017 International Conference on Recent Advances in Electronics and Communication Technology (ICRAECT), Bangalore, pp. 324–328 (2017). https://doi.org/10.1109/icraect.2017.68 12. Cai, Y., Zhu, D.: Fraud detections for online businesses: a perspective from blockchain technology. Financ. Innov. 2(1), 1–10 (2016). https://doi.org/10.1186/s40854-016-0039-4 13. Chen, G., Xu, B., Lu, M., Chen, N.-S.: Exploring blockchain technology and its potential applications for education. Department of Information Management, National Sun Yat-sen University, Kaohsiung, Taiwan, Springer Smart Learning Environments 5, 1 (2018). https:// doi.org/10.1186/s40561-017-0050 14. Krishnan, A., Raju, K., Vedamoorthy, A.: Unique identification (UID) based model for the Indian Public Distribution System (PDS) implemented in windows embedded CE, Instrumentation Department, Mumbai University, Vivekanand Institute of Technology, Chembur, Mumbai, ICACT 2011, February 13–16 (2011). ISBN 978-89-5519-1554 15. Fotiou, N., Polyzos, G.C.: Decentralized name-based security for content distribution using blockchains. 978-1-4673-9955-5/16/$31.00 ©2016 IEEE (2016) 16. 
Stanciu, A.: Blockchain based distributed control system for edge computing. National Institute for Research and Development in Informatics Bucharest, Romania, 2379-0482/17 © 2017 IEEE (2017). https://doi.org/10.1109/cscs.2017.102


17. Aishwarya, M., Nayaka, A.K., Chandana, B.S., Divyashree, N., Padmashree, S.: Automatcration materiel dispensing system. In: ICEI 2017, Department of Electronics and Communication Engineering Sambhram Institute of Technology, Bengaluru, 978-1-5090-42579/17/$31.00 ©2017 IEEE (2017) 18. Marchesi, M.: Why blockchain is important for software developers, and why software engineering is important for blockchain software, University of Cagliari, Italy, 978-1-5386-59861/18/© 2018 IEEE (2018) 19. Wang, R., Tsai, W.-T., He, J., Liu, C., Li, Q., Deng, E.: A video surveillance system based on permissioned blockchains and edge computing, Digital Society & Blockchain Laboratory Beihang University Beijing, China, 9781-5386-7789-6/19/ ©2019 IEEE (2019)

Chaotic Salp Swarm Optimization Using SVM for Class Imbalance Problems
Gillala Rekha1(B), V. Krishna Reddy2(B), and Amit Kumar Tyagi3(B)
1 Koneru Lakshmaiah Educational Foundation, Hyderabad, India, [email protected]
2 Koneru Lakshmaiah Educational Foundation, Guntur, India, [email protected]
3 Vellore Institute of Technology, Chennai, India, [email protected]

Abstract. In most real-world applications, the misclassification cost of minority class samples can be very high. For high-dimensional data this is a challenging problem, as it may increase overfitting and degrade the performance of the model. Selecting the most discriminative features is a popular and recently used way to address this problem. Many optimization algorithms have been proposed in the literature to solve class imbalance problems; among them are bio-inspired optimization algorithms, which are used to optimize feature or instance selection. In this paper, a new bio-inspired algorithm called the Chaotic Salp Swarm Algorithm (CSSA) is used to find the most discriminative features/attributes of a dataset. We employ 10 chaotic map functions to set the main parameters of the salp movements. The proposed algorithm selects the important features from the dataset and mainly comprises a feature selection phase and a classification phase. In the former, the most important features are selected using CSSA. The selected features are then used to train a Support Vector Machine (SVM) classifier in the classification phase. Experimental results demonstrate the ability of CSSA to select an optimal feature subset with accurate classification performance. Our observations on different datasets, using Accuracy, F-measure, G-Mean, AUC and a weighted metric as indicators, show that the approach provides a better solution.

Keywords: Salp swarm algorithm · Support vector machine · Chaotic mapping · Feature selection · Optimization algorithm

1 Introduction

Imbalanced datasets occur most often in many application domains [7]. Datasets are said to be skewed in nature when one class, called the majority/negative class, is sufficiently represented while the other, very important, class has fewer
Supported by KL University.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 220–229, 2021. https://doi.org/10.1007/978-3-030-49336-3_22


samples called minority/positive class. In simple terms, the important/positive samples will be very less in number compared to that of majority or negative samples. When the classifier is learned on such data distribution may result in misclassification of positive samples. To deal with skewed distribution problem, different approaches have been proposed in literature, broadly including data-level techniques, algorithm-level techniques and ensemble techniques [10]. The data-level technique works as preprocessing technique by resampling the data. The most common techniques used in resampling are Random OverSampling (ROS) and Random UnderSampling (RUS). In the former technique, the random samples are generated for minority samples, to balance the distribution. While in the latter, the samples of majority class are discarded. The main drawback for oversampling is increase in the classifier training time and for undersampling loss of important information [6]. One of the most simple and most popular oversampling technique is Synthetic Minority Over-sampling TEchnique (SMOTE) proposed by Chawla [3]. SMOTE generates synthetic samples from current minority samples by interpolation. But it may not be possible for critical applications like medical diagnosis which depends much on real data. Some of the methods proposed in extension with issues of SMOTE are Borderline-SMOTE [11], Safe-level SMOTE [14], Cluster Based Oversampling [12]. For Random Undersampling the proposed methods are Edited Nearest Neighbor (ENN) [8] and Tomek Links [9] are purely based on data cleaning. The algorithm-level techniques surrounded with modification of existing learning algorithms to avoid the bias towards minority class or incorporate different misclassification cost while learning. Few examples of former are Hellinger Distance Decision Trees (HDDT) [4], Class Confidence Proportion Decision Tree (CCPDT) [13] and other insensitive class-size decision trees and for latter, the author [5] presents Naive Bayesian and Support Vector Machine learning methods with equal and unequal costs. In the literature, the ensemble methods often give better results than individual classifiers. Ensemble algorithms [7] like bagging and boosting with pre-processing techniques have been successfully designed to work with imbalanced data. Recently, feature selection has gained interest by researchers in addressing the imbalance learning problem [16]. Previous techniques like sampling techniques, algorithm techniques, and ensemble methods have focused on training data samples. On the other hand, feature selection focus on identifying important features from the training data. Feature selection is an important pre-processing technique to achieve better performance of the classification algorithm. The main goal of feature selection is to eliminate redundant features in the dataset. Feature selection has been classified into filter method and wrapper method. Filter method works by using data properties without depending on learning algorithms whereas, wrapper methods will use the learning algorithms to generate important features. The latter methods are more exact but computationally expensive than former methods. The feature selection problem is defined as multi-objective optimization problem and the aim is to select the important features to maximize the performance of the classifier. However, exhaustive search


for optimal subset features is almost practically impossible. Swarm-based and evolutionary algorithms have been proposed in the literature to search for the most important features from the given dataset. Many Bio-inspired optimization algorithms exist in literature namely Particle Swarm Optimization (PSO) [17], Aritificial Bee Colony (ABC) [2], Dragon Fly Algorithm (DFA) [18], Salp Swarm Algorithm (SSA) [15]. In this work, the important features are selected using Salp Swarm Algorithm (SSA). As any other optimization algorithms, SSA falls into local optima and does not find the global optima. Hence, in this paper we combined various chaos function with SSA and proposed Chaotic Salp Swarm Algorithm (CSSA) to select the most important features. In this work, a support vector machine model was proposed to evaluate the imbalanced dataset. The proposed model consists of two phases. The former is the attribute selection phase, wherein the most distinction attributes were selected using the proposed CSSA algorithm. The resultant optimal attributes were then trained on SVM classifier in the next phase, i.e. the classification phase. The rest of this paper is organized as follows: Section 2 presents a brief description of the SSA that are used in our proposed model. The brief introduction to chaotic map functions were presented in Sect. 3. Section 4 deals with assessment methods used for measuring the performance of the classifier on imbalanced data. The proposed model is been presented in Sect. 5. The experimental results along with its analysis and conclusion were presented in Sect. 6 and 7 respectively.

2 Salp Swarm Optimization (SSA)

Recently, evolutionary and swarm-based algorithms have been widely used for the feature selection problem. These algorithms adaptively search the feature space by applying agents to reach an optimal solution.

The Working of the Salp Swarm Algorithm (SSA). In this section, the basic representation of SSA is presented. The salp swarm algorithm is a nature-based metaheuristic proposed by Mirjalili [15] in 2017. SSA is inspired by the swarming behaviour of salps in deep oceans: they form a swarm known as a salp chain for optimal locomotion while foraging [15].

Mathematical Model of SSA. In SSA, the entire population is divided into two groups, the leader and the followers. The front position of the swarm is taken up by the leader, followed by the remaining salps as followers. The salp positions are represented in an n-dimensional search space, where n is the number of dimensions or features. The positions of all the salps are stored in a two-dimensional matrix called x, and F denotes the food source in the search space. The mathematical model to update the position of the leader is as follows:

$$x_i^1 = \begin{cases} F_i + r_1\left((up_i - lw_i)\,r_2 + lw_i\right), & r_3 \ge 0 \\ F_i - r_1\left((up_i - lw_i)\,r_2 + lw_i\right), & r_3 < 0 \end{cases} \qquad (1)$$


Where $x_i^1$ represents the position of the leader salp in the i-th dimension, $up_i$ and $lw_i$ denote the upper and lower boundaries of the i-th dimension respectively, $F_i$ is the position of the food source in the i-th dimension, and $r_1$, $r_2$, $r_3$ are random numbers. The mathematical definition of $r_1$ is:

$$r_1 = 2\,e^{-\left(\frac{4k}{K}\right)^{2}} \qquad (2)$$

Where K is the maximum number of iterations and k is the current iteration. The random numbers $r_2$ and $r_3$ are generated uniformly in the range [0, 1]. The position of the followers is updated using Eq. (3):

$$x_i^j = \frac{1}{2}\,\alpha t^{2} + \beta_0\, t \qquad (3)$$

Where $x_i^j$ is the position of the j-th follower salp ($j \ge 2$) in the i-th dimension, t is time, $\beta_0$ is the initial speed, $\alpha = \frac{\beta_{final}}{\beta_0}$ and $\beta = \frac{x - x_0}{k}$. Because time in the optimization corresponds to the iteration counter, the discrepancy between iterations is equal to 1, and with $\beta_0 = 0$ the equation for updating the followers' position in the i-th dimension becomes:

$$x_i^j = \frac{1}{2}\left(x_i^j + x_i^{j-1}\right) \qquad (4)$$
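To make Eqs. (1)-(4) concrete, the following NumPy sketch (not the authors' code) performs one SSA iteration. Since $r_3$ is drawn from [0, 1], the sign condition of Eq. (1) is implemented here with a 0.5 threshold, a common reading of the algorithm; this is an assumption.

```python
import numpy as np


def ssa_update(positions, food, lower, upper, k, K, rng):
    """One SSA iteration over an (n_salps, n_dims) position matrix (Eqs. 1-4)."""
    r1 = 2.0 * np.exp(-((4.0 * k / K) ** 2))        # Eq. (2)
    new_pos = positions.copy()
    n_salps, n_dims = positions.shape

    for i in range(n_dims):                          # leader update, Eq. (1)
        r2, r3 = rng.random(), rng.random()
        step = r1 * ((upper[i] - lower[i]) * r2 + lower[i])
        # r3 is uniform in [0, 1], so the sign test is taken against 0.5 here.
        new_pos[0, i] = food[i] + step if r3 >= 0.5 else food[i] - step

    for j in range(1, n_salps):                      # follower update, Eq. (4)
        new_pos[j] = 0.5 * (new_pos[j] + new_pos[j - 1])

    return np.clip(new_pos, lower, upper)


rng = np.random.default_rng(0)
pos = rng.random((5, 3))                             # 5 salps in a 3-feature space
best = pos[0].copy()                                 # food source F = best solution so far
print(ssa_update(pos, best, np.zeros(3), np.ones(3), k=1, K=100, rng=rng))
```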

3 Chaotic Map

Chaos plays a vital role in governing the behavior of the swarm at each iteration: a negligible change in the initial state of the swarm may lead to a non-linear change in its future behavior. Chaos optimization algorithms are popular search algorithms that have recently been applied to evolutionary algorithms. Their main course of action is to search for the global optimum based on chaotic properties such as stochasticity, regularity and ergodicity; their main purpose is to prevent evolutionary algorithms from falling into local optima. In this work, we use ten chaotic mapping techniques to increase the performance of SSA. Figure 1 shows the different chaos mapping functions.
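As a small illustration, the logistic map is one commonly used chaotic map (the paper does not list here which ten maps appear in Fig. 1, so this is only an example): its iterates stay in (0, 1) and can stand in for the uniform random draws of SSA.

```python
def logistic_map(x0: float = 0.7, steps: int = 5):
    """Iterate the logistic map x -> 4x(1 - x), which is chaotic and stays in (0, 1)."""
    x = x0
    values = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        values.append(round(x, 4))
    return values


# In CSSA, a sequence like this would replace the random parameters (e.g. r2, r3) of SSA.
print(logistic_map())
```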

4 Performance Metrics for Skewed Data Distribution

Accuracy (Acc) is a well-known performance metric used in classification. It is defined as the ratio of correctly classified data samples to the total number of data samples (5). On imbalanced datasets, accuracy is biased towards the majority class and can lead to wrong decisions. Therefore, different performance metrics are needed to assess the performance of a classifier trained on imbalanced datasets; suitable metrics are precision, recall and AUC. Precision is the proportion of true positives to the total


Fig. 1. Different chaotic mapping functions

number of true positives and false positives (6). Recall/sensitivity represents how well the model detects the true positive samples (7). The F-measure combines both recall and precision and is defined in (8); therefore, the F-measure is more suitable than any other single metric when the data is skewed in nature.

Acc = (TruePositive + TrueNegative) / (TruePositive + TrueNegative + FalsePositive + FalseNegative)   (5)

Precision = TruePositive / (TruePositive + FalsePositive)   (6)

Recall = TruePositive / (TruePositive + FalseNegative)   (7)

F-Score = 2 × (Precision × Recall) / (Precision + Recall)   (8)

In this paper, the performance metrics used are Accuracy, AUC, F-Score, G-Mean and a weighted measure. We use a weighted metric of F-Score, G-Mean and AUC, defined as (9):

Weighted metric = (F-Score + G-Mean + AUC) / 3   (9)
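A minimal sketch of Eqs. (5)-(9) is given below. The G-Mean formula used here (the square root of sensitivity times specificity) is the standard definition and is not spelled out in the paper, and the AUC value is assumed to be supplied by the classifier evaluation.

```python
import math


def imbalance_metrics(tp, tn, fp, fn, auc):
    """Eqs. (5)-(9) from a binary confusion matrix; AUC is supplied separately."""
    acc = (tp + tn) / (tp + tn + fp + fn)                     # Eq. (5)
    precision = tp / (tp + fp)                                # Eq. (6)
    recall = tp / (tp + fn)                                   # Eq. (7)
    f_score = 2 * precision * recall / (precision + recall)   # Eq. (8)
    specificity = tn / (tn + fp)
    g_mean = math.sqrt(recall * specificity)                  # standard G-Mean definition
    weighted = (f_score + g_mean + auc) / 3.0                 # Eq. (9)
    return {"Acc": acc, "F-Score": f_score, "G-Mean": g_mean, "AUC": auc, "Weighted": weighted}


# Example: a minority class of 50 samples, 40 of them detected correctly.
print(imbalance_metrics(tp=40, tn=180, fp=20, fn=10, auc=0.88))
```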

5 CSSA: The Working Model

In this section a detailed description of the proposed model is given. The proposed model consists of a feature selection phase and a classification phase. The proposed architecture is shown in Fig. 2, and each phase is explained in detail below.


Fig. 2. The architecture of proposed model

5.1 Feature Selection Phase Using Chaotic Salp Swarm Algorithm (CSSA)

To select the appropriate attributes/features, the Chaotic Salp Swarm Algorithm (CSSA) is used. As discussed before, there are three main parameters that are chosen randomly and may affect the salp positions. This randomness in exploitation and exploration may badly affect the performance of the algorithm. To overcome this, chaotic maps are used to replace the random parameters of SSA; the combination of chaotic maps with the salp swarm algorithm is what defines the Chaotic Salp Swarm Algorithm (CSSA). In this paper, ten chaotic mapping techniques are used to select the features for classification. CSSA feature selection follows a wrapper method to find the optimal features at each iteration, resulting in an improvement of classification performance.

Fitness Function. The position of each salp at every iteration is evaluated using a predefined fitness function Fi. The main objective in evaluating the position of a salp is to select the minimum number of features with maximum classification accuracy. The fitness function combines the accuracy with a weight factor whose value lies in [0, 1], as below:

Fi = max(Accuracy + WF × (1 − FS/FC))   (10)

Where FS stands for the size of the selected feature subset and FC stands for the total count of attributes/features. The weight factor (WF) is used to improve the accuracy of the classifier and is usually set near 1; in our experiments, WF is set to 0.9. We employ 10-fold cross-validation, with the dataset partitioned into training and test sets: the training set is used to learn the SVM classifier, and the test set is used to evaluate the classifier and select the optimal features. Additionally, the K-Nearest Neighbor (K-NN) algorithm, where k is the number of nearest neighbors, is used to select the discriminative features. The best solution is the one with the optimal number of attributes/features and the maximum classification accuracy.
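A minimal sketch of the inner term of Eq. (10) follows; the accuracy value would come from the cross-validated SVM/K-NN evaluation described above (assumed here as an input), and CSSA keeps, over all salps, the position maximising this value.

```python
def cssa_fitness(accuracy, n_selected, n_total, wf=0.9):
    """Fitness of one candidate feature subset, the inner term of Eq. (10)."""
    return accuracy + wf * (1.0 - n_selected / n_total)


# A candidate keeping 5 of the 22 Parkinsons features with 0.92 cross-validated accuracy:
print(round(cssa_fitness(accuracy=0.92, n_selected=5, n_total=22), 4))
```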

5.2 Classification Phase

In classification phase, the selected features are trained on Support Vector Machine (SVM). We applied different kernel function such as Linear, Radial Basis Function, and Polynomial. The kernel functions are used to transform the


non-linear data into high dimensional space, to make it linearly separable. The selected features are given to SVM classifier as input to check the robustness of the selected features.

6 Result and Analysis

In this section, the proposed model is evaluated on different datasets using 10 chaotic mapping functions. In the first experiment, we evaluate the performance of Chaotic SSA using different SVM kernel techniques. In the second experiment, our proposed model is compared with SSA for dealing with class imbalance problems.

6.1 Data Set

We evaluate the proposed algorithm using 11 datasets from the Keel repository with different imbalance ratios (IR) [1]. Table 1 shows the details of the imbalanced datasets with the number of features and the imbalance ratio.

Table 1. Datasets used

Dataset        No. of features  No. of samples  IR
Breast Cancer  9                286             2.36
Elico          7                220             1.86
Glass          9                214             1.82
Harberman      3                306             2.78
Pageblock      10               5472            8.79
Parkinsons     22               195             3
Pima           8                768             1.87
Thyroid        5                215             5.14
Vehicle        18               846             2.88
Wisconsin      9                683             1.86
Yeast          8                1484            2.46

6.2 Results

In this section, we compare the results (Figs. 3, 4 and 5) of different kernel functions to determine the best kernel function. We also aim at identifying the best chaotic mapping function for imbalanced datasets. From the experiments it is observed that SVM with a linear kernel performed well on most of the datasets, followed by the polynomial kernel and lastly the radial basis kernel. Out of 11 datasets, 5 had better performance using the linear kernel compared with the polynomial and radial basis kernels. We also observed that all chaotic mapping functions


Fig. 3. Performance of accuracy on different chaotic mapping functions

Fig. 4. Performance of AUC on different chaotic mapping functions

worked well when classified using SVM (RBF, linear and polynomial kernels) on the Breast Cancer, Elico, Wisconsin, Thyroid and Pageblock datasets. In the experiments, the datasets were evaluated with both SSA and CSSA, and better results were obtained with CSSA.


Fig. 5. Performance of F-Measure on different chaotic mapping functions

7 Conclusion

In this paper, a novel hybrid of chaos with the salp swarm algorithm (CSSA) was proposed. To enhance the performance of SSA, ten chaotic mapping functions were used. The proposed CSSA is applied to feature selection to select the most discriminative features from imbalanced datasets. The results show that the CSSA algorithm outperforms SSA. In terms of classification, a linear SVM performed well on the features selected by CSSA.

References 1. Alcal´ a-Fdez, J., et al.: Keel data-mining software tool: data setrepository, integration of algorithms and experimental analysis framework. J. Multiple Valued Logic Soft Comput. 17 (2011) 2. Braytee, A., Hussain, F.K., Anaissi, A., Kennedy, P.J.: ABC-sampling for balancing imbalanced datasets based on artificial bee colony algorithm. In: 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), pp. 594–599. IEEE (2015) 3. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002) 4. Cieslak, D.A., Chawla, N.V.: Learning decision trees for unbalanced data. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 241–256. Springer (2008) 5. Datta, S., Das, S.: Near-bayesian support vector machines for imbalanced data classification with equal or unequal misclassification costs. Neural Netw. 70, 39–52 (2015) 6. Fern´ andez, A., del R´ıo, S., Chawla, N.V., Herrera, F.: An insight into imbalanced big data classification: outcomes and challenges. Complex Intell. Syst. 3(2), 105– 120 (2017)


7. Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., Herrera, F.: A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 42(4), 463–484 (2011) 8. Garc´ıa-Pedrajas, N., del Castillo, J.A.R., Cerruela-Garcia, G.: A proposal for local k values for k-nearest neighbor rule. IEEE Trans. Neural Netw. Learn. Syst. 28(2), 470–475 (2015) 9. Gu, Q., Cai, Z., Zhu, L., Huang, B.: Data mining on imbalanced data sets. In: 2008 International Conference on Advanced Computer Theory and Engineering, pp. 1020–1024. IEEE (2008) 10. Haixiang, G., Yijing, L., Shang, J., Mingyun, G., Yuanyue, H., Bing, G.: Learning from class-imbalanced data: review of methods and applications. Expert Syst. Appl. 73, 220–239 (2017) 11. Lemaˆıtre, G., Nogueira, F., Aridas, C.K.: Imbalanced-learn: a python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res. 18(1), 559–563 (2017) 12. Lim, P., Goh, C.K., Tan, K.C.: Evolutionary cluster-based synthetic oversampling ensemble (eco-ensemble) for imbalance learning. IEEE Trans. Cybern. 47(9), 2850– 2861 (2016) 13. Liu, W., Chawla, S., Cieslak, D.A., Chawla, N.V.: A robust decision tree algorithm for imbalanced data sets. In: Proceedings of the 2010 SIAM International Conference on Data Mining, pp. 766–777. SIAM (2010) 14. Ma, J., Afolabi, D.O., Ren, J., Zhen, A.: Predicting seminal quality via imbalanced learning with evolutionary safe-level synthetic minority over-sampling technique. Cogn. Comput., 1–12 (2019) 15. Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mirjalili, S.M.: Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017) 16. Moayedikia, A., Ong, K.L., Boo, Y.L., Yeoh, W.G., Jensen, R.: Feature selection for high dimensional imbalanced class data using harmony search. Eng. Appl. Artif. Intell. 57, 38–49 (2017) 17. Wahono, R.S., Suryana, N.: Combining particle swarm optimization based feature selection and bagging technique for software defect prediction. Int. J. Softw. Eng. Its Appl. 7(5), 153–166 (2013) 18. Zhang, L., Srisukkham, W., Neoh, S.C., Lim, C.P., Pandit, D.: Classifier ensemble reduction using a modified firefly algorithm: an empirical evaluation. Expert Syst. Appl. 93, 395–422 (2018)

Three-Layer Security for Password Protection Using RDH, AES and ECC
Nishant Kumar(B), Suyash Ghuge, and C. D. Jaidhar
Department of Information Technology, National Institute of Technology Karnataka, Surathkal, Mangalore, India
{16it123.nishant,16it114.suyash,jaidharcd}@nitk.edu.in

Abstract. In this work a three-layer password protection approach is proposed, utilizing the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC) and Reversible Data Hiding (RDH). RDH is a data hiding approach in which the host image can be recovered exactly. The proposed approach was implemented and evaluated on various images. Results are presented in terms of Peak Signal to Noise Ratio (PSNR) and rate. The obtained experimental results demonstrate the effectiveness of the proposed approach.

Keywords: Advanced encryption standard · Elliptic curve cryptography · Reversible data hiding

1 Introduction

Data hiding is a technique used to hide information inside an object. It links two collections of information: the data to be embedded and a media object such as an image. The interconnection between these two collections of data gives rise to quite different applications. For secret communication, the concealed information may have no relation to the media object; in authentication, the embedded data is closely related to the media object. In both kinds of application, the invisibility of the hidden data is of primary importance. Generally, data hiding distorts the media object, for example an image; because of this distortion, the original image can no longer be recovered once the hidden data has been extracted. Watermarking the media object is an effective way to build applications for invertible, lossless and reversible data hiding. At present, most techniques are irreversible [1]. RDH is one of the data concealing techniques, and there has been a significant increase in applications making use of it [2]. JPEG images are widely used nowadays and are a suitable envelope for data hiding, so a JPEG image is preferred for this technique: apart from camouflaging the data in the image, the image itself can also be restored. In the work proposed here, we treat JPEG images as ideal wrappers and propose an RDH technique using them. AES is a symmetric-key encryption technique used to encrypt data [3]. Each round of AES uses multiple stages to create the cipher text, and it is widely used in ATM
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 230–239, 2021. https://doi.org/10.1007/978-3-030-49336-3_23


machines and in the Windows Vista auditing system. ECC is a public-key cryptosystem; it is as effective as the Rivest-Shamir-Adleman (RSA) public-key cryptosystem but uses a shorter key length, which increases performance and decreases computational complexity. The rest of the paper contains a literature survey, followed by the proposed method of encrypting the AES key with the ECC algorithm and then hiding the encrypted text in a JPEG image. In the Methodology section, the proposed algorithm is explained in detail. The Results section contains a detailed time-complexity analysis of the proposed algorithm, the results are tabulated in the Tabulation of Results section, and the paper is concluded in the Conclusion section.
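The two cryptographic layers introduced above (and used later in Sect. 3.1) can be previewed with a hedged sketch. The paper encrypts the AES key "using ECC" without fixing the construction; the sketch below, based on the Python cryptography package, shows one common ECIES-style realisation (an ephemeral ECDH key agreement plus AES-GCM key wrapping) and is not necessarily the authors' scheme.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Layer 1: encrypt the password with AES under a freshly generated key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"my secret password", None)

# Layer 2: protect the AES key with ECC. Here an ephemeral ECDH exchange on curve
# P-256 derives a wrapping key, and the AES key is wrapped with AES-GCM (an
# ECIES-style assumption, not necessarily the construction used in the paper).
receiver_key = ec.generate_private_key(ec.SECP256R1())
ephemeral_key = ec.generate_private_key(ec.SECP256R1())
shared_secret = ephemeral_key.exchange(ec.ECDH(), receiver_key.public_key())
wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"aes key wrap").derive(shared_secret)
wrap_nonce = os.urandom(12)
wrapped_aes_key = AESGCM(wrap_key).encrypt(wrap_nonce, aes_key, None)

print(len(ciphertext), len(wrapped_aes_key))
```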

2 Literature Survey

This section reviews RDH techniques reported in the literature. The technique in [4] operates in the spatial domain. It uses modulo-256 addition to compute a hash value of the original image for the purpose of authentication, embedding the hash of the original image into the marked image as a watermark under a secret key. During modulo-256 addition, grayscale values can wrap around between 0 and 255, which introduces what is commonly known as salt-and-pepper noise. A second reversible marking technique was developed in the transform domain; it relies on a lossless multiresolution transform [5] and also uses modulo-256 addition. A spatial-domain method was reported in [6] that losslessly compresses some selected bit plane(s), thus leaving room for the data to be embedded. Because the essential information is also embedded in the cover medium as overhead, this technique is reversible. Since the method in [7] is intended for authentication, the amount of data that can be hidden is limited. The capacity of the method in [8], which relies on modulo-256 addition and the patchwork idea, is also limited, although the hidden data shows some robustness against high-quality JPEG compression. Due to the modulo-256 addition, it likewise suffers from salt-and-pepper noise, so the method cannot be used in most applications [9]. Specifically, the embedding capacity estimated by the authors ranges from a lower bound of 3 Kb to an upper bound of 41 Kb for a 512 × 512 × 8 grayscale cover image with an embedding amplitude of 4 (the estimated average PSNR of the marked image versus the original image is 39 dB) [10]. Most recent JPEG-image RDH procedures are realized by altering the Discrete Cosine Transform (DCT) coefficients, because the histogram of DCT coefficients follows a Laplace-like distribution, which is an ideal carrier for RDH. In [11], reversibility is achieved by changing the quantization table. The embedding capacity of [11] is improved by using another table obtained by multiplying by the inverse of an integer, as described in [12]. The cover medium is used to embed key information by altering the quantization table and modifying the run lengths of sequential zero coefficients [13]. A histogram-pair based system is put forward in [14]. An optimized methodology performs the embedding by picking the dilating bits and also

232

N. Kumar et al.

the magnitude of embedding. The technique [14] is improved in [15] by using leveled blocks utilizing the difference of Direct Cosine coefficients. A substitution strategy is put forward in [16] which targets limiting the document size increment by utilizing optimization which focuses on rate-distortion. The AC coefficients in [17] with non-zero value are changed and thus, information hiding is achieved. Furthermore, the levelled blocks are used in [17] for the improvement of the implementation of embedding. A block’s smoothness is estimated by calculating the total count of AC coefficients which are zero values inside the block. Before ECC become well known, public key cryptosystems calculations depended on Diffie-Hellman key exchange [18], Digital Signature Algorithm [19] and RSA cryptosystem [20]. RSA is still significant today, and is regularly utilized close by ECC. RSA can be effectively clarified, it is generally comprehended, and digital signature implementations, decryption and encryption can be composed effectively. In any case, the establishments and implementations of ECC are as yet a puzzle and not reasonable to most. [21] displayed different methodologies for effective hardware implementation of AES. Authors have discussed Architectural and Advanced streamlining optimization strategies.

3 Proposed Method 3.1 Encrypting the AES Key Using ECC The proposed approach utilizes the AES and ECC encryption techniques. The cryptographic hardness of the approach relies on the strength of ECC and the simplicity of AES. The textual or numerical information is subjected to an initial encryption using AES, the key for which is picked at random. For the second level of security, the AES key is encrypted using ECC. The resultant cipher text is consequently compressed and has undergone two levels of encryption, comprising AES and ECC. Such a hybrid encryption model provides a much better degree of security than either of these cryptographic techniques used individually. The decryption procedure is simply the reverse of this process.

3.2 Hiding the Encrypted Data in a JPEG Image Using Reversible Data Hiding The favorable condition for RDH to work is to use an image which is smooth, or to choose the blocks in the image which are smooth. This approach has proved to be effective for JPEG images [14–16], as it ensures that smooth DCT blocks are selected for hiding the data in the image. Over time, several methods have been put forward for searching or selecting smooth blocks for data hiding with the help of the quantization table, as shown in Fig. 1. One of the methods sorts the blocks by the variance of their DC coefficients; another counts the zero-valued AC coefficients of a DCT block in order to calculate the smoothness value [13]. The fluctuation value Q measures each block's smoothness in the JPEG image, as shown in Eq. (1):

Q = (1/N) × ∑_{k=2}^{N} (yk)²   (1)


Fig. 1. Calculation of quantized DCT coefficients.

Here, yk is the kth AC coefficient in a DCT block. Since only the AC coefficients are used in the proposed methodology for the fluctuation calculation, the index k starts from 2. For a threshold PF, a block with Q less than PF is selected for data embedding; a block which does not fulfil this condition is skipped and left unchanged. The same procedure is then followed for the subsequent blocks in the image, and text extraction is achieved by processing the algorithm in reverse order.
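To make the block-selection step concrete, the following is a minimal sketch of how the fluctuation value and the threshold test could be computed. It is illustrative only: the function names, the use of OpenCV's cv2.dct as a stand-in for the JPEG quantized coefficients, and the threshold handling are our assumptions, not the authors' code. Note that the sum of squared AC coefficients in Eq. (1) is independent of the scan order, so a simple row-major flattening is used instead of the JPEG zig-zag scan.

```python
import numpy as np
import cv2  # OpenCV, used here only to obtain per-block DCT coefficients


def block_fluctuation(dct_block):
    """Fluctuation value Q of one 8x8 DCT block (Eq. 1): mean of squared AC coefficients."""
    coeffs = dct_block.flatten()
    ac = coeffs[1:]                     # drop the DC coefficient; k runs from 2 to N
    return float(np.sum(ac ** 2)) / ac.size


def select_smooth_blocks(gray_image, pf_threshold):
    """Return top-left corners of 8x8 blocks whose fluctuation Q is below the threshold PF."""
    h, w = gray_image.shape
    smooth = []
    for r in range(0, h - h % 8, 8):
        for c in range(0, w - w % 8, 8):
            block = np.float32(gray_image[r:r + 8, c:c + 8])
            q = block_fluctuation(cv2.dct(block))
            if q < pf_threshold:
                smooth.append((r, c))   # candidate block for data embedding
            # blocks with q >= pf_threshold are skipped and left unchanged
    return smooth
```

Data bits would then be embedded only into the blocks returned by select_smooth_blocks, and extraction repeats the same scan in reverse order.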

Fig. 2. Lena.

Fig. 3. Baboon.

Fig. 4. Airplane.

Fig. 5. Barbara.

4 Methodology In the proposed method, a 'Private key' and a 'Cipher private key' are randomly generated keys. We can consider these to be the private keys of two users who want to communicate. Using these keys along with the equation of the ECC curve, a 'Public key' and a 'Cipher public key' are generated; that is, the 'Public key' is generated from the 'Private key' and the 'Cipher public key' is generated from the 'Cipher private key'. These public keys are the actual keys that can be sent over the communication network. When received by the users, these public keys are multiplied by the receiving user's own private key to generate a secret key. The question arises whether both sides generate the same secret key; this is guaranteed by the relation between these keys, given in Eq. (2):

Private key × Cipher public key = Public key × Cipher private key

(2)

This is the property of the ECC curve. While proceeding for encryption, an ‘ECC key’ is obtained by multiplying ‘Public key’ and ‘Cipher private key’ as shown in Eq. (3). ECC Key for encryption = Public key × Cipher private key

(3)

This key exchange method is the Elliptic-curve Diffie–Hellman key exchange. The 'ECC key' is then used to generate an 'AES key': an AES key of size 256 bits is generated by applying the SHA256 hash function. The generated AES key is used to encrypt the data that needs to be transferred securely. The encrypted data is in hexadecimal form, which is then converted to ASCII format. This data is to be hidden within an image using RDH. The image in which the data is to be hidden should be in grayscale format; thus, the image first needs to be converted to grayscale if it is a colored image. Figure 2, Fig. 3, Fig. 4 and Fig. 5 are used as the cover media for data hiding. Further, the data to be hidden is converted from ASCII format to binary format before being embedded into the image. The binary value of every character is generated, and bit padding is applied to ensure that the binary value of every ASCII character is 10 bits long. This binary data is then embedded into the image as shown in Fig. 6. For decrypting the data, it first has to be recovered from the image; for further details on the encryption and decryption process, [22] can be referred to. Data recovered from the image is in binary format, which is then converted back to ASCII form, and the plain text is obtained from the cipher text that was earlier encrypted using AES. The key required to decrypt the text extracted from the image has to be derived again. The communicating users have their respective private keys, and as mentioned in the encryption process, these private keys can be used to generate public keys which are shared between the users. The secret key can then be generated using the public key: the 'Private key' and the 'Cipher public key' are multiplied to obtain the 'ECC key', as shown in Eq. (4).

ECC Key for decryption = Private key × Cipher public key

(4)

This ECC key is passed through a SHA256 function and thus an ‘AES key’ is generated which can be used for the AES decryption as shown in Fig. 7.
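As a concrete illustration of the key chain described above, the following sketch derives the shared ECC secret and the 256-bit AES key. It is only an approximation under stated assumptions: the curve SECP256R1, the third-party cryptography package, and all identifier names are our illustrative choices, not the authors' implementation.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ec

# Each communicating user generates a private key ('Private key' / 'Cipher private key').
private_key = ec.generate_private_key(ec.SECP256R1())
cipher_private_key = ec.generate_private_key(ec.SECP256R1())

# The corresponding public keys are the values actually sent over the network.
public_key = private_key.public_key()
cipher_public_key = cipher_private_key.public_key()

# Elliptic-curve Diffie-Hellman: both sides obtain the same shared secret (Eqs. 3 and 4).
ecc_key_encrypt = cipher_private_key.exchange(ec.ECDH(), public_key)
ecc_key_decrypt = private_key.exchange(ec.ECDH(), cipher_public_key)
assert ecc_key_encrypt == ecc_key_decrypt

# SHA256 turns the shared ECC secret into a 256-bit AES key.
aes_key = hashlib.sha256(ecc_key_encrypt).digest()


# Before embedding, every character of the AES ciphertext is written as a
# zero-padded 10-bit binary string, as described above (illustrative helper).
def to_ten_bit_binary(text):
    return ''.join(format(ord(ch), '010b') for ch in text)
```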

5 Results It is interesting to look into the time complexity of the proposed method.

5.1 Time Complexity of RDH An image can be considered as a combination of rows and columns of pixels. Consider m rows and n columns in the image in which the data is to be hidden. Each block of size 8 × 8 is used for the DCT calculations, so the number of blocks will be (m/8) × (n/8). The maximum number of traversals in a block is 64 (since the block size is 8 × 8). The time taken to embed the data will be O(k), where k is the number of bits of data to be embedded, and the maximum time taken to traverse the entire image will be O((m/8) × (n/8) × 64). Thus the time complexity will be as shown in Eq. (5).

O((m/8) × (n/8) × 64) + O(k) = O(m × n) + constant

(5)

Which is equal to Eq. (6). O ((m/8) × (n/8) × 64) + O(k) = O (m × n)

(6)

5.2 Time Complexity of AES The time taken by AES on a single block is approximately the same regardless of its content, since the block size is fixed; the time complexity of this step is therefore O(1) per block. For a larger message there are O(k) blocks of data to encrypt, so the complexity reaches O(k), where k is the message size. Since we use 10 bits for each character, the data size will be k = (No. of characters × 10) bits. Thus, using AES encryption, the time complexity will be O(k).

5.3 Time Complexity of ECC Similarly, in the ECC algorithm the time complexity depends on the size of the data to be encrypted, and it remains of the same order as the one calculated for the AES encryption technique. Assume the size of the AES key to be encrypted is L bytes. Using ECC to encrypt the AES key, the time complexity will be O(L), which is constant.

5.4 Time Complexity of the Proposed Algorithm The time complexity of the proposed algorithm will be the summation of the time complexities of the Reversible Data Hiding (RDH), Advanced Encryption Standard (AES) and Elliptic Curve Cryptography (ECC) algorithms. This can be shown with the help of Eq. (7).

Time complexity of three-layered algorithm = O(m × n) + O(k) + O(L)

(7)

Here, k and L are of the order of the number of bits in the data, while m and n are much larger than the amount of data to be hidden, since the data is a security password, which generally has from 4 to 16 characters. Thus, the time complexity of the proposed algorithm reduces to Eq. (8).

Time complexity of three-layered algorithm = O(m × n)

(8)


Fig. 6. Data encryption and hiding

Fig. 7. Data decryption and extraction

6 Tabulation of Results The proposed methodology was implemented on the Windows 10 operating system, on a 64-bit machine with 4 GB RAM and a 6th-generation i3 processor with 4 CPUs working at 2.30 GHz. The Python3 programming language, along with Python libraries such as cv2, NumPy, Matplotlib, SciPy and aes, was used to implement the proposed approach. The Peak Signal-to-Noise Ratio (PSNR) between two images is calculated to assess the quality of the modified image relative to the original image; the quality of the reconstructed image is directly proportional to the value of PSNR. Equation (9) gives the formula for calculating the PSNR:

PSNR = 20 · log10(MAXI) − 10 · log10(MSE)

(9)

The ratio of the difference of increased file size and the payload to the payload gives the Rate, where the payload is the number of bits in the data. The rate is calculated using Eq. (10). Rate = (Increased File Size − Payload ) / (Payload )

(10)
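The two quality measures can be computed directly from their definitions; the short helper below is an illustrative sketch (variable names are ours), with MAXI taken as 255 for 8-bit grayscale images.

```python
import numpy as np


def psnr(original, modified, max_i=255.0):
    """Peak Signal-to-Noise Ratio between two grayscale images (Eq. 9)."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # the two images are identical
    return 20 * np.log10(max_i) - 10 * np.log10(mse)


def rate(increased_file_size_bits, payload_bits):
    """Rate as defined in Eq. (10): file-size growth relative to the payload size."""
    return (increased_file_size_bits - payload_bits) / payload_bits
```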

Table 1 depicts the obtained experimental results. The parameters Threshold (T), Payload (PL), Fluctuation Threshold (TF), DCT Coefficients Lower Range Value (TL), DCT Coefficients Upper Range Value (TH) and PSNR were recorded to measure the efficiency. The Threshold (T) determines which DCT coefficients are selected for data embedding. According to these values, we can infer that with the increase in payload the PSNR value shows insignificant change.

Figure 8 compares Rate vs. Payload for all the JPEG images that were used for data embedding. The proposed method performs better with respect to both the embedding performance and different PL sizes. The increase in file size in [13] is quite large since blocks are not chosen optimally. In [17], the method of selecting blocks is inefficient, even though zero-valued DC coefficients are left unmodified. In this work, optimal blocks were chosen, which was not done in [13], and it is an improvement over [17]. In [23], AES was used to encrypt the data and the encrypted data was then hidden in an image using the Least Significant Bit (LSB) technique. In our proposed approach, the AES key is encrypted using ECC, which provides additional security. In the LSB technique, the PSNR decreases with an increase in the amount of data to be hidden in a grayscale image, while in RDH the image quality remains unaffected [24]. Thus, the RDH technique was used instead of LSB to hide the data in a grayscale JPEG image.

Table 1. PSNR value for different payloads for each image.

JPEG image | PL (Bits) | T | TF | TL | TH | PSNR (+5.2774)
LENA | 40 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000029
LENA | 80 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000034
LENA | 120 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000030
LENA | 160 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000030
BARBARA | 40 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000759
BARBARA | 80 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000760
BARBARA | 120 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000761
BARBARA | 160 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000950
BABOON | 40 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000022
BABOON | 80 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000022
BABOON | 120 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000022
BABOON | 160 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000031
AIRPLANE | 40 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000456
AIRPLANE | 80 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000455
AIRPLANE | 120 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000455
AIRPLANE | 160 | 0.2 | 0.1 | 0.2 | 0.3 | 0.000578


Fig. 8. Rate vs payload for JPEG Images

7 Conclusion This paper proposes a password protection approach using the RDH technique with two additional layers of security provided by AES and ECC. The AES key is encrypted using ECC, and the AES key is used to encrypt the data which is to be sent. The encrypted data is hidden inside an image using RDH; this image acts as the cover medium and is later used to recover the data. A wide set of experiments was conducted to measure the practicality of the proposed approach. Results show that increasing the payload size causes only meager changes in the PSNR value, so hiding data in the image introduces little noise. The proposed approach also ensures a high level of security. The obtained experimental results demonstrate the strength of the proposed approach.

References 1. Sarkar, T., Sanyal, S.: Reversible and irreversible data hiding technique. arXiv:1405.2684 (2014) 2. Shi, Y.Q., Li, X., Zhang, X., Wu, H., Ma, B.: Reversible data hiding: advances in the past two decades. IEEE Access 4, 3210–3237 (2016) 3. Soliman, S.M., Magdy, B., El Ghany, M.A.A.: Efficient implementation of the AES algorithm for security applications. In: IEEE Conference on System-on-Chip Conference (SOCC), pp. 206–210. IEEE (2016) 4. Honsinger, C.W., Jones, P., Rabbani, M., Stoffel, J.C.: Lossless Recovery of an Original Image Containing Embedded Data. U.S. (2001) 5. Bender, W., Gruhl, D., Morimoto, N., Lu, A.: Techniques for data hiding. IBM Syst. J. 35(3–4), 313–336 (1996) 6. Fridrich, J., Goljan, M., Du, R.: Invertible authentication. In: Proceedings SPIE Security Watermarking Multimedia Contents, vol. 4314, pp. 197–208. Springer (2001) 7. Huang, J., Shi, Y.Q.: An adaptive image watermarking scheme based on visual masking. Electron. Lett. 34(8), 748–750 (1998) 8. De Vleeschouwer, C., Delaigle, J.F., Macq, B.: Circular interpretation on histogram for reversible watermarking. In: IEEE International Multimedia Signal Processing Workshop, pp. 345–350. IEEE (2001)


9. Shi, Y.Q., Ni, Z., Zou, D., Liang, C.: Lossless data hiding: fundamentals, algorithms and applications. In: IEEE International Symposium Circuits System, vol. 2, pp. 33–36. IEEE (2004) 10. Goljan, M., Fridrich, J., Du, R.: Distortion-free data embedding. In: Proceedings 4th Information Hiding Workshop, vol. 2137, pp. 27–41. Springer (2001) 11. Fridrich, J., Goljan, M., Du, R.: Lossless data embedding for all image formats. In: Proceedings SPIE, Electronic Imaging, Security and Watermarking of Multimedia Contents, vol. 4675, pp. 572–583 (2002) 12. Lin, C.C., Shiu, P.F.: DCT-based reversible data hiding scheme. In: ICUIMC 2009, Proceedings of the 3rd International Conference on Ubiquitous Information Management and Communication, vol. 5, no. 2, pp. 214–224 (2010) 13. Wang, K., Lu, Z.-M., Hu, Y.-J.: A high capacity lossless data hiding scheme for JPEG images. J. Syst. Softw. 86, 1965–1975 (2013) 14. Xuan, G., Shi, Y.Q., Ni, Z., Chai, P., Cui, X., Tong, X.: Reversible data hiding for JPEG images based on histogram pairs. In: Proceedings International Conference on Image Analysis and Recognition, vol. 4633, pp. 715–727. Springer (2007) 15. Sakai, H., Kuribayashi, M., Morii, M.: Adaptive reversible data hiding for JPEG images. In: 2008 International Symposium on Information Theory and Its Applications, vol. 23, pp. 1–6. IEEE (2008) 16. Efimushkina, T., Egiazarian, K., Gabbouj, M.: Rate-distortion based reversible water- marking for JPEG images with quality factors selection. In: European Workshop on Visual Information Processing (EUVIP), pp. 94–99. IEEE (2013) 17. Huang, F., Qu, X., Kim, H.J.: Reversible data hiding in JPEG image. IEEE Trans. Circuits Syst. Video Technol. 26, 1610–1621 (2016) 18. Dong, Y.R., Hahn, S.G.: On the bit security of the weak Diffie-Hellman problem. Inf. Proces. Lett. 110(18–19), 799–802 (2010) 19. Lin, Y.O.U., Yong-xuan, S.A.N.G.: Effective generalized equations of secure hyperelliptic curve digital signature algorithms. J. China Univ. Posts Telecommun. 17(2), 100–108 (2010) 20. Chia, L.W., Hu, C.H.: Modular arithmetic analyses for RSA cryptosystem. In: Proceedings of 2014 International Symposium on Computer Consume and Control IEEE Press, Taichung, Taiwan, pp. 816–819. IEEE (2014) 21. Zhang, X., Parhi, K.K.: Implementation approaches for the Advanced Encryption Standard algorithm. IEEE Circuits Syst. Mag. 2, 24–46 (2002) 22. Xuan, G., Li, X., Shi, Y.-Q.: Minimum entropy and histogram-pair based JPEG image reversible data hiding. J. Inf. Secur. Appl. 45, 1–9 (2019) 23. Osuolale, F.: Secure data transfer over the internet using image cryptosteganography. Int. J. Sci. Eng. Res. 8, 1115–1122 (2017) 24. Pravalika, S.L., Sheeba Joice, C., Raj, A.N.J.: Comparison of LSB based and HS based reversible data hiding techniques. In: 2014 2nd International Conference on Devices, Circuits and Systems (ICDCS), pp. 1–4. IEEE (2014)

Clothing Classification Using Deep CNN Architecture Based on Transfer Learning Mohamed Elleuch1,3(B) , Anis Mezghani2 , Mariem Khemakhem3 , and Monji Kherallah3 1 National School of Computer Science (ENSI), University of Manouba, Manouba, Tunisia

[email protected] 2 Higher Institute of Industrial Management, University of Sfax, Sfax, Tunisia 3 Faculty of Sciences, University of Sfax, Sfax, Tunisia

Abstract. Powerful visual analytics tools have become a necessity today, especially with the emergence of pictures on the Internet and their frequent use in place of text. In this paper, a new approach for clothing style classification is presented. The types of clothing items considered in the proposed system include shirt, pants, suit, dress and so on. Clothing style classification is a recent computer vision research subject that has several attractive applications, including e-commerce, criminal law and on-line advertising. In our proposed approach, the classification is carried out by Deep Convolutional Neural Networks (CNNs); the Deep Learning model used, Inception-v3, has shown very good performance on different object recognition problems. For deep feature extraction, we use a machine learning technique called transfer learning to refine pre-trained models. Experiments are performed on two clothing datasets, in particular the large public dataset ImageNet. According to the obtained results, the developed system provides better results than those proposed in the state of the art. Keywords: Convolutional neural network · Deep learning · Inception-v3 · Clothing image recognition

1 Introduction Today, pictures have become the main content on the Internet, and the volume of digital pictures collected from online users has grown rapidly. The analysis of the collected data makes it possible to better predict consumer behavior and also to recommend products [1]. Recently, several studies have been conducted on clothing recognition [2, 3], clothing item retrieval [4–7] and clothing style recognition [8–10]. In many cultures, clothing reflects information about social status, age, gender and lifestyle, and clothing is also an interesting descriptor for the identification of humans. Style and texture variations present a major problem for clothing recognition. Clothes are also often subject to deformation and occlusion, in addition to the wide variation between images taken in different scenarios, such as selfies compared to online shop photos. Clothing recognition algorithms usually rely on handcrafted features such as HOG, SIFT and histogram analysis.

Yamaguchi et al. [2] proposed a clothing recognition system consisting of three classifiers for each pixel, whose results are then combined for the final prediction. The authors created a dataset containing 158,235 images and used only 685 images for validation. Di et al. [6] used LBP, SIFT and HOG features for the classification of garments into 12 classes using SVM classifiers. The same features were used by Chen et al. [11] to classify garments into 10 fashion style classes based on a sparse-coding approach. Deep learning has recently demonstrated its performance compared to classical machine learning methods, especially when recognizing images from large amounts of data. The deep learning technique is based on learning features automatically from unlabelled input data and transforming them non-linearly in each layer to extract more representative hidden discriminative features. Convolutional Neural Networks (CNNs) are currently widely used in pattern recognition and signal processing research. Based on the observation that various problem domains can benefit from the same low- and mid-level features, transfer learning is attracting more and more attention [12]. Several studies have shown that transfer learning can be efficient for transferring a model trained on a large-scale dataset to other tasks [13, 14]. Many researchers have used the transfer technique with deep neural networks, especially CNNs, in the field of clothing classification and retrieval [15, 16]. ImageNet is the most used dataset in this research area, as it is considered one of the largest datasets for image object recognition with 1.2 million 256 × 256 RGB images [17]. Chen et al. [14] implemented a specific dual-path deep neural network to classify the input garment, where each deep network is used to model one garment domain. Lin et al. [18] proposed a clothing retrieval system based on a hierarchical deep search framework; transfer learning was applied after pre-training the network with mid-level visual features, and the experiments were conducted on 15 clothing classes in a dataset composed of 161,234 images from Yahoo shopping websites. VGGNet [19] has been widely used considering its architectural simplicity; on the other hand, it requires a lot of computation. GoogLeNet's Inception architecture [20] has the advantage of being designed to perform well even under strict constraints on the memory and the computation budget, so GoogLeNet uses a reduced number of parameters compared to VGGNet and AlexNet. The low computational cost of Inception prompted researchers to use Inception networks for image recognition on large-scale datasets [21]. Few works have been conducted on garment class recognition based on deep and transfer learning. The purpose of this work is to recognize the clothing type in an image from a given dataset. To achieve this, the raw images are transformed using a pre-trained Inception deep neural network to build deep features, which are then used to train the classifiers. The rest of the paper is organized as follows. Section 2 details the proposed method. In Sect. 3, we present the experimental results, where the proposed deep learning architecture is validated on the popular ImageNet clothing dataset [22]. Finally, the results are discussed and concluding remarks are given.


2 Proposed Method A Convolutional Neural Network (ConvNet/CNN) is a powerful deep neural network which has been broadly utilized to solve hard machine learning problems. Many studies apply CNNs directly to the raw pixel images, without needing to hand-design features a priori. CNNs use fewer parameters than a fully connected network by computing convolutions on small regions of the input space and by sharing parameters between regions. This allows the models to be trained on larger inputs while still detecting the relevant patterns. Various CNN-based architectures designed for 1000-class image classification have been developed, such as AlexNet [17], VGGNet [19], ResNet [23] and GoogLeNet [20]. In this work, the model used to build the clothing recognition system is Inception-v3 by Google. The Inception-v3 architecture is depicted in Fig. 1; it consists of three types of inception modules (A, B and C) punctuated with grid-size reduction steps. Towards the end of training, when accuracy approaches saturation, the auxiliary classifiers act as regularizers, especially when they include Dropout or BatchNorm.

Fig. 1. Inception-v3 model with Tensorflow (BatchNorm and ReLU are employed after Conv)

2.1 Inception-v3 Architecture The Inception deep convolutional approach was first presented as GoogLeNet with 22 layers. It is composed of parallel connections (see Fig. 2), whereas earlier architectures used only a single serial connection. Since its presentation in 2014, Inception has had several versions: v1, v2/v3 and v4. Inception-v3 breaks down the convolutions by employing smaller 1-D filters, as indicated in Fig. 3, to minimize the number of Multiply-and-Accumulates (MACs) and weights, and benefits from these factorized convolutions to go deeper, to 42 layers. In conjunction with the batch normalization [24] introduced with Inception-v2, v3 reaches over 3% lower top-5 error than v1 with a 2.5× increase in computation [21]. Inception-v4 additionally utilizes residual connections [25].


Fig. 2. Inception module from GoogleNet [20]

Fig. 3. Decomposing larger filters into smaller filters: building a 7 × 7 filter from a 1 × 7 and a 7 × 1 filter carried out in sequence
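The factorization in Fig. 3 can be expressed in a few lines with the Keras functional API; the snippet below is a sketch for illustration (filter count, activation and layer arrangement are assumptions, not the exact Inception-v3 graph).

```python
import tensorflow as tf
from tensorflow.keras import layers


def factorized_7x7(x, filters):
    """Replace one 7x7 convolution by a 1x7 convolution followed by a 7x1 convolution."""
    x = layers.Conv2D(filters, kernel_size=(1, 7), padding='same', activation='relu')(x)
    x = layers.Conv2D(filters, kernel_size=(7, 1), padding='same', activation='relu')(x)
    return x


inputs = tf.keras.Input(shape=(299, 299, 3))
outputs = factorized_7x7(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)
# Two thin kernels (1x7 and 7x1) need far fewer weights and MACs than one dense 7x7 kernel.
```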

2.2 Transfer Learning Transfer learning could be seen as the ability of a system to recognize and apply knowledge and skills, learned from previous work, on new tasks or areas sharing similarities. Training a convolutional neural network requires a huge volume of data because it has to learn millions of weights. However, rather than learning a convolutional neural network from scratch, it is common to use a pre-trained model to automatically extract features from a new dataset. This method, called transfer learning, is a practical solution for applying Deep Learning algorithms without requiring a large data set or a very long training. In our apparel recognition system we used the transfer learning solution in order to reuse the feature extraction part and re-train the classification part with a dataset. Figure 4 shows the training model.
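The training model of Fig. 4 can be sketched as follows with tf.keras: the ImageNet-pre-trained Inception-v3 is kept as a frozen feature extractor and only a new classification head for the 13 clothing classes is trained. The optimizer settings echo those reported later in Sect. 3.2; all identifiers are illustrative assumptions rather than the authors' exact code.

```python
import tensorflow as tf

NUM_CLASSES = 13  # clothing styles in the dataset described in Sect. 3.1

# Feature-extraction part: Inception-v3 pre-trained on ImageNet, without its classifier.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_shape=(299, 299, 3), pooling='avg')
base.trainable = False  # transfer learning: reuse the pre-trained features as-is

# New classification part, re-trained on the clothing dataset.
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```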

3 Experiments and Results In this section, we present the clothing datasets used for the experiments. Thereafter, we detail and discuss the experimental settings. The obtained results are then presented and compared with the systems proposed in the state of the art.


Fig. 4. Training model

3.1 Dataset To evaluate the proposed system, we create a clothing dataset composed of 80,000 images of 13 style classes: Coat, Poncho, Blouse, Dress, Shirt, Vest, Lingerie, T-shirt, Uniform, Suit, Sweater, Jacket, Sports sweater (in French: ‘Manteau’, ‘Poncho’, ‘Chemisier’, ‘robe’, ‘chemise’, ‘gilet’, ‘Lingerie’, ‘T-shirt’, ‘Uniforme’, ‘Costume’, ‘Pull’, ‘Jacket’, ‘Pull de sport’). All the images are normalized to size 299 × 299 pixels. We train and validate the model on a subset of 4100 images and test it on another subset of 2050 images. In validation, we employ 20% of the training set for the parameter tuning. In the adopted split, no clothing item overlaps between the different subsets. Some samples of clothing images of the created dataset are shown in Fig. 5.

Fig. 5. Samples of clothing images
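A dataset organised as one folder per clothing class could be loaded as shown below; the directory names are hypothetical, while the 299 × 299 image size, the batch size of 32 and the 20% validation split follow the settings reported in the text. The API used (tf.keras' image_dataset_from_directory) is an illustrative choice, not necessarily the authors' pipeline.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)   # input resolution expected by Inception-v3
BATCH = 32

# The training images are split so that 20% are kept for parameter tuning.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'clothing/train', validation_split=0.2, subset='training', seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'clothing/train', validation_split=0.2, subset='validation', seed=42,
    image_size=IMG_SIZE, batch_size=BATCH)

test_ds = tf.keras.preprocessing.image_dataset_from_directory(
    'clothing/test', image_size=IMG_SIZE, batch_size=BATCH)
```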


Although the created dataset contains a large number of clothing images, it is still too small for training an accurate deep neural network of over a million parameters from scratch. The use of deep features from the Inception-v3 model pre-trained on the 1.2 million images of ImageNet represents a practical solution for applying Deep Learning without requiring a very large dataset or a very long training.

3.2 Experimental Settings The architecture is implemented with the Python deep learning library TensorFlow, an open-source machine learning framework created and released by Google. To validate the suggested system based on CNN/Inception-v3, we use the local database containing clothing images, divided into two parts: a training set and a testing set. For the experiments, the input image shape is 299 × 299 × 3. Inception-v3 was trained on ILSVRC (ImageNet Large Scale Visual Recognition Challenge) data from the 2012 competition; we reuse this trained network but recycle it to distinguish clothing according to our own example images. The configuration of the Inception-v3 architecture is characterized by the RMSProp optimizer (a faster learning optimizer), factorized 7 × 7 convolutions, BatchNorm in the auxiliary classifiers, and label smoothing to prevent over-fitting. The training parameters of our model are as follows: batch size = 32, RMSProp optimizer with a learning rate (LR) of 0.001 and a decay rate of 0.3. Finally, training is run for 20 epochs, which ensures convergence.

3.3 Results and Discussion The automatic extraction of relevant features through deep learning can save merchants and customers a lot of time. Our proposed system shows its reliability and speed with a satisfactory recognition rate of 70%. This represents an improvement of 3.5% compared to the GoogLeNet approach and an improvement of 5.7% over the VGG16 network. From Table 1, it is clear that the Inception-v3 method achieves better results than VGG16 and GoogLeNet, and requires less time in testing. Consequently, as described above, reducing the number of parameters through the Inception process, even with 42 deep layers, did not decrease the efficiency of the network.

Table 1. Performance of our proposed method

Method | Recognition rate | Test time (ms)
Inception-v3 | 70% | 1.4 ms
VGG16 | 64.3% | 3.0 ms
GoogLeNet | 66.5% | 2.7 ms


From the obtained results (see Fig. 6), we notice that the classes of articles 'Manteau', 'Gilet', 'Uniforme' and 'Pull de sport' record the highest rate, with a percentage of almost 90% (see Fig. 7). The other classes of articles ('Chemisier', 'T-shirt', 'Chemise', 'Lingerie', 'Pull') yield a low rate (50%) because of the similarity between these five classes.

Fig. 6. Accuracy rate for each class of clothing

Fig. 7. Accuracy rate for styles “manteaux” and “gilet”

Comparison with other outcomes is presented in Table 2. We observed that the performance of our clothing recognition system achieved a promising result, with an accuracy rate of about 70% compared to other research works based on traditional methods [9, 11] and deep learning approach [15].


Table 2. Performance comparison

Architecture | Classification rule | Recognition rate
Our Proposed | Inception-v3 | 70%
Liu et al. (2016) [15] | FashionNet (deep model) | 66.43% (top-3), 73.16% (top-5)
Chen et al. (2015) [11] | low-level features + sparse coding | 68%
Bossard et al. (2012) [9] | Transfer Forest | 41.36%

4 Conclusions Deep Learning technology today has a major impact on various areas of research. We benefited from this success to propose a solution for clothing recognition. In this work, we have used deep Convolutional Neural Networks with the Inception architecture for identifying the clothing class, using pre-trained weights as a starting point to avoid a very long training. Thereafter, we compared the proposed approach with several hand-crafted-feature and shallow machine learning approaches. The proposed system provided promising recognition results on the clothing dataset, which demonstrates the effectiveness of the proposed approach for clothing recognition.

References 1. Yamaguchi, K., Berg, T.L., Ortiz, L.E.: Chic or social: visual popularity analysis in online fashion networks. In: ACM Conference on Multimedia, pp. 773–776 (2014) 2. Yamaguchi, K., Kiapour, M.H., Ortiz, L.E., Berg, T.L.: Parsing clothing in fashion photographs. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 3570–3577 (2012) 3. Yamaguchi, K., Kiapour, M.H., Berg, T.L.: Paper doll parsing: retrieving similar styles to parse clothing items. In: International Conference on Computer Vision, pp. 3519–3526 (2013) 4. Kalantidis, Y., Kennedy, L., Li, L.J.: Getting the look: clothing recognition and segmentation for automatic product suggestions in everyday photos. In: ACM International Conference in Multimedia Retrieval, pp. 105–112 (2013) 5. Liu, S., Feng, J., Domokos, C., Xu, H., Huang, J., Hu, Z., Yan, S.: Fashion parsing with weak color-category labels. IEEE Trans. Multimedia 16(1), 253–265 (2014) 6. Di, W., Wah, C., Bhardwaj, A., Piramuthu, R., Sundaresan N.: Style finder: fine-grained clothing style detection and retrieval. In: CVPR Workshops, pp. 8–13 (2013) 7. Liang, X., Lin, L., Yang, W., Luo, P., Huang, J., Yan, S.: Clothes co-parsing via joint image segmentation and labeling with application to clothing retrieval. IEEE Trans. Multimedia 18, 1175–1186 (2016) 8. Chen, J.C., Liu, C.F.: Visual-based deep learning for clothing from large database. In: ASE BigData & Social Informatcis (2015) 9. Bossard, L., Dantone, M., Leistner, C., Wengert, C., Quack, T., Van Gool, L.: Apparel classification with style. In: ACCV, pp. 321–335 (2012) 10. Veit, A., Kovacs, B., Bell, S., McAuley, J., Bala, K., Belongie, S.: Learning visual clothing style with heterogeneous dyadic co-occurrences. In: ICCV, pp. 4642–4650 (2015)


11. Chen, J.C., Xue, B.F., Lin Kawuu, W.: Dictionary learning for discovering visual elements of fashion styles. In: CEC Workshop (2015) 12. Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Learning and transferring mid-level image representations using convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1717–1724 (2014) 13. Huang, J., Feris, R.S., Chen, Q., Yan, S.: Cross-domain image retrieval with a dual attributeaware ranking network. arXiv preprint arXiv:1505.07922 (2015) 14. Chen, Q., Huang, J., Feris, R., Brown, L.M., Dong, J., Yan, S.: Deep domain adaptation for describing people based on fine-grained clothing attributes. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 5315–5324 (2015) 15. Liu, Z., Luo, P., Qiu, S., Wang, X., Tang, X.: DeepFashion: powering robust clothes recognition and retrieval with rich annotations. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016) 16. Chen, J.-C., Liu, C.-F.: Deep net architectures for visual-based clothing image recognition on large database. Soft. Comput. 21(11), 2923–2939 (2017). https://doi.org/10.1007/s00500017-2585-8 17. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. In: NIPS, pp. 1106–1114 (2012) 18. Lin, K., Yang, H.F., Liu, K.H., Hsiao, J.H., Chen, C.S.: Rapid clothing retrieval via deep learning of binary codes and hierarchical search. In: ACM International Conference in Multimedia Retrieval, pp. 499–502 (2015) 19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 20. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015) 21. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016) 22. Deng, J., Dong, W., Socher, R.-J., Li, L., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR (2009) 23. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016) 24. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015) 25. Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence, February 2017 26. Sze, V., Chen, Y.H., Yang, T.J., Emer, J.S.: Efficient processing of deep neural networks: a tutorial and survey. Proc. IEEE 105(12), 2295–2329 (2017)

Identification of Botnet Attacks Using Hybrid Machine Learning Models Amritanshu Pandey1 , Sumaiya Thaseen1(B) , Ch. Aswani Kumar1 , and Gang Li2 1 School of Information Technology and Engineering, Vellore Institute of Technology, Vellore,

Tamil Nadu, India [email protected] 2 School of Information Technology, Deakin University, Melbourne, Australia

Abstract. Botnet attacks are a new threat in the world of cyber security. In the last few years, with the rapid growth of IoT-based technology and networking systems connecting large numbers of devices, attackers can deploy bots on the network and perform large-scale cyber-attacks which can affect anything from millions of personal computers to large organizations. Hence, there is a necessity to implement countermeasures to overcome botnet attacks. In this paper, three hybrid models are proposed, developed by integrating multiple machine learning algorithms: Random Forest (RF), Support Vector Machine (SVM), Naive Bayes (NB), K-Nearest Neighbor (KNN) and Linear Regression (LR). According to our experimental analysis, RF-SVM has the highest accuracy (85.34%), followed by RF-NB-KNN (83.36%) and RF-KNN-LR (79.56%). Keywords: Accuracy · Botnet · Classifier · Feature · Phishing

Abbreviations
IDS: Intrusion Detection System
KNN: K-Nearest Neighbor
LR: Linear Regression
NB: Naïve Bayes
RF: Random Forest
SVM: Support Vector Machine

1 Introduction The process in which multiple devices are connected via the Internet and run an automated script with malicious intent is called a botnet attack. In this kind of attack, an automated script designed to run without the knowledge of the owner of the device is executed.

(This paper is an extension of Khan N.M., Madhav C. N., Negi A., Thaseen I.S. (2020) Analysis on Improving the Performance of Machine Learning Models Using Feature Selection Technique. In: Abraham A., Cherukuri A., Melin P., Gandhi N. (eds) Intelligent Systems Design and Applications.)

Such a script is called a bot. With the advent of the Internet of Things, this is one of the most threatening concerns of the 21st century. In the current situation, small devices like Amazon's Alexa and smart baby monitors can be utilized to create a large-scale botnet and perform massive Distributed Denial of Service (DDoS) attacks. The attacks are performed by connecting thousands of IoT-based devices and utilizing them to target large-scale IT systems, for example domain servers and cloud servers. The major issue with this attack is that identification of the source is challenging due to the integration of many different devices in the network. Many infected online devices, or bots, are required to develop a bot network, also known as a botnet (controlled by bot-masters). The impact of a botnet will be enormous if the bots form a dense network, and bots are designed to infect millions of devices. The basic approach of bot herders is to inject a Trojan horse through which botnets can be deployed, for example when users open email attachments on their own systems, click on malicious pop-up ads, or download untrusted software from websites. After a system is infected, bots can modify any kind of information within it, such as personal information, and as their complexity increases they are ready to spread the infection to other systems. The characteristic of botnets is self-propagation: they search for targets and spread the infection automatically. Detection of botnets is a challenging task, as only minimal power is consumed and the activity is considered normal device behavior, so the user is not alarmed. Autonomous botnets are more advanced, as they are developed to update their behavior and continuously search the web for Internet-enabled devices which have not updated their operating system or installed antivirus software. In addition, newer botnet designs keep evolving, thereby increasing the difficulty for users to identify such malicious software. The remainder of the paper is structured as follows: the literature is discussed in Sect. 2; Sect. 3 specifies the three proposed models developed for identifying botnet attacks; the results are summarized in Sect. 4; and Sect. 5 concludes the work.

2 Related Work The detection of botnet attacks has been researched for many years. In [1], Adam J. Aviv and Andreas Haeberlen examined the challenges faced during the evaluation of botnet detection systems. To tackle the problem of traffic classification, Szabó et al. [4] devised a validation method for analyzing the accuracy of traffic classification techniques; the important advantage of this approach is that the classification of traffic is highly automated and the validation is trustworthy. To curb the threat of botnet attacks, Matija Stevanovic and Jens Myrup Pedersen [20] presented a paper on contemporary botnet detection methods which utilize machine learning algorithms to identify botnet-related traffic; existing detection methods are analyzed by comparing their characteristics, performances and limitations, and the study concludes by showing the limitations and challenges of utilizing machine learning for identifying botnet traffic. In another study, Ping Wang and his team [8] developed a honeypot detection system applicable to both Peer-to-Peer (P2P) botnets and centralized botnets using a basic detection principle. Different botnet identification techniques from the literature are discussed in this section. Bertino, E. and N. Islam identified various botnets, like the Linux Darlloz worm and Mirai, which resulted in DDoS attacks; an Intrusion Prevention System (IPS) was deployed in the network (routers, gateways) to monitor the traffic. R. Hallman et al. [17] detected the Mirai botnet, which caused DDoS attacks, based on the operational steps of the malware. M. Ozcelik et al. [18] also detected the Mirai botnet, which propagated itself through scanning and infected IoT devices; the botnets were detected by dynamic updating of flow rules, with a deployment level called "thin fog", using data from emulated IoT nodes and a simulated network. D. H. Summerville et al. [7] detected the malware by deep packet anomaly detection at the host level with two real devices. Pa et al. [6] detected DDoS attacks by implementing a honeypot to collect and analyze data at the host and network level on real data. H. Sedjelmaci [19] identified routing attacks, namely sinkhole and selective forwarding; the authors utilized hybrid and signature-based anomaly detection at the host level on simulated data. Bostani and Sheikhan [1] developed a specification-based anomaly detection deployed at the network level within router and root nodes and tested on simulation data. Midi, D. et al. [15] detected ICMP flood, replication wormhole, HELLO jamming, selective flooding, TCP SYN flood and data modification attacks; knowledge-driven anomaly detection was developed by the authors at the network level on real devices and simulated data. S. Raza et al. [9] identified routing attacks like selective forwarding and spoofed sinkhole; a signature-based anomaly detection technique was utilized on simulation data at the border router and hosts. The challenges and forecasts in anomaly detection for IoT and cloud were presented by Butun et al. [5], who introduced the contributing features for IoT and cloud. Zhao et al. [3] proposed a new approach to identify botnets based on traffic behavior analysis using machine learning techniques, with classification done at regular time intervals. Elisa and Nayeem [16] identified distributed denial-of-service attacks and implemented scalable security solutions optimized for the IoT ecosystem. In general, industry favors signature-based detection over the anomaly-based approach for the implementation of IDS [2]. With the advent of Internet of Things (IoT) technology, botnet attacks have become easier and deadlier to networks and computer systems. Rohan Doshi et al. [10] demonstrated how IoT-based systems can work as a botnet to perform DDoS attacks; such DDoS attacks can be detected in an IoT network with high accuracy by modelling IoT-specific network behavior along with feature selection, and the authors utilized machine learning techniques, including neural networks, to identify such attacks. Nazrul Hoque et al. [12] analyzed how DDoS attacks are built, performed and executed. Another kind of botnet attack, the HTTP botnet attack, has also been a popular research topic; Rudy Fadhlee and Mohd Dollah [11] proposed to utilize machine learning classifiers to detect HTTP botnets and achieved an accuracy of 92.93%. Nowadays, mobile devices have wireless interfaces capable of transmitting any kind of information via multiple mediums (Bluetooth, Wi-Fi, infrared). Shahid Anwar et al. [13] discuss how mobile devices can be used to create a botnet and perform malicious activities; the authors conducted their research on devices running the Android operating system. While researchers have utilized algorithms like Naïve Bayes and decision trees [14], further studies explore the domain of deep learning, where neural networks can be trained to detect and identify botnets. Hayretdin Bahşi et al. [14] used deep learning algorithms to study how neural networks can be utilized to detect and stop botnet attacks; one of the techniques used by the authors is dimensionality reduction.

3 Proposed Model 3.1 RF-SVM Model Figure 1 depicts the hybrid model which is a fusion of Random Forest and SVM. The classifier RF splits the data into multiple units and every sub entity is bagged to obtain the concluding result. The SVM reclassifies every sub entity to improve the accuracy. The results of both the classifiers are merged by bagging technique. These classifiers are chosen because in our previous work [21], the classifiers SVM and RF resulted in increased detection rate, precision and F-measure in comparison to other supervised classifiers. Preprocessing along with correlation feature selection is the first step performed to identify optimal features for the model. The features that contribute to the positive correlation with the class label are selected. Training and testing records are selected randomly from the table which has the imported dataset. In the RF module, bagging is used to divide the training data into multiple entities, SVM is deployed for training each of the sub entities. There are three phases in SVM namely, feature removal, feature selection and classification. Voting is based on a majority rule technique. Decision fusion gives the class label which has the highest vote. This output is considered as the result obtained from a single entity of RF classifier. Majority rule is also used to decide the final decision result which is the cumulative result of highest number of votes. The fusion of RF-SVM is trained to predict the class label and determine the various performance measures.

Fig. 1. Proposed RF-SVM model
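To illustrate the overall idea, the sketch below builds a comparable pipeline with scikit-learn: feature selection based on relevance to the class label, bagged SVMs re-classifying the bootstrapped subsets, and majority-vote fusion with a Random Forest. It is an illustrative approximation of Fig. 1, not the authors' exact implementation; every parameter value here is an assumption.

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Feature selection: keep the features most strongly related to the class label.
selector = SelectKBest(score_func=f_classif, k=10)

# Each bagged (bootstrapped) subset of the training data is re-classified by an SVM.
bagged_svm = make_pipeline(StandardScaler(),
                           BaggingClassifier(SVC(kernel='rbf'), n_estimators=10))

# Decision fusion: the RF branch and the bagged-SVM branch vote on the final label.
hybrid = VotingClassifier(
    estimators=[('rf', RandomForestClassifier(n_estimators=100)), ('svm', bagged_svm)],
    voting='hard')

model = make_pipeline(selector, hybrid)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```

The same skeleton carries over to the RF-NB-KNN and RF-KNN-LR fusions in Sects. 3.2 and 3.3 by swapping the base learners inside the bagging step.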


3.2 RF-NB-KNN Model Figure 2 illustrates another assimilation model created by combining RF, NB and KNN for identification of botnet attacks in the network. Similar to previous model, RF performs the same functionality; however every sub entity of RF is again classified by NB and KNN to improve the correctness of the model. The results of the base learners namely NB and KNN are merged by bagging. These classifiers are chosen in the model because in our earlier analysis [21], performance of those classifiers has been superior to other classifiers. The entire process of preprocessing and feature selection is similar to the above model. RF is trained and bagging is performed which divides the data into multiple entities performed by decision trees. NB is deployed on each of the sub entities after the entities are sub divided into multiple sub entities.

Fig. 2. Proposed RF-NB-KNN model

3.3 RF-KNN-LR Model Figure 3 represents the fusion of RF, KNN and Linear Regression for discovering botnet attacks in the network. RF is retained in this model also for training the records and the process remains the same as in the previous models. Every sub entity is again classified by the KNN and LR to improve the accurateness of the model. The results of base learners are merged by bagging. The individual classifiers are preferred because of the superior performance in our analysis done in the prior work [21]. The process of preprocessing and feature selection is similar as in the other models. RF performs the training and bagging divides the data into multiple entities which represent decision trees. NB is deployed on every sub entity after each entity is fragmented into multiple entities. Within the process of the linear regression Classifier, we run the training dataset through a k-NN classifier to segregate the packet information based on whether it is a botnet or not. If it


is a botnet, it will be further classified under the botnet attack category. In addition, the linear regression classifies the set of packets under the category of botnets and decision is based on voting approach as to whether the record is normal or botnet. The botnet class label is the output obtained from the ‘Decision Fusion’ module resulting from the linear regression-k-NN classifier. This result is considered as the single entity output from the RF. The ‘Final Decision Unit’ counts the type of botnets identified and tags the packets which look suspicious or malicious.

Fig. 3. Proposed RF-KNN-LR model

4 Experimental Results 4.1 Dataset The dataset used to perform the comparative analysis of the hybrid algorithms is the ISCX 2012 dataset [22]. The features used in this dataset are Time, Source, Destination, Protocol, Length and Info. Here, Source and Destination are IP addresses, Time specifies the time taken by the packets to be transmitted, the type of protocol used to transmit the data is specified in the Protocol attribute, and Info shows the information carried by the packets. The dataset consists of 5,114,514 packets, of which 80% are used for training and 20% for testing. A physical testbed implementation using real devices was deployed for generating network traffic data with all the standard protocols.

4.2 Comparative Study The metrics are calculated as follows:

Acc = (TP + TN) / (TP + TN + FP + FN)   (1)

Precision (P) = TP / (TP + FP)   (2)

Recall (R) = TP / (TP + FN)   (3)

F1 = (2 × P × R) / (P + R)   (4)

Classification error = 100 − Acc   (5)
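Equations (1)–(5) translate directly into a small helper; the function below is illustrative only (names are ours) and treats Acc as a fraction that is converted to a percentage for Eq. (5).

```python
def evaluate(tp, tn, fp, fn):
    """Compute the metrics of Eqs. (1)-(5) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)                 # Eq. (1)
    precision = tp / (tp + fp)                            # Eq. (2)
    recall = tp / (tp + fn)                               # Eq. (3)
    f1 = 2 * precision * recall / (precision + recall)    # Eq. (4)
    classification_error = 100 - acc * 100                # Eq. (5), Acc in percent
    return acc, precision, recall, f1, classification_error
```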

Where true positives, true negatives, false positives and false negatives are denoted as TP, TN, FP and FN respectively. From Table 1, it is inferred that RF-SVM has the highest accuracy with a value of 85.34%, followed by RF-Naive Bayes-KNN with an accuracy of 83.36% and RF-KNN-Linear Regression with an accuracy of 79.56%. In terms of precision, the highest value of 82.78% is obtained by RF-SVM, while the lowest value of 74.35% is obtained by RF-KNN-LR. RF-Naive Bayes-KNN has the highest value for F1 with a value of 80.45%, while RF-SVM has the lowest value of 73.56%. The highest value for AUC is given by the RF-SVM technique with 84.35%, while the RF-Naïve Bayes-KNN method resulted in 81.56% and RF-KNN-LR obtained a value of 20.44%. Finally, in terms of classification error, RF-SVM has the least classification error.

Table 1. Performance comparison of various hybrid algorithms

Performance metrics | RF-SVM (in percentage) | RF-Naive Bayes-KNN (in percentage) | RF-KNN-LR (in percentage)
Accuracy | 85.3 | 83.36 | 79.56
Precision | 82.7 | 80.45 | 74.35
F1 | 73.5 | 76.67 | 74.67
Classification error | 14.66 | 16.64 | 20.44

5 Conclusion A serious issue in the Internet domain is that there are no security measures to identify botnets, which can send malicious packets or scripts and enter a user's system. In this paper, three hybrid models have been proposed and developed, namely RF-SVM, RF-NB-KNN and RF-KNN-LR. The performance metrics analyzed are classification accuracy, classification error, AUC, precision and recall. Among the three models analyzed, RF-SVM proves to be superior in the majority of the metrics, namely accuracy, precision, AUC and classification error; thus, RF-SVM outweighs the other models. The ensemble models prove to be better than single classifiers, hence an integration of supervised classifiers is chosen for achieving better performance.


Acknowledgement. This research was undertaken with the support of the Scheme for Promotion of Academic and Research Collaboration (SPARC) grant SPARC/2018-2019/P616/SL “Intelligent Anomaly Detection System for Encrypted Network Traffic”.

References 1. Bostani, H., Sheikhan, M.: Hybrid of anomaly-based and specification-based IDS for Internet of Things using unsupervised OPF based on MapReduce approach. Comput. Commun. 98, 52–71 (2017) 2. Tavallaee, M., Stakhanova, N., Ghorbani, A.A.: Toward credible evaluation of anomalybased intrusion-detection methods. IEEE Trans. Syst. Man Cybern. Part C (Appl Rev.) 40(5), 516–524 (2010) 3. Zhao, D., Traore, I., Sayed, B., Wei, L., Saad, S., Ghorbani, A., Garant, D.: Botnet detection based on traffic behavior analysis and flow intervals. Comput. Secur. 39, 2–16 (2013) 4. Szabó, G., Orincsay, D., Malomsoky, S., Szabó, I.: On the validation of traffic classification algorithms. In: International Conference on Passive and Active Network Measurement. Springer, Heidelberg, pp. 72–81 (2008) 5. Butun, I., Kantarci, B., Erol-Kantarci, M:. Anomaly detection and privacy preservation in cloud-centric Internet of Things. In: 2015 IEEE International Conference on Communication Workshop (ICCW), pp. 2610–2615 (2015) 6. Pa, Y.M.P., Suzuki, S, Yoshioka, K., Matsumoto, T., Kasama, T., Rossow, C.:. IoTPOT: a novel honeypot for revealing current IoT threats. J. Inf. Process. 24(3), 522–533 (2016) 7. Summerville, D.H., Zach, K.M., Chen, Y.: Ultra-lightweight deep packet anomaly detection for Internet of Things devices. In: 2015 IEEE 34th International Performance Computing and Communications Conference (IPCCC), pp. 1–8 (2015) 8. Wang, P., Lei, W., Cunningham, R., Zou, C.C.: Honeypot detection in advanced botnet attacks. Int. J. Inf. Comput. Secur. 4(1), 30–51 (2010) 9. Raza, S., Wallgren, L., Voigt, T.: SVELTE: real-time intrusion detection in the Internet of Things. Ad Hoc Netw. 11(8), 2661–2674 (2013) 10. Doshi, R., Apthorpe, N., Feamster. N.: Machine learning ddos detection for consumer internet of things devices. In: 2018 IEEE Security and Privacy Workshops (SPW), pp. 29–35 (2018) 11. Dollah, R., Fadhlee Mohd, M., Faizal, A., Arif, F., Mas’ud, M.Z., Xin, L.K.: Machine learning for HTTP botnet detection using classifier algorithms. J. Telecommun. Electron. Comput. Eng. (JTEC), 10(1–7), 27-30 (2018) 12. Hoque, N., Bhattacharyya, D.K., Kalita, J.K.: Botnet in DDoS attacks: trends and challenges. IEEE Commun. Surv. Tutorials 17(4), 2242–2270 (2015) 13. Anwar, S., Zain, J.M., Inayat, Z., Ul Haq, R., Karim, A., Jabir, A.N.: A static approach towards mobile botnet detection. In: 2016 3rd International Conference on Electronic Design (ICED), pp. 563–567 (2016) 14. Bah¸si, H., Nõmm, S., La Torre, F.B.: Dimensionality reduction for machine learning based iot botnet detection. In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1857–1862 (2018) 15. Midi, D., Rullo, A., Mudgerikar, A., Bertino, E.: Kalis—A system for knowledge-driven adaptable intrusion detection for the Internet of Things. In: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), pp. 656–666 (2017) 16. Bertino, E., Islam, N.: Botnets and internet of things security. Computer 2, 76–79 (2017) 17. Hallman, R., Bryan, J., Palavicini, G., Divita, J., Romero-Mariona, J.: IoDDoS - the internet of distributed denial of sevice attacks-a case study of the mirai malware and IoT-based botnets. In: IoTBDS, pp. 47–58 (2017)


18. Özçelik, M., Chalabianloo, N., Gür, G.: Software-defined edge defense against IoT-based DDoS. In: 2017 IEEE International Conference on Computer and Information Technology (CIT), pp. 308–313 (2017) 19. Sedjelmaci, H., Senouci, S.M., Al-Bahri, M.: A lightweight anomaly detection technique for low-resource IoT devices: a game- theoretic methodology. In: 2016 IEEE International Conference on Communications (ICC), pp. 1–6 (2016) 20. Stevanovic, M., Pedersen, J.M.: An efficient flow-based botnet detection using supervised machine learning. In: 2014 International Conference on Computing, Networking and Communications (ICNC), pp. 797–801, February 2014 21. Pandey, A., Gill, N., Nadendla, K.S.P., Thaseen, I.S.: Identification of phishing attack in websites using random forest-SVM hybrid model. In: International Conference on Intelligent Systems Design and Applications, pp. 120–128. Springer, Cham, December 2018 22. Shiravi, A., Shiravi, H., Tavallaee, M., Ghorbani, A.: Toward developing a systematic approach to generate benchmark datasets for intrusion detection. Comput. Secur. 31, 357–374 (2012)

Congestion Control in Vehicular Ad-Hoc Networks (VANET’s): A Review
Lokesh M. Giripunje(B), Deepika Masand, and Shishir Kumar Shandilya
VIT Bhopal University, Bhopal, India
[email protected]

Abstract. In the last few decades, Intelligent Transport Systems (ITS) have made a huge impact on industry and education. ITS aims to improve safety and user comfort while driving, and this is administered by VANETs (Vehicular Ad hoc Networks). In a VANET, each fast-moving mobile node (vehicle) is provided with communication devices and allowed to exchange messages with other vehicles and/or with the roadside infrastructure to share traffic and safety information. Congestion control is the most widely tackled problem and plays a vital role in the performance of VANETs. To date, extensive research has been done in the field of congestion control in VANETs; however, very few works have addressed the uni-priority of event-driven safety messages in congestion control. In a dense VANET, almost all vehicular nodes broadcast periodic beacon messages at high frequencies, so the Control Channel (CCH) easily becomes congested. The all-time availability of the CCH is essential for the in-time and safe delivery of event-driven safety emergency messages. This paper reviews existing research in the field of congestion control and concludes with possible future extensions in disseminating uni-priority event-driven safety messages to solve congestion problems in VANETs. Keywords: Intelligent Transport Systems (ITS) · Vehicular Ad Hoc Networks (VANETs) · Congestion control · Event-driven safety message · Control Channel (CCH) · Uni-priority

1 Introduction
The Intelligent Transportation System (ITS) is one of the most fundamental systems in the world today. The road network is the most widely used transport service in many parts of the world. Traffic density on roads and highways has been increasing constantly in recent years due to population growth, motorization, and urbanization. Even after innovations and advancements in vehicles for improving safety on the road, there are huge numbers of fatalities as a result of road accidents. Researchers have shown that more than 60% of these fatalities could be avoided if drivers were alerted to an upcoming emergency situation before it happens. This can be achieved with the help of VANETs. Hence, VANETs are getting a lot of attention from researchers, governments, and the telecommunication
and automobile industries working toward the goals of the intelligent transportation system, ultimately fulfilling the dream of accident-free motorways [1]. There are two types of safety messages: periodic beacon messages and event-driven emergency messages. Beacon messages are broadcast periodically by each vehicle in the VANET to announce its situation (speed, direction, identity, etc.) to other vehicles, while a sudden abnormal condition or danger leads to the generation of event-driven safety messages [2]. The existing 802.11 protocols do not meet the requirements of VANETs because of their special characteristics, such as fast topological changes and the low-delay, high-reliability requirements of safety applications. The Federal Communications Commission (FCC) assigned the 5.850–5.925 GHz frequency band for Dedicated Short Range Communication (DSRC) [2, 3]. In 2003, IEEE developed the IEEE 802.11p standard, also known as Wireless Access in Vehicular Environments (WAVE), aiming to solve the problems associated with VANETs. The DSRC band is divided into seven channels of 10 to 20 MHz each: six Service Channels (SCH) and one Control Channel (CCH). The CCH is utilized for safety messages, while the six service channels provide non-safety services and WAVE-mode short messages [2, 3].

2 Congestion in VANET
In a dense VANET, almost all vehicular nodes broadcast periodic beacon messages at high frequencies, so the Control Channel (CCH) easily becomes congested. The all-time availability of the CCH is essential for the in-time and safe delivery of event-driven emergency safety messages. A robust, reliable and efficient congestion control algorithm is therefore needed to avoid congestion of the CCH and delay of event-driven emergency safety messages [4].
2.1 Parameters Affecting Congestion
Researchers have classified the parameters affecting congestion into primary and derived parameters. Transmit power, transmit rate and beacon frequency are primary parameters, which directly affect congestion, while fairness, prioritization, the utility function and node density, which depend on the primary parameters, are known as derived parameters [5].
2.2 Congestion Control Strategies for VANETs
VANET congestion control strategies are classified as reactive, proactive and hybrid; the major classification criterion is the way congestion is prevented or controlled and the information used in the VANET. Congestion control mechanisms adjust transmission parameters based on this information [2, 6]. In reactive congestion control, actions to reduce channel load are taken only after a congestion situation has been detected. In proactive congestion control, a system model is employed to calculate the channel load that various transmission parameters would generate, so that a maximum congestion limit is respected and channel congestion is avoided. Hybrid strategies make use of both proactive and reactive strategies to adjust or control two different parameters [2, 6].
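The reactive/proactive distinction can be illustrated with a short sketch. The helpers measure_cbr, estimate_cbr and set_beacon_rate are hypothetical placeholders, and the threshold is an assumed value rather than one taken from any cited protocol.

```python
# Hedged sketch of reactive vs. proactive congestion control on the CCH.
CBR_THRESHOLD = 0.6  # assumed tolerable fraction of channel-busy time

def reactive_step(measure_cbr, set_beacon_rate, current_rate_hz):
    # Reactive: act only after congestion has been observed on the channel.
    if measure_cbr() > CBR_THRESHOLD:
        current_rate_hz = max(1.0, current_rate_hz / 2)   # back off the beacon rate
    set_beacon_rate(current_rate_hz)
    return current_rate_hz

def proactive_step(estimate_cbr, set_beacon_rate, current_rate_hz, density):
    # Proactive: predict the load a candidate rate would generate and stay below the limit.
    candidate = current_rate_hz
    while candidate > 1.0 and estimate_cbr(candidate, density) > CBR_THRESHOLD:
        candidate -= 1.0
    set_beacon_rate(candidate)
    return candidate
```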


3 Evolution of Congestion Control Approaches in VANETs
Congestion control approaches in VANETs can be classified into five groups on the basis of the parameters and means used to control congestion.
3.1 Rate-Based Strategies
Wischhof et al. [7] suggested a strategy that adjusts the transmission rate based on the utility and size of packets. The decentralized Utility-Based Packet Forwarding and Congestion Control (UBPFCC) strategy, implemented on top of the 802.11 MAC protocol, works well for non-safety applications but is not suitable for safety applications. A decentralized algorithm with no packet priority, On-Demand Rate Control (ODRC), was proposed by Huang et al. [8]: the transmission rate is increased when unexpected vehicle movement is detected and decreased to reduce packet loss during channel collisions. The application-level piggybacked acknowledgement suggested by Seo et al. increases the interval between two beacon messages, reducing the beacon rate and ultimately the channel load [9]. The authors in [10] adjust the beaconing rate based on the mobility of all nodes in the network and the network situation; this adaptive beaconing rate controls the channel load and congestion. A cross-layer congestion control strategy was suggested by He et al. [11], in which the transmission rate of event-driven safety messages is increased relative to periodic beacon messages. On congestion, all beacon messages are blocked by a MAC blocking method and the control channel is reserved for event-driven safety messages; the beacon rate is controlled by setting a new threshold (half of the previous threshold) after congestion is detected, and after MAC unblocking the transmission rate increases up to the new threshold. Ye et al. [12] modified the WAVE standard with a new layer that communicates with the MAC layer; this work decides the packet transmission rate based on the efficiency and reliability of broadcasting in order to control congestion in VANETs. In Periodically Updated Load Sensitive Adaptive Rate (PULSAR), the channel load decides the range of transmission [13]: the Channel Busy Ratio (CBR) is used to detect congestion, and the transmission rate is then controlled with respect to a threshold CBR. The Adaptive Traffic Beacon (ATB) strategy suggested by Sommer et al. [14] dynamically adjusts the beaconing rate to avoid channel overloading; in a congested scenario, the channel load is reduced by transmitting important messages only. Mohamad Yusof Darus et al. [15] presented a congestion control solution that addresses both beacon and event-driven messages: the control channel is monitored to detect congestion, and exceeding the threshold discards all beacon messages, while the detection of an event-driven message freezes all transmission queues except the event-driven queue. An adaptive beacon rate scheme [16] reduces congestion by monitoring the channel load; the channel load at each node is calculated by continuous monitoring of the traffic situation. If the channel load goes above the threshold, the proposed system keeps monitoring the channel load with a controlled beacon rate until the channel is no longer congested, and the updated beacon rate is fed back to the initial stage for further
processing. The drawback of these systems is that their conditions are too strict, which may result in poor VANET performance [16]. Shwetha A. et al. [17] proposed a cross-layered congestion control algorithm for VANETs that can detect and avoid congestion in the network. A dual-queue scheduler is used to control traffic: transit packets are transmitted immediately, ahead of packets generated at the node. In this method, the transmission rate of the source is updated periodically by the sink with the help of the dual queues, and packets are routed through less congested paths. This approach decreases transit packet drops caused by collisions and reduces channel overhead [17].
Remark: In these strategies, the channel transmission rate or packet generation is adjusted depending on the channel load. The performance of VANET safety applications can be improved by increasing the transmission rate, but this causes congestion of the control channel because of the high beaconing rate and high bandwidth utilization [18].
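To illustrate the dual-queue scheduling described for [17] above, the following minimal sketch gives transit packets strict priority over locally generated ones; the class and its interface are illustrative assumptions, not the cited authors' implementation.

```python
# Hedged sketch of a dual-queue scheduler: forwarded (transit) traffic is served
# before packets generated locally, to cut forwarding delay and drops.
from collections import deque

class DualQueueScheduler:
    def __init__(self):
        self.transit = deque()    # packets received for forwarding
        self.generated = deque()  # packets produced by this node

    def enqueue(self, packet, is_transit):
        (self.transit if is_transit else self.generated).append(packet)

    def next_packet(self):
        # Transit queue gets strict priority over the generated queue.
        if self.transit:
            return self.transit.popleft()
        if self.generated:
            return self.generated.popleft()
        return None
```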

3.2 Power-Based Strategies
The Fair Power Adjustment for Vehicular environments (FPAV) algorithm [19] provides fairness in bandwidth utilization. It supports both beacon and safety messages, as power is controlled based on traffic density. The drawback of reserving part of the bandwidth for event-driven messages in FPAV was overcome by the same researchers in their subsequent Distributed Fair Power Adjustment for Vehicular environments (D-FPAV). In D-FPAV, event-driven messages are given higher priority than beacon messages, and the range of beacon messages is decided based on traffic density. Fallah et al. [20] present a mathematical model that estimates channel utility based on the range and rate of transmission, the contention window size and the vehicle density, and decides the transmission range accordingly. In [6], reducing the transmission power leads to lower packet reception at longer distances from the source node but good packet reception near the source node; this approach suits safety applications.
Remark: In these strategies, the channel load is controlled by adjusting the transmission power (range) dynamically. The aim of power-based strategies is to reduce channel collisions and give all nodes in the network a fair chance to communicate with all neighbouring nodes. Increasing the transmission power increases the communication range, but at the same time it causes channel collision and saturation [6, 18, 19].
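A minimal sketch of density-driven power control in the spirit of FPAV/D-FPAV is shown below; the toy load model and all constants are illustrative assumptions, not values taken from [19] or [20].

```python
# Hedged sketch: lower the transmit power (and hence range) as local vehicle
# density grows, so that an estimated beaconing load stays under a maximum (MBL).
MBL = 0.7                   # assumed maximum tolerated channel load (fraction)
P_MIN, P_MAX = 5.0, 33.0    # assumed power bounds in dBm

def estimated_load(power_dbm, density_per_km, beacon_hz, k=0.00002):
    # Toy model: load grows with power (range), density and beacon rate.
    return k * power_dbm * density_per_km * beacon_hz

def adjust_power(density_per_km, beacon_hz, step=1.0):
    power = P_MAX
    while power > P_MIN and estimated_load(power, density_per_km, beacon_hz) > MBL:
        power -= step       # shrink the range until the estimated load is acceptable
    return power
```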

3.3 CSMA/CA-Based Strategies
A closed-loop congestion control strategy detects congestion by exchanging RTS/CTS messages over unicast transmission, and the contention window size is increased to reduce the channel overload [21]. To overcome the problems associated with Enhanced Distributed Channel Access (EDCA) in the IEEE 802.11p standard, Barradi et al. [22] proposed a MAC-layer congestion control strategy in which the contention window size and the Arbitration Inter-Frame Spacing (AIFS) play an important role in controlling congestion [23]. Adjusting the minimum contention window size based on network density to reduce collisions in dense networks is known
as the Adaptable Offset Slot (AOS) approach [23], but it increases the delay of emergency messages because the contention window grows in dense networks. An approach to improving VANET performance by reducing channel collisions is described by Stanica et al. [24]: Safety Range CSMA (SR-CSMA) controls congestion by setting a safety range smaller than the transmission range, which leads to a higher reception ratio for safety messages [24].
Remark: The aim of these strategies is to increase the reception probability of emergency messages and to reduce the delay between nearby vehicular nodes. This work also avoids retransmission of messages with the help of acknowledgements. However, the approach does not consider the real-time complexities of VANETs [24].
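The idea of scaling the minimum contention window with network density can be sketched as follows; the mapping from neighbour count to window size is an assumption for illustration, not the rule defined in [23].

```python
# Hedged sketch of density-based contention-window adaptation: a larger CWmin
# spreads transmissions in time and reduces simultaneous channel access.
CW_MIN_BASE = 15
CW_MAX = 1023

def adapt_cw_min(num_neighbours):
    # Roughly double the base window for every 25 neighbours, capped at CW_MAX.
    cw, n = CW_MIN_BASE, num_neighbours
    while n >= 25 and cw < CW_MAX:
        cw = min(CW_MAX, 2 * cw + 1)
        n -= 25
    return cw
```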

3.4 Prioritizing and Scheduling-Based Strategies
Periodic broadcasting causes a broadcast flooding problem because of redundant packets in beacon messages. Dynamic priority-based scheduling guarantees reliable delivery of messages with higher priority, while medium- and low-priority messages are rescheduled for later transfer on the channels. Research by Suthaputchakun [25] increases the reliability of broadcasting with a congestion control approach in which vehicular communications are prioritized based on urgency and average delay. Bouassida and Shawky [26] define message priority using static and dynamic factors and the size of the messages: the static process assigns priorities to messages and schedules them, while the dynamic scheduling process runs periodically, recalculating priorities based on the total priority indicator of every message observed in the queue and rescheduling them. In the First-In-First-Out (FIFO) algorithm, the first packet entering the queue leaves first for the channel. Messages with the longest waiting time are given highest priority in the Longest Wait Time (LWT) algorithm. Messages in demand (requested by various services) are given highest priority in the Maximum Request First (MRF) algorithm. In the First Deadline First (FDF) algorithm, messages are scheduled based on the time remaining until their deadlines. Smaller messages are given higher priority in the Smallest Data-size First (SDF) algorithm [27]. In Longest Total Stretch First (LTSF), a stretch metric decides the priority, with the aim of decreasing the waiting time in the queues. Quality of Data (QoD) and Quality of Service (QoS) values decide the scheduling of messages in the Maximum Quality Increment First (MQIF) algorithm. Finally, the deadline (D) and size (S) of messages decide priority in the D*S algorithm [28]. A Context Aware Beacon Scheduling (CABS) approach to solve the high beaconing rate problem in dense vehicular networks was suggested by Bai et al. [29]: Time Division Multiple Access (TDMA) allocates time slots for sending beacon messages with reduced delay and improved reception rate, and hence controls congestion. The drawback of this system is that MAC-layer inter-networking is not considered while dividing the time slots. The WAVE-enhanced Safety message Delivery (WSD) strategy, suggested by Felice et al. [28], enables switching between channels in multi-channel VANETs to reduce the delivery delay of safety messages. VeMAC ensures that all nodes located in proximity to each other are allocated different time slots according to a distributed time slot assignment
policy, and this reduces collisions to provide safe broadcasting. Since it is a TDMA-based approach, slot synchronization at each node is a must; if a collision is detected, the nodes acquire other available time slots in the second frame [30].
Remark: In this approach, uni-priority event-driven messages are given more opportunities for channel usage and are transferred with minimum delay. This is possible because of proper priority assignment for all messages and proper scheduling in the control or service channels to control congestion [9]. However, rescheduling and sharing the context of messages between neighbouring vehicles increases overhead.
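As an illustration, the following sketch implements a deadline- and size-aware scheduler in the spirit of the FDF and D*S policies summarized above; the message fields and the exact priority key are assumptions for illustration.

```python
# Hedged sketch of priority scheduling: smaller (deadline * size) is served first,
# following the D*S idea; ties are broken by insertion order.
import heapq
import itertools

class MessageScheduler:
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()

    def push(self, msg_id, deadline_s, size_bytes):
        priority = deadline_s * size_bytes          # D*S priority key
        heapq.heappush(self._heap, (priority, next(self._tie), msg_id))

    def pop(self):
        # Returns the identifier of the most urgent message, or None if empty.
        return heapq.heappop(self._heap)[2] if self._heap else None
```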

3.5 Hybrid Strategies
The joint congestion control approach developed by Baldessari et al. [31] reduces collisions, but sharing the same part of the channel bandwidth is undesirable for event-driven emergency applications; the transmission rate and power are decided based on channel busy time and vehicle density. In A Vehicle Oriented Congestion Control Algorithm (AVOCA), packet transmission is initiated when a vehicle enters the coverage area and AVOCA resets the congestion control parameters, while the congestion control parameters are frozen and packet transmission is terminated when the vehicle exits; due to fairness in channel allocation, network throughput increases [32]. In the concept suggested by Sepulcre et al. [6], the needs of each individual node are fulfilled by varying various communication parameters. This proactive application-based congestion control uses a combination of rate-based and power-based approaches to prevent congestion based on locally available information; the approach is not suitable when a single vehicle runs multiple applications with different requirements. In further research by Sepulcre et al., channel collision is reduced using the context of the communication information, which reduces channel overload and channel busy time by exchanging CAMs (Cooperative Awareness Messages). The broadcast beacon messages in a VANET consume a major share of the control channel bandwidth, which causes congestion. A three-step congestion control approach is explained by Djahel et al. [33], in which each vehicle in the network shares its calculated transmission power and rate to reduce channel load. This technique increases the reception rate of emergency messages, but delay and channel overhead increase. A meta-heuristic Uni-Objective Tabu (UOTabu) approach was suggested by Taherkhani and Pierre. This closed-loop hybrid strategy uses channel usage as a factor for congestion detection, and the Tabu Search algorithm calculates the transmission range and rate. Using these parameters, UOTabu improves overall VANET performance by reducing the average delay and the number of packet losses and by increasing the average throughput [34]. In the Adaptive Message Rate Control (AMRC) strategy developed by Guan et al., the utility of packets decides the control channel interval and transmission rate, which ultimately controls congestion [35]. The D-FPAV algorithm has also been modified by adding dynamic Maximum Beaconing Load (MBL) allocation to the existing congestion control algorithm. This hybrid approach combines transmit power control and message generation rate control to fine-tune D-FPAV, with the MBL assigned dynamically depending on traffic and non-traffic conditions; it gives better throughput and message reception compared to D-FPAV [36].


In [37], congestion is detected in two ways: first, the event-driven detection declares congestion as soon as any node detects an event-driven message; second, congestion is detected if the channel usage level rises above a threshold (70%). The controlling unit then uses a Tabu search procedure that gives the optimum transmission rate and range for the message channel [37]. The ability to solve real-world problems makes nature-inspired algorithms a good choice for solving the congestion problem in VANETs [38]. Data mining approaches can also be combined with hybrid approaches to obtain better congestion control in VANETs [39]. Security plays an important role in event-driven situations and helps to avoid problems that cause congestion [40, 41].
Remark: Hybrid strategies use any combination of the above four strategies, controlling two or more parameters to reduce congestion [18]. This approach works well in real-time, complex VANETs.
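A minimal sketch of a hybrid controller that adjusts two parameters, in the spirit of the combined rate and power schemes above, is given below; the thresholds and step sizes are illustrative assumptions rather than values from [31] or [36].

```python
# Hedged sketch of a hybrid congestion controller: first back off the beacon rate,
# then shrink the transmit power if the channel busy ratio (CBR) stays too high.
def hybrid_control(cbr, rate_hz, power_dbm,
                   cbr_target=0.6, rate_min=1.0, power_min=5.0):
    if cbr > cbr_target:
        if rate_hz > rate_min:          # step 1: reduce the beacon rate
            rate_hz = max(rate_min, rate_hz * 0.5)
        elif power_dbm > power_min:     # step 2: reduce the transmit range
            power_dbm -= 2.0
    else:                               # channel has headroom: recover slowly
        rate_hz = min(10.0, rate_hz + 0.5)
    return rate_hz, power_dbm
```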

4 Opinion and Discussion
Maintaining effective network operation while avoiding degradation of wireless channel communication is a challenging task, and many researchers have proposed congestion control algorithms as one of the solutions. The main objective of a congestion control strategy is to minimize delay, jitter, packet loss and the number of retransmissions, and to improve the performance of VANET communication channels for safety applications. A robust framework for congestion control is a need of future VANETs. Many congestion control approaches use proactive actions to prevent congestion in the communication channel, because of their characteristic of preventing congestion before it happens. Hybrid strategies use any combination of rate-based, power-based, CSMA/CA-based, and prioritizing and scheduling-based strategies to provide congestion control. On the roads, in real traffic situations, multiple uni-priority event-driven safety messages are generated by drivers' various reactions. However, most research has not focused on congestion control algorithms for uni-priority event-driven safety message congestion.

5 Conclusions and Future Works
After thoroughly reviewing existing research works, we can conclude that the best approach to controlling congestion in VANETs is a proactive algorithm with hybrid strategies. With such approaches, emergency messages can be delivered to their intended recipients without delay, and without sacrificing beacons. In hybrid congestion control, a proactive strategy can be combined with a reactive strategy: the proactive strategy first avoids channel saturation, and the channel overhead is then reduced using the reactive strategy to control congestion. Further, a proactive hybrid algorithm can be proposed for congestion control in VANETs, with the performance of the proposed algorithm verified and evaluated using network simulator tools, which is a more realistic approach.


References 1. Jeremiah, C., Nneka, A.J.: Issues and possibilities in Vehicular Ad-hoc Networks (VANETs). In: International Conference on Computing, Control, Networking, Electronics and Embedded Systems Engineering, pp. 254–259 (2015) 2. Reza, M., Sattari, J., Md Noor, R., Keshavarz, H.: A taxonomy for congestion control algorithms in vehicular ad hoc networks. In: COMNETSAT 2012, pp. 44–49. IEEE (2012) 3. Raw, R.S., Kumar, M., Singh, N.::Security challenges, issues and their solutions for VANET. Int. J. Network Secur. Appl. (IJNSA) 5(5), 95–105 (2013) 4. Zhang, W., Festag, A., et al.: Congestion control for safety messages in VANETs: concepts and framework. In: Proceeding 8th Conference on ITS Telecommunications (ITST), Thailand, pp. 199–203 (2008) 5. Vyas, I.B., Dandekar, D.R.: Review on congestion control algorithm for VANET. In: International Conference on Quality Up-gradation in Engineering, Science and Technology (ICQUEST), pp. 7–11 (2014) 6. Sepulcre, M., Mittag, J., Santi, P., Hartenstein, H., Gozalvez, J.: Congestion and awareness control in cooperative vehicular systems. Proc. IEEE 99(7), 1260–1279 (2011) 7. Wischhof, L., Rohling, H.: Congestion control in vehicular ad hoc networks. In: Proceeding of IEEE International Conference on Vehicular Electronics and Safety, Germany, pp. 58–63 (2005) 8. Huang, C.-L., Fallah, Y.P., Sengupta, R., Krishnan, H.: Information dissemination control for cooperative active safety applications in vehicular ad-hoc networks. In: IEEE Global Telecommunications Conference, 2009, GLOBECOM 2009, pp. 1–6 (2009) 9. Seo, H., Yun, S., Kim, H.: Solving the coupon collector’s problem for the safety beaconing in the IEEE 802.11 p WAVE. In: IEEE 72nd Vehicular Technology Conference Fall (VTC 2010-Fall), pp. 1–6 (2010) 10. Schmidt, R.K., Leinmüller, T., Schoch, E., Kargl, F., Schäfer, G.: Exploration of adaptive beaconing for efficient inter vehicle safety communication. IEEE Network 24(1), 14–19 (2010) 11. He, J., Chen, H.-H., Chen, T.M., Cheng, W.: Adaptive congestion control for DSRC vehicle networks. IEEE Commun. Lett. 14(2), 127–129 (2010) 12. Ye, F., Yim, R., Roy, S., Zhang, J.: Efficiency and reliability of one-hop broadcasting in vehicular ad hoc networks. IEEE J. Sel. Areas Commun. 29(1), 151–160 (2011) 13. Tielert, T., Jiang, D., Chen, Q., Delgrossi, L., Hartenstein, H.: Design methodology and evaluation of rate adaptation based congestion control for Vehicle Safety Communications. In: IEEE Vehicular Networking Conference (VNC), pp. 116–123 (2011) 14. Sommer, C., Tonguz, O.K., Dressler, F.: Traffic information systems: efficient message dissemination via adaptive beaconing. IEEE Commun. Mag. 49(5), 173–179 (2011) 15. Darus, M.Y., Bakar, K.A.: Congestion control algorithm in VANETs. World Appl. Sci. J. 21(7), 1057–1061 (2013) 16. Vyas, I.B., Dandekar, D.R.: An Efficient Congestion Control Scheme for VANET. International Journal of Engineering Research & Technology (IJERT) 3(8), 1467–1471 (2014) 17. Shwetha, A., Sankar, P.: Queue management scheme to control congestion in a vehicular based sensor network. In: Proceedings of the Second International Conference on Inventive Systems and Control (ICISC 2018), pp. 917–921. IEEE (2018) 18. Shen, X., Cheng, X., Zhang, R., Jiao, B., Yang, Y.: Distributed congestion control approaches for the IEEE 802.11 p vehicular networks. IEEE Intell. Transp. Syst. Mag. 5(4), 50–61 (2013) 19. 
Torrent-Moreno, M., Mittag, J., Santi, P., Hartenstein, H.: Vehicle-to-vehicle communication: fair transmit power control for safety-critical information. IEEE Trans. Veh. Technol. 58(7), 3684–3703 (2009)


20. Fallah, Y., Huang, C., Sengupta, R., Krishnan, H.: Congestion control based on channel occupancy in vehicular broadcast networks. In: IEEE Vehicular Technology Conference Fall (VTC), Canada, pp. 1–5 (2010) 21. Jang, H.-C., Feng, W.-C.: Network status detection-based dynamic adaptation of contention window in IEEE 802.11 p. In: IEEE 71st Vehicular Technology Conference (VTC 2010Spring), pp. 1–5 (2010) 22. Barradi, M., Hafid, A.S., Gallardo, J.R.: Establishing strict priorities in IEEE 802.11 p WAVE vehicular networks. In: IEEE Global Telecommunications Conference, pp. 1–6 (2010) 23. Hsu, C.-W., Hsu, C.-H., Tseng, H.-R.: MAC channel congestion control mechanism in IEEE 802.11 p/WAVE vehicle networks. In: IEEE Vehicular Technology Conference (VTC Fall), pp. 1–5 (2011) 24. Stanica, R., Chaput, E., Beylot, A.L.: Congestion control in CSMA-based vehicular networks: Do not forget the carrier sensing. In: 9th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), pp. 650–658 (2012) 25. Suthaputchakun, C.: Priority-based inter-vehicle communication for highway safety messaging using IEEE 802.11e. Int. J. Veh. Technol. 2009(1), 1–12 (2009) 26. Bouassida, M.S., Shawky, M.: A cooperative congestion control approach within VANETs: formal verification and performance evaluation. EURASIP J. Wirel. Commun. Networking 2010(1), 1–13 (2010) 27. Kumar, V., Chand, N.: Data scheduling in VANETs: a review. Int. J. Comput. Sci. Commun. 1(2), 399–403 (2010) 28. Felice, M.D., Ghandour, A.J., Artail, H., Bononi, L.: Enhancing the performance of safety applications in IEEE 802.11 p/WAVE vehicular networks. In: IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, pp. 1– 9 (2012) 29. Bai, S., Oh, J., Jung J.: Context awareness beacon scheduling scheme for congestion control in vehicle to vehicle safety communication. Ad Hoc Networks 11(7), 2049–2058 (2013) 30. Omar, H.A.B., et.al.: Wireless access technologies for vehicular network safety applications. IEEE Network, pp. 22–26 (2016) 31. Baldessari, R., Scanferla, D., Le, L., Zhang, W., Festag, A.: Joining forces for VANETS: A combined transmit power and rate control algorithm. In: 6th International Workshop on Intelligent Transportation (WIT) (2010) 32. Huang, Y., Fallon, E., Qiao, Y., Rahilly, M., Lee, B.: AVOCA–A Vehicle Oriented Congestion Control Algorithm, ISSC, Trinity College Dublin, pp. 1–6 (2011) 33. Djahel, S., Ghamri-Doudane, Y.: A robust congestion control scheme for fast and reliable dissemination of safety messages in VANETs. In: IEEE Wireless Communications and Networking Conference (WCNC), pp. 2264–2269 (2012) 34. Taherkhani, N., Pierre, S.: Congestion control in vehicular ad hoc networks using Metaheuristic techniques. In: Proceedings of the Second ACM International Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications, pp. 47–53 (2012) 35. Guan, W., He, J., Ma, C., Tang, Z., Li, Y.: Adaptive message rate control of infrastructured DSRC vehicle networks for coexisting road safety and non-safety applications. Int. J. Distrib. Sens. Networks 2012(1), 1–8 (2012) 36. Reza, M., Sattari, J., Noor, R.M., Ghahremani, S.: Dynamic congestion control algorithm for vehicular ad-hoc networks. Int. J. Softw. Eng. Appl. 7(3), 95–108 (2013) 37. Ravikumar, K., Vishvaroobi, T.: Congestion Control in Vehicular Ad Hoc Networks (VANET) using meta-heuristic techniques. Int. J. Comput. Sci. Trends Technol. (IJCST) 5(4), 66–72 (2017) 38. 
Shandilya, S.K., Shandilya, S., Deep, K., Nagar, A.K.: Handbook of Research on Soft Computing and Nature-Inspired Algorithms, IGI Global, (2017)


39. Dubey, A., Shandilya, S.K.: Exploiting need of data mining services in mobile computing environments. In: Computational Intelligence and Communication Networks (CICN) (2010) 40. Shandilya, S.K., Ae Chun, S., Shandilya, S., Weippl, E.: Internet of things security: fundamentals, techniques and applications (2018) 41. Shandilya, S.K., Ae Chun, S., Shandilya, S., Weippl, E.: IoT Security: An Introduction. River Publishers, Denmark (2018)

Advances in Cyber Security Paradigm: A Review
Shahana Gajala Qureshi(B) and Shishir Kumar Shandilya
VIT Bhopal University, Bhopal, India
[email protected]

Abstract. This review paper discusses the various defensive models and mechanisms used so far in cyber security. Cyber security is a very sensitive issue, as technologies become more integrated day by day. To deal with sophisticated attackers, there is a need to develop strong proactive defensive mechanisms against the fastest-growing malware code and other attacks. In particular, digitization and the information infrastructure have initiated a battle for dominance in cyberspace. This paper aims to highlight various challenges in cyber security and recently integrated technologies, along with recent advances in the cyber security paradigm. Keywords: Cyber security · Cyber crime · Malicious code · Proactive security mechanisms · Hybrid approach · Cyber Security Decision Support (CSDS) system

1 Introduction
The world is moving towards digitalization with rapid technological developments; therefore, data security is one of the leading challenges in front of us. Integration of technologies has made the Internet the most important infrastructure for the business development of government and private organizations [1]. While computer networks and the Internet remain an important part of organizations, they also create ample opportunities for attackers. Strong cyber security infrastructures are required for a nation's security and economic welfare, protecting critical information. With advancements in communication technologies such as the latest tools, denser networks and high bandwidth, cyber attackers have more possibilities to exploit and new vulnerabilities. Data security is a major issue when sharing data over cyberspace in areas such as banking, government departments, e-commerce, communications, national defense, entertainment, finance, and private organizations. To protect essential information, many techniques have been developed, but the databases are still prone to a variety of attacks, which are further classified as active attacks and passive attacks [2, 3]. A strong cyber architecture can be a solution; it mostly emphasizes security features such as cyber security devices (firewalls, intrusion detection/protection systems, strong password encryption/decryption devices, etc.) and secure communication protocols such as HTTPS, SSL, etc. However, most organizations face difficulties in identifying which critical assets need to be protected and how
to implement an appropriate cyber architecture to control and segment the network. To overcome these difficulties, organizations need to move to Cyber Security Decision Support (CSDS) systems. There are various types of security mechanisms, which are based on the various attacks [4, 5]. Figure 1 depicts some of the most common cyber attacks: the first level categorizes the types of cyber security, the second level corresponds to the objective related to each type, and the third level in the hierarchy includes the various attacks observed.

Fig. 1. Classification of cyber security with attacks

2 Literature Review
Cyber issues came into existence in the mid-1990s, and by the end of the '90s official responses to dealing with these issues had also taken shape [6]. Since then, many defensive mechanisms have been developed to deal with cyber issues. In this paper, we try to throw some light on cyber attacks and their defensive mechanisms. In 2008, Moradian E. et al. [7] proposed a meta-agent approach for web services. For a business system, web services have always been a subject of concern, and the approach was specifically proposed to monitor threats and attacks on web services. They proposed meta-agents over software agents in a multi-agent system to prevent possible attacks on web services: a meta-agent was used to monitor software agent activities and direct work to the software agents accordingly. Using this approach, unexpected events were also handled. Bedi P. et al. in 2009 [8] proposed a system based on multi-agent system planning for threat avoidance (MASPTA), where the system works in a multi-agent environment and uses a goal-oriented action planning (GOAP) strategy with the threat modeling process. In their proposed system, the agents play an important role in avoiding threats; in particular, the main aim of this approach was to protect web-based systems by avoiding identified threats. The system used threat modeling
concepts to identify the threats first; after that, an attack tree was created using the Hierarchical Task Network (HTN) technique, and Goal Oriented Action Planning (GOAP) was used to generate an action plan that avoids the threats. Saurabh A. et al. [9] considered the problem of security-constrained optimal control for discrete-time systems. In particular, they focused on a class of denial-of-service (DoS) attack models and aimed to minimize the objective function of the problem by finding an optimal feedback controller subject to safety and power constraints; to solve this problem they presented a semi-definite-programming-based solution. Nassar M. et al. in 2010 [10] proposed a framework for monitoring SIP (Session Initiation Protocol, RFC 3261) enterprise networks. Their anomaly detection approach provided security to SIP enterprise networks at three levels: 1) traffic on the network, 2) the server logs, and 3) enterprise billing records. This anomaly detection was based on two components: feature extraction and one-class Support Vector Machines (SVM). They also proposed methods for anomaly/attack type classification and attack source identification. Fu-Hau H. et al. in 2011 [11] proposed BrowserGuard, to protect a browser against drive-by download attacks, in which attackers can download any code onto a victim's host and execute it. BrowserGuard monitors the download scenario of every file loaded in the web browser. To implement BrowserGuard on IE 7.0, they used the BHO (Browser Helper Object) mechanism of Windows. Their experimental results showed less than 2.5% performance overhead, and no false positives or false negatives were observed for the web pages in their experiments. In 2012, Gandotra V. et al. [12] presented a three-phase threat-oriented security model based on the concept of proactive threat management. In this model, they provide security for both known and unknown threats, which was not possible with the traditional method: in the first phase, they apply both threat modeling processes and research honey tokens together to identify unknown threats, and in the second phase, using a multi-agent system, the identified dangers are reduced with the necessary security measures. Basically, this model is used in the risk analysis segment of the spiral model to enhance security, and it improves on the traditional technique, which provides security only against identified threats. Roy A. et al. [13] proposed a novel attack tree (AT), the attack countermeasure tree (ACT), which takes into account both attacks and countermeasures, as detection mechanisms and mitigation techniques respectively. The proposed model allows one to perform security assessment on the basis of qualitative and probabilistic analysis, and it outperforms other existing analytical model-based security optimization strategies. In 2013, Almasizadeh J. et al. [14] proposed a state-based stochastic model that uses a semi-Markov chain to generate a security metric; this metric indicates the degree (level) of security of the system. In particular, the proposed model describes the attacker's activity, as well as the system's reactions over time, using probability distribution functions. In 2014 [15], Dewar presented a paper intending to define cyber security terminologies.
Along with this, three approaches were proposed: 1) Active Cyber Defense (ACD), designed around proactive measures to identify malicious code; 2) Fortified Cyber Defense (FCD), designed to provide security by constructing secure communication and information networks; and 3) Resilient Cyber Defense (RCD). This approach
was designed to focus on critical infrastructure and services, providing continuous network communication and services. The Petri net is a graphical and mathematical modeling tool with a strong mathematical basis and graphical modeling capability. In the same year [16], researchers Xinlei Li and Di Li found that traditional Petri-net machines lack detection capability in a synthetic model, that not all Petri-net machines can be used to describe attack behavior, and that simply combining machines that share elements causes errors. Hence, to overcome these failures of traditional machines, they proposed a network attack model based on Colored Petri Nets. This model supports both the synthesis operation and the colored synthetic operation, and it ensures that the synthetic model preserves the original detection capability. The same year, an intelligent approach against injection attacks (e.g., SQL, XSS) and Trojan attacks on web applications was proposed by Razzaq A. et al. [17]. They modeled a security framework using an ontology approach, which was very promising for detecting zero-day vulnerabilities. In particular, this model captures context, detects HTTP protocol attacks, and focuses only on the specific requests and responses where malicious attacks are possible. The model also takes into consideration the important content of attacks, the source, the target, the vulnerabilities, the technologies used by attackers and the controls for mitigation. The IoT (Internet of Things) can be seen as a new instrument in the era of technology enhancement, and 2015 was the year in which industries progressively enabled IoT in their organizations. Neisse R. et al. [18] proposed a model-based security toolkit for IoT devices. The toolkit is integrated into a management framework to support both the specification and the efficient evaluation of security policies for user data protection. The framework addresses two major problems: i) the validity of the security and privacy of users' data in the IoT, and ii) maintaining trust between IoT technology and individuals. Through a case study in a smart city scenario, they evaluated its feasibility and performance and concluded that their proposed model successfully gained trust in IoT transactions. In 2016, Varshney G. et al. [19] proposed a phishing detection system, the Lightweight Phish Detector (LPD). The basic principle of LPD is to discover the right set of features associated with authentic web pages through popular search engines; LPD uses two features to check the authenticity of a web page: 1) the URL's domain name and 2) the title of the page. They compared current search engines that support anti-phishing approaches with others used by popular browsers such as Chrome, Firefox and Internet Explorer, obtained a true negative rate varying from 92.4% to 100% and a true positive rate of 99.5%, and concluded that the proposed scheme is accurate enough. In the same year, Deore D. et al. [20] presented a survey of different automated software used to protect data. To protect data in virtual machines, they used different distributed cyber security automation frameworks; in their work, they explained various techniques used to develop such software, including user virtualization, event log analysis, one-time passwords, and malicious attack detection, and some privacy protection techniques were also explored. Meszaros J. et al. [21] in the same year proposed a new framework for online service security risk management, to be used by both service providers and service consumers.
They also performed a case study for the validation of the framework.


A threat model and a risk model are the two key components of the proposed framework, and these two models provide features specific to online services; their work focuses mainly on services used in the public Internet environment. With the aim of automated management to detect and prevent potential problems such as identifying traffic behavior patterns, Gilberto F. et al. [22] proposed two anomaly detection mechanisms, based on the statistical procedure of Principal Component Analysis, the Ant Colony Optimization meta-heuristic and Dynamic Time Warping methods; the major contributions of the proposed method are in pattern recognition and anomaly detection. Seyed Mojtaba H. B. et al. [23] proposed an intrusion detection framework based on multiple criteria linear programming (MCLP), support vector machines (SVM), and time-varying chaos particle swarm optimization (TVCPSO). The proposed method performs well in terms of a high detection rate and a low false alarm rate. In 2017, Park J. et al. [24] addressed the accessibility issues faced by enterprise management systems that provide remote access to their users. They proposed the Invi-server system, designed to protect the secret server from unauthorized access by keeping its IP and MAC addresses invisible to external scanning, and suggested that the Invi-server system could be used to reduce the attack surface. They also implemented a prototype of Invi-server, which significantly reduced the attack surface without affecting the performance of the network. Wagner N. et al. [25] in 2018 proposed an automatic method for generating segmentation architectures optimized for security, cost and mission performance. They proposed the concept of network segmentation as a mitigation technique to protect the computer network by partitioning it into multiple segments. It is a hybrid approach that combines nature-inspired optimization with cyber risk modeling and simulation; prototype systems were used to implement the method, and a network environment under cyber attack was demonstrated through a case study. In 2019, Badsha S. et al. [26] proposed a privacy-preserving protocol. Organizations that use this protocol can freely share their private information in encrypted form with anyone and can learn future predictions from the information without disclosing anything to anyone. They also noted that, through a properly developed decision tree, organizations can predict whether a received email is spam or not.
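Several of the surveyed works (e.g., [10, 23]) build anomaly detection on support vector machines. The sketch below shows a minimal one-class SVM detector trained only on normal traffic features; the feature choice and the nu/gamma settings are assumptions for illustration, not the cited authors' configurations.

```python
# Hedged sketch of one-class-SVM anomaly detection trained on normal traffic only.
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

def train_detector(normal_features):
    # Fit a scaler and a one-class SVM on windows of traffic labelled as normal.
    scaler = StandardScaler().fit(normal_features)
    model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
    model.fit(scaler.transform(normal_features))
    return scaler, model

def is_anomalous(scaler, model, window_features):
    # predict() returns -1 for outliers (potential attacks) and +1 for inliers.
    return model.predict(scaler.transform(window_features)) == -1
```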

3 Recent Scenario in Cyber Security
Enoch S. et al. [27] proposed a Temporal Hierarchical Attack Representation Model to evaluate the effectiveness of security metrics. They categorize network changes into two categories (changes in hosts and changes in edges) and use Attack Graph and Attack Tree graphical security models for dynamic networks to systematically analyze the security posture using security metrics. Most of the time, such models fail to capture dynamic networks (changes in topology, firewalls, etc.); the proposed Temporal Hierarchical Attack Representation Model overcomes these problems by systematically capturing and analyzing the changes in network security. Semerci M. et al. proposed an intelligent cyber security system against Distributed
Denial of Service (DDoS) attacks in communication networks [28]. The proposed model consists of two components: a monitor to detect DDoS attacks and a discriminator to detect unwanted users in the system. They deployed their proposed model over a simulated telephone network and evaluated its performance in a high-throughput simulation environment. The proposed system detects the attack as well as identifying the attackers, but the model focuses particularly on DDoS attacks. Hajisalem V. et al. [29] proposed a hybrid classification intrusion detection system based on the Artificial Bee Colony (ABC) and Artificial Fish Swarm (AFS) algorithms. They used Fuzzy C-Means clustering (FCM) to divide the training data set and correlation-based feature selection (CFS) to remove irrelevant features from the data set. If any single deviation is found, the system considers it an attack, whereas a normal IDS uses two techniques for the same purpose: pattern matching and statistical anomaly detection. Their proposed method outperformed the normal IDS and achieved a 99% detection rate and a 0.01% false-positive rate. Li Y. et al. [30] proposed a framework to facilitate the design of self-destructing wireless sensors that ensures the security and performance of the wireless sensors. In the proposed framework, a cryptographic self-destruction mechanism enables autonomous self-destruction in wireless sensors: a self-destructing wireless sensor requires the ability to determine whether the sensor is lost and, if so, to destroy the sensitive information in a timely manner. The proposed framework is capable of performing quantitative analysis of the security and performance of wireless sensors.

4 The Challenges of Cyber Security
Developing a strong security mechanism that meets all modern requirements is a very complex task. Some of the reasons are the following:
• Many security mechanisms have been designed so far, but how to logically select and use the appropriate security mechanism(s) remains a subject of concern.
• While designing security mechanisms, potential attacks are always a matter of concern, yet in many cases attacks are designed by looking at problems in the present system, so an unexpected weakness in the mechanism is possible.
• The dynamic nature of network systems constitutes another challenge to network security, where devices, IMPs, and security elements such as firewalls and topologies keep changing dynamically.
• Continuously monitoring and maintaining the integrity of security over time is also a major issue in an overloaded environment.
• Many organizations face accessibility issues in providing remote access to their users, because once the network server is connected to the Internet, any host on the Internet can access the server and steal the users' private information.
• Security validation of IoT devices and maintaining the privacy of users' data while keeping users' trust is very challenging.
• Many organizations are looking to move most of their data to 'the cloud', which has created new opportunities for attackers.


5 Research Gap
Cyber security in modern networks is difficult to assess because the networks are dynamic in configuration, with changes in topology, firewalls, routers, etc. We cannot deny that the existing infrastructure has numerous limitations (such as lacking self-awareness, the absence of self-organizing and feedback mechanisms, and no ability to diagnose misconfiguration). Many traditional techniques such as data encryption, authentication mechanisms and firewalls are applied to protect computers and networks. Moreover, graphical security models such as Attack Graphs and Attack Trees are widely used to systematically analyze the security posture. The basic problem with these models is that they are unable to capture dynamic changes in terms of hosts and edges in networks. Many other models have been applied as solutions to handle dynamic changes at the host and edge level, but not at the configuration level. Intrusion detection systems and intrusion prevention systems are well-known security instruments at the network layer that identify and block malicious activities when firewalls fail to provide security, but they fail to identify unknown malicious activities. In recent years, to optimize the performance of intrusion detection systems, various nature-inspired meta-heuristic techniques such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and the Artificial Bee Colony have been applied, but they too have failed to provide complete security because of their somewhat predictable nature. Current cyber security architectures are static and therefore usually controlled by humans: system properties and system behavior are highly dependent on human administration to be programmed and told how and what can be done. This extensively influences the decision-making procedure and is perhaps the major drawback in the automation of such systems. Therefore, the current architectures are neither reliable nor robust; their non-adaptive behavior and inability to learn (or limited learning capability) make them unsuitable for adapting to unexpected situations in a dynamic environment. It is therefore important to propose and experiment with self-organized and resilient cyber architectures.

6 Conclusion and Future Scope
As the use of integrated technologies has increased, cyber security has received paramount importance. Static mechanisms are vulnerable to many attacks because of their predictable nature, such as centralized control, limited learning capabilities and the inability to handle new cases in a frequently changing environment. These features present new challenges, as achieving security is more difficult in dynamic environments. After studying existing research work, the need for an automated architecture with a proactive defense mechanism is observed. A hybrid approach could be a solution in the area of Cyber Security Decision Support (CSDS), leveraging data-driven methods to generate optimal or near-optimal security decisions under dynamic network conditions.

References 1. Sharma, R.: Study of latest emerging trends on cyber security and its challenges to society. Int. J. Sci. Eng. Res. 3(6), 1–4 (2012)


2. Burtescu, E.: Database security-attack and control method’s. J. Appl. Quant. Methods 4(4), 449–454 (2009) 3. Deorel, D., Waghmmare, V.: A literature review of cyber security automation for controlling distributed data. Int. J. Innov. Res. Comput. Commun. Eng. 4(2), 2013–2016 (2016) 4. Ghate, S., Agrawal, P.: A literature review on cyber security in indian context. J. Comput. Inf. Technol. 8(5), 30–36 (2017) 5. Homer, J., Zhang, S., Schmidt, D., et al.: Aggregating vulnerability metrics in enterprise networks using attack graphs. J. Comput. Secur. 21(4), 561–597 (2013) 6. Warner, M.: Cybersecurity: a pre-history. Intell. Natl. Secur. 27(5), 781–799 (2012) 7. Moradian, E., Håkansson, A.: Approach to solving security problems using meta-agents in multi agent system. In: Nguyen, N.T., Jo, G.S., Howlett, R.J., Jain, L.C. (eds.) KES-AMSTA 2008. LNCS (LNAI), vol. 4953, pp. 122–131. Springer, Heidelberg (2008). https://doi.org/ 10.1007/978-3-540-78582-8_13 8. Bedi, P., Gandotra, V., Singhal, A., Vats, V., Mishra, N.: Avoiding threats using multi agent system planning for web based systems. In: Nguyen, N.T., Kowalczyk, R., Chen, S.-M. (eds.) ICCCI 2009. LNCS (LNAI), vol. 5796, pp. 709–719. Springer, Heidelberg (2009). https:// doi.org/10.1007/978-3-642-04441-0_62 9. Amin, S., Cárdenas, A.A., Sastry, S.S.: Safe and secure networked control systems under denial-of-service attacks. In: Majumdar, R., Tabuada, P. (eds.) HSCC 2009. LNCS, vol. 5469, pp. 31–45. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00602-9_3 10. Nassar, M., Stat, R., Festor, O.: A framework for monitoring SIP enterprise networks. In: Fourth International Conference on Network and System Security, pp. 1–8 (2010). https:// doi.org/10.1109/nss.2010.79 11. Fu-Hau, H., et al.: BrowserGuard: a behavior-based solution to drive-by-download attacks. IEEE J. Sel. Areas Commun. 29(7), 1461–1468 (2011) 12. Gandotraa, V., Singhala, A., Bedia, P.: Threat-oriented security framework: a proactive approach in threat management. Elsvier-Procedia Technol. 4, 487–494 (2012) 13. Roy, A., Kim, D.S., Trivedi, K.S.: Scalable optimal countermeasure selection using implicit enumeration on attack countermeasure trees. In: 42nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) (2012) 14. Almasizadeh, J., Adollahi, M.: A stochastic model of attack process for the evolution of security matrix. Elsevier- Comput. Networks 57(10), 2159–2180 (2013) 15. Dewar, R.: The triptych of cyber security: a classification of active cyber defense In: Brangetto, P., Maybaum, M., Stinissen, J. (eds.) 6th International Conference on Cyber Security 2014, NATO, pp. 7–21. Tallinn (2014) 16. Li, X., Di, L.: A network attack model based on colored petri net. J. Networks 9(7), 1883–1891 (2014) 17. Razzaq, A., et al.: Ontology for attack detection: an intelligent approach to web application security. Comput. Secur. 45, 124–146 (2014) 18. Neisse, R., et al.: SecKit: a model asked security tool kits for internet of things. ElsevierComput. Secur. 54, 60–76 (2015) 19. Varshney, G., et al.: A phish detector using lightweight search features. Comput. Secur. 62, 213–228 (2016) 20. Ujjwala, D., et al.: A literature on cyber security automation for controlling distributed data. Int. J. Innov. Res. Comput. Commun. Eng. 4(2), 2013–2016 (2016) 21. Meszaros, J., et al.: Introducing OSSF: a framework for online service cybersecurity risk management. Comput. Secur. 65, 300–313 (2016) 22. 
Femandes, G., et al.: Network anomaly detection using IP flows with principal component analysis and ant colony optimization. J. Network Comput. Appl. 64, 1–11 (2016)

276

S. G. Qureshi and S. K. Shandilya

23. Hosseini Bamakan, S.M., et al.: An effective intrusion detection framework based on MCLP/SVM optimized by time-varying chaos particle swarm optimization. Neurocomputing 199, 90–102 (2016) 24. Park, J., et al.: Invi-server: reducing the attack surfaces by making protected server invisible on networks. Comput. Secur. 67, 89–70 (2017) 25. Wagner, N., et al.: Automatic Generation of Cyber Architectures Optimized for Security, Cost, and Mission Performance: A Nature-Inspired Approach, pp. 1–25. Springer (2018) 26. Badsha, S., et al.: Privacy preserving cyber threat information sharing and learning for cyber defense. In: 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC) (2019) 27. Enoch, S.Y., et al.: A systematic evaluation of cybersecurity metrics for dynamic networks. Comput. Netw. 144, 216–229 (2018) 28. Semerci, M., et al.: An intelligent cyber security system against DDoS Attacks in SIP networks. Comput. Netw. 136, 13–154 (2018) 29. Hajisalem, V., et al.: A hybrid intrusion detection system based on ABC-AFS algorithm for misuse and anomaly detection 136, 37–50 (2018) 30. Li, Y., et al.: Designing self-destructing wireless sensors with security and performance assurance 141, 44–56 (2018)

Weighted Mean Variant with Exponential Decay Function of Grey Wolf Optimizer on Applications of Classification and Function Approximation Dataset

Alok Kumar, Avjeet Singh, Lekhraj, and Anoj Kumar

Computer Science and Engineering Department, Motilal Nehru National Institute of Technology, Allahabad, India
{alokkumar,2016rcs01,lekhraj,anojk}@mnnit.ac.in

Abstract. Nature-inspired meta-heuristic algorithms are optimization algorithms that have become increasingly popular among researchers over the last two decades, owing to key features such as diversity, simplicity, a proper balance between exploration and exploitation, a high convergence rate, avoidance of stagnation, and flexibility. Many nature-inspired meta-heuristic algorithms are employed in different research areas to solve complex problems that are either single-objective or multi-objective in nature. Grey Wolf Optimizer (GWO) is a powerful, recent and well-known meta-heuristic algorithm; it mimics the leadership hierarchy, the unique property that differentiates it from other algorithms, and follows the hunting behaviour of grey wolves, which are found in Eurasia and North America. To implement the simulation, alpha, beta, delta and omega form the four levels of the hierarchy, with alpha being the most powerful wolf and leader of the group, and so forth. No algorithm is perfect or fully appropriate for every problem, i.e. replacements, additions and eliminations are required to improve the performance of every algorithm. This work therefore proposes a new variant of GWO, namely Weighted Mean GWO (WMGWO), with an exponential decay function, to improve the performance of the standard GWO and its many variants. The performance of the proposed variant is evaluated on standard benchmark functions. In addition, the proposed variant has been applied to classification datasets and function approximation datasets. The obtained results are the best in most of the cases. Keywords: GA · GP · ES · ACO · PSO · GWO · Exploitation · Exploration · Meta-heuristics · Swarm intelligence

1 Introduction

Heuristic algorithms face several problems and limitations: they may become stuck in local optima, they produce a limited number of solutions, they are problem dependent, and so forth. To overcome these issues, meta-heuristic algorithms come into the picture and play an important role in improving performance while remaining simple for researchers to use. Nature-inspired meta-heuristic algorithms take their inspiration from nature and follow a teaching-learning process among the elements of a group. They can be classified into four categories, as shown in Fig. 1: evolutionary algorithms, swarm based algorithms, physics based algorithms, and biologically inspired algorithms. Algorithms such as the Genetic Algorithm (GA), Genetic Programming (GP) and Evolution Strategy (ES) come under evolutionary algorithms, while Ant Colony Optimization (ACO), the Bat algorithm, Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO) come under swarm based algorithms. This paper focuses on the Grey Wolf Optimizer (GWO), a swarm based algorithm, with the aim of improving its performance.

Fig. 1. Classification of Nature-inspired meta-heuristic algorithms

A homogeneous and large group of birds or animals is known as a swarm, and an algorithm built on the collective intelligence of such a group is considered a swarm intelligence algorithm. GWO is a swarm intelligence algorithm; the grey wolf (scientific name Canis lupus) was the inspiration for it.

The Genetic Algorithm (GA) [1, 2] is an optimization algorithm introduced by John Holland in the 1960s. It follows Darwin's theory of natural selection and evolution, i.e. survival of the fittest: elements or species that cannot survive in the environment are eliminated from it. Holland's student David E. Goldberg further extended and popularized the GA in 1989. The algorithm starts with a set of random solutions called the population and applies bio-inspired operators such as selection, crossover and mutation recursively until the desired output is obtained, so that the next generation has a higher chance of containing better solutions than the present one (a minimal sketch of this loop is given after this overview). The crossover and mutation operators of GA realize the exploration as well as the exploitation property of an optimization technique. GA belongs to the evolutionary class of nature-inspired meta-heuristic optimization and is employed in many research areas to solve complex problems; for example, the image segmentation and image classification problems of image processing have been solved with GA.

Genetic Programming (GP) [3, 4] is a subclass of evolutionary algorithms, also based on evolution theory. It was introduced by John Koza in 1992; it performs reproduction, crossover and mutation initially and architecture-altering operations at the end. GP is an extension of GA and a domain-independent method. It has also been employed in many research areas, including the image segmentation and image classification problems of image processing. GP can exploit complex and variable-length representations that use various kinds of operators to combine the inputs in linear or non-linear form, which is suitable for constructing new features.

Evolution Strategy (ES) [5–8] also belongs to the evolutionary class of nature-inspired meta-heuristic optimization. It was introduced in the early 1960s by Ingo Rechenberg, Hans-Paul Schwefel and Bienert, was further developed in the 1970s, and is based on evolution theory. Mutation and recombination operators drive the evolution process so that better results are obtained in each generation. The (1 + 1) strategy in [6] and the (1 + λ) and (1, λ) strategies in [7] are categories of ES used to select the parents. ES has been employed in many research domains, including the image segmentation and medical imaging problems of image processing.

Ant Colony Optimization (ACO) [9–11] is a subclass of swarm based algorithms, built on the concept of swarm intelligence and inspired by the way ants search for food. It was initially proposed by Marco Dorigo in 1992 in his Ph.D. thesis. Ants have the ability to find the shortest possible path between a food source and their nest; to find this optimal path, they deposit pheromone (a special type of chemical) as a form of indirect communication. ACO is commonly employed to solve graph-based problems; the image classification problem of image processing has also been solved with ACO.

The Bat algorithm [12] is another meta-heuristic, a subclass of swarm based algorithms, inspired by the echolocation behaviour of microbats, which have a highly developed sense of hearing and use emitted sound pulses and the returning echoes to locate objects in their path. Xin-She Yang developed this algorithm for global optimization in 2010. Simplicity and flexibility are its main advantages, and it is very easy to design. It has been employed in many research areas; for example, the image compression problem of image processing has been solved with the Bat algorithm.

Particle Swarm Optimization (PSO) [13] was proposed and designed by Kennedy and Eberhart in 1995. It simulates a swarm (a group of particles) based on the social behaviour of species such as fish schooling (in biological vocabulary, a group of fish that stays together for social reasons is said to be shoaling; if the group swims in the same direction in a unified manner, e.g. for hunting, it is said to be schooling) and bird flocking (an assembly of similar animals that travel, forage or roam together). PSO is guided by only two values, PBEST (particle best or personal best) and GBEST (global best): the best solution found by an individual particle over the generations is its personal best, and the best among all personal best solutions is the global best. Velocity and position vectors form the mathematical model used to generate optimal results. PSO has been employed in many research areas to solve complex problems; the image segmentation and medical imaging problems of image processing have been solved with PSO.
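The evolutionary loop summarized above for GA (a random initial population refined by repeated selection, crossover and mutation) can be sketched as follows. This is a minimal illustration rather than the procedure of any cited work; the fitness function, population size and operator rates are illustrative assumptions.

```python
import random

# Illustrative fitness: minimize the sphere function (lower is better).
def fitness(x):
    return sum(v * v for v in x)

def tournament(pop, k=3):
    # Selection: pick the best of k randomly chosen individuals.
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(x, rate=0.1, sigma=0.3):
    # Gaussian mutation applied gene-wise with a small probability.
    return [v + random.gauss(0.0, sigma) if random.random() < rate else v for v in x]

def genetic_algorithm(dim=5, pop_size=30, generations=100):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = genetic_algorithm()
    print("best solution:", best, "fitness:", fitness(best))
```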

2 Literature Review

This section gives a brief review of variants of the Grey Wolf Optimizer and their applications in different research domains. The literature on GWO is as follows.

Al-Aboody et al. [14] devised a three-level clustered routing protocol using GWO for wireless sensor networks to increase performance and stability. The procedure works in three phases: in the first level, centralized selection helps in finding the cluster heads from the base level; in the second level, routing for data transfer is performed, where the nodes select the best route to the base station so as to consume less energy; and in the third and last level, distributed clustering is introduced. The algorithm was evaluated through the network's lifetime, stability and energy efficiency, and it was found to give a refined realization in terms of lifetime stability and more residual energy. The proposed algorithm performs better than LEACH in terms of network lifetime.

Partial discharge (PD), which leads to insulation degradation in transformer insulation systems, is a major cause of their deterioration. Dudani and Chudasama [15] adopted a sensor based acoustic emission technique for the detection of PD, along with an Adaptive GWO (AGWO) for localization of the PD source. A new randomization adaptation technique gives AGWO faster convergence and less parameter dependency in reaching the global optimal solution. The approach was applied to unconstrained benchmark test functions to check its performance and to locate the optimum location of PD in the transformer. The outcomes show that AGWO is superior to other optimization algorithms as well as to electrical and chemical detection methods.

Jayabarathi et al. [16] presented notable research on the application of a grey wolf optimizer to economic dispatch problems that are non-linear, non-convex and discontinuous in nature and carry various constraints. The algorithm includes crossover and mutation to hybridize GWO and increase its performance, giving a lower final cost and good convergence; it is named Hybrid GWO. Four dispatch problems with prohibited operating zones, valve point effects and ramp rate limits were solved using this algorithm without transmission loss, and it was then compared with several algorithms to check its competitive performance. The results reveal that this algorithm works well and has a low cost.

Jitkongchuen [17] proposed an alternative approach to improving the performance of standard differential evolution (DE). It uses new mutation schemes in which the controlling parameters are self-adapted based on feedback from the evolutionary search, and GWO is applied to the crossover to increase the quality of the solution. The experimental results show that this method is highly competitive when compared with PSO, jDE and DE. The proposed algorithm was also tested on nine standard benchmarks and was found to be more productive in finding solutions to complex problems.

To optimize image histograms and perform multilevel image segmentation, Li et al. [18] proposed an algorithm known as Modified Discrete GWO (MDGWO). MDGWO is adopted for multilevel thresholding, as it improves the location selection mechanism of α, β and δ during hunting; it also uses a weight coefficient to optimize the final position (the best threshold) of the prey, with Kapur's entropy as the optimized function. The algorithm was tested on standard test images such as Lena, Cameraman, Baboon, Butterfly and Starfish. The experimental results demonstrate that MDGWO can quickly find the optimal thresholds, which are very close to the outcomes obtained by exhaustive search. MDGWO is superior to Electromagnetism Optimization (EMO), DE, ABC and the classical GWO, and yields better image segmentation quality, objective function values and stability.

In further work, Li et al. [19] proposed an algorithm to handle the multilevel image thresholding problem underlying image segmentation. MDGWO is adopted to optimize fuzzy Kapur's entropy, which is chosen as the objective function, and is used as the tool to initialize pseudo-trapezoid shaped fuzzy membership functions; segmentation is finally achieved with the help of local information aggregation. The algorithm is known as FMDGWO when applied with fuzzy logic. The scheme was verified on a set of benchmark images taken from the Berkeley Segmentation Data Set and Benchmarks 500. FMDGWO yields improved PSNR and objective function values, outperforms EMO, MDGWO and FDE (a fuzzy entropy based DE algorithm), and produces a high level of segmentation with more stability.

Meta-heuristic algorithms are becoming increasingly popular for solving complex and NP-hard problems that cannot be solved in linear time. GWO is a distinguished, renowned and recent swarm intelligence algorithm. The No Free Lunch (NFL) theorem [20] has logically proved that there is no meta-heuristic algorithm best suited to all optimization problems; hence, new variants are proposed day by day to overcome related issues and to solve various kinds of real-life problems. In this research article, a variant of GWO is proposed, namely Weighted Mean GWO (WMGWO) with an exponential function.

3 Grey Wolf Optimizer and Their Variants

Grey Wolf Optimizer (GWO) is a renowned, novel and popular meta-heuristic optimization algorithm developed by Mirjalili et al. [21]. GWO is a swarm intelligence algorithm, and the grey wolf (Canis lupus) was the inspiration for it. To perform hunting, grey wolves follow a strict social hierarchy and a characteristic hunting behaviour. The social hierarchy has four levels, from level 1 to level 4, into which the population is categorized, as depicted in Fig. 2. The leader at the apex of the hierarchy, at level 1, is called alpha (α) and may be male or female. Decisions such as hunting and the selection of a sleeping place are the responsibility of the leader, and all wolves of the group acknowledge the leader by holding down their tails. The betas (β) are the advisors to the alpha and occupy the second level of the hierarchy; they are the subordinate wolves and discipliners of the pack, help the alpha in decision making, ensure that the orders given by the leader are followed by all the subordinates, and give feedback to the leader. The deltas (δ) are subordinates at the third level; they perform many duties for the pack and are categorized into classes according to their duty, such as scouts (responsible for watching the boundary), elders (older wolves who have retired from the post of alpha or beta), caretakers (caring for ill, weak and wounded wolves), hunters (who help the alpha and beta in hunting) and sentinels (responsible for protecting the pack). The omegas stay at the last, fourth level of the hierarchy; they are like scapegoats and are the last allowed to eat. Leadership and decision-making power decrease from top to bottom.

Fig. 2. Social hierarchy of Grey wolves [21]

To create a proper balance between exploration and exploitation, Mittal et al. [22] proposed a modified GWO (mGWO) algorithm. The modification uses an exponential decay function (Eq. 2) to balance exploration and exploitation in the search space over the course of the iterations. A clustering problem in WSNs is also illustrated, in which mGWO is adopted for cluster head (CH) selection. For the simulation, several benchmark functions such as Rastrigin's, Weierstrass', Griewank's, Ackley's and the sphere function were selected. According to the reported outcomes, the method is advantageous for real-world applications owing to its rapid convergence and fewer chances of getting stuck in local minima. When compared with other existing meta-heuristic algorithms (GA, PSO, BA and CS) and the traditional GWO, mGWO yielded better results and has the potential to solve real-world optimization problems.

Another modified variant of GWO (MVGWO) was proposed by Singh [23] by enhancing the leadership hierarchy of grey wolves with one more level, the gamma (γ) wolves (hierarchy: alpha, beta, gamma, delta and omega), simulated in the hunting behaviour; a mean operator variable (μ) obliges the wolves to encircle and attack the prey and assists in updating their positions through modified equations. Twenty-three well-known classical benchmark functions were employed to check the performance of the variant, and it was also applied to a sine dataset and the cantilever beam design function. The variant was compared with related algorithms such as the Convex Linearization method (CONLIN), the Method of Moving Asymptotes (MMA), Symbiotic Organisms Search (SOS), CS, and the grid based clustering algorithms I and II (GCA-I and GCA-II) in finding optimal solutions.


In classical GWO, the value of a decreases linearly from 2 to 0 according to

a = 2 * (1 − t/T)    (1)

where t is the current iteration and T is the maximum number of iterations of the standard GWO [21]. To build the proposed variant, namely Weighted Mean GWO (WMGWO), mGWO [22] is employed together with the weighted mean function described in Eq. (3). In mGWO, the exponential decay function of Eq. (2) is used to calculate the value of a instead of the linear function above:

a = 2 * (1 − t²/T²)    (2)

With this exponential function, roughly 70% of the iterations are devoted to exploration and 30% to exploitation, which balances the two. Simultaneously, in the fourth step of the proposed algorithm, the next position of the prey, i.e. the candidate optimal solution of the problem, is evaluated with the following equation:

Xp = (C1 * X1) + (C2 * X2) + (C3 * X3)    (3)

This states that alpha is the most powerful search agent in the population, i.e. it is given the maximum weight (C1 = 0.54) when computing the optimal value; beta is the second most powerful search agent and is given a medium weight (C2 = 0.3); and delta, held at the last of the three leading levels of the hierarchy, is given the lowest weight (C3 = 0.16).
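A minimal sketch of this position update is given below. The exponential decay of a and the weighted mean of Eq. (3) follow the description above; the inner coefficient and encircling formulas are taken from the standard GWO [21], since they are not restated here, and the fitness function, bounds and parameter values are illustrative assumptions.

```python
import random

def decay_a(t, T):
    # mGWO exponential decay, Eq. (2): a goes from 2 to 0 non-linearly.
    return 2.0 * (1.0 - (t * t) / (T * T))

def gwo_candidate(leader, wolf, a):
    # Standard GWO encircling step towards one leader (alpha, beta or delta).
    new = []
    for lj, xj in zip(leader, wolf):
        A = 2.0 * a * random.random() - a
        C = 2.0 * random.random()
        D = abs(C * lj - xj)
        new.append(lj - A * D)
    return new

def wmgwo(fitness, dim=10, wolves=30, T=300, lb=-100.0, ub=100.0):
    pack = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(wolves)]
    w_alpha, w_beta, w_delta = 0.54, 0.30, 0.16   # weights of Eq. (3)
    for t in range(T):
        pack.sort(key=fitness)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = decay_a(t, T)
        for i, wolf in enumerate(pack):
            x1 = gwo_candidate(alpha, wolf, a)
            x2 = gwo_candidate(beta, wolf, a)
            x3 = gwo_candidate(delta, wolf, a)
            # Weighted mean of the three candidate positions, Eq. (3).
            pack[i] = [max(lb, min(ub, w_alpha * p + w_beta * q + w_delta * r))
                       for p, q, r in zip(x1, x2, x3)]
    return min(pack, key=fitness)

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)   # F1 used as an example objective
    best = wmgwo(sphere)
    print("best fitness:", sphere(best))
```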

4 Simulation Environment

The GWO, mGWO, MVGWO and WMGWO meta-heuristic algorithms are coded and implemented in MATLAB R2017a on a machine with 12 GB RAM and an Intel(R) Core(TM) i7-4770 CPU @ 3.40 GHz.

5 Results and Discussion

In this section, a test bed of 23 standard benchmark functions (F1–F23) is used to check the performance of the proposed variant. All considered functions are taken from CEC 2005 [24]. The unimodal, multimodal and fixed-dimension multimodal benchmark functions are listed in [24]; all are minimization functions, where "Function" indicates the function's number in the list, Dim indicates the dimensionality of the function, Range indicates the boundary of the function's search space, and fmin indicates the optimum value of the function. The unimodal functions (F1–F7) contain a single optimum and are used to analyse exploitation, while the multimodal functions (F8–F13), in contrast, contain many local optima and are used to analyse exploration.
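For reference, the commonly used forms of two of these functions, the unimodal sphere function (F1) and the multimodal Rastrigin function (F9), are sketched below; the exact set, dimensions and ranges used are those listed in [24].

```python
import math

def sphere(x):
    # F1: unimodal, single global optimum at the origin, fmin = 0.
    return sum(v * v for v in x)

def rastrigin(x):
    # F9: multimodal, many local optima, global optimum at the origin, fmin = 0.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

print(sphere([0.0] * 30), rastrigin([0.0] * 30))   # both print 0.0
```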

Table 1. Results of Unimodal Benchmark Functions (average and standard deviation of GWO, mGWO, MVGWO and WMGWO on F1–F7)

Table 2. Results of Multimodal Benchmark Functions (average and standard deviation of GWO, mGWO, MVGWO and WMGWO on F8–F13)

Table 3. Results of Fixed-dimension Multimodal Benchmark Functions (average and standard deviation of GWO, mGWO, MVGWO and WMGWO on F14–F23)

Table 4. Best_score and Classification Rate of the Classification Datasets (GWO, mGWO, MVGWO and WMGWO on DXOR (36 Dim), DBAL (55 Dim), DCAN (209 Dim) and DH (1081 Dim))


To simulate the classical algorithm (GWO), its variants (mGWO and MVGWO) and the proposed work (WMGWO), the number of search agents is 30 and the maximum number of iterations is 300. To improve accuracy and account for randomness, every algorithm is repeated 30 times; the obtained results are shown in Tables 1, 2 and 3. Average (Avg.) and standard deviation (Std.) are the evaluated parameters, and bold values show the best result for a particular function.

6 Real-World Dataset Problems

To apply the proposed variant to real-world applications, the XOR, Balloon, Cancer and Heart datasets are considered as classification datasets. In addition, the Sigmoid, Cosine and Sine datasets are considered as function approximation datasets. To simulate the application datasets, 200 search agents and a maximum of 200 iterations are used. The dimensionality of the XOR dataset (abbreviated DXOR) is 36; of the Balloon dataset (DBAL), 55; of the Cancer dataset (DCAN), 209; of the Heart dataset (DH), 1081; and of the Sigmoid (DSIG), Cosine (DCOS) and Sine (DSIN) datasets, 46 each. The rest of the details about the datasets can be taken from [7]. Table 4 reports the results on the different classification datasets with Best_score and Classification Rate (%) as the evaluation parameters; bold values depict the best result.

Table 5. Best_score and Test_error of Function Approximation Dataset

Variants   DSIG (46 Dim)             DCOS (46 Dim)            DSIN (46 Dim)
           Best_score  Test_error    Best_score  Test_error   Best_score  Test_error
GWO        0.2464      17.5628       0.1761      4.7553       0.4254      142.6560
mGWO       0.2464      17.5850       0.1761      4.7751       0.4423      145.5733
MVGWO      0.2464      17.5756       0.1759      4.7463       0.4530      149.0522
WMGWO      0.2464      17.5397       0.1759      4.7254       0.3984      135.3883

Table 5 reports the results on the different function approximation datasets with Best_score and Test_error as the evaluation parameters; as above, bold values depict the best result.

7 Conclusion

Meta-heuristic algorithms have become popular over the last two decades because of their strength and ability to solve complex and NP-hard problems. GWO is a distinguished, renowned and recent swarm intelligence algorithm. The No Free Lunch theorem states that no single algorithm exists that solves all kinds of problems and satisfies all related conditions; hence, new variants are proposed day by day to overcome related issues and to solve various kinds of real-life problems. In this research article, a variant of GWO is proposed, namely Weighted Mean GWO (WMGWO) with an exponential function. To check the performance of the proposed variant, a test bed of 23 benchmark functions is employed and the obtained results are compared with the standard GWO and its other variants, mGWO and MVGWO. The results on the unimodal and multimodal benchmark functions show that the proposed variant works properly, i.e. it competes with the other algorithms, and it also provides competitive results on the fixed-dimension multimodal benchmark functions. The proposed variant likewise gives better or comparable results on the classification and function approximation datasets than the standard GWO and the other variants. As future work, the proposed model will be formulated and simulated on higher-dimensional problems, the results will be compared with other swarm and meta-heuristic algorithms, and the CEC 2017 benchmark functions will be considered to evaluate the performance of the proposed variant.

References 1. Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–73 (1992) 2. Davis, L.: Handbook of Genetic Algorithms (1991) 3. Koza, J.R.: Human-competitive results produced by genetic programming. Genet. Program Evolvable Mach. 11(3–4), 251–284 (2010) 4. Kinnear, K.E., Langdon, W.B., Spector, L., Angeline, P.J., O’Reilly, U.M. (eds.) Advances in Genetic Programming, vol. 3. MIT Press (1999) 5. Hansen, N., Kern, S.: Evaluating the CMA evolution strategy on multimodal test functions. In: International Conference on Parallel Problem Solving from Nature, pp. 282–291. Springer, Heidelberg (2004) 6. Jagerskupper, J.: How the (1 + 1) ES using isotropic mutations minimizes positive definite quadratic forms. Theoret. Comput. Sci. 361(1), 38–56 (2006) 7. Auger, A.: Convergence results for the (1, λ)-SA-ES using the theory of φ-irreducible Markov chains. Theoret. Comput. Sci. 334(1–3), 35–69 (2005) 8. Back, T., Hoffmeister, F., Schwefel, H.-P.: A survey of evolution strategies. In: Proceedings of the Fourth International Conference on Genetic Algorithms (1991) 9. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B Cybern. 26(1), 29–41 (1996) 10. Parsons, S.: Ant colony optimization by Marco Dorigo and Thomas Stutzle, MIT Press, 305 pp., $40.00, ISBN 0-262-04219-3. Knowl. Eng. Rev. 20(1), 92–93 (2005) 11. Colorni, A., Dorigo, M., Maniezzo, V.: Distributed optimization by ant colonies. In: Proceedings of the First European Conference on Artificial Life, vol. 142, pp. 134–142, December 1992 12. Yang, X.S.: A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, Heidelberg (2010) 13. Eberhart, R., Kennedy, J.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, November 1995 14. Al-Aboody, N.A., Al-Raweshidy, H.S.: Grey wolf optimization-based energy-efficient routing protocol for heterogeneous wireless sensor networks. In: 2016 4th International Symposium on Computational and Business Intelligence (ISCBI), pp. 101–107. IEEE, September 2016 15. Dudani, K., Chudasama, A.R.: Partial discharge detection in transformer using adaptive grey wolf optimizer based acoustic emission technique. Cogent Eng. 3(1), 1256083 (2016) 16. Jayabarathi, T., Raghunathan, T., Adarsh, B.R., Suganthan, P.N.: Economic dispatch using hybrid grey wolf optimizer. Energy 111, 630–641 (2016)


17. Jitkongchuen, D.: A hybrid differential evolution with grey wolf optimizer for continuous global optimization. In: 2015 7th International Conference on Information Technology and Electrical Engineering (ICITEE), pp. 51–54. IEEE, October 2015 18. Li, L., Sun, L., Guo, J., Qi, J., Xu, B., Li, S.: Modified discrete grey wolf optimizer algorithm for multilevel image thresholding. Comput. Intell. Neurosci. 2017, 1–16 (2017) 19. Li, L., Sun, L., Kang, W., Guo, J., Han, C., Li, S.: Fuzzy multilevel image thresholding based on modified discrete grey wolf optimizer and local information aggregation. IEEE Access 4, 6438–6450 (2016) 20. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997) 21. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014) 22. Mittal, N., Singh, U., Sohi, B.S.: Modified grey wolf optimizer for global engineering optimization. Appl. Comput. Intell. Soft Comput. 2016, 8 (2016) 23. Singh, N.: A modified variant of grey wolf optimizer. Int. J. Sci. Technol. Scientia Iranica (2018). http://scientiairanica.sharif.edu 24. Liang, J., Suganthan, P., Deb, K.: Novel composition test functions for numerical global optimization. In: Proceedings of the 2005 IEEE Swarm Intelligence Symposium, SIS 2005, pp. 68–75 (2005)

Enhanced Homomorphic Encryption Scheme with Particle Swarm Optimization for Encryption of Cloud Data

Abhishek Mukherjee1, Dhananjay Bisen2, Praneet Saurabh3, and Lalit Kane4

1 Technocrats Institute of Technology Advance, Bhopal, MP, India ([email protected])
2 Rajkiye Engineering College, Banda, UP, India ([email protected])
3 Mody University of Science and Technology, Lakshmangarh, Rajasthan, India ([email protected])
4 University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India

[email protected]

Abstract. Cloud computing is a decentralized type of architecture that is vulnerable to various kinds of security attacks. Homomorphic encryption is an encryption scheme used to encrypt the objects through which data on the cloud server is accessed; however, it has the major disadvantages of key management and key sharing, which reduce its efficiency. Particle swarm optimization (PSO) algorithms are nature-inspired, population based meta-heuristic algorithms modelled on the social behaviour of birds and fish; these concepts are used as an inspiration for building a scientific approach to complex problem solving. Depending upon a quality measure, the algorithms enhance the solutions starting from a randomly distributed set of particles, and the improvements are achieved by moving the particles around the search space using a set of mathematical expressions. In this research work, PSO is applied to generate a fixed-length key for the encryption. The PSO based homomorphic algorithm is implemented in MATLAB, and the simulation results show that it performs well in terms of execution time and resource utilization: both are lower for the PSO based homomorphic algorithm than for the homomorphic algorithm, and the result is improved by approximately 10% in the improved algorithm compared to the existing algorithm. Keywords: Particle swarm optimization · Homomorphic · Key management · Nature inspired · Resource utilization

1 Introduction

There are several cryptographic techniques utilized by various security mechanisms [1]. To ensure security within the cloud, cryptographic techniques must be applied. In order to encrypt and decrypt the data, it is important to use a key in this mechanism; in this way the confidentiality and integrity of the data can be protected, and the data being shared in the cloud is secured and stored securely. The technology that designs ciphers is referred to as cryptography. Several cryptographic algorithms have been proposed, broadly categorized into symmetric and asymmetric algorithms, which differ in the manner in which keys are used [2]. In symmetric key encryption, a common secret key is shared by the sender as well as the receiver; it is important to ensure that this key is kept secret by both ends, and messages can be encrypted and decrypted by both parties using this key. In asymmetric key encryption, two different keys are included: the encryption and decryption processes use two separate keys.

Homomorphic encryption [3, 4] makes it possible to perform certain operations on encrypted data without the private key being known; the secret key remains only with the client. Computation is performed on the encrypted data without prior decryption, and when the encrypted result of the operation is decrypted, it matches the result of performing the same calculation on the raw data, while the original plaintext is never revealed. An encryption is homomorphic if Enc(f(a, b)) can be computed from Enc(a) and Enc(b), where the function f can be +, × or ⊕, without the private key being involved. Homomorphic encryption schemes are differentiated by the kinds of operations that can be carried out on the encrypted data: in additive homomorphic encryption the raw values can only be added, while in multiplicative homomorphic encryption the raw values can only be multiplied [6]. Both are depicted below, where a key k (respectively L) is used by an encryption algorithm Ek (EL) and a decryption algorithm Dk (DL):

Dk(Ek(n) × Ek(m)) = n × m, i.e. Enc(x ⊗ y) = Enc(x) ⊗ Enc(y)    (1)

DL(EL(n) × EL(m)) = n + m, i.e. Enc(x ⊕ y) = Enc(x) ⊕ Enc(y)    (2)
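As a concrete toy illustration of the multiplicative property in Eq. (1), unpadded (textbook) RSA with tiny primes is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. This sketch is for illustration only; it is not the scheme used in this work and is not secure, and the parameter values are standard textbook numbers.

```python
# Textbook RSA with toy parameters: n = 61 * 53, e = 17, d = e^-1 mod phi(n).
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e, d = 17, 2753                # 17 * 2753 = 46801 = 15 * 3120 + 1

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 42, 55
c = (enc(a) * enc(b)) % n      # operate on ciphertexts only
assert dec(c) == (a * b) % n   # decrypts to the product of the plaintexts
print(dec(c), (a * b) % n)     # 2310 2310
```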

This research work is concerned with a homomorphic encryption scheme for cloud computing. Section 1 has presented the details of homomorphic encryption. Section 2 presents the related work and describes previous work by other authors. Section 3 describes the proposed methodology and algorithm. Section 4 covers the results and their graphical analysis, while Sect. 5 concludes the paper.

2 Related Work

In recent years, there has been huge growth in the popularity of computing as a service. Minimizing the capital as well as the operating costs of systems is the major objective of this approach; it further provides dynamic scaling and the deployment of new services without the need to maintain a completely dedicated infrastructure [5]. The manner in which organizations look upon their IT resources has thus been transformed largely by cloud computing [6]. Organizations today are adopting cloud computing instead of the single-system scenario with a single operating system and application. A large number of resources are available, and a user can select any number of resources required; the services can be consumed at a rate set up according to the needs of the user, and this on-demand service can be provided at any time. All of the complex operations that were previously handled by the user are taken care of by the CSP in these systems. Providing higher flexibility in comparison to previous systems is the main advantage of cloud computing, and the enterprise IT world has benefitted greatly from it. Various methods suggest that the key challenge in a fully homomorphic scheme is key management and the subsequent key sharing. The next section introduces the proposed method.

Fig. 1. Cloud architecture

As shown in Fig. 1, cloud service providers often provide inadequate security measures for parameters such as confidentiality, integrity and control. To provide secure connections within cloud computing systems, several security mechanisms need to be included; otherwise the integrity of the data may be lost, since an unauthorized user might gain access to private data. Several privacy techniques have been proposed earlier to protect the information of a user within cloud applications; however, the complexity of cloud computing systems has increased to the point where these earlier protection techniques can no longer be applied.

3 Proposed Method

Various encryption techniques exist to secure the cloud scenario. Fully homomorphic encryption is more efficient than full disk encryption, but its key challenge is key management and sharing. This work intends to overcome this problem and design an efficient scheme for key sharing and management. Particle swarm optimization combined with a homomorphic encryption scheme is used to design the intended solution.

Particle swarm optimization algorithms are nature-inspired, population based meta-heuristic algorithms in which the social behaviour of birds and fish is used as the inspiration for a scientific approach. Depending upon a quality measure, the solutions are enhanced by the algorithm starting from a randomly distributed set of particles [7]; the improvement is achieved by moving the particles around the search space using a set of simple mathematical expressions, through which a small amount of inter-particle communication is performed. Each particle moves towards its best experienced position and towards the best position identified for the swarm, with random perturbations also included; several variants that utilize different updating rules are also available. The objective function of particle swarm optimization is defined dynamically: the current iteration and the previous iteration are compared on the basis of the swarm value, and the objective function is identified by considering the swarm value of the latest iteration [8]. Equation 3 describes this objective function, which is dynamic in nature and therefore changes its value after each iteration:

vi+1 = vi + c * rand * (pbest − xi) + c * rand * (gbest − xi)    (3)

In Eq. 3, vi is the velocity of the particle, rand is a random number, xi is the position value for each attribute, and c is the coefficient defined by the total number of attributes (features). The best value identified by an individual particle over the generations is denoted pbest, and the best value over the whole swarm in each iteration is denoted gbest. Once the objective function has been evaluated over each attribute, the value obtained is used in Eq. 4:

xi+1 = xi + vi+1    (4)

where xi+1 is the new position vector. Particle swarm optimization algorithms are applied for solving such multi-objective optimization issues, including the dynamically calculated best value [9]. The particle swarm optimization algorithm takes as input the data to be encrypted and generates the optimized value that becomes the key used for the encryption. Recently, other bio-inspired advances [10] have also gained attention in realizing different goals in this domain [11, 12]. The next subsection presents the Enhanced Homomorphic encryption Algorithm (EHA), which incorporates the concepts of PSO in a quest to overcome the current limitations.

3.1 Enhanced Homomorphic Encryption Algorithm (EHA)

The Enhanced Homomorphic Algorithm (EHA) takes image data as input for encryption. The homomorphic encryption scheme uses symmetric cryptography, and the keys used for the image encryption are generated using the particle swarm optimization algorithm.



The various steps of the Enhanced Homomorphic encryption Algorithm (EHA) are as follows:

Step 1: The data that needs to be encrypted is taken as input.
Step 2: The input data is defined as the initial population for the calculation of the best value using the particle swarm optimization algorithm.
Step 3: A condition is applied for the calculation of the best value: the optimization value calculated after each iteration is compared with the value of the previous iteration, and the iteration at which the value is least is considered the best value.
Step 4: The optimization value after each iteration is calculated with the formula v = v + c1 * rand * (pBest − p) + c2 * rand * (gBest − p).
Step 5: Stop.
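A minimal sketch of these steps is given below, assuming the simplest reading of the procedure: a small PSO loop (Eqs. 3 and 4) searches for a key seed that minimizes an illustrative objective, the resulting gbest is hashed into key bytes, and a plain XOR stream stands in for the symmetric cipher. The objective function, key length and XOR cipher are illustrative assumptions, not details fixed by the paper.

```python
import hashlib
import random

def pso_best(objective, dim=8, particles=20, iters=100, c=1.5):
    # Minimal PSO following Eqs. (3) and (4): velocity update, then position update.
    pos = [[random.uniform(0, 255) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(particles):
            for j in range(dim):
                vel[i][j] += (c * random.random() * (pbest[i][j] - pos[i][j])
                              + c * random.random() * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

def derive_key(seed_vector, length=32):
    # Hash the PSO result into a fixed number of key bytes (assumed step).
    digest = hashlib.sha256(repr(seed_vector).encode()).digest()
    return (digest * (length // len(digest) + 1))[:length]

def xor_crypt(data, key):
    # Symmetric XOR stream: the same call encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

if __name__ == "__main__":
    image = bytes(random.getrandbits(8) for _ in range(256 * 256))  # stand-in image
    # Illustrative objective: prefer seeds whose components are well spread out.
    objective = lambda v: -len({int(x) % 256 for x in v})
    key = derive_key(pso_best(objective))
    cipher = xor_crypt(image, key)
    assert xor_crypt(cipher, key) == image
```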

4 Result Analysis

The Enhanced Homomorphic Algorithm (EHA) is implemented in MATLAB to encrypt data on the cloud. The input data is image data. A symmetric encryption algorithm is implemented for the encryption and decryption of the cloud data, and the particle swarm optimization algorithm is implemented to generate the optimal key that is used to encrypt the data with the homomorphic encryption scheme. The performance of the proposed technique is analyzed in terms of execution time and resource consumption.

Table 1. Simulation parameters

Parameter                    Values
Operating system             Xnon
Number of virtual machines   7
Number of hosts              10
RAM                          5 GB
Input data                   Image data
Image size                   256*256
Number of images             80

The simulation parameters are described in Table 1. The operating system used on each virtual machine is Xnon. There are 7 virtual machines, each with 5 GB of RAM. The number of images is 80, each of size 256*256.


Table 2. Comparison of techniques (execution time)

Image no   Homomorphic encryption   E-Homomorphic encryption
1          2.2 s                    2 s
2          2.7 s                    2.4 s
3          2.6 s                    1.7 s
4          1.9 s                    2.8 s
5          2.1 s                    2.9 s

Homomorphic encryption

E-Homomorphic encryption

1

18 buffer

15 buffers

2

21 buffer

18 buffer

3

23 buffer

20 buffer

4

24 buffer

15 buffer

5

22 buffer

16 buffer

As shown in Table 3, the resource utilization of the proposed and existing scheme is compared for the performance analysis It is analyzed that Enhanced homomorphic scheme (EHA) performs well in terms of all parameters than the conventional homomorphic scheme. As shown in Fig. 2, the execution time of the enhanced homomorphic algorithm is compared with the existing algorithm. The proposed algorithm is the homomorphic encryption scheme and proposed Enhanced homomorphic encryption (EHA). Enhanced homomorphic algorithm (EHA) take less time because the keys are generated using the PSO value. The keys which are generated with PSO are more optimized for the generation of encrypted data.

Enhanced Homomorphic Encryption Scheme

297

Fig. 2. Execution time

Fig. 3. Resource utilization

As shown in Fig. 3, the resources utilization of proposed EHA is compared with conventional homomorphic encryption scheme. From the results it is quite clear that resource utilization of Enhanced homomorphic encryption technique (EHA) is less as compared to existing homomorphic techniques. In the Enhanced homomorphic algorithm (EHA), the keys for the generation of encrypted data are generated using PSO algorithm which is much optimized as compared to manual selection of the keys.

298

A. Mukherjee et al.

5 Conclusion

In this work, homomorphic encryption is used as the scheme to encrypt cloud data. The homomorphic encryption scheme has the major disadvantage of key management and key sharing. Particle swarm optimization (PSO) is the optimization algorithm used to generate the key for the encryption; the generated key is given as input to the homomorphic encryption scheme to generate the encrypted data. The Enhanced Homomorphic algorithm is implemented in MATLAB and the results are analyzed in terms of execution time and resource utilization: the Enhanced Homomorphic algorithm has lower execution time and resource utilization than the existing homomorphic encryption scheme. In future, the technique will be further improved to ensure data integrity in the cloud environment.

References 1. Kanagavalli, R., Vagdevi, S.: A mixed homomorphic encryption scheme for secure data storage in cloud. In: IEEE Intenational Advanced Computing Conference IACC2015, pp 1062–1066 (2015) 2. Lauter, K., Nachirg, M., Vaikuntanathan, V.: Can homomorphic encryption be pratical? In: CCSW 2011, Chicago, Illinois, USA, pp. 113–124 (2011) 3. Tebaa, M., Elhajii, S.: Secure cloud computing through homomorphic encryption. Int. J. Adv. Comput. Technol. 5(16), 29–38 (2013) 4. Parmar, P.V.: Survey of various homomorphic encryption algorithms and schemes. Int. J. Comput. Appl. (0975–8887) 91(8), 26-32 (2014) 5. Song, X., Wang, Y.: Homomorphic cloud computing scheme based on hybrid homomorphic encryption, In: 3rd IEEE International Conference on Computer and Communications, pp 2450–2453 (2017) 6. Oppermann, A., Toro, G.,F., Seifert, J.: Secure cloud computing: communication protocol for multithreaded fully homomorphic encryption for remote data processing. In: IEEE International Symposium on Parallel & Distributed Processing with Applications, pp. 503–510 (2017) 7. Das, D.: Secure cloud computing algorithm using homomorphic encryption and multi-party computation. In: International Conference on Information Networking (ICOIN), pp. 391–396 (2018) 8. Ding, Y., Li, X.: Policy based on homomorphic encryption and retrieval scheme in cloud computing. In: IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), pp. 568–571 (2017) 9. Saurabh, P., Verma, B.: An efficient proactive artificial immune system based anomaly detection and prevention system. Expert Syst. Appl. 60, 311–320 (2016) 10. Saurabh, P., Verma, B., Sharma, S.: An immunity inspired anomaly detection system: a general framework. In: Proceedings of Seventh International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2012). AISC, vol. 202, pp. 417–428. Springer (2012) 11. Saurabh, P., Verma, B., Sharma, S.: Biologically Inspired computer security system: the way ahead. In: Recent Trends in Computer Networks and Distributed Systems Security, Communications in Computer and Information Science, vol. 335, pp. 474–484. Springer (2011) 12. Saurabh, P., Verma, B.: Immunity inspired cooperative agent based security system. Int. Arab J. Inf. Technol. 15(2), 289–295 (2018)

Detection and Prevention of Black Hole Attack Using Trusted and Secure Routing in Wireless Sensor Network

Dhananjay Bisen1, Bhavana Barmaiya2, Ritu Prasad2, and Praneet Saurabh3

1 Rajkiye Engineering College, Banda 210201, U.P., India ([email protected])
2 Technocrats Institute of Technology Advance, Bhopal 462021, M.P., India ([email protected], [email protected])
3 Mody University of Science and Technology, Lakshmangarh, Rajasthan, India

[email protected]

Abstract. A wireless sensor network (WSN) is a network of devices that can communicate over wireless links after gathering information by monitoring a region. Because of this delicate arrangement, a number of attacks can directly affect WSN functions, especially denial of service (DoS), the most popular and frequent of them all; recently, the black hole attack has taken over from it and compromises the security and integrity of the WSN. Secure and reliable data transmission is a prime requirement of WSNs, and newly evolving attacks are threats to this objective. This paper proposes an algorithm that detects black hole attacks in WSNs and recovers from them. The proposed algorithm, Trusted and Secure Routing (TSR), uses detector nodes that move through the network; it identifies black hole attacks, marks the offending node as a black hole and subsequently excludes it from the network, enabling data to be transmitted securely along an alternate path using the detector node. The proposed algorithm increases the performance and the delivery ratio of data in the WSN, and the experimental results show reliable and secure data transmission under DoS and black hole attacks. Keywords: WSN · DoS · Blackhole · Security · False alarm · PDR

1 Introduction

A wireless sensor network (WSN) is a network of devices that can communicate over wireless links after gathering information by monitoring a region [1]. A WSN uses sensors that sense properties such as vibration, electromagnetic strength, light, temperature and humidity, and transfer the gathered data to sensors that help pass the data on. A WSN has sensing as well as communication functionality and works in different modules [2]. The central module of a WSN detects malicious nodes and keeps this information within the network, but malicious data injection and the detection of false alarms still face pertinent issues [3]. A WSN always strives to realize availability, security [2] and reliability of its routing protocols. The fundamentals of trust lie in locating DoS and black hole attacks; however, establishing the trust of a node is very challenging in a WSN [4]. Trust, security and routing are the main challenges in WSNs [5], and data should be transmitted securely irrespective of black hole and DoS attacks in the network [6]. This paper proposes an algorithm that detects black hole attacks in WSNs and recovers from them. The proposed algorithm uses detector nodes that move through the network; it identifies black hole attacks, marks the offending node as a black hole and subsequently excludes it from the network, enabling data to be transmitted securely along an alternate path using the detector node. The proposed Trusted and Secure Routing (TSR) increases the performance and the delivery ratio of data in the WSN, and the experimental results show reliable and secure data transmission under DoS and black hole attacks.

The paper is organized as follows. Section 2 provides the related literature, Sect. 3 presents the proposed algorithm, Sect. 4 provides the implementation and result analysis, and Sect. 5 concludes the paper.

2 Related Work Wireless sensor networks (WSN) offers connectivity through wireless link and then it collects data from various sensors deployed to achieve this task. WSN creates trust key model with a defense arrangement that utilizes grouping procedure to dynamically forward data packets [7]. **Routing in wireless network is not the same as in mobile adhoc systems [8]. WSN wireless associations are inconsistent and direction finding rules requires significant energy. Since, wireless sensors are energy deficient therefore secure and safe routing is paramount requirement of WSN. Presence of blackhole not only degrades the performance of WSN but also inflicts loss of trust in WSN [9]. Existing techniques and solutions only detects bad mounting connections and provide location and time based attacks. Various techniques for overcoming this situation have been developed and deployed. A trust distrust protocol for secure routing into wireless sensor system network is proposed that consisted of four stages. The first stage used an enhanced k-means procedure topology management, subsequent stage had test fitness estimation, next step employed fitness value grade point to mark every node and last step determined secure route for the routing according to grade point [10]. Illiano et al. [11] used available information of recommendation based trust model for the MANET and efficaciously realized the limitation in context of blackhole and location and time based attacks. The proposed algorithm will detect black hole based attacks in the network and informed to the network. Ma et al. [12] in their research pointed about a novel procedure to recognize malicious node affected by blackhole attack and also constructed dimension estimations that proved resilient to numerous compromised sensors. Subsequently, Magistretti et al. [13] performed dimension based investigations, and quantified that all the blackholes are related to measurements under unaffected environments and interrupt such connections. The drawbacks of the scheme are that the dimensions encompass and duplicate information. Son et al. [14] provided information about routing security in their method and detected blackhole attacks. Li et al. [15] in their work illustrated that like MANET, hosts in WSN are particularly defenseless to all attacks. Route discovery and creation are based nohe same mechanism of sending RREQ packet to the all the neighboring node for path but malicious node reply for RREQ complicates the routing. This whole process actually makes WSN vulnerable to new attacks and packet routed through them causing high packet drop ratio. In recent times some researchers


explored this domain through various bio-inspired techniques [16] that have successfully attained different objectives in this domain [17–19]. The proposed Trusted and Secure Routing (TSR) is designed to detect blackhole-based attacks in the network and then inform the network about them.

3 Proposed Method

This section presents the Trusted and Secure Routing (TSR) algorithm to overcome the problem of blackholes in WSN.

Fig. 1. Flow diagram of the proposed algorithm (initialization of the number of nodes, threshold values, transmission and destination nodes, RQ packet and required area; initial trust key calculation from the timestamp; broadcast of the RQ packet for route discovery; trust/confidence key check to mark nodes as authenticated or invalid; attack check; route confirmation; secure data transmission with backup nodes and a route maintenance step).

Figure 1 shows the flow diagram of the proposed algorithm. The first block lists the initialization parameters, the rectangles represent the processing steps, and the decision boxes represent the conditions checked by the proposed Trusted and Secure Routing (TSR). Initially, all required parameters are provided as input to the algorithm.


The parameters include the source node, the number of nodes, the destination node, etc. All threshold values are also supplied to the algorithm.


Initially, all mandatory information is filled into the route request packet (RQ) of the source node. The RQ packet is then broadcast to construct the route request and search for a route to the destination. The request is acknowledged by an intermediate node or by the destination node. If the received request is a duplicate, it is simply discarded. If the received request is fresh, or an updated route is established, the routing information entry for the source node is updated and a reverse route towards the source node is built or updated. The next step is to check the information of the receiving node. If the receiving node is either an intermediate node or the target node with a newer route, all mandatory information is again filled into the RQ packet of the source node; otherwise, the mandatory field values are taken from the received RQ, the compulsory fields in the RQ are updated, and the RQ packet is rebroadcast.

The next step is to check whether the sending node is the target node. If it is, the destination sequence number is increased, a reply (RP) packet is filled with the mandatory fields, and the RP packet is unicast along the reverse route towards the source. An intermediate node or the source node records the mandatory field values from the received RP and attaches the corresponding recorded values to the RP. If the neighbor forwarding the RP is marked as blacklisted, the RP is discarded; otherwise, if a fresh or updated route is found, the forwarding table entry for the destination node is updated. If the receiving node is the original source node, the RP is consumed and data is sent along the forward direction if the route is newer and the next hop is reliable; otherwise the RP packet is forwarded along the reverse route towards the source node.

The next step is to update trust. For each neighbor information entry, the presence of attack information from the neighbor is verified and the trust value of the neighbor node is estimated. If the neighbor carries attack information, it is identified as a mistrusted node; otherwise, if the neighbor has no attack information and is recommended as trusted, it is identified as a trusted node. For each routing information entry, the following steps are repeated: the information of the next hop is looked up in the neighbor information, and if the next hop is found to be mistrusted, a local route discovery process is started to identify an alternative path.

The next step is belief recommendation. An empty blacklist is created for reference, and for each neighbor information entry the following step is performed: if the neighbor is identified as a mistrusted node, its identity is added to the blacklist. The blacklist is then integrated into the hello data packet and the hello data packet is broadcast


to the neighbors. When a hello data packet is received from a neighbor, and the neighbor sending the HELLO packet is trusted, the blacklist is taken from the hello data packet; for each entry in the blacklist, the equivalent entry in the neighbor route table is looked up, and if the neighbor information exists, its reference value is set to mistrusted. Trusted and Secure Routing (TSR) thus increases the performance and the data delivery ratio of the network. The experimental outcomes show that the system provides safe data transmission, secure against DoS and blackhole attacks.
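To make the trust-update and blacklist-propagation step above more concrete, the following Python sketch illustrates one possible reading of it. It is not the authors' implementation; the names node, neighbor_table, attack_reported and recommended_trusted are assumptions introduced only for this example.

def update_trust(node):
    # Re-evaluate every neighbor entry and mark attackers as mistrusted.
    for nbr_id, info in node.neighbor_table.items():
        if info.get("attack_reported"):           # neighbor carries attack information
            info["trusted"] = False               # identify the node as mistrusted
        elif info.get("recommended_trusted"):     # no attack information and recommended as trusted
            info["trusted"] = True

def build_blacklist(node):
    # Collect mistrusted neighbors into the blacklist carried by the hello packet.
    return [nbr_id for nbr_id, info in node.neighbor_table.items()
            if info.get("trusted") is False]

def process_hello(node, sender_id, hello_packet):
    # Adopt the sender's blacklist only if the sender itself is trusted.
    sender = node.neighbor_table.get(sender_id, {})
    if sender.get("trusted"):
        for bad_id in hello_packet.get("blacklist", []):
            if bad_id in node.neighbor_table:
                node.neighbor_table[bad_id]["trusted"] = False   # set reference value to mistrusted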

4 Result Analysis

This section presents the experimental setup and the experiments carried out to measure the performance of Trusted and Secure Routing (TSR) and to compare it with the current state of the art (AODV).

Table 1. Simulation parameters

Parameter                      Value
MAC layer protocol             802.11
Traffic type                   CBR-UDP
Routing protocol               AOMDV
Initial energy                 1 J
Number of nodes                50
Packet size                    1024
Frequency range                1025 GHz
Received power                 0.01 W
Transmitted power              0.02 W
Simulation area                1500 × 1500
Mobility model                 Random waypoint
Maximum mobility               5 m/s to 25 m/s
Percentage of malicious nodes  0% to 50%
Simulation time                200 to 1000 s
Number of connections          10
Communication range            250 m
Channel bandwidth              2 Mbps

Table 1 lists the simulation parameters used for the implementation, such as the area dimensions, total number of nodes, traffic type, transmission rate, routing protocol, transmission range, sensitivity and transmission power. The detailed performance parameters on which the results are obtained and analyzed are given below.

(i) Simulation area: the region in which the simulation is performed. Different simulation areas are used for the implementation, such as 500 × 500 and 850 × 1200.
(ii) Simulation duration: the overall time elapsed in the complete execution of the simulation. The simulation duration is 100 s for the experiments.
(iii) Average delay: this metric depicts the freshness of data packets. It is defined as the average period between the moment a data packet is sent by a source and the instant the sink receives it.
(iv) Number of mobile nodes: the simulations use 30 and 50 nodes, with and without mobility.
(v) Transmission range: the distance over which information can be communicated reliably. The transmission range is 250 m in the simulation.
(vi) Data delivery ratio (R): this metric reflects both the loss ratio of the path and routing technique and the energy required to deliver data packets. It denotes the ratio between the number of data packets sent by the source and those received by the sink.
(vii) Movement model: the random waypoint model is used for the simulation.
(viii) Traffic type: the type of traffic used by the simulation environment; CBR traffic is used for the implementation.
(ix) Maximum node speed: maximum node speeds of 5 m/s to 30 m/s are used in the simulation.
(x) Packet rate: 2 packets per second are used for the implementation.
(xi) Data payload: different data payloads are used in the implementation; in the experiments the payload ranges from 28 to 512 bytes.
(xii) Protocol: the set of rules for data communication; the AODV protocol is used for the implementation.
(xiii) Neighbor discovery probability: the probability of discovering a neighbor for data transmission.
(xiv) Neighbor discovery latency: the latency of a node during neighbor discovery.

Figure 2 shows the throughput analysis of the attack and of the security arrangement as the number of nodes in the network increases. The attacker's aim is to drop data packets or to hold resources so that communication is affected. Overall, the existing approach shows maximum throughput with minimal security, while the proposed approach trades a small amount of throughput for maximum security. The packet delivery performance of the blackhole scenario and of the security scheme is depicted in Fig. 3, again with an increasing number of nodes. In a WSN, when a blackhole is introduced, data packets are dropped and consequently the data delivery percentage decreases. The newly introduced detector nodes identify blackhole attacks in the network; the identified node is blacklisted and excluded from the network, so that a different secure path is established to complete the transmission. Before this, the packet drop ratio was at its maximum, and after using the detector nodes the packet drop ratio becomes minimal.


Fig. 2. Throughput analysis (throughput vs. number of nodes for AODV and Trusted and Secure Routing (TSR))

Fig. 3. PDR analysis (packet drop percentage vs. number of nodes for AODV and Trusted and Secure Routing (TSR))

As shown in Fig. 4, the energy consumption of the existing ActiveTrust method is higher than that of the proposed method; in other words, the proposed method reduces energy consumption compared to ActiveTrust. Figure 4 clearly shows that the proposed method requires less energy than the existing method as the number of nodes in the network increases.


Fig. 4. Energy consumption analysis (energy consumption vs. number of nodes for AODV and Trusted and Secure Routing (TSR))

5 Conclusions

In a WSN, reliable data transmission can be achieved only when no malicious node is present in the network. In the presence of malicious nodes and false alarms, a WSN finds it very difficult to continue transmission. Data packets need to be transmitted securely irrespective of blackhole attacks or malicious information in the WSN. This paper introduced a technique that protects the network from blackhole and DoS attacks by identifying the attack in the WSN. The proposed system automatically detects the compromised node and then authenticates a secure path to achieve communication. The proposed method also prevents blackhole attacks and establishes trust by blacklisting the attacked node and making the route safe. The experimental results demonstrated that the proposed method outperforms the existing methods and enhances energy efficiency in the WSN.

References 1. Cao, Q., Abdelzaher, T., Stankovic, J., Whitehouse, K., Luo, L.: Declarative tracepoints: a programmable and application independent debugging system for wireless sensor networks. In: Proceedings of the ACM SenSys, Raleigh, NC, USA, pp. 85–98 (2008) 2. Shu, T., Krunz, M., Liu, S.: Secure data collection in wireless sensor networks using randomized dispersive routes. IEEE Trans. Mobile Comput. 9(7), 941–954 (2010) 3. Souihli, O., Frikha, M., Hamouda, B.M.: Load-balancing in MANET shortest-path routing protocols. Ad Hoc Netw. 7(2), 431–442 (2009) 4. Khan, S., Prasad, R., Saurabh, P., Verma, B.: Weight based secure approach for identifying selfishness behavior of node in MANET. In: Satapathy, S., Tavares, J., Bhateja, V., Mohanty, J. (eds.) Information and Decision Sciences. Advances in Intelligent Systems and Computing, vol. 701, pp. 387–397. Springer, Singapore (2017) 5. Aad, I., Hubaux, J.-P., Knightly, W.E.: Impact of denial of service attacks on ad hoc networks. IEEE-ACM Trans. Netw. 16(4), 791–802 (2008)


6. Mandala, S., Jenni, K., Ngadi, A., Kamat, M., Coulibaly, Y.: Quantifying the severity of blackhole attack in wireless mobile adhoc networks. In: Security in Computing and Communications. Springer, Heidelberg (2014) 7. Liu, Y., Dong, M., Ota, K., Liu, A.: ActiveTrust: secure and trustable routing in wireless sensor networks. IEEE Trans. Inf. Forensics Secur. 11(9), 2013–2028 (2016) 8. Dong, M., Ota, K., Liu, A., Guo, M.: Joint optimization of lifetime and transport delay under reliability constraint wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 27(1), 225–236 (2016) 9. Liu, X., Dong, M., Ota, K., Hung, P., Liu, A.: Service pricing decision in cyber-physical systems: insights from game theory. IEEE Trans. Serv. Comput. 9(2), 186–198 (2016) 10. Dong, W., Liu, Y., He, Y., Zhu, T., Chen, C.: Measurement and analysis on the packet delivery performance in a large-scale sensor network. IEEE/ACM Trans. Netw. 22(6), 1952–1963 (2014) 11. Illiano, P.V., Lupu, C.E.: Detecting malicious data injections in event detection wireless sensor networks. IEEE Trans. Netw. Serv. Manag. 12(3), 496–512 (2015) 12. Ma, Q., Liu, K., Zhu, T., Gong, W., Liu, Y.: BOND: exploring hidden bottleneck nodes in large-scale wireless sensor networks. In: Proceedings of the IEEE ICDCS, Madrid, Spain, pp. 399–408 (2014) 13. Magistretti, E., Gurewitz, O., Knightly, E.: Inferring and mitigating a link’s hindering transmissions in managed 802.11 wireless networks. In: Proceedings of the ACM MobiCom, Chicago, IL, USA, pp. 305–316 (2010) 14. Son, D., Krishnamachari, B., Heidemann, J.: Experimental analysis of concurrent packet transmissions in low-power wireless networks. In: Proceedings of the ACM SenSys, San Diego, CA, USA, pp. 237–250 (2005) 15. Li, X., Ma, Q., Cao, Z., Liu, K., Liu, Y.: Enhancing visibility of network performance in large-scale sensor networks. In: Proceedings of the IEEE ICDCS, Madrid, Spain, pp. 409–418 (2014) 16. Saurabh, P., Verma, B.: An efficient proactive artificial immune system based anomaly detection and prevention system. Expert Syst. Appl. 60, 311–320 (2016) 17. Saurabh, P., Verma, B.: Immunity inspired cooperative agent based security system. Int. Arab J. Inf. Technol. 15(2), 289–295 (2018) 18. Saurabh, P., Verma, B., Sharma, S.: An immunity inspired anomaly detection system: a general framework. In: Proceedings of Seventh International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2012). Advances in Intelligent Systems and Computing, vol. 202, pp. 417–428. Springer (2012) 19. Saurabh, P., Verma, B., Sharma, S.: Biologically inspired computer security system: the way ahead. In: Recent Trends in Computer Networks & Distributed Systems Security. CCIS, vol. 335, pp. 474–484. Springer (2011)

Recursive Tangent Algorithm for Path Planning in Autonomous Systems

Adhiraj Shetty(B), Annapurna Jonnalagadda(B), and Aswani Kumar Cherukuri(B)

School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
[email protected], [email protected], [email protected]

Abstract. Autonomous drones play a vital role in disaster mitigation systems and commercial goods delivery systems. The problem involves finding the shortest path between the delivery points while simultaneously avoiding stationary obstacles (for example, high-rise buildings) and moving obstacles such as other drones. The path needs to be continuously changed based on the telemetry from other drones or on the addition of new way-points, which is a major issue in planning problems. Any algorithm will have to make complex choices, such as abandoning the shortest path to avoid collisions. In this paper we propose a tangent algorithm which chooses paths based on several performance measures, such as the number of obstacles in the current and future path and the distance to the next obstacle. The resulting path has very few sharp turns, and the locations of these turns are calculated during path planning. The performance evaluation in different environments demonstrates that the algorithm is particularly fast in the case of both sparse and dense obstacles.

Keywords: Autonomous systems · Path planning · Recursive algorithm · Tangent algorithm

1 Introduction

The research dealt with in this paper concerns path planning and collision avoidance for unmanned aerial vehicles. The drones in this environment need to avoid each other and other stationary obstacles and find the shortest path to their next delivery point. In recent times, this subject has become one of increasing interest. There are many pre-existing algorithms one can consider, but many of these require high computational power and still do not generate the most optimal path. No-fly zones can be modeled as cylinders with their base on the ground, with a radius and centre chosen so that the entire area is covered, and with a suitable height. Moving obstacles such as other drones can also be modeled as cylinders centered at the drone. Most of the time the obstacles will be sparse; in that case the tangent algorithm quickly returns the straight-line path in one step.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 309–318, 2021. https://doi.org/10.1007/978-3-030-49336-3_31


Many solutions have been formulated to both detect and avoid collisions among UAVs or between UAVs and other obstacles. Each of these algorithms takes a different view of the problem and therefore has its own benefits and drawbacks [1]. Algorithms like the Generalized Roundabout Policy [2] and Multiple Party Collision Avoidance [3] use a decentralized collision-avoidance approach [11]. However, decentralizing a system takes up much more computational power than using a single central unit, and it might not be commercially feasible to fit every UAV with the necessary hardware. Artificial Potential Fields [4] involves hypothetically giving each UAV a negative charge and the next way-point a positive charge. The main drawback of this algorithm is the tendency of the UAVs to get stuck at points of local minima in the potential field instead of moving to the way-point (the global minimum). In fuzzy logic, the inputs are fuzzified, i.e. made more general than specific numbers [5]. Due to the algorithm's inexact nature, it is very important to implement the translation of values to concepts properly to obtain accurate results.

Fig. 1. Pygame simulation: comparison between tangent algorithm (left) and RRT (right) in situations with sparse or no obstacles in the way.

Algorithms like Rapidly Exploring Random Tree (RRT) [7] and the A* algorithm [6] take a long time even when obstacles are sparse, as shown in Fig. 1, because they are based on randomly testing out each possibility. The tangent algorithm is a better solution, as it directly tries to draw a straight line and finish the process; even in the case of obstacles it makes only minor deviations from the straight-line path. Moreover, paths generated by RRT, A* and other algorithms that move around randomly may need a lot of smoothening, as the paths are (in most cases) made up of small lines with different slopes, each small line representing one decision made by the agent. The tangent algorithm only needs smoothening at the few locations where new tangents are drawn, because it directly draws a single long line and tries to reach the next point in one step. Thus, autonomous drones will be safer, as the chances of having to take sharp turns are lower [12]. As can be seen in Fig. 1, the RRT agent takes an unnecessarily longer path because a longer sub-tree happened to reach the goal first. The aim behind the inception of this algorithm is to create a method for path planning and obstacle avoidance that makes use of simple geometry like circles


and tangents. Therefore, students with basic knowledge of geometry and recursion can create working code for autonomous aerial systems. The aim was also to create an algorithm suitable for fixed-wing systems, that is, one that generates the shortest path with a minimum number of sharp turns in the fewest iterations. Furthermore, we wished to create an algorithm whose decision-making process can be customized, that is, through the use of different performance measures and different weights corresponding to the importance of each performance measure. In this paper, we propose a new algorithm for path planning and obstacle avoidance and compare it with other path-planning algorithms. Our main contributions are as follows:
– A new algorithm for path planning and obstacle avoidance in autonomous systems that uses a speculative approach, in which a path is assumed and then corrected if a collision is detected.
– Simulation results show that it is much faster in comparison to existing algorithms like A* and RRT.
– The performance measures used by the algorithm can be customized.

2 Proposed System 1

The algorithm takes any number of way-points and any number of cylindrical obstacles. It starts from the first way-point and, if the height is sufficient, it passes over the obstacles to get to the next way-point. If the obstacle height is significantly greater, the cylinders are considered as circles from the top view and tangents are drawn to the closest circle blocking the way to the next way-point. If there are no obstacles in the way, the algorithm draws the straight-line path. The goal is to reach the next way-point while choosing the path with the least number of obstacles and the least distance.

Fig. 2. Pygame simulation of the tangent algorithm.

The lines drawn in black in Fig. 2 are all the unsuccessful attempts. First, the agent tries to reach the target through a straight path; if an obstacle is detected, two tangents are drawn to that obstacle and the better one is chosen. The algorithm first selects the tangent which intersects the least number of obstacles.


If the algorithm detects that both tangents have the same number of obstacles, the decision is made by drawing two straight lines from the ends of the two current tangents to the destination and checking their obstacles; if there is still a tie, the decision is made on the basis of the distance between the end point of each tangent and the next way-point.

Fig. 3. Extension and gap.

A gap is maintained between the tangent and the obstacle. The tangent is extended beyond the point of contact with the circle to prevent the agent from recursively operating on the same circle, as shown in Fig. 3. The tangent is extended by a factor directly proportional to the radius of the obstacle and inversely proportional to the distance between the start point and the centre of the circle. Between the first and second way-point (as can be seen in Fig. 2), the agent has to abandon the shortest path because that path would take the drone out of the boundary set for the flight. Each drone is encompassed in a cylinder that other drones cannot enter. Stationary obstacles are also modelled in the same way regardless of their shape.

Fig. 4. Decision made based on three measures.

For example, in Fig. 4:
1. Draw path AB. Not possible.
2. Draw tangents AT1 and AT2 on the nearest obstacle.


3. Check the number of obstacles obstructing AT1 and AT2. Both are zero.
4. Next, check the number of obstacles obstructing T1B and T2B. T1B has zero and T2B has one, so the T1 tangent is chosen.
5. Since the previous test passed, there is no need to check the distances of T1B and T2B.

Encountering fewer obstacles is given more priority than distance in order to maintain completeness: when the way-point is surrounded by obstacles and there is only one small gap, the drone needs to focus on getting out of the tight situation even if more distance is covered to reach the destination.
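A minimal Python sketch of this three-measure decision is given below; it is illustrative only, and the helpers count_obstacles() and dist() (which could be built on the collision() procedure of Algorithm 1) are assumptions, not part of the original paper.

def choose_tangent(start, goal, t1, t2, obstacles):
    # Measures, in decreasing priority: obstacles on start->tangent,
    # obstacles on tangent->goal, distance tangent->goal.
    m1 = (count_obstacles(start, t1, obstacles),
          count_obstacles(t1, goal, obstacles),
          dist(t1, goal))
    m2 = (count_obstacles(start, t2, obstacles),
          count_obstacles(t2, goal, obstacles),
          dist(t2, goal))
    # Lexicographic tuple comparison applies the measures in the stated priority order.
    return t1 if m1 <= m2 else t2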

3 Proposed System 2

Alternatively, we can ignore the three measures and use threads to test out every possibility. Tangents are not chosen based on any measure; instead, all tangents are expanded. Each tangent calls two more tangents, and this process continues until all paths reach the destination or quit. The distance is calculated for every path and the shortest path is chosen. This method takes more time and processing power than the first method, but it ensures an optimal solution.

4 Algorithm

The algorithm for the first proposed system takes two input arrays: a way-point array, each element of which is a tuple of x and y coordinates, and an obstacle array, each element of which is a tuple of x coordinate, y coordinate and radius. This algorithm is a two-dimensional simulation and adds the gap and the extension, but it does not recursively reduce them if there is a collision within the extension; therefore, the smallest possible values of these are chosen from the start. For each pair of consecutive way-points, a rectangle is drawn with the two way-points as diagonally opposite corners. The x-coordinate of the two rightmost points is increased by the diameter of the largest obstacle, the x-coordinate of the two leftmost points is decreased by the same amount, the y-coordinate of the two topmost points is increased by it, and the y-coordinate of the two bottom-most points is decreased by it. Thus each consecutive pair of way-points has its own rectangle. Each obstacle is then taken, and if the end points of its two diameters parallel to the x-axis and y-axis lie within the rectangle, it is added to the list. If there are n way-points, then list[n − 1] has each element as a list of coordinates of the obstacles within the corresponding rectangle; cor is an array containing the four corners of the boundary. The extension and gap are added by creating vectors and extending them in the "Add gap" and "Add Extension" sections of the first part of the algorithm. Here, s1 and s2 are the points of contact of the tangent with the circular obstacle, (x, y) is the center of the obstacle, and st is the start way-point from which the tangents are drawn.
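As a small illustration of this pre-filtering step, the sketch below builds the expanded rectangle for a way-point pair and keeps only the obstacles whose axis-parallel diameter end points fall inside it. The names and the strict all-end-points reading are assumptions made for this example.

def obstacles_between(p1, p2, obstacles):
    # obstacles: list of (cx, cy, r) tuples; p1, p2: consecutive way-points
    pad = 2 * max(r for (_, _, r) in obstacles)          # diameter of the largest obstacle
    xmin, xmax = min(p1[0], p2[0]) - pad, max(p1[0], p2[0]) + pad
    ymin, ymax = min(p1[1], p2[1]) - pad, max(p1[1], p2[1]) + pad
    selected = []
    for (cx, cy, r) in obstacles:
        ends = [(cx - r, cy), (cx + r, cy), (cx, cy - r), (cx, cy + r)]  # diameter end points
        if all(xmin <= x <= xmax and ymin <= y <= ymax for (x, y) in ends):
            selected.append((cx, cy, r))
    return selected

# list[i] then holds the candidate obstacles for the segment waypoint[i] -> waypoint[i+1]:
# candidates = [obstacles_between(a, b, obstacles) for a, b in zip(waypoints, waypoints[1:])]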


Algorithm 1 Path planning and obstacle avoidance using tangent algorithm

procedure collision(st, ed)
    Intersect the line between st and ed with each circle in list[count] and find the points of intersection.
    closest obs no = circle with point of intersection closest to st point
    obs count = number of circles intersecting
    return closest obs no, obs count

procedure tangent(st, ed, ob)
    Using point st, draw two tangents on obstacle[ob]    ▷ the pole-polar concept in conics can be used to intersect the polar with the circle and get the 2 points
    Let the points of contact be s1 and s2.
    Add gap:
        x = obstacle[ob][0]    ▷ x-coordinate of obstacle
        y = obstacle[ob][1]    ▷ y-coordinate of obstacle
        sdx1 = x + 1.1*(s1[0]-x)    ▷ Equation 1
        sdx2 = x + 1.1*(s2[0]-x)
        sdy1 = y + 1.1*(s1[1]-y)
        sdy2 = y + 1.1*(s2[1]-y)
    Add Extension:
        d1 = distance between st and (sdx1, sdy1)
        d2 = distance between st and (sdx2, sdy2)
        r = obstacle[ob][2]    ▷ radius of obstacle
        t1[0] = st[0] + ((r+d1)/d1)*(sdx1-st[0])    ▷ Equation 2
        t2[0] = st[0] + ((r+d2)/d2)*(sdx2-st[0])
        t1[1] = st[1] + ((r+d1)/d1)*(sdy1-st[1])
        t2[1] = st[1] + ((r+d2)/d2)*(sdy2-st[1])
    return t1, t2

start = waypoint[0]
end = waypoint[1]
count = 0

procedure main()
    while start ≠ waypoint[n] do
        draw straight line between start and end
        closest obs no, obs count = collision(start, end)
        if obs count == 0 then
            set straight line as final path
            start = end
            end = next way-point in the list
            count++
            restart loop
        t1, t2 = tangent(start, end, obs no)
        out of boundary1 = boundary(start, t1, cor)
        obs no1, obs count1 = collision(start, t1)
        out of boundary2 = boundary(start, t2, cor)
        obs no2, obs count2 = collision(start, t2)
        if out of boundary1 == 1 then
            if obs count2 == 0 then
                set start to t2 as final path
                start = t2
                restart loop
            else
                end = t2
                restart loop


Algorithm 2 Path planning and obstacle avoidance using tangent algorithm (contd.)

procedure main continued()
    while previous while loop continued do
        Similarly, check if the second tangent goes out of the flight boundary.
        if obs count1 > obs count2 then
            if obs count2 == 0 then
                set start to t2 as final path
                start = t2
                restart loop
            end = t2
            restart loop
        else if obs count2 > obs count1 then
            if obs count1 == 0 then
                set start to t1 as final path
                start = t1
                restart loop
            end = t1
            restart loop
        else
            closest obs no3, obs count3 = collision(t1, end)
            closest obs no4, obs count4 = collision(t2, end)
            if obs count3 > obs count4 then
                if obs count2 == 0 then
                    set start to t2 as final path
                    start = t2
                    restart loop
                end = t2
                restart loop
            else if obs count4 > obs count3 then
                if obs count1 == 0 then
                    set start to t1 as final path
                    start = t1
                    restart loop
                end = t1
                restart loop
            else
                Calculate distance between t1 and end = d1, and between t2 and end = d2
                if d1 > d2 then
                    if obs count2 == 0 then
                        set start to t2 as final path
                        start = t2
                        restart loop
                    end = t2
                    restart loop
                else if d2 > d1 then
                    if obs count1 == 0 then
                        set start to t1 as final path
                        start = t1
                        restart loop
                    end = t1
                    restart loop
                else
                    choose any tangent


(sdx1, sdy1) and (sdx2, sdy2) are vectors from the centre of the obstacle towards the points of contact that are slightly longer than the radius, and thus give the points with the gap. In Eq. 1, the vector sdx1 starts at x and extends in the direction (s1[0] − x), that is, towards the point of contact; the direction vector is multiplied by 1.1 to obtain the extension in that direction, and the digits after the decimal point can be increased or decreased to increase or decrease the gap. Similarly, in Eq. 2 a vector is created between the start way-point and the point of contact with gap obtained in Eq. 1. The extension factor, 1 + (r/d1), is a function of the radius of the obstacle and the distance between the start way-point and the point of contact with gap. This function always returns a suitable extension factor: the larger the circle, the larger the extension required to cross it, and if the way-point is very close to the circle (that is, d1 is very small), the tangent drawn will be short and will thus require a larger extension.
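The following Python sketch puts Eqs. (1) and (2) into runnable form. The tangent-contact-point formula is standard circle geometry and is not quoted from the paper; the gap factor 1.1 follows the pseudocode above.

import math

def tangent_contact_points(st, centre, r):
    # Contact points of the two tangents drawn from st to a circle (centre, r); assumes st lies outside.
    dx, dy = st[0] - centre[0], st[1] - centre[1]
    d = math.hypot(dx, dy)
    theta = math.acos(r / d)          # angle between centre->st and centre->contact point
    base = math.atan2(dy, dx)
    s1 = (centre[0] + r * math.cos(base + theta), centre[1] + r * math.sin(base + theta))
    s2 = (centre[0] + r * math.cos(base - theta), centre[1] + r * math.sin(base - theta))
    return s1, s2

def gap_and_extension(st, centre, r, s, gap=1.1):
    # Eq. (1): push the contact point s away from the centre by the gap factor.
    sd = (centre[0] + gap * (s[0] - centre[0]), centre[1] + gap * (s[1] - centre[1]))
    # Eq. (2): extend the tangent beyond sd by the factor (r + d1) / d1 = 1 + r / d1.
    d1 = math.hypot(sd[0] - st[0], sd[1] - st[1])
    return (st[0] + ((r + d1) / d1) * (sd[0] - st[0]),
            st[1] + ((r + d1) / d1) * (sd[1] - st[1]))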

5 Performance Analysis

Completeness: The algorithm is complete. Since the obstacles have gaps between them, a tangent can always be drawn past any obstacle to reach the goal node.
1. The tangent undergoes a fixed extension based on a function that is inversely proportional to the distance between the way-point and the centre of the obstacle and directly proportional to the radius of the obstacle. This is done to add some distance between the end of the tangent and the obstacle.
2. A processor can also make rounding errors during the calculation and draw the tangent slightly inside the obstacle, which might lead to mistakes in the obstacle-detection function. So, while drawing a tangent, a small gap is maintained between the tangent line and the obstacle to which it was drawn.
If the gap between two obstacles is very tight and small, points 1 and 2 above may cause problems in drawing tangents. Thus, the code checks whether there is any collision within the extended length of point 1, so that the extension can be reduced recursively until it fits; also, the gap in point 2 is set to a very small value.
Optimal path: The algorithm picks the most suitable solution based on three performance measures:
1. The number of obstacles intersecting the tangent drawn from the start point.
2. The number of obstacles intersecting the line between the end point of the tangent and the goal point that will be drawn using the tangent chosen in the previous step.
3. The distance between the end point of the tangent and the goal point.
The measures are given in order of decreasing priority. Thus, the decision is made not only on the obstacle count of the current tangent but also on the future consequences of choosing that tangent. The priority of the three measures can be changed to generate the most suitable result.
Time complexity: By testing the algorithm for different numbers of obstacles, we determined that the best-case complexity is O(n) and the worst case could


go up to O(n²), where n is the number of obstacles lying between the two way-points.
Testing the tangent algorithm against RRT: We tested the pygame codes for the tangent algorithm and RRT in the IPython console and compared their run-times using the time package in Python. Both algorithms were tested with two way-points and two obstacles (Fig. 5).

Fig. 5. Run-times for RRT

RRT is a probability-based algorithm. Sometimes the algorithm gets lucky and a shorter tree reaches the next way-point faster. As such, the algorithm showed run-times ranging from 1.5 s all the way up to 16 s when there were minor changes in the coordinates of the way-points (Fig. 6).

Fig. 6. Run-time for tangent algorithm

On the other hand, the tangent algorithm showed a near-constant time of around 0.0019 s or less, even with major changes in the locations of way-points or obstacles. This shows that the tangent algorithm is faster and more reliable. The constant run-times are due to the fact that the same tangent is most likely to be chosen if there are only small changes in the coordinates of the way-points. All tangents are checked before one is chosen for the path, so even if a different tangent is chosen it does not affect the run-time. The tangent algorithm gives even shorter run-times if there are no obstacles in the way because of its speculative approach of always trying the straight


line path first. On the other hand, RRT still takes the same amount of time because it still generates its random tree and waits for a branch to reach the next way-point.

References 1. Lalish, E., Morgansen, K.A.: Decentralized reactive collision avoidance for multivehicle systems. In: Proceedings of the 47th IEEE Conference on Decision and Control, December 2008 2. Pallottino, L., Scordio, V.G., Frazzoli, E., Bicchi, A.: Probabilistic verification of a decentralized policy for conflict resolution in multi-agent systems. In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, May 2006 3. Samek, J., Sislak, D., Volf, P., Pechoucek, M.: Multi-party collision avoidance among unmanned aerial vehicles. Technical report, Czech Technical University in Prague (2007) 4. Ruchti, J., Senkbeil, R., Carroll, J., Dickinson, J., Holt, J., Biaz, S.: UAV collision avoidance using artificial potential fields. Technical report, Auburn University, July 2011 5. Dadios, E.P., Maravillas Jr., O.A.: Cooperative mobile robots with obstacle and collision avoidance using fuzzy logic. In: Proceedings of the 2002 IEEE International Symposium on Intelligent Control, October 2002 6. Crescenzi, T., Kaizer, A., Young, T.: Collision avoidance in UAVs using dynamic sparse a*. Technical report, Auburn University (2011) 7. Ferguson, D., Stentz, A.: Anytime RRTs. In: Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2006 8. Radmanesh, M., Kumar, M., Guentert, P.H., Sarim, M.: Overview of path-planning and obstacle avoidance algorithms for UAVs: a comparative study. Unmanned Syst. 06(02), 95–118 (2018) 9. Radmanesh, M., Kumar, M., Guentert, P.H., Sarim, M.: Grey wolf optimization based sense and avoid algorithm in a Bayesian framework for multiple UAV path planning in an uncertain environment. Aerosp. Sci. Technol. 77, 168–179 (2018) 10. Radmanesh, M., Guentert, P.H., Kumar, M., Cohen, K.: Analytical PDE based trajectory planning for unmanned air vehicles in dynamic hostile environments. In: 2017 American Control Conference (ACC), pp. 4248–4253. IEEE (2017) 11. Eaton, C.M., Chong, E.K.P., Maciejewski, A.A.: Multiple-scenario unmanned aerial system control: a systems engineering approach and review of existing control methods. Aerospace 3, 1 (2016) 12. Singha, N.K., Hotab, S.: Optimal path planning for fixed-wing UAVs in 3D space, IIT Kharagpur, India, December 2017

Marathi Handwritten Character Recognition Using SVM and KNN Classifier

Diptee Chikmurge1,2(B) and R. Shriram2

1 MIT Academy of Engineering, Pune, Maharashtra, India
[email protected], [email protected]
2 VIT, Bhopal, Madhya Pradesh, India
[email protected]

Abstract. Marathi handwritten character recognition is one of the most challenging tasks in the Optical Character Recognition (OCR) research domain. OCR is needed to convert Marathi handwritten documents or scripts into editable text; the proposed work addresses this need, reducing the burden of storage space and of data entry for forms in the Marathi language, and converting degraded historical documents into editable text. Moreover, handwritten Marathi characters tend to be more complicated due to their structure, shape, several strokes, and different writing styles. Character recognition involves four necessary procedures: pre-processing of the input character images, segmentation of characters in words, extraction of features of the segmented characters, and classification, in order to recognize Marathi characters written in different styles. In this paper, a handwritten Marathi single character is accepted as input, its features are extracted using the Histogram of Oriented Gradients method (HOG), and the characters are classified using a Support Vector Machine (SVM) and the K-Nearest Neighbor algorithm (KNN).

Keywords: Optical Character Recognition (OCR) · Histogram of Oriented Gradients (HOG) · Support Vector Machine (SVM) classification algorithm · K-Nearest Neighbor (KNN) classification algorithm

1 Introduction

OCR is an automation technique that transforms physical static documents into editable and searchable text. Nowadays, OCR has become a crucial research area of artificial intelligence, pattern recognition, and computer vision. Automatic character recognition is an application-specific area of OCR. OCR has been successfully implemented and used for the English language. The Devanagari script is used in most Indian languages, such as Marathi, Hindi, Sanskrit, and Punjabi. Many historical documents have been written in Devanagari script, and archaeologists are unable to maintain these scripts because they suffer from typical degradation problems. Degraded historical documents therefore need to be converted to text to avoid the loss of ancient literature and to make it usable for the next generation. OCR has been implemented both for online character recognition and for offline character recognition (printed and handwritten characters).

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Abraham et al. (Eds.): HIS 2019, AISC 1179, pp. 319–327, 2021. https://doi.org/10.1007/978-3-030-49336-3_32


Handwritten character recognition is more complicated than printed character recognition because of the different styles of different writers in different moods, different pen points, distinct writing surfaces, and various writing speeds, with different sizes, shapes, strokes, and curves in each character. This paper concentrates on the implementation of automatic Marathi handwritten character recognition, as about nine crore people in Maharashtra use the Marathi language for documentation.

1.1 About Marathi Language Structure

The Devanagari script used for the Marathi language consists of 12 vowels and 36 consonants, as shown in Fig. 1 and Fig. 2, respectively. The Marathi character set is categorized into basic and compound characters. The basic characters are the group of vowels and consonants, and compound characters are a combination of a vowel and a consonant or a combination of two consonants.

Fig. 1. Marathi vowels.

Fig. 2. Marathi consonants.

2 Related Work

This section covers the literature on handwritten character recognition for various Devanagari scripts, including single characters, compound characters, and numeric digits. Ramteke, S.P., Gurjar, A.A., et al. (2018) explain a streamlined OCR framework [1] for handwritten Marathi text document recognition that applies the Curvelet Transform for feature extraction from preprocessed characters. An SVM classifier is used on the feature vectors, whose dimensionality is reduced by PCA, and Adaptive Cuckoo Search (ACS) is finally applied for optimization. Raj, M.A.R., and Abirami, S. (2012) reviewed many research articles [2] on Tamil character recognition and conclude that the latest research on Tamil handwritten character recognition has suggested many alternatives, but satisfactory accuracy and performance have not yet been achieved. Joshi, N. (2017) designed [3] and tested a combined framework of a k-nearest-neighbor network and a multilayer neural network for recognizing digits, extracting features with Gabor filters.


Vijaya Kumar Reddy, R., and Ravi Babu, U. (2019) [4] present and compare the results of a handwritten Hindi character recognition system using different deep learning techniques. Hirwani, A., Verma, N., and Gonnade, S. (2014) used an LBP feature extractor [5] to extract features of English alphabets and classify them using a nearest-neighbor classifier. Handwritten compound characters in Devanagari script were recognized [6] using Zernike moments by Kale, K.V., et al. (2013) with an SVM classifier; the Zernike-moment-based feature extraction method is rotation invariant and has been successfully applied to many character pattern recognition problems. Singh, S., Aggarwal, A., and Dhir, R. (2012) demonstrated Gurmukhi handwritten character recognition [7] with Gabor filters and an SVM classifier; in this experiment an RBF kernel was implemented for the SVM classifier. Herekar, R.R., and Dhotre, S.R. (2014) explored the recognition of English [8] characters by applying a zoning method with the Euler number and end points for the classification of characters. Hamid, N.A., and Sjarif, N.N.A. (2017) applied HOG for feature extraction and compared the results of SVM, KNN and a multilayer perceptron neural network [9]; the comparison concluded that SVM and KNN give better results than the multilayer perceptron neural network. Kamble, P.M., and Hegadi, R.S. (2015) [10] segmented handwritten Marathi characters, retrieved the features of the segmented characters with R-HOG, and demonstrated an experiment using SVM and a feed-forward neural network for classification on a dataset of 8000 characters. Gujarati scripts contain complex characters which can result in incorrect classification, so Naik, V.A., and Desai, A.A. (2019) implemented [11] Gujarati handwritten character recognition using multilayer classification, in which classification proceeds in two layers, the first layer using an SVM polynomial kernel and the second layer using an SVM linear kernel; the feature set fed to the SVM layers combines features retrieved using zoning and dominant-point-based normalized chain codes. Patil, C.H., and Mali, S.M. simplified [12] Marathi handwritten character recognition using multilevel classification, classifying characters into classes such as characters with a bar and without a bar; bar and no-bar characters are further classified into characters with enclosed and non-enclosed regions, and bar characters with enclosed regions are divided into one-component and two-component characters. Rao, T.K., Chowdary, et al. (2019) carried out experiments [13] on printed and handwritten characters using corner points to extract text from documents and Features from Accelerated Segment Test (FAST) to extract the document from the image; this work is applicable to multilingual documents. Vaidya, M., et al. (2019) proposed [14] a feature extraction method for the recognition of Marathi numerals using a one-dimensional Discrete Cosine Transform (1-D DCT) algorithm that reduces the dimensionality of the numeral feature space, with classification carried out by a neural network. Rajput, G.G., and Anita, H.B. (2010) proposed [15] a novel method using the Discrete Cosine Transform (DCT) and wavelets of the Daubechies family for feature extraction towards multi-script identification at the block level. Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik (1999) [16] implemented histogram-based image classification using SVM. Kamble, P.M., and Hegadi, R.S. (2017) compared the SVM and KNN classifiers [17] for handwritten Marathi character recognition.
As per their experimental results, the SVM gave the best results for handwritten character recognition.


3 Input Image Preprocessing

The dataset is composed of single Marathi letters written in different handwriting styles and was downloaded from Kaggle. It contains 58,000 handwritten character images covering 12 vowels, 36 consonants, and 10 digits. The single characters need to be preprocessed to obtain better recognition results, so the organized dataset is passed through the preprocessing module, followed by the feature extraction step. Preprocessing requires several steps to remove noise and improve the quality of the acquired image: noise and disturbances in the image are reduced to some extent, broken tiny character strokes are connected, and edge detection, region filling, normalization, and segmentation are applied. The characters are preprocessed with the following steps:
1. Acquire the image in a suitable format (.jpeg).
2. Transform the image to grayscale.
3. Convert the image to a binary image by selecting a threshold value of 0.9 to separate the character from the background.
4. Detect edges using the Canny method.
5. Remove unwanted strokes and fill regions using morphological processing.
6. Crop the character by segmenting it with a bounding box around the character.
7. Normalize each character to bring all input Marathi characters to a uniform size.
The input image of size 1600 × 1600, shown in Fig. 3, undergoes the preprocessing steps; Fig. 4 shows the output of separating the character from the background, the character is then cropped with a bounding box in Fig. 5, and finally the cropped character is used for feature extraction after preprocessing (Fig. 6).
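As an illustration only (not the authors' code), the steps above could be realised with OpenCV roughly as follows; the threshold of 0.9 corresponds to about 230 on a 0–255 grayscale, and the output size of 64 × 64 is an assumption for this sketch.

import cv2
import numpy as np

def preprocess(path, out_size=64):
    img = cv2.imread(path)                                        # 1. acquire the image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # 2. grayscale
    _, binary = cv2.threshold(gray, int(0.9 * 255), 255,
                              cv2.THRESH_BINARY_INV)              # 3. binarise; character becomes foreground
    edges = cv2.Canny(binary, 100, 200)                           # 4. Canny edges (kept only for inspection here)
    kernel = np.ones((3, 3), np.uint8)
    clean = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)     # 5. fill gaps, remove tiny strokes
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))  # 6. bounding box around the character
    cropped = clean[y:y + h, x:x + w]
    return cv2.resize(cropped, (out_size, out_size))              # 7. normalise to a uniform size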

Fig. 3. Input image

Fig. 4. Binary image

Fig. 5. Bounding box

Fig. 6. HOG image

4 Feature Extraction

The features of the preprocessed image are extracted using the HOG method [18] to recognize the character. HOG serves as a feature descriptor for object detection and recognition in image processing: the HOG method [10] counts occurrences of gradient orientations in localized portions of a character image. The Marathi handwritten character is described with the HOG method by measuring the gradient magnitude and direction at each pixel with a Sobel filter.


After the normalization preprocessing step, the next step involves computing the gradient values by applying a mask in both the vertical and horizontal directions. Precisely, this approach consists in filtering the grayscale image with the following Sobel filter kernels for the horizontal and vertical directions:

S_x = [−1 0 1],  S_y = [1 0 −1]^T  (1)

So, for a given character image I, the x and y derivatives are obtained using a convolution operation with the horizontal and vertical Sobel filter kernels:

I_x = I ∗ S_x,  I_y = I ∗ S_y  (2)

The gradient magnitude is given by

|S| = (I_x² + I_y²)^0.5  (3)

and the orientation of the gradient is given by

θ = tan⁻¹(I_x / I_y)  (4)

After gradient computation, the next step is to generate a histogram of gradients in cells. The histogram of gradients is computed for each cell, and the histogram channels are spread uniformly over 0 to 360° or 0 to 180°, depending on whether the gradient is "signed" or "unsigned". In the HOG visualization, each cell is drawn with a set of direction vectors, and the magnitude of each bin is shown by the luminance of the corresponding direction vector. The HOG feature of a single Marathi handwritten character is thus measured as follows:

α_i = |S| if θ ∈ Bin_i, and 0 otherwise  (5)
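The computation in Eqs. (1)–(5) can be sketched in NumPy as below. The cell size of 8 × 8 pixels and the 9 unsigned orientation bins are assumptions for this example; the paper does not fix them here.

import numpy as np

def hog_features(img, cell=8, bins=9):
    img = img.astype(float)
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[:, 1:-1] = img[:, 2:] - img[:, :-2]            # horizontal derivative with the [-1 0 1] kernel (Eq. 2)
    iy[1:-1, :] = img[:-2, :] - img[2:, :]            # vertical derivative with the transposed kernel
    mag = np.hypot(ix, iy)                            # gradient magnitude |S| (Eq. 3)
    ang = np.rad2deg(np.arctan2(iy, ix)) % 180.0      # gradient orientation folded to [0, 180) (cf. Eq. 4)
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r + cell, c:c + cell].ravel()
            m = mag[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)   # magnitude-weighted bins (Eq. 5)
            feats.append(hist)
    return np.concatenate(feats)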

5 Classification Algorithm

5.1 Support Vector Machine Algorithm

SVM is a popular classifier used for segregating classes in the extracted feature space. The SVM classifier was initially proposed and implemented by Vapnik [16]. Here, SVM is used for handwritten Marathi character classification by segregating the characters into 58 classes. The primary purpose of the classifier is to recognize the class of a given character using a large training dataset.


The main objective of the supervised SVM algorithm is to identify separating hyperplanes that are linear with respect to the extracted features of the single characters. The optimal hyperplane is selected and arranged in such a way that it separates the feature space into different classes with maximum margin. New unseen test feature data are classified based on the optimal hyperplane, which forms the decision boundary. In the SVM algorithm, the prediction for unseen feature data depends on the distance from the decision hyperplane, so the decision function is

f(x) = wᵀx + b = 0  (8)

where w and b are the decision hyperplane parameters that separate feature data of the positive class from the negative class. Here, w is the weight associated with the feature dimensions: a feature w_i makes a significant contribution to the classification when its value is high, whereas its contribution is ignored if w_i is equal to zero. The selection of a suitable hyperplane is the major task in the SVM algorithm [11, 17]: it should classify the characters such that the distance from the nearest points on each side is maximized. Once the hyperplane is computed, it is used to predict the character class. The hypothesis function h(x_i) for class assignment is

h(x_i) = +1 if w·x + b > 0, and −1 if w·x + b < 0  (9)

Character features above the hyperplane belong to class +1, and character features below the hyperplane belong to class −1. The objective function of the SVM is minimized over w, b and ξ to compute the optimal output of the SVM. The SVM [16] introduces slack variables ξ, ideally ranging between 0 and 1. If ξ = 0, the feature is classified correctly with a sufficient margin; if ξ > 1, the feature is not classified correctly; and if its value ranges between 0 and 1, the feature is classified correctly but with a smaller margin. This parameter is known as the margin violation, which should ideally keep samples beyond the margin and on the appropriate side of the hyperplane. The SVM objective function thus becomes

(1/2) wᵀw + C Σ ξ_i  (10)

The term C is used to control the number of misclassified feature vectors, and the value of C is selected by the user of the algorithm to assign a larger penalty to wrongly recognized classes.

5.2 K-Nearest Neighbor Algorithm (KNN)

The k-nearest neighbor classifier [19, 20] is one of the earliest supervised classifiers; it is a non-linear and non-parametric classifier for performing pattern classification tasks. The k-nearest neighbor algorithm (KNN) is used for classification, in which the input for each query consists of the k nearest training examples in the feature space. A handwritten character is classified by a majority vote of its neighbors, being assigned to the class most common among its k


nearest neighbors. The Euclidean distance function is used in KNN to compute the distance between neighbors:

Euclidean_Distance(x, y) = ( Σᵢ₌₀ⁿ (xᵢ − yᵢ)² )^0.5 / n  (11)

In the experiment, the KNN model for Marathi handwritten character classification uses k = 9 neighbors to classify the characters based on the feature space.
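A hedged scikit-learn sketch of this classification stage is shown below: SVM and KNN (k = 9) trained on the HOG feature vectors with the 75%/25% split described in the next section. X and y (the HOG features and the 58 class labels) are assumed to come from the earlier steps, and the SVC kernel and C value are illustrative choices, not values reported by the paper.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf", C=10).fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=9, metric="euclidean").fit(X_train, y_train)

for name, clf in [("SVM", svm), ("KNN", knn)]:
    pred = clf.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, pred))
    cm = confusion_matrix(y_test, pred)    # 58 x 58 matrix of actual vs. predicted classes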

6 Experimental Result

In our experiment, the SVM and KNN classifiers were applied to the single Marathi handwritten character dataset from the Kaggle standard dataset, prepared by Shalaka Deore, which contains 58,000 character images. This dataset covers a total of 58 types of Marathi characters, composed of 12 vowels, 36 consonants, and 10 digits, with images of size 1600 × 1600 pixels. The features of the single characters are extracted using the HOG method, and the classifiers assign the characters to their respective classes. The performance of the SVM and KNN classifiers is summarized by the confusion matrices [21] in our experiment, as depicted in Figs. 7 and 8. 75% of the dataset was used for training and 25% for testing with both classifiers. The observations of the SVM and KNN classifiers in the training phase are displayed by the confusion matrices. In our experiment, the predicted and actual classification

Fig. 7. KNN

Fig. 8. SVM


data are stored in the confusion matrix produced by the SVM and KNN classification models. The observations for the KNN and SVM classifiers are shown in Figs. 7 and 8. The overall accuracy of the KNN model is 90%, whereas the accuracy of the SVM model is 95% on the testing dataset. As per our experimental study, the SVM gave the best classification results.

7 Conclusion

This paper proposed to extract the features of single Marathi handwritten characters using the HOG method and to classify the characters into 58 classes. The performance of the SVM classifier is better than that of the KNN classifier on the selected testing dataset. The performance of the classifiers is affected by characters with corners and by the splitting of characters, which makes them appear as separate characters, so the classification makes mistakes in recognizing such characters. It is also evident from the experimental results that the interpretation of handwritten Marathi characters depends mostly on the style of the individual writer.

References 1. Ramteke, S.P., Gurjar, A.A., Deshmukh, D.S.: A streamlined OCR system for handwritten Marathi text document classification and recognition using SVM-ACS algorithm. Int. J. Intell. Eng. Syst. 11(3), 186–195 (2018) 2. Raj, M.A.R., Abirami, S.: A survey on Tamil handwritten character recognition using OCR techniques. In: The Second International Conference on Computer Science, Engineering and Applications (CCSEA), vol. 5, pp. 115–127 (2012) 3. Joshi, N.: Combinational neural network using Gabor filters for the classification of handwritten digits. arXiv preprint arXiv:1709.05867 (2017) 4. Vijaya Kumar Reddy, R., Ravi Babu, U.: Handwritten Hindi character recognition using deep learning techniques. Int. J. Comput. Sci. Eng. 7, 1–7 (2019). https://doi.org/10.26438/ijcse/ v7i2.17 5. Hirwani, A., Verma, N., Gonnade, S.: Efficient handwritten alphabet recognition using LBP based feature extraction and nearest neighbor classifier. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 4(11) (2014) 6. Kale, K.V., Chavan, S.V., Kazi, M.M., Rode, Y.S.: Handwritten Devanagari compound character recognition using Legendre moment: an artificial neural network approach. In: 2013 International Symposium on Computational and Business Intelligence, pp. 274–278. IEEE (2013) 7. Singh, S., Aggarwal, A., Dhir, R.: Use of gabor filters for recognition of handwritten Gurmukhi character. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2(5), 234–240 (2012) 8. Herekar, R.R., Dhotre, S.R.: Handwritten character recognition based on zoning using Euler number for English alphabets and numerals. IOSR J. Comput. Eng. 16(4), 75–88 (2014) 9. Hamid, N. A., & Sjarif, N. N. A.: Handwritten recognition using SVM, KNN and neural network. arXiv preprint arXiv:1702.00723 (2017) 10. Kamble, P.M., Hegadi, R.S.: Handwritten Marathi character recognition using R-HOG Feature. Procedia Comput. Sci. 45, 266–274 (2015) 11. Naik, V.A., Desai, A.A.: Multi-layer classification approach for online handwritten gujarati character recognition. In: Computational Intelligence: Theories, Applications and Future Directions-Volume II, pp. 595–606. Springer, Singapore (2019)


12. Patil, C.H., Mali, S.M.: Handwritten Marathi consonants recognition using multilevel classification. Int. J. Comput. Appl. 975, 8887 (2019) 13. Rao, T.K., Chowdary, K.Y., Chowdary, I.K., Kumar, K.P., Ramesh, C.: Optical Character Recognition from Printed Text Images (2019) 14. Vaidya, M., Joshi, Y., Bhalerao, M., Pakle, G.: Discrete cosine transform-based feature selection for Marathi numeral recognition system. In: Bhatia, S.K., Tiwari, S., Mishra, K.K., Trivedi, M.C. (eds.) Advances in Computer Communication and Computational Sciences. AISC, vol. 760, pp. 347–359. Springer, Singapore (2019). https://doi.org/10.1007/978-98113-0344-9_30 15. Rajput, G.G., Anita, H.B.: Handwritten script recognition using DCT and wavelet features at block level. IJCA, Special issue on RTIPPR (3), 158–163 (2010) 16. Chapelle, O., Haffner, P., Vapnik, V.N.: Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks 10(5), 1055–1064 (1999) 17. Kamble, P.M., Hegadi, R.S.: Comparative study of handwritten Marathi characters recognition based on KNN and SVM classifier. In: International Conference on Recent Trends in Image Processing and Pattern Recognition, pp. 93–101. Springer, Singapore (2016) 18. Li, J., Zhang, H., Zhang, L., Li, Y., Kang, Q., Wu, Y.: Multi-scale HOG feature used in object detection. In: Tenth International Conference on Graphics and Image Processing (ICGIP 2018), vol. 11069, p. 110693U. International Society for Optics and Photonics, May 2019 19. Hazra, T.K., Singh, D.P., Daga, N.: Optical character recognition using KNN on custom image dataset. In: 2017 8th Annual Industrial Automation and Electromechanical Engineering Conference (IEMECON), pp. 110–114. IEEE, August 2017 20. Duan, Z.: Characters recognition of binary image using KNN. In: Proceedings of the 4th International Conference on Virtual Reality, pp. 116–118. ACM, February 2018 21. Freitas, C.O., De Carvalho, J.M., Oliveira, J., Aires, S.B., Sabourin, R.: Confusion matrix disagreement for multiple classifiers. In: Iberoamerican Congress on Pattern Recognition, pp. 387–396. Springer, Heidelberg, November 2007

Whale Optimization Algorithm with Exploratory Move for Wireless Sensor Networks Localization
Nebojsa Bacanin, Eva Tuba, Miodrag Zivkovic, Ivana Strumberger, and Milan Tuba(B)
Singidunum University, 11000 Belgrade, Serbia
{nbacanin,mzivkovic,istrumberger}@singidunum.ac.rs, {etuba,tuba}@ieee.org

Abstract. In the modern era, with the development of new technologies such as cloud computing and the internet of things, there is a greater focus on wireless distributed sensors, distributed data processing and remote operation. The low price and miniaturization of sensor nodes have led to a large number of applications, such as military operations, forest fire detection, remote surveillance, volcano monitoring, etc. The localization problem is among the greatest challenges in the area of wireless sensor networks, as routing and energy efficiency depend heavily on the positions of the nodes. A survey of the computer science literature shows that swarm intelligence metaheuristics have generated compelling results in the wireless sensor network localization domain. This paper presents a modified and improved whale optimization swarm intelligence algorithm that incorporates the exploratory move operator from the Hooke-Jeeves local search method and applies it to localization in wireless sensor networks. Moreover, we compare the proposed improved whale optimization algorithm with its original version, as well as with other algorithms tested on the same model and data sets, in order to evaluate its performance. Simulation results demonstrate that the presented hybridized approach obtains more accurate and consistent locations of the unknown nodes in the wireless network topology than the other algorithms included in the comparative analysis.
Keywords: Node localization · Wireless sensor networks · Swarm intelligence · Hybridization · Whale optimization algorithm

1 Introduction

Wireless sensor networks (WSNs) consist of many small and cheap wireless devices, i.e. sensor nodes, used to detect different phenomena in the physical world. Due to very limited resources, every node can process just a small portion of the collected data. However, a large number of nodes working together can measure a given physical variable very precisely. WSNs rely on the coordination of a large number of nodes in a dense layout to perform their task [1]. Today, with the development of novel technologies and computing paradigms like cloud computing and the internet of things (IoT), the focus is on wireless distributed sensors, distributed data processing and remote operation [7]. Additionally, the low price and miniaturization of sensor nodes have led to a large number of applications, such as military operations, forest fire detection, remote surveillance, volcano monitoring, etc.

The exact location of the target phenomenon is often unknown; distributed sensor nodes therefore allow better coverage and closer positioning, which is very important in hostile areas such as war zones or radioactive areas. Additionally, the monitored area usually does not have any existing infrastructure such as telecommunications or power supply. In such an environment, sensors are deployed randomly, must operate with limited resources, and communicate wirelessly [23]. It can safely be assumed that their exact positions are not known. Therefore, one of the main challenges in the WSN domain is localization, which refers to finding the positions of the deployed sensors. Using the global positioning system (GPS) is not feasible, because sensors are limited in terms of computing power and available energy.

The WSN localization problem is NP-hard by nature, and classical (for example, deterministic) algorithms cannot be applied due to their high complexity and often unacceptable computational time [3]. For tackling this problem, stochastic algorithms such as swarm intelligence are capable of generating satisfying solutions within a relatively short interval. Swarm intelligence metaheuristics fall into the category of bio-inspired optimization methods. According to the literature, these approaches have been successfully applied to many complex NP-hard real-life problems. Some examples of swarm algorithms with many practical applications include: artificial bee colony (ABC) [22], fireworks algorithm (FWA) [19,20], brain storm optimization (BSO) [18], monarch butterfly optimization (MBO) [17], firefly algorithm [21] and tree growth algorithm (TGA) [16]. Moreover, many hybridized swarm algorithms exist [10,12,15], as well as implementations for the WSN localization problem [11,13,14].

The research presented in this paper aims at further enhancements in tackling the WSN localization problem by applying swarm intelligence algorithms. We propose a hybrid whale optimization swarm intelligence algorithm that adopts the exploratory move operator from the Hooke-Jeeves local search method, adapted for solving the localization challenge.

The structure of the paper can be summarized as follows. The mathematical formulation of the localization problem used in the simulations is given in Sect. 2. Sect. 3 presents the hybridized whale optimization algorithm tuned for the node localization problem. In Sect. 4, the simulation environment, the accomplished results and side-by-side comparisons are given. Sect. 5 concludes the paper and outlines directions for future research.

2 Background and Proposed Model

In the WSN topology, there are typically two basic types of sensors: anchors and targets. Anchors usually utilize GPS to determine their location. Target nodes are randomly distributed in the target area and their locations must be estimated by applying localization algorithms. The estimation is usually conducted in two phases [5]. In the first phase, known as the ranging phase, methods estimate the distances between unknown sensor nodes and neighboring anchors. In the second phase, the positions of the sensors are estimated by applying geometric principles.

The objective of localization is to estimate the coordinates of the randomly distributed sensors (targets) so as to minimize the objective function. The position of a target node is estimated by a range-based localization technique. In the first phase, the received signal strength indicator (RSSI) metric was used to assess the distance between the target and anchor nodes, and that signal was corrupted by Gaussian noise. In the second phase, the positions of the target nodes were estimated using trilateration together with the results from the ranging phase. For the trilateration technique to work, the distances between at least three anchors and the node with unknown location should be known in advance. Since measurements are imprecise in both phases, swarm intelligence can be utilized to minimize the localization error.

The $M$ target and $N$ anchor sensors are randomly deployed in a 2D environment with the range of transmission denoted as $R$. The distance between each target node and the anchors in its range is given by $\hat{d}_i = d_i + n_i$, where $n_i$ is additive Gaussian noise and $d_i$ is the real distance determined by the following expression:

$$d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2}, \qquad (1)$$

where the target and anchor node positions are represented as $(x, y)$ and $(x_i, y_i)$, respectively. The variance of $n_i$, the noise that affects the measured distance between anchors and target sensors, is given as:

$$\sigma_d^2 = \beta^2 \cdot P_n \cdot d_i, \qquad (2)$$

where $P_n$ is the percentage of noise in the distance measurement $d_i \pm d_i (P_n/100)$, and $\beta$ is a parameter whose value is usually set to 0.1 in practical implementations.

A target node is localizable if there are at least three anchors with known positions $A(x_a, y_a)$, $B(x_b, y_b)$, and $C(x_c, y_c)$ within its transmission range $R$, at distances $d_i$ from the target node. The swarm intelligence metaheuristic is executed independently for each localizable target node to estimate its position. Artificial individuals are initialized within the centroid of the anchor nodes by using the following expression:

$$(x_c, y_c) = \left( \frac{1}{N} \sum_{i=1}^{N} x_i, \; \frac{1}{N} \sum_{i=1}^{N} y_i \right), \qquad (3)$$

where $N$ denotes the number of anchors within the target node's range.

The objective function of the node localization problem, $f(x, y)$, is formulated as the mean squared error between the estimated and measured target-anchor distances, given in Eq. (4), where $N \geq 3$ [5]:

$$f(x, y) = \frac{1}{N} \sum_{i=1}^{N} \left( \sqrt{(x - x_i)^2 + (y - y_i)^2} - \hat{d}_i \right)^2, \qquad (4)$$

The localization error $E_L$ is given by Eq. (5) as the mean Euclidean distance between the estimated coordinates $(X_i, Y_i)$ and the real node coordinates $(x_i, y_i)$:

$$E_L = \frac{1}{N_L} \sum_{i=1}^{N_L} \sqrt{(x_i - X_i)^2 + (y_i - Y_i)^2} \qquad (5)$$

The efficiency of the algorithm is measured by the average value of the localization error $E_L$ and the number of non-localized sensors $N_{NL}$, where $N_{NL} = M - N_L$.
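For illustration, the following Python sketch (not code from the paper; the anchor layout, noise level, and function names are assumptions made here) shows how the ranging model of Eqs. (1)-(2) and the measures of Eqs. (4)-(5) can be evaluated for a candidate position.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_ranges(target, anchors, pn=5.0, beta=0.1):
    """Eqs. (1)-(2): true distances corrupted by Gaussian noise whose
    variance grows with the distance (sigma_d^2 = beta^2 * Pn * d_i)."""
    d = np.linalg.norm(anchors - target, axis=1)      # real distances d_i
    sigma = np.sqrt(beta**2 * pn * d)
    return d + rng.normal(0.0, sigma)                  # d_hat_i = d_i + n_i

def objective(pos, anchors, d_hat):
    """Eq. (4): mean squared difference between estimated and measured ranges."""
    est = np.linalg.norm(anchors - pos, axis=1)
    return np.mean((est - d_hat) ** 2)

def localization_error(estimated, real):
    """Eq. (5): mean Euclidean distance between estimated and real positions."""
    return np.mean(np.linalg.norm(estimated - real, axis=1))

# Toy example: one target, four anchors in a 100 x 100 area
anchors = rng.uniform(0, 100, size=(4, 2))
target = rng.uniform(0, 100, size=2)
d_hat = noisy_ranges(target, anchors)
centroid = anchors.mean(axis=0)                        # Eq. (3): initial guess
print(objective(centroid, anchors, d_hat))
```

Any of the metaheuristics discussed below can then be used to minimize `objective` over the 2D deployment area, starting the population around the anchor centroid of Eq. (3).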

3 Hybridized Whale Optimization Algorithm

The original WOA was introduced in 2016 by Mirjalili and Lewis [9] for tackling unconstrained and constrained continuous optimization problems [6,8]. In this paper, a hybridized WOA is presented. The search process of the WOA mathematically models the hunting behavior of humpback whales. In nature, humpback whales cooperate while hunting their prey by performing a distinctive strategy, referred to in the literature as the bubble-net feeding strategy: the whales chase small fish by producing a spiral bubble path that surrounds the prey while swimming up towards the ocean surface.

The WOA search process is conducted by simultaneously performing diversification (exploration) and intensification (exploitation) phases. The exploitation process models the encircling of prey and the spiral bubble-net strategy, while the exploration emulates the search for prey. In the exploitation phase, each candidate solution searches its neighborhood and is directed towards the location of the current global best solution. Once the fitness of every solution in the population has been calculated, the positions of all solutions are updated with respect to the location of the fittest solution [9]:

$$\vec{D} = |\vec{C} \cdot \vec{X}^*(t) - \vec{X}(t)| \qquad (6)$$

$$\vec{X}(t+1) = \vec{X}^*(t) - \vec{A} \cdot \vec{D}, \qquad (7)$$

where $\vec{X}(t)$ and $\vec{X}^*(t)$ denote the candidate and current best solutions in iteration $t$, and $\vec{A}$ and $\vec{C}$ represent coefficients given by the following expressions [9]:

$$\vec{A} = 2\vec{a} \cdot \vec{r} - \vec{a} \qquad (8)$$

$$\vec{C} = 2 \cdot \vec{r} \qquad (9)$$

The original WOA version emulates the bubble-net strategy by utilizing the expression [9]:

$$\vec{a} = 2 - t \, \frac{2}{maxIter}, \qquad (10)$$

where $t$ and $maxIter$ represent the current and maximal iteration numbers, respectively. The second mechanism that guides the exploitation process executes in two steps: first, the distance between the fittest solution $\vec{X}^*(t)$ and the current solution $\vec{X}(t)$ in iteration $t$ is calculated; then, a new (updated) candidate solution $\vec{X}(t+1)$ is determined by using a spiral equation [9]:

$$\vec{X}(t+1) = \vec{D} \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(t), \qquad (11)$$

where $\vec{D}$ is here defined as $\vec{D} = |\vec{X}^*(t) - \vec{X}(t)|$, $b$ is a constant used to define the shape of the logarithmic spiral, and $l$ denotes a pseudo-random number between -1 and 1. The whales simultaneously move around the prey along a spiral path while shrinking the encircling circle, which is simulated by choosing between the shrinking and the spiral-shaped path in each iteration with equal probability $p$:

$$\vec{X}(t+1) = \begin{cases} \vec{X}^*(t) - \vec{A} \cdot \vec{D}, & \text{if } p < 0.5 \\ \vec{D} \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(t), & \text{if } p \geq 0.5 \end{cases} \qquad (12)$$

The exploration phase is conducted by updating each candidate solution in the population with respect to the position of a randomly chosen solution, rather than the global fittest solution as in the exploitation process. The following expression models the WOA exploration phase [9]:

$$\vec{X}(t+1) = \vec{X}_{rnd}(t) - \vec{A} \cdot \vec{D}, \qquad (13)$$

where $\vec{D}$, the distance between the $i$-th candidate and the random solution $\vec{X}_{rnd}$ from the population at iteration $t$, is given by $\vec{D} = |\vec{C} \cdot \vec{X}_{rnd}(t) - \vec{X}(t)|$.

By conducting empirical simulations, we concluded that the original WOA exhibits premature convergence and, as a consequence, the algorithm is often trapped in suboptimal regions of the search domain. Exploration is conducted only when the conditions $p < 0.5$ and $|A| \geq 1$ are satisfied. The exploration process should be more intensive, especially in the early phases of the algorithm's execution. The basic WOA implementation is explained in more detail in [9].

(13) − → where D, distance between the i-th candidate and the random solution from → − → − − → → − the population rnd at iteration t, is given by D = | C · X rnd (t) − X (t)|. By conducting empirical simulations, we concluded that the original WOA version exhibits the behavior of premature convergence, and as a consequence, algorithm usually traps in one of the suboptimal regions of the search domain. The exploration is conducted only in cases when conditions p < 0.5 and |A ≥ 1| are satisfied. The exploration process should be more intensive, especially in the early phases of algorithm’s execution. The basic WOA implementation is explained in more details in [9].

WOA with Exploratory Move for WSN Localization

333

In order to overcome the observed deficiencies, we adapted the exploratory move (EM) from the Hooke-Jeeves local search method, which has proved to be an efficient optimization technique [4]. Assuming that $\vec{X}^*$ represents the current best solution (the base point), $f_{min}$ is the current minimum objective function value, $\delta = (\delta_1, \delta_2, ..., \delta_n)$ denotes the step sizes in the $n$ directions, and $\vec{x}_t$ is a temporary vector, the main steps of EM can be summarized in Algorithm 1.

Algorithm 1. EM pseudo-code
Initialization: $\vec{x}_t = \vec{X}^*$
for (i = 1 to n) do
    $x_{t,i} = X^*_i + \delta_i$
    if ($f(\vec{x}_t) < f_{min}$) then
        continue
    else
        $x_{t,i} = X^*_i - \delta_i$
        if ($f(\vec{x}_t) < f_{min}$) then
            continue
        else
            $x_{t,i} = X^*_i$
        end if
    end if
end for

Moreover, in our implementation we use an adaptive step size $\vec{\delta}$. First, all solutions in the population are ranked by objective function value, and the best 10% of solutions are selected for the step-size calculation according to the expression $\delta_j = 0.1 \cdot \left( \sum_{i=1}^{m} (X_{i,j} - X^*_j) \right) / m$, where $\delta_j$ denotes the step size in the $j$-th dimension and $m$ represents the number of solutions in the fittest 10% of the population. Under the assumption that in later iterations of the algorithm a promising part of the search space has already been found, the proposed approach utilizes the EM operator only in the first 50% of iterations; in this phase of execution, the EM operator is executed instead of the exploitation step of Eq. (11). By incorporating EM into the original WOA, the hybridized WOA-EM is devised; its pseudo-code is summarized in Algorithm 2.
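The following Python sketch illustrates the adaptive step-size computation and the exploratory move of Algorithm 1. Variable names and the population layout are assumptions introduced here; the sketch is meant only to clarify how EM probes each coordinate of the current best solution.

```python
import numpy as np

def adaptive_steps(population, fitnesses, x_best, frac=0.1):
    """delta_j = 0.1 * sum over the best 10% of solutions of (X_ij - X*_j) / m."""
    m = max(1, int(frac * len(population)))
    best = population[np.argsort(fitnesses)[:m]]       # m fittest solutions
    return 0.1 * (best - x_best).sum(axis=0) / m

def exploratory_move(f, x_best, f_min, delta):
    """Hooke-Jeeves exploratory move (Algorithm 1): probe +/- delta_i in each
    coordinate and keep any probe that improves on f_min."""
    x_t = x_best.copy()
    for i in range(len(x_t)):
        for step in (+delta[i], -delta[i]):
            x_t[i] = x_best[i] + step
            if f(x_t) < f_min:
                break                                   # keep the improving probe
        else:
            x_t[i] = x_best[i]                          # no improvement: restore
    return x_t
```

In WOA-EM this move replaces the spiral update of Eq. (11) during the first half of the iterations.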

4 Simulation Results and Analysis

For the sake of a precise comparative analysis, we used the same simulation setup as in [2,14]. A two-dimensional (2D) WSN deployment area of size 100 U × 100 U was used. Static target sensors and anchors with coordinates (x, y) are randomly deployed over the deployment area using a pseudo-random number generator. In the first set of experiments, simulations with 40 target nodes (M) and 8 anchor nodes (N) were performed, while in the second experiment instance we used a varying number of anchors (from 10 to 35) and target (from 25 to 150)


Algorithm 2. Pseudo-code of the WOA-EM
Initialization. Generate random initial population X_i (i = 1, 2, 3, ..., N) and initialize values of control parameters.
Fitness calculation. Calculate fitness of each generated solution and determine the fittest solution X*
while (t < maxIter) do
    for each candidate solution do
        Recalculate A, C, a, l and p
        if p < 0.5 then
            if |A| < 1 then
                Recalculate current candidate solution X by using Eq. (7)
            else
                Choose random solution rnd from the population
                Update current candidate solution X by using Eq. (13)
            end if
        else
            if t < maxIter * 0.5 then
                Update current candidate solution by applying the EM operator
            else
                Update current candidate solution X by using Eq. (11)
            end if
        end if
    end for
    If any solution goes beyond the feasible region of the search space, modify it
    Evaluate all solutions in the population by calculating fitness
    Update position of the global best solution X* if necessary
    t = t + 1
end while
return The fittest solution (X*) from the current population

nodes. In both experiments we have taken into account the additive Gaussian noise signal, which is given by d̂_i = d_i + n_i; for more information, please refer to Eq. (2). The population size and the maximum iteration number (maxIter) were set to 30 and 200, respectively, for both algorithms, WOA and WOA-EM. The same parameter settings were used in [2,14]. The basic WOA parameters were adjusted as in [9]. Also, in both experiments we used the following performance indicators: the mean number of non-localized nodes (NNL) and the mean localization error (EL). Values of the performance indicators were averaged over 30 independent runs. In the first round of experiments, the goal was to measure the influence of the noise percentage (Pn) in the distance measurement on the localization accuracy. For this purpose we ran the original and hybridized WOA metaheuristics with the value of Pn set to 2 and 5, respectively, and for each particular value of Pn we executed all algorithms in 30 independent runs. A comparative analysis was performed between WOA-EM and the original WOA, the butterfly optimization algorithm (BOA), firefly algorithm (FA), particle swarm optimization (PSO), elephant herding optimization (EHO), hybridized EHO (HEHO), TGA and dynamic TGA (dynsTGA). For this research we implemented WOA and WOA-EM, while the results for the other approaches were taken from [2,14]. Simulation results of the proposed algorithm along with the results of the algorithms used for comparison are given in Table 1, where better results from


each category are marked bold. Visualization of the results for one run of WOA and WOA-EM with Pn = 5 is given in Fig. 1.

Table 1. Comparative analysis and simulation results for M = 40, N = 8 with different values of Pn, averaged over 30 runs

Algorithm | Pn = 5: Mean NNL | Mean EL | Computing time (s) | Pn = 2: Mean NNL | Mean EL | Computing time (s)
BOA       | 4.7 | 0.28 | 0.65 | 4.5 | 0.21 | 0.53
FA        | 6.6 | 0.72 | 2.15 | 6.2 | 0.69 | 1.94
PSO       | 5.9 | 0.81 | 0.54 | 5.6 | 0.78 | 0.49
EHO       | 6.8 | 0.79 | 1.1  | 6.2 | 0.71 | 0.9
HEHO      | 5.3 | 0.45 | 1.2  | 5.1 | 0.37 | 1.0
TGA       | 5.5 | 0.42 | 0.9  | 5.0 | 0.36 | 0.8
dynsTGA   | 4.5 | 0.19 | 1.2  | 4.3 | 0.16 | 1.1
WOA       | 5.9 | 0.75 | 1.1  | 5.6 | 0.73 | 1.1
WOA-EM    | 4.4 | 0.17 | 1.3  | 4.3 | 0.15 | 1.2

From the results presented in Table 1, it can be noticed that, on average, WOA-EM obtains the best results. Only when Pn is set to 2 does WOA-EM perform the same as dynsTGA on the NNL indicator. On the other hand, the improvements of WOA-EM over the original WOA are significant in all test instances. The original WOA obtains performance similar to the PSO algorithm.

Fig. 1. Visualization of results when Pn = 5 for one run - WOA (left), WOA-EM (right)

Results from the second set of experiments, with the varying number of anchor and target nodes are given in Table 2.


Table 2. Comparative analysis between WOA-EM and WOA for a varying number of target and anchor nodes, averaged over 30 runs

Targets | Anchors | WOA Mean NNL | WOA Mean EL | WOA-EM Mean NNL | WOA-EM Mean EL
25      | 10      | 5 | 0.73529 | 1 | 0.19155
50      | 50      | 3 | 0.55039 | 2 | 0.22731
75      | 20      | 3 | 0.69401 | 2 | 0.18900
100     | 25      | 0 | 0.64912 | 0 | 0.17302
125     | 30      | 3 | 0.61857 | 1 | 0.28251
150     | 35      | 1 | 0.71594 | 1 | 0.49302

Based on the results given in Table 2, it is obvious that WOA-EM significantly improves the performance of the basic WOA in terms of convergence as well as solution quality.

5 Conclusion and Future Work

The work presented here aims to improve the solving of the localization problem in WSNs by utilizing the WOA swarm intelligence approach. We have modified and improved the basic WOA and used it for solving this problem. The scientific contribution of this paper is twofold: improvements of the original WOA metaheuristic and advances in solving the WSN localization problem. The comparison with other state-of-the-art approaches implemented for the same WSN localization problem demonstrates the robustness and effectiveness of the proposed WOA-EM approach. As part of our future research activities, we will try to further improve the WOA approach and to apply it to other WSN localization problem models that are current research topics. Acknowledgment. The paper is supported by the Ministry of Education, Science and Technological Development of Republic of Serbia, Grant No. III-44006.

References 1. Ahmed, A., Ali, J., Raza, A., Abbas, G.: Wired vs wireless deployment support for wireless sensor networks. In: TENCON IEEE Region 10 Conference, pp. 1–3 (2006) 2. Arora, S., Singh, S.: Node localization in wireless sensor networks using butterfly optimization algorithm. Arab. J. Sci. Eng. 42(8), 3325–3335 (2017) 3. Goyal, S., Patterh, M.S.: Wireless sensor network localization based on cuckoo search algorithm. Wireless Pers. Commun. 79, 223–234 (2014) 4. Hooke, R., Jeeves, T.A.: “Direct Search” solution of numerical and statistical problems. J. ACM 8(2), 212–229 (1961)


5. Lavanya, D., Udgata, S.K.: Swarm intelligence based localization in wireless sensor networks. Springer 79, 317–328 (2011) 6. Ling, Y., Zhou, Y., Luo, Q.: L´evy flight trajectory-based whale optimization algorithm for global optimization. IEEE Access 5, 6168–6186 (2017) 7. Liu, C., Liu, S., Zhang, W., Zhao, D.: The performance evaluation of hybrid localization algorithm in wireless sensor networks. Mob. Netw. Appl. 21(6), 994–1001 (2016) 8. Mafarja, M.M., Mirjalili, S.: Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 260, 302–312 (2017) 9. Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016) 10. Strumberger, I., Tuba, E., Bacanin, N., Beko, M., Tuba, M.: Hybridized moth search algorithm for constrained optimization problems. In: International Young Engineers Forum (YEF-ECE), pp. 1–5 (2018) 11. Strumberger, I., Tuba, E., Bacanin, N., Beko, M., Tuba, M.: Monarch butterfly optimization algorithm for localization in wireless sensor networks. In: 28th IEEE International Conference Radio Elektronika, pp. 1–6 (2018) 12. Strumberger, I., Bacanin, N., Tuba, M.: Hybridized elephant herding optimization algorithm for constrained optimization. In: Hybrid Intelligent Systems. AISC, vol. 734, pp. 158–166. Springer, Cham (2018) 13. Strumberger, I., Beko, M., Tuba, M., Minovic, M., Bacanin, N.: Elephant herding optimization algorithm for wireless sensor network localization problem. In: Technological Innovation for Resilient Systems, pp. 175–184. Springer, Cham (2018) 14. Strumberger, I., Minovic, M., Tuba, M., Bacanin, N.: Performance of elephant herding optimization and tree growth algorithm adapted for node localization in wireless sensor networks. Sensors 19(11), 2515 (2019) 15. Strumberger, I., Tuba, E., Bacanin, N., Beko, M., Tuba, M.: Modified and hybridized monarch butterfly algorithms for multi-objective optimization. In: Hybrid Intelligent Systems. AISC, vol. 923, pp. 449–458. Springer, Cham (2020) 16. Strumberger, I., Tuba, E., Zivkovic, M., Bacanin, N., Beko, M., Tuba, M.: Dynamic search tree growth algorithm for global optimization. In: Technological Innovation for Industry and Service Systems. IFIP AICT, vol. 553, pp. 143–153. Springer, Cham (2019) 17. Strumberger, I., Tuba, M., Bacanin, N., Tuba, E.: Cloudlet scheduling by hybridized monarch butterfly optimization algorithm. J. Sensor Actuator Netw. 8(3), 1–44 (2019) 18. Tuba, E., Strumberger, I., Zivkovic, D., Bacanin, N., Tuba, M.: Mobile robot path planning by improved brain storm optimization algorithm. In: IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2018) 19. Tuba, E., Strumberger, I., Bacanin, N., Tuba, M.: Bare bones fireworks algorithm for capacitated p-median problem. In: Advances in Swarm Intelligence. LNCS, vol. 10941, pp. 283–291. Springer, Cham (2018) 20. Tuba, E., Tuba, M., Beko, M.: Support vector machine parameters optimization by enhanced fireworks algorithm. In: Advances in Swarm Intelligence. LNCS, vol. 9712, pp. 526–534. Springer, Cham (2016) 21. Tuba, M., Bacanin, N.: JPEG quantization tables selection by the firefly algorithm. In: International Conference on Multimedia Computing and Systems (ICMCS), pp. 153–158. IEEE (2014)


22. Tuba, M., Bacanin, N., Beko, M.: Multi-objective RFID network planning by artificial bee colony algorithm with genetic operators. In: Advances in Swarm and Computational Intelligence. LNCS. vol. 9140, pp. 247–254. Springer, Cham (2015) 23. Zivkovic, M., Branovic, B., Markovic, D., Popovic, R.: Energy efficient security architecture for wireless sensor networks. In: 20th Telecommunications Forum (TELFOR), pp. 1524–1527 (2012)

Facial Expression Recognition Using Histogram of Oriented Gradients with SVM-RFE Selected Features
Sumeet Saurav1,2(B), Sanjay Singh2, and Ravi Saini2
1 Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
2 CSIR-Central Electronics Engineering Research Institute, Pilani, India

[email protected]

Abstract. This study is an attempt towards improving the accuracy and execution time of a facial expression recognition (FER) system. The algorithmic pipeline consists of a face detector block, followed by a facial alignment and registration, feature extraction, feature selection, and classification blocks. The proposed method utilizes histograms of oriented gradients (HOG) descriptor to extract features from expressive facial images. Support vector machine recursive feature elimination (SVM-RFE), a powerful feature selection algorithm is applied to select the most discriminant features from high-dimensional feature space. Finally, the selected features were fed to a support vector machine (SVM) classifier to determine the underlying emotions from expressive facial images. Performance of the proposed approach is validated on three publicly available FER databases namely CK+, JAFFE, and RFD using different performance metrics like recognition accuracy, precision, recall, and F1-Score. The experimental results demonstrated the effectiveness of the proposed approach in terms of both recognition accuracy and execution time. Keywords: Facial expression recognition (FER) · Histogram of oriented gradients (HOG) · Feature selection · Support vector machine (SVM) classifier

1 Introduction Psychological studies have revealed facial expression as one of the most powerful ways through which humans communicate their emotions, cognitive states, intentions, and opinions to each other [1]. It is a well-known fact that facial expressions contain nonverbal communication cues, which help to identify the intended meaning of the spoken words in face-to-face communication. Therefore, there is a huge demand for an efficient and robust facial expression recognition (FER) system for real-world human computer interaction (HCI) systems. Robots equipped with automated FER technology can talk to children and take care of elderly people. This technology can also be used in hospitals to monitor patients, which will in turn save precious time and money. Additionally, FER technology can be applied in a car to identify the fatigue level of drivers, which helps avoid accidents and save lives.


Recognition of a person's facial expression, either from a static image or from a sequence of images coming from a video stream, has been a well-studied problem for decades. However, the techniques available in the literature have not yet achieved the performance (in terms of recognition accuracy and computation time) needed for real-world deployment of FER systems. This may be attributed to the lack of an efficient discriminative feature extractor coupled with a robust classifier with real-time computing capability. Available works on FER can be classified as based either on appearance features or on geometrical features. Since this work makes use of an appearance-based FER system, a brief review of related works is given below. A comprehensive study of person-independent FER based on Local Binary Patterns (LBP) is presented in [2]. Other works using LBP features coupled with Kernel Discriminant Isometric mapping (KDIsomap) and Discriminant Kernel Locally Linear Embedding (DKLLE) are discussed in [3] and [4], respectively. The authors in [5] proposed an automatic FER in which LBP features were extracted from salient facial patches and classified using a support vector machine (SVM) classifier. In order to overcome the noise susceptibility of LBP, a new descriptor called the Local Ternary Pattern (LTP) was proposed in [6]. Inspired by the usefulness of LTP, the Gradient Local Ternary Pattern (GLTP) [7] was proposed for automated FER, and an improved version called Improved GLTP (IGLTP) has been reported in [8]. A Compound Local Binary Pattern (CLBP) descriptor has been proposed for FER in [9]. A local feature descriptor called the Local Directional Number pattern (LDN) has been proposed for face analysis and expression recognition in [10]. Another popular descriptor, the Weber Local Descriptor (WLD), has also been utilized for facial expression recognition: in [11], the authors used its multi-scale version to extract facial traits, which were then classified using an SVM-based classifier, and a technique called Weber Local Binary Image Cosine Transform (WLBI-CT) has been proposed in [12]. Apart from texture-based information, some works have utilized shape-based information extracted using Histograms of Oriented Gradients (HOG). The use of the HOG descriptor for facial expression recognition was possibly first discussed in [13]. Automated facial expression recognition based on differences of HOG feature vectors has been proposed in [14]. A comprehensive study of the HOG descriptor in the facial expression recognition problem has been carried out in [15], where the authors investigated the importance of different HOG parameters and their impact on classification accuracy.
Inspired by the effectiveness of the HOG descriptor in preserving local information through the orientation density distribution and the gradient of edges, the authors in [16] proposed a technique that transforms the HOG features to the frequency domain, making the descriptor well suited to characterize illumination- and orientation-invariant facial expressions. The HOG descriptor has also been used in conjunction with other descriptors; for instance, in [17], facial expression recognition with fusion features extracted from salient facial areas using LBP and HOG has been proposed.


This study presents an algorithmic pipeline aimed at improving the recognition accuracy and execution time of a FER system. Motivated by the success of Histograms of Oriented Gradients (HOG) in the facial expression recognition task, we use this descriptor to extract facial traits from different expressions. Since the extracted features are usually high-dimensional, in order to overcome the curse of dimensionality and the associated computational cost, we use an embedded feature selection algorithm called support vector machine recursive feature elimination (SVM-RFE) to remove irrelevant and redundant features from the original feature space. The selected features extracted from different expressive facial images are classified using a support vector machine (SVM) classifier. The remainder of this paper is organized as follows: the proposed facial expression recognition pipeline is described in Sect. 2, experimental results and discussion are presented in Sect. 3, and conclusions are drawn in Sect. 4.

2 Proposed Methodology Facial expression recognition from a generic image requires an algorithmic pipeline that involves the different operating blocks shown in Fig. 1. The red arrow indicates the path followed during the training phase and the green one the path followed during the testing phase of the pipeline. The first step detects the human face in the image under investigation, which is then aligned and registered to a standard size of 65 × 59 pixels as recommended in [15]. Here we have used the Multi-block Local Binary Pattern (MB-LBP) feature based version of the Viola and Jones face detector; the pre-trained cascade classifier was obtained from the work mentioned in [18]. For facial landmark detection, a robust technique called the Supervised Descent Method (SDM), also known as Intraface (IF), is used [19]. Using the facial landmark coordinates obtained from IF, the facial image is registered. This registration step, as mentioned in [15], ensures that the eyes have the same position in different images, which helps the HOG descriptor extract features from different facial images with a similar spatial reference. The vector of features extracted by HOG is then passed through an embedded feature selection (FS) block called support vector machine recursive feature elimination (SVM-RFE), proposed by the authors in [20]. The SVM-RFE block selects the most discriminant features that separate the expressions and yields optimal features with reduced dimension using the iterative algorithm shown in Fig. 2. SVM-RFE uses criteria derived from the coefficients of SVM models to assess features, and recursively removes features with small criteria. We used the linear version of SVM-RFE in our experiments, based on a One-Versus-Rest (OVR) approach, to select features from different facial expressions. The selected features are finally used by an SVM classifier to classify the facial emotions by means of a One-Versus-One (OVO) multi-class classification strategy.
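A minimal Python sketch of the core of this pipeline is given below, using scikit-image HOG features, scikit-learn's RFE as a stand-in for SVM-RFE with a linear SVM, and an SVM classifier. Face detection, landmark-based registration, and the real datasets are omitted; the synthetic arrays, parameter values, and the chosen feature count are assumptions for illustration only.

```python
import numpy as np
from skimage.feature import hog
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-ins for aligned, registered 65 x 59 grayscale face crops and labels
X_faces = rng.random((140, 65, 59))
y = np.repeat(np.arange(7), 20)          # 7 hypothetical expression classes

def hog_features(face):
    # HOG setting used in the paper: 7x7 cells, 2x2 blocks, 7 orientation bins
    return hog(face, orientations=7, pixels_per_cell=(7, 7), cells_per_block=(2, 2))

X = np.array([hog_features(f) for f in X_faces])

# SVM-RFE stand-in: recursively drop features with the smallest linear-SVM weights
selector = RFE(estimator=LinearSVC(C=1.0, max_iter=5000),
               n_features_to_select=188, step=0.1)   # 188: optimal size reported for CK+ 7
X_sel = selector.fit_transform(X, y)

# Multi-class SVM on the selected features, 10-fold cross-validation
clf = SVC(kernel="linear", decision_function_shape="ovo")
print(cross_val_score(clf, X_sel, y, cv=10).mean())
```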

3 Experimental Results and Discussions In this section, we describe the various experiments performed in this work. All experiments were carried out on a laptop with a 2.50 GHz Core i5 processor and 4 GB of RAM, running the Windows 8 operating system. The proposed system was implemented using the Matlab 2015b tool.


Fig. 1. Proposed facial expression recognition algorithmic pipeline

Fig. 2. SVM-RFE feature selection pseudo-code

3.1 Datasets Three publicly available benchmark FER datasets, namely CK+ [22], JAFFE [23], and RFD [24], were used for conducting the experiments. The CK+ database is an extended version of the CK database and contains both male and female subjects. In this study, we used a total of 407 images with the following distribution among the considered classes of expressions: anger (An: 45), disgust (Di: 59), fear (Fe: 50), happy (Ha: 69), sad (Sa: 56), neutral (Ne: 60), and surprise (Su: 68). The 6-expression version of the database excludes the neutral expression images from the above distribution. The second dataset, the Radboud Faces Database (RFD), consists of facial images of 8 expressions (anger, disgust, fear, happiness, contemptuous, sadness, surprise and


neutral) filmed using 67 subjects looking in three directions with 5 different face orientations. Three categories of expressions obtained from the database have been used in our experiments. The first category, called RFD category 1, consists of images of 7 expressions (anger, contempt, disgust, fear, happy, sad, and surprise), with a total of 469 images and 67 images per facial expression. The second category, called RFD category 2, also consists of 7 prototypic expressions (anger, disgust, fear, happy, neutral, sad, and surprise) with a distribution similar to that of RFD category 1. Finally, the third category, called RFD 8, consists of all eight prototypic expressions (anger, contempt, disgust, fear, happy, neutral, sad, and surprise) with a total of 536 images. Experiments were also carried out using the Japanese Female Facial Expression (JAFFE) dataset. This dataset contains 7 different prototypic facial expressions: anger, disgust, fear, happy, neutral, sad, and surprise. It consists of 10 female subjects performing different facial expressions, with a total of 213 images.

3.2 Experimental Results on CK+, RFD, and JAFFE Database Performance of the SVM-RFE selected features, classified using the OVO linear SVM classifier [21] on the CK+, JAFFE, and RFD databases, is shown in terms of different performance metrics in Tables 1, 2, 3 and 4.

Table 1. Performance analysis with different feature size on CK+ 7 database

No. of features | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
33  | 92.38 | 91.59 | 92.02 | 91.73
66  | 95.05 | 94.40 | 94.55 | 94.45
98  | 96.56 | 96.07 | 96.33 | 96.14
129 | 97.05 | 96.78 | 96.85 | 96.81
159 | 97.79 | 97.46 | 97.60 | 97.51
188 | 98.77 | 98.49 | 98.72 | 98.59
218 | 98.28 | 97.94 | 98.20 | 98.03
247 | 98.27 | 97.94 | 98.15 | 98.02

The experiments were performed using unsigned HOG features extracted with the parameter setting from [15]: cell size 7 × 7 pixels, block size 2 × 2 with one-cell overlap in both the horizontal and vertical direction, and 7 histogram bins. In all the experiments, a 10-fold cross-validation testing procedure has been used, wherein average accuracy (Avg. Acc.), average precision (Avg. Prec.), average recall (Avg. Recall), and average F1-Score (Avg. F1-Score) denote the average scores of these performance metrics over the 10 folds. From the above tables, we find that with an increase in feature dimension, there is an increase in the values of the different performance metrics. However, after reaching an

Table 2. Performance analysis with different feature size on JAFFE 7 database

No. of features | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
33  | 92.49 | 92.54 | 92.53 | 92.47
65  | 92.49 | 92.50 | 92.57 | 92.48
95  | 94.84 | 94.85 | 94.85 | 94.83
127 | 95.31 | 95.29 | 95.35 | 95.30
153 | 97.65 | 97.63 | 97.79 | 97.67
179 | 97.18 | 97.17 | 97.35 | 97.22
201 | 97.18 | 97.17 | 97.35 | 97.22
224 | 96.71 | 96.69 | 96.95 | 96.76

Table 3. Performance of the proposed approach on RFD 7 category 1 with different feature size

No. of features | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
32  | 93.39 | 93.39 | 93.38 | 93.38
65  | 95.95 | 95.95 | 95.94 | 95.92
94  | 95.74 | 95.74 | 95.83 | 95.73
125 | 96.38 | 96.38 | 96.43 | 96.38
151 | 95.49 | 95.52 | 95.56 | 95.51
177 | 97.42 | 97.44 | 97.49 | 97.44
203 | 98.72 | 98.72 | 98.73 | 98.72
236 | 98.72 | 98.72 | 98.73 | 98.72

Table 4. Performance of the proposed approach on RFD 7 category 2 with different feature size

No. of features | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
32  | 93.60 | 93.60 | 93.67 | 93.58
63  | 95.74 | 95.74 | 95.76 | 95.73
94  | 96.16 | 96.16 | 96.26 | 96.16
125 | 95.95 | 95.95 | 96.01 | 95.95
149 | 97.44 | 97.44 | 97.55 | 97.44
179 | 98.08 | 98.08 | 98.16 | 98.07
209 | 98.08 | 98.08 | 98.16 | 98.07
235 | 97.87 | 97.87 | 97.88 | 97.86


optimal feature dimension, the performance starts degrading. This clearly indicates that not all the HOG extracted features are significant. Confusion matrices corresponding to the optimal number of selected features, classified using the OVO linear SVM classifier, are shown in Tables 5, 6 and 7 for the CK+ 7, JAFFE 7, and RFD 7 category 1 databases, respectively. As can be seen, for the CK+ database the classifier successfully classified all the expression images except for the anger and neutral classes. Moreover, in the case of the JAFFE database, some of the images from the disgust class were misclassified into the sad and fear classes. Also, in the case of RFD, the classifier performed well in classifying most of the expressions except surprise and anger.

Table 5. Confusion matrix with optimal selected features on CK+ 7 database

   | An    | Di   | Ha  | Ne    | Sa  | Su  | Fe
An | 91.11 | 4.44 | 0   | 4.44  | 0   | 0   | 0
Di | 0     | 100  | 0   | 0     | 0   | 0   | 0
Ha | 0     | 0    | 100 | 0     | 0   | 0   | 0
Ne | 1.67  | 0    | 0   | 98.33 | 0   | 0   | 0
Sa | 0     | 0    | 0   | 0     | 100 | 0   | 0
Su | 0     | 0    | 0   | 0     | 0   | 100 | 0
Fe | 0     | 0    | 0   | 0     | 0   | 0   | 100

Table 6. Confusion matrix with optimal selected features on JAFFE 7 database

   | An  | Di    | Ha   | Ne   | Sa    | Su    | Fe
An | 100 | 0     | 0    | 0    | 0     | 0     | 0
Di | 0   | 93.10 | 0    | 0    | 3.44  | 0     | 3.44
Ha | 0   | 0     | 100  | 0    | 0     | 0     | 0
Ne | 0   | 0     | 0    | 100  | 0     | 0     | 0
Sa | 0   | 0     | 3.33 | 0    | 96.77 | 0     | 0
Su | 0   | 0     | 3.33 | 0    | 0     | 96.67 | 0
Fe | 0   | 0     | 0    | 3.12 | 0     | 0     | 96.88

Performance comparison of the proposed approach on the different FER datasets with the optimal unsigned HOG features is shown in Table 8. Experimental evaluation on these databases using the signed version of the HOG descriptor is listed in Table 9. The unsigned features contain orientation bins evenly spaced over 0°–180°, whereas in the case of signed features the range is 0°–360°. Comparing these two tables, one can find that on the CK+ and JAFFE databases both the signed and unsigned versions of the HOG descriptor performed equally well; however, on the RFD database the signed HOG descriptor has a lead.
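The distinction between unsigned and signed gradients can be illustrated with the following NumPy sketch (an illustration only, not the HOG implementation used in the paper): the same gradient field is binned either over 0°–180° or over the full 0°–360° range.

```python
import numpy as np

def orientation_histogram(img, bins=7, signed=False):
    """Magnitude-weighted histogram of gradient orientations.
    Signed keeps the full 0-360 deg range; unsigned folds opposite
    directions together into 0-180 deg."""
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx))            # -180 .. 180
    span = 360.0 if signed else 180.0
    angle = angle % span
    hist, _ = np.histogram(angle, bins=bins, range=(0, span), weights=magnitude)
    return hist

img = np.random.default_rng(1).random((65, 59))       # stand-in face crop
print(orientation_histogram(img, signed=False))       # unsigned bins
print(orientation_histogram(img, signed=True))        # signed bins
```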

Table 7. Confusion matrix with optimal selected features on RFD 7 category 1 database

   | An    | Co  | Di  | Fe   | Ha    | Sa    | Su
An | 97.01 | 0   | 0   | 1.49 | 1.49  | 0     | 0
Co | 0     | 100 | 0   | 0    | 0     | 0     | 0
Di | 0     | 0   | 100 | 0    | 0     | 0     | 0
Fe | 0     | 0   | 0   | 100  | 0     | 0     | 0
Ha | 0     | 0   | 0   | 0    | 98.51 | 0     | 1.49
Sa | 0     | 0   | 0   | 0    | 0     | 98.51 | 1.49
Su | 0     | 0   | 0   | 0    | 1.49  | 1.49  | 97.01

Table 8. Performance of the proposed approach using unsigned HOG features

Database         | Optimal feature size | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
CK+ 6            | 85  | 99.71 | 99.63 | 99.72 | 99.67
CK+ 7            | 188 | 98.77 | 98.49 | 98.72 | 98.59
RFD 7 category 1 | 203 | 98.72 | 98.72 | 98.72 | 98.72
RFD 7 category 2 | 179 | 98.08 | 98.08 | 98.16 | 98.07
RFD 8            | 172 | 94.22 | 94.22 | 94.33 | 94.18
JAFFE 6          | 83  | 97.81 | 97.79 | 97.95 | 97.82
JAFFE 7          | 153 | 97.65 | 97.63 | 97.79 | 97.67

Table 9. Performance of the proposed approach using signed HOG features

Database         | Optimal feature size | Avg. Acc. | Avg. Prec. | Avg. recall | Avg. F1-Score
CK+ 6            | 131 | 99.71 | 99.72 | 99.64 | 99.67
CK+ 7            | 208 | 99.02 | 98.92 | 99.03 | 98.97
RFD 7 category 1 | 147 | 98.93 | 98.93 | 98.95 | 98.94
RFD 7 category 2 | 150 | 98.29 | 98.29 | 98.32 | 98.29
RFD 8            | 194 | 97.95 | 97.95 | 97.93 | 97.93
JAFFE 6          | 100 | 97.27 | 97.24 | 97.38 | 97.27
JAFFE 7          | 223 | 97.18 | 97.20 | 97.25 | 97.21


Performance comparison of the proposed approach with different state-of-the-art approaches having a similar database distribution and testing procedure is shown in Table 10. From the table, we find that the proposed approach attained similar or better performance compared to the different approaches available in the literature.

Table 10. Comparison of the proposed approach with different state-of-the-art approaches

References | Database         | Avg. Prec. | Avg. recall | Avg. Acc. | Avg. F1-Score
[5]        | JAFFE 6          | 92.63 | 91.80 | 91.80 | 92.22
Proposed   | JAFFE 6          | 97.79 | 97.95 | 97.81 | 97.82
Proposed   | JAFFE 7          | 97.63 | 97.79 | 97.65 | 97.67
[8]        | CK+ 6            | 99.40 | 94.10 | 94.10 | 94.10
[15]       | CK+ 6            | 95.80 | 95.90 | 98.80 | 95.80
[5]        | CK+ 6            | 94.69 | 94.10 | 94.09 | 94.39
[2]        | CK+ 6            | –     | –     | 95.50 | –
Proposed   | CK+ 6            | 99.63 | 99.72 | 99.71 | 99.67
[2]        | CK+ 7            | –     | –     | 93.40 | –
[15]       | CK+ 7            | 94.30 | 94.10 | 98.50 | 94.10
Proposed   | CK+ 7            | 98.92 | 99.03 | 99.02 | 98.97
[15]       | RFD 7 category 1 | 94.90 | 94.90 | 98.50 | 94.80
Proposed   | RFD 7 category 1 | 98.93 | 98.95 | 98.93 | 98.94
Proposed   | RFD 7 category 2 | 98.29 | 98.32 | 98.29 | 98.29
[15]       | RFD 8            | 93.00 | 92.90 | 98.20 | 92.90
Proposed   | RFD 8            | 97.95 | 97.93 | 97.95 | 97.93

4 Conclusion In this paper, we presented an efficient algorithmic pipeline for a FER system. The pipeline consists of a face detection unit, a face alignment and registration unit, followed by feature extraction, feature selection and classification units. Signed and unsigned versions of the HOG descriptor have been used to extract features from facial images of size 65 × 59 pixels. Feature selection using SVM-RFE has been employed to select significant features from the high-dimensional HOG features. Finally, a multi-class SVM classifier has been used to classify the selected features into their respective expression categories. After performing a significant number of experiments, we found that, using only the SVM-RFE selected features, the proposed approach achieved state-of-the-art results on the CK+, JAFFE, and RFD FER databases with reduced processing time, computational resources, and memory requirements, as there is a multi-fold reduction in feature dimension compared to the original feature dimension. Thus, the proposed


approach could be effectively used for real-time implementation of a FER system on an embedded platform.

References 1. Knapp, M.L., Hall, J.A., Horgan, T.G.: Nonverbal Communication in Human Interaction, 8th edn. Cengage Learning, Boston, MA (2013) 2. Shan, C., Gong, S., McOwan, P.W.: Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis. Comput. 27(6), 803–816 (2009) 3. Zhao, X., Zhang, S.: Facial expression recognition based on local binary patterns and kernel discriminant isomap. Sensors 11(10), 9573–9588 (2011) 4. Zhao, X., Zhang, S.: Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding. EURASIP J. Adv. Sig. Process. 2012(1), 20 (2012) 5. Happy, S.L., Routray, A.: Automatic facial expression recognition using features of salient facial patches. IEEE Trans. Affect. Comput. 6(1), 1–12 (2015) 6. Tan, X., Triggs, B.: Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Proc. 19(6), 1635–1650 (2010) 7. Ahmed, F., Hossain, E.: Automated facial expression recognition using gradient-based ternary texture patterns. Chin. J. Eng. 2013, 8 (2013) 8. Holder, R.P., Tapamo, J.R.: Improved gradient local ternary patterns for facial expression recognition. EURASIP J. Image Video Proc. 2017(1), 1–15 (2017) 9. Ahmed, F., Hossain, E., Bari, A.S.M.H., Shihavuddin, A.S.M.: Compound local binary pattern (CLBP) for robust facial expression recognition. In: 2011 IEEE 12th International Symposium on Computational Intelligence and Informatics (CINTI), pp. 391–395. IEEE (2011) 10. Rivera, A.R., Castillo, J.R., Chae, O.O.: Local directional number pattern for face analysis: face and expression recognition. IEEE Trans. Image Proc. 22(5), 1740–1752 (2013) 11. Alhussein, M.: Automatic facial emotion recognition using weber local descriptor for eHealthcare system. Cluster Comput. 19(1), 99–108 (2016) 12. Khan, S.A., Hussain, A., Usman, M.: Reliable facial expression recognition for multi-scale images using weber local binary image-based cosine transform features. Multimedia Tools Appl., 1–33 (2017) 13. Orrite, C., Gañán, A., Rogez, G.: Hog-based decision tree for facial expression classification. In: Araujo, H., Mendonça, A.M., Pinho, A.J., Torres, M.I. (eds.) Pattern Recognition and Image Analysis IbPRIA 2009. Lecture Notes in Computer Science, vol. 5524, pp. 176–183. Springer, Berlin, Heidelberg (2009) 14. Mlakar, U., Potoˇcnik, B.: Automated facial expression recognition based on histograms of oriented gradient feature vector differences. SIViP 9(1), 245–253 (2015) 15. Carcagnì, P., Del Coco, M., Leo, M., Distante, C.: Facial expression recognition and histograms of oriented gradients: a comprehensive study. SpringerPlus 4(1), 645 (2015) 16. Nazir, M., Jan, Z., Sajjad, M.: Facial expression recognition using histogram of oriented gradients based transformed features. Cluster Comput., 1–10 (2017) 17. Liu, Y., Li, Y., Ma, X., Song, R.: Facial expression recognition with fusion features extracted from salient facial areas. Sensors 17(4), 712 (2017) 18. Koestinger, M.: Efficient Metric Learning for Real-World Face Recognition. http://lrs.icg.tug raz.at/pubs/koestinger_phd_13.pdf (2013) 19. Xiong, X., De la Torre, F.: Supervised descent method and its applications to face alignment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013) 20. Guyon, I., Weston, J., Barnhill, S., Vapnik, V.: Gene selection for cancer classification using support vector machines. Mach. Learn. 46(1–3), 389–422 (2002)


21. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. ACM Trans. Intel. Syst. Technol. (TIST) 2(3), 27 (2011) 22. Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I.: The extended cohnkanade dataset (ck+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern RecognitionWorkshops, pp. 94–101. IEEE (2010) 23. Lyons, M.J., Akamatsu, S., Kamachi, M., Gyoba, J., Budynek, J.: The Japanese female facial expression (JAFFE) database. In: Proceedings of Third International Conference on Automatic Face and Gesture Recognition, pp. 14–16 (1998) 24. Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J., Hawk, S.T., Van Knippenberg, A.D.: Presentation and validation of the Radboud Faces Database. Cogn. Emot. 24(8), 1377–1388 (2010)

Automated Security Driven Solution for Inter-Organizational Workflows
Asmaa El Kandoussi(B) and Hanan El Bakkali(B)
ENSIAS, Information Security Research Team, Mohammed V University, Rabat, Morocco
[email protected], [email protected]

Abstract. This paper presents a new solution to deal with security in dynamic Inter-Organizational Workflow (IOW) Systems. The IOW system aims to support the collaboration between distributed business processes running in several autonomous organizations in order to complete a set of common goals. In such dynamic environments, where participating organizations (partners) in the IOW are not known before its execution, many security breaches could arise. Thus, we propose a new automated security-driven solution based on i) partner selection ii) access control partner negotiation and policy conflict resolution. Keywords: Access control · Inter-Organizational Workflow · Partner selection · Multi-Criteria Decision-Making (MCDM) · Conflict resolution

1 Introduction Inter-Organizational Workflow (IOW) is a fundamental concept to support the collaboration between several tasks running in distributed organizations [1]. Consequently, IOW security remains a big challenge, and access control is one of the most critical security measures that has to be considered. Thus, IOW systems have to support specific access control requirements such as [2]: organizational autonomy, policy cohabitation, interoperability, privacy, etc. In dynamic IOW, the participating organizations (i.e., partners) are not necessarily known before the IOW execution; partners also have the freedom to join or leave the collaboration. In this regard, the concept of the Virtual Enterprise (VE) appears. Several definitions have been given to this concept; according to the authors in [3], the virtual enterprise (VE) describes a temporary partnership between several enterprises with a specific set of goals. Nevertheless, this openness may cause information disclosure or security breaches. Thus, the security of an IOW system highly depends on its partners' security, and selecting the most appropriate enterprises to participate in the collaboration is of great importance. Participating organizations in the IOW are chosen from the enterprises registered in the VE platform, so a reliable evaluation process has to be maintained. In that respect, it is necessary to consider different criteria for partner selection. Thus, it is not a simple process, since it includes many organizational parameters that may cause potential


conflict. The partner selection problem is classified as a multi-criteria decision-making (MCDM) problem by many researchers. The main purpose of MCDM methods is to solve decision-making problems using mathematical tools. In the literature, many methods have been proposed [4, 5], such as the analytical hierarchy process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), and other MCDM techniques for partner selection. Other studies [6–8] apply agents to model this problem. Moreover, the basic key to a successful and secured inter-organizational collaboration is to choose the most suitable partners while respecting the main security requirements, particularly those related to access control. It is in this sense that the concept of policy negotiation takes its importance, by enhancing the organization's autonomy and security, mainly when a potential policy conflict may occur. Besides, the concept of task resiliency in workflow systems requires more flexibility and efficiency in the context of IOW systems. Therefore, it is necessary to find the appropriate partner for secured collaboration. In this context, we propose a new security-driven architecture that includes:
• Partner Selection: the organization that initiates the global workflow is the workflow initiator (WInit); it can select one or several partners able to execute the outsourced task, while respecting a set of security criteria defined by the requester.
• Policy Negotiation: the workflow initiator (organization) may negotiate with the previously selected provider the access control policy rules and the conflict resolution strategy to be applied when executing a specific process. At the end of this step, we identify the partner in charge of the requested task.
• Contract Establishment between the workflow initiator and the selected partners to formalize their cooperation.
Our proposed architecture addresses organizations' ability to automatically and autonomously secure their tasks in a dynamic IOW. This means that the involved organizations decide by themselves how to maintain the security of their cooperation. The architecture allows requester organizations to determine by themselves with whom and how to collaborate (i.e., under which security conditions). The solution can support the pre-listed services (partner selection, policy negotiation, contract establishment). The rest of this paper is organized as follows: Sect. 2 provides related work. In Sect. 3, we discuss our proposed architecture. Section 3.2 illustrates a use case scenario to demonstrate the effectiveness of the proposed approach. Section 4 summarizes the paper and notes some challenges and future research directions.

2 Related Work Several works have been done toward securing IOW systems. Particularly, proposing a new access control solution based on the well-known Role-Based Access Control (RBAC) [9] for IOW. Task Role-Based Access Control (TRBAC) [10], which inherits the RBAC model advantages and support the dynamic access feature of the Task-Based Access Control (TBAC) model. Flexible Policy-Based Access Control Model for Workflow Management Systems (PBFWs) [11] presented an excellent approach to enforce


privacy policy in workflow environments. Authors in [12], proposed a new model RBWAC based on RBAC and introduced new rules to detect potential conflicts related to a workflow instance and a context of its execution. Also, the authors suggested the use of priority in order to resolve these conflicts. Authors in [13] proposed a new solution ACCOLLAB for automatic mapping between different access control policies in crossorganizational collaboration. The authors defined a Generic-XACML based XACML profiles and on a universal language derivative of XACML. However, the work didn’t deal with the conflict resolution strategy in this context. In [14], the authors proposed a decentralized access control framework for a dynamic collaboration in healthcare. The framework is policy-based to meet the requirements of a cross-domain environment. Nevertheless, the authors didn’t explore how to handle the existence of multiple access control policies. In the context of IOW collaboration, the security of IOW systems highly depends on its partner’s security. Therefore, selecting the most appropriate enterprises to participate in collaboration has great importance. Participating organizations in the IOW are chosen from the registered enterprises in the VE platform. The concept of Virtual Enterprise (VE) is a concept that was maturated throughout multiple research. In order to increase the flexibility and reconfigurability of the VE system, an Ontology-based Multi-Agent Virtual Enterprise (OMAVE) system is proposed in [6]. Authors in [15] proposed a novel workflow system framework which supports the fast collaboration of the Community Cloud via a process-driven method, and a Unified Scheduling Strategy for Instance- intensive Workflows (USS-I) to ensure the fast collaboration mechanism with high efficiency. Moreover, it is crucial to select the best partners for each global task when forming t global workflow in the VE and to ensure secured access between participating organizations. It is in this sense that the partner selection process has to consider different security criteria. Thus, it is not a simple process since it includes many organization’s parameters that may cause potential conflict. In the literature, many authors proposed “Multi-Criteria Decision-Making (MCDM)” methods in order to resolve the partner selection problems. In fact, MCDM methods is based on mathematical tools to construct and solve decisionmaking problems. These techniques evaluate alternatives based on the decision maker’s preferences. Many authors [4, 5] proposed the “Analytical Hierarchy Process (AHP)”, “Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)”, and other MCDM techniques for partner selection. Other studies [6–8] apply agents to model this problem. Access Control in distributed environments was addressed in many works [16–18] especially. Access Control in IOW [12, 19–21]. However, to the end of my knowledge, there is not work that proposes an automated solution for secured access control in IOW systems. Thus, this research issue still needs to be tackled.



3 Automated Security Driven Partner Selection Architecture
3.1 Security Driven Partner Selection Overview
The Workflow Initiator is the main party responsible for each new collaboration. The first step before each IOW collaboration is the most important one: the workflow initiator has to identify the global tasks that should be executed by an external organization and specify the selection criteria for each global task. The proposed solution could be integrated into the workflow initiator platform; in this case, the latter does not have to be registered, since it is the owner of the platform, while the other organizations have to be registered in order to participate in the collaboration. Our proposed solution could also be managed by a Trusted Third Party (TTP) that regulates and manages the offered services. In the latter case, the Workflow Initiator, like the other organizations, must be registered before the collaboration opportunity is created. As illustrated in Fig. 1, once the WInit is registered, it can submit the global tasks for potential collaboration. The next step is the partner selection; to do so, the WInit has to specify its preferences and select the pre-defined selection criteria for each global task. The selection criteria express the required skills and the security threshold needed to respond to the collaboration opportunity. Then, the ranking process of potential partners starts. In this regard, we propose a hybrid solution based on the "Analytic Hierarchy Process (AHP)" method and the "Grey Technique for Order of Preference by Similarity to Ideal Solution (Grey TOPSIS)" in order to rank partners. After that, the ordered partners' list is

Fig. 1. Security driven partner selection solution adapted from [22]



generated, and the WInit must choose the most appropriate partner among the candidates, one that does not raise any potential conflict. If a conflict exists, this partner is eliminated and the next ranked partner is selected. Subsequently, a contract is established after a contract negotiation with the selected partners. Before the global workflow creation, a workflow verification has to be accomplished. Indeed, this step verifies the global workflow in a simulated environment and ensures that there are no errors before it is performed. This step is out of the scope of this work. The next steps are Global Workflow Deployment and Monitoring.
3.2 Security Driven Partner Selection Architecture
This section presents the proposed architecture and defines its main components, as described in Fig. 2. This framework allows the workflow initiator to select the best partner for each global task in the IOW while respecting its security requirements. The proposed architecture ensures a successful and secured collaboration, and it is responsible for:

Fig. 2. Security-based partner selection architecture

– Registering the new organizations interested in taking part in the global IOW.
– Allowing the registered organizations to submit their profiles and competencies.



– Searching for and selecting the best partners.
– Contract negotiation for collaboration establishment with the organizations registered in the proposed solution.
As illustrated in Fig. 2, the proposed architecture includes the following modules:

• Organization Profile Module
• Collaboration Module
• Security Calculator Module
• Partner Selection Module
• Ontology Module
• Verification Module

The proposed architecture can be supported by a Trusted Third Party (TTP), integrated with the workflow initiator platform, or implemented as a Cloud Service Broker.
Organization Profile Module
As a first step, each organization has to register on the platform. In this regard, the organization creates its own profile by specifying its administrative information (name, activity sectors, etc.). Afterward, the organization provides its security profile (list of security certifications, privacy compliance certifications, references, etc.), which should be verified by the security team. This module also includes the potential Task and Task Access Control Policy Module. Each organization uses this module to create its own collaboration profile. Thus, the organization has to detail the global tasks that it can execute. Afterward, the access control policy related to each global task is specified. The related policy will then be assessed by the criteria calculation module to calculate the PSL for the future partner selection process. The partner registration process is described in Fig. 3.
Security Calculator Module
In the IOW, many security requirements should be identified and supported. Indeed, this module contains the Trust & Reputation Module, the Security Module, and the Privacy Module. The main focus of our work is security issues; however, other specific selection criteria can be added, such as performance index, cost, execution time, technical infrastructure, etc. Its role is to calculate the security criteria for each organization; after verification, the security information is stored in a security database. The Trust & Reputation Module calculates the Trust and Reputation Level (TRL) based on partners' feedback and the organization's references specified in its security profile. Besides, the security and privacy modules calculate the Security Level (SL) and the Privacy Compliance Level (PCL), respectively. The provided parameters are used by the Partner Selection Module to calculate the organization's rank. Hereafter, we present the main security parameters proposed for secured collaboration.
Security Level (SL)
In the collaboration context, security risks should be well measured. In this regard, the workflow initiator has to be assured that the participating organizations in the IOW take



adequate precautions while collecting, transmitting, and storing different types of sensitive data. Thus, partners are required to implement security controls to be protected against cyber-attacks.

Fig. 3. Organization registration & partnership



Depending on the collaboration nature (global task criticality), each organization needs to reach a specific compliance level. Indeed, the organization's security compliance remains one of the essential parameters for partner selection. In this regard, the security compliance level provides strong reassurance for a potential secured collaboration. Thus, the SL reflects that the organization provides adequate protection against security threats and handles confidential and sensitive information appropriately. Several compliance standards and regulations exist, and each organization chooses to comply with specific regulations based on its main activity. For example, the Payment Card Industry Data Security Standard (PCI DSS) applies to organizations that handle credit cards, such as banks, and ISO/IEC 27001 is a well-known standard for security compliance. The authors in [23] presented a framework that uses a third-party auditor (TPA) to review, audit, and validate the Consensus Assessment Initiative Questionnaire (CAIQ) responses of a Cloud Service Provider (CSP). The framework provides a specific group of auditors that can be used to evaluate and validate the security controls of CSPs. As future work, we will propose a new approach to evaluate the partner security level.
Privacy Compliance Level (PCL)
Personal data is involved in several domains, and several pieces of legislation exist to protect this precious data. Organizations that have to respect these legislations should put in place the appropriate actions, procedures, policies, processes, and strategies to protect their data. Examples include the General Data Protection Regulation (GDPR), which came into force in the EU on May 25, 2018, the EU Data Protection Directive (EU DPD), HIPAA, the Sarbanes-Oxley Act, and the California Consumer Privacy Act (CCPA), effective January 1, 2020. In IOW systems, personal data can be shared with different partners. Thus, organizations have new obligations around data management and have the right to choose the best partner based on its privacy compliance and how it manages personal data.
Policy Similarity Level (PSL)
The PSL is one of the most important criteria in the partner selection process, since it allows the workflow initiator to choose the closest organization in terms of security policy. By doing so, it can minimize the security policy changes. In our previous work [4], we defined the PSL as the percentage of rule pairs having the same decision for any two given policies P1, P2. In fact, we proposed to calculate the PSL for a specific task Tk: PSLij(Tk) determines the closeness of the two policies related to the execution of the task Tk. In order to calculate this value, each registered organization has to specify its local access control policy after the registration process. Based on the stored organizations' local policies, we can calculate the policy similarity level between two organizations.
Trust and Reputation Level (TRL)
With inter-organizational collaboration, it is difficult to build trust between several organizations that may be unknown to each other before the collaboration. Therefore, the trust score is considered a fundamental key for successful collaboration. It is based on the level of confidence in the integrity and credibility of the organizations.
In this regard, the trust and reputation module is requested to calculate this parameter based on the stored information and other organizations' feedback, scores, recommendations, and any additional information provided by organizations for trust evaluation [24]. In



the literature, many works aim to calculate the level of trust. In [25], the authors described a quasi-experimental algorithm to calculate the Certificate Authority (CA) trustworthiness value. It depends on the CA reputation value as well as the Certificate Provider (CP) quality and its security maturity level.
Partner Selection Module
This module is composed of three sub-modules: the Criteria Preference Module, the AHP Module, and the TOPSIS Module. It allows the workflow initiator to identify the security criteria for each global task. The Partner Selection Module procedure is described in Fig. 5. The AHP and TOPSIS modules are responsible for the organization-level calculations. The exact ranking method is presented in our previous work [4].

Fig. 4. Partner selection data flow

Our proposed hybrid solution is based on AHP and Grey TOPSIS to select the most suitable partner for each global task. Figure 4 shows the data flow of the partner selection. First, the workflow initiator defines a set of selection criteria for each global task. After that, the workflow initiator specifies the priority of each criterion. Then, the criteria weights are calculated using the AHP method; if the weights are inconsistent, AHP is reapplied to determine them (a sketch of this weighting step is given below). The workflow initiator also determines the threshold of each criterion; based on a comparison between the pre-selected partners' criteria values and these thresholds, a set of partners is selected and the module exports a ranked partner list using the Grey TOPSIS method. The Ontology Module and the Verification Module are out of the scope of this work; more details will be provided in future works.
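To make the AHP weighting and consistency check concrete, the snippet below is a minimal, hypothetical sketch (not the authors' implementation): it derives weights for the four security criteria (SL, PCL, PSL, TRL) from an illustrative pairwise comparison matrix and computes the consistency ratio that decides whether the comparisons must be revised.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Return (weights, consistency ratio) for a pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # principal eigenvector -> weights
    lam_max = eigvals.real[k]
    ci = (lam_max - n) / (n - 1)           # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0  # CR < 0.1 is usually acceptable
    return w, cr

# Illustrative comparisons of the criteria SL, PCL, PSL, TRL (values are made up).
A = [[1,   2,   3,   2],
     [1/2, 1,   2,   1],
     [1/3, 1/2, 1,   1/2],
     [1/2, 1,   2,   1]]
weights, cr = ahp_weights(A)
print("weights:", weights.round(3), "CR:", round(cr, 3))
```

If the consistency ratio exceeds 0.1, the pairwise judgments would be revised and AHP reapplied, matching the data flow described above.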



Fig. 5. Partner selection sequence diagram

4 Conclusion
This paper has addressed the issue of automatic security-driven partner selection in IOW. We propose a new architecture to deal with partner selection based on a hybrid multi-criteria decision-making method and a set of security criteria. Our architecture allows participating organizations to find the best partners based on their own criteria specifications, negotiation between partners, resolution of policy conflicts based on the organizations' weights, and contract agreements. Future work will involve completing our ongoing implementation prototype and evaluating our approach in a case study. We also plan to integrate negotiation-based automated conflict-resolution strategies (incorporating the organization's weight). Further future work will pursue the detection of authorization rule conflicts that arise from changes in the business collaboration context.

References 1. Bouaziz, W., Andonoff, E.: Autonomic protocol-based coordination in dynamic interorganizational workflow: HAL Id: hal-01233227 (2015)



2. El Kandoussi, A., El Bakkali, H.: On access control requirements for inter-organizational workflow. In: Proceedings of the 4th Edition of National Security Days, JNS4 2014 (2014) 3. Kovács, G., Kot, S.: Economic and social effects of novel supply chain concepts and virtual enterprises. J. Int. Stud. 10(1), 237–254 (2017) 4. El Kandoussi, A., El Bakkali, H.: Security based partner selection in inter-organizational workflow systems. Int. J. Commun. Networks Inf. Secur. 10(3), 462–471 (2018) 5. Jatoth, C., Gangadharan, G.R., Fiore, U., Buyya, R.: SELCLOUD: a hybrid multi-criteria decision-making model for selection of cloud services. Soft Comput. 23, 1–15 (2018) 6. Sadigh, B.L., Nikghadam, S., Ozbayoglu, A.M., Unver, H.O., Dogdu, E., Kilic, S.E.: An ontology-based multi-agent virtual enterprise system (OMAVE): part 2: partner selection. Int. J. Comput. Integr. Manuf. 30(10), 1072–1092 (2017) 7. Brahimi, M.: An agents’ model using ontologies and web services for creating and managing virtual enterprises. Int. J. Comput. Digit. Syst. 8(1), 1–9 (2019) 8. Andonoff, E., Bouaziz, W., Hanachi, C., Bouzguenda, L.: An agent-based model for autonomic coordination of inter-organizational business processes. Informatica 20(3), 323–342 (2009) 9. Sandhu, R.S.: Role-based access control. Adv. Comput. 46, 237–286 (1998) 10. Oh, S., Park, S.: Task–role-based access control model. Inf. Syst. 28(6), 533–562 (2003) 11. Ma, G.: A flexible policy-based access control model for workflow management systems (2011) 12. El Bakkali, H., Hatim, H.: RB-WAC: new approach for access control in workflows. In: 2009 IEEE/ACS International Conference on Computer Systems and Applications, AICCSA 2009, pp. 637–640 (2009) 13. Haguouche, S., Jarir, Z.: Towards a secure and borderless collaboration between organizations: an automated enforcement mechanism. Secur. Commun. Networks 2018, 13 (2018) 14. Salehi, A., Rudolph, C., Grobler, M.: A dynamic cross-domain access control model for collaborative healthcare application. In: 2019 IFIP/IEEE Symposium on Integrated Network and Service Management, IM 2019, pp. 643–648 (2019) 15. Li, W.: A community cloud oriented workflow system framework and its scheduling strategy. In: Proceedings - 2010 Symposium on Web Society, SWS 2010, pp. 316–325 (2010) 16. Alotaiby, F.T., Chen, J.X.: A model for team-based access control (TMAC 2004). In: International Conference Information Technology: Coding and Computing, ITCC, vol. 1, pp. 450–454 (2004) 17. Chakraborty, S., Ray, I.: TrustBAC - Integrating Trust Relationships into the RBAC Model for Access Control in Open Systems. In: Proceedings Eleventh ACM Symposium Access Control Models and Technologies - SACMAT 2006, p. 49 (2006) 18. Tolone, W., Ahn, G.-J., Pai, T., Hong, S.-P.: Access control in collaborative systems. ACM Comput. Surv. 37(1), 29–41 (2005) 19. Andonoff, E., Bouzguenda, L.: Agent-based negotiation between partners in loose interorganizational workflow. In: Proceedings - 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IAT 2005, June 2014, vol. 2005, pp. 619–625 (2005) 20. Hummer, W., Gaubatz, P., Strembeck, M., Zdun, U., Dustdar, S.: Enforcement of entailment constraints in distributed service-based business processes. Inf. Softw. Technol. 55(11), 1884– 1903 (2013) 21. Goel, A.: Specification and modelling of workflow management systems with state based access control (2016) 22. 
Mollahoseini Ardakani, M.R., Hashemi, S.M., Razzazi, M.: A cloud-based solution/reference architecture for establishing collaborative networked organizations. J. Intell. Manuf. 30, 1–17 (2018) 23. Rizvi, S.S., Bolish, T.A., Pfeffer, J.R.: Security evaluation of cloud service providers using third party auditors. In: ACM International Conference Proceeding Series (2017)



24. Rahimi, H., El Bakkali, H.: CIOSOS: combined idiomatic-ontology based sentiment orientation system for trust reputation in E-commerce. Adv. Intell. Syst. Comput. 369, 189–200 (2015) 25. El Uahhabi, Z., El Bakkali, H.: Calculating and evaluating trustworthiness of certification authority. Int. J. Commun. Networks Inf. Secur. 8(3), 136–146 (2016)

Network Packet Analysis in Real Time Traffic and Study of Snort IDS During the Variants of DoS Attacks
Nilesh Kunhare, Ritu Tiwari, and Joydip Dhar
Atal Bihari Vajpayee Indian Institute of Information Technology and Management, Gwalior, India
[email protected]

Abstract. This paper discusses the functionality of port scanning techniques used for accessing the IP addresses of vulnerable hosts present in a network. These techniques are usually performed for network monitoring and troubleshooting purposes. On the other hand, attackers use this utility to find vulnerabilities in the network, gain unauthorized access, and penetrate the network system. The primary step taken by an attacker to launch a targeted cyber-attack is port scanning. Nowadays, port scanning has become highly dispersed, sophisticated, compound, and stealthy; hence, detection is hard to achieve. We also discuss the working mechanism of the Snort intrusion detection system (IDS): its architecture, installation, configuration of rule files, and detection techniques. In our experiment, we installed and configured the Snort IDS with rule files on one machine and monitored the traffic of the other machines connected to the network. This research work demonstrates the implementation of denial of service (DoS) attack variants in real-time network traffic and the ramifications of the attacks using the Snort IDS tool.
Keywords: Port scanning · DoS attack · Snort · Detection techniques · Network forensics

1 Introduction

Information security has become a significant research area due to the expansion of computation power, the enormous speed of data transfer, and the expansion of computer networks. The exchange and sharing of information through the internet can result in a compromise of the data because of the presence of malicious activities and threats over the network [1]. A secured system should possess confidentiality, integrity, and availability [2].
1. Confidentiality: Information should not be accessible to unauthorized users. It includes encryption, security tokens, and biometric verification methods applied to the data to ensure confidentiality.



2. Integrity: The data should not be altered or modified by unauthorized users during transmission. Integrity ensures the consistency, trustworthiness, and accuracy of the data. Checksums and access controls are used for the verification of integrity.
3. Availability: The information must be available to the authorized users for access.
The information is transmitted through the network in the form of data packets; therefore, data packets are considered the basic entities in network communication systems. The information is transmitted from source to destination in the form of streamlined flows, including duplicates of the data packets [3]. Each data packet encapsulates a data segment together with information such as the protocol used during the transmission, the physical address of the destination, the time to live, and other relevant fields. Hence, the security of a network depends on the surveillance of the network packets. Hackers compromise vulnerable hosts through information gathering, which includes port scanning of the victim's machine [4]. Port scanning is defined as identifying the services available on the target hosts or network by observing the responses to connection attempts. A number of 'ports' or 'doors' are available through which intruders can gain unauthorized access to the resources of the network. The hackers use port scanning as the first step to look for the number of

Fig. 1. Steps to perform attack



ports accessible on the target network, detect vulnerable services to exploit in the network, analyze the network traffic, and collect credentials. Packet sniffing is the practice of examining and observing the contents of data segments and their packets; the log details collected through this process are termed packet logging. Packet capturing operates in promiscuous mode, which means that the entire traffic passing through the Network Interface Card (NIC) is read, whether or not it is destined for that machine. This paper illustrates the process of capturing the packets that pass through the network and also covers the installation of Snort IDS, the configuration of rule files, and its observations on malicious and normal traffic in the network. Figure 1 represents the steps pursued by an attacker to perform an attack on the system.

2 Related Work

Many organizations have deployed NIDS for cyber-security to prevent malicious activities at different layers of networks [4–7]. Snort is a network-based IDS used for detecting various intrusions and attacks. The authors discussed the protocol standards and inspection mechanisms, including signature matching, application control, and anomaly detection. Furthermore, analyses of application-level vulnerabilities, including cross-site scripting and SQL injection attacks, have been performed [8–12]. Various pattern matching algorithms [13] are used for the configuration, installation, and rule design of this tool. The biggest drawback of Snort is that it drops packets when handling high-speed traffic, a gigantic quantity of traffic, or massive packet sizes. The performance of Snort was analyzed on different processors (Celeron, Pentium) with different operating systems (Windows 7, XP, and Vista) at a network speed of 100 Mbps in [14]. A comparison of Snort and Suricata was presented at a 10 Gbps network speed; the authors concluded that Snort is good in detection accuracy and Suricata can handle high-speed networks [15]. The rule sets of both IDSs are common; the difference lies in the design architecture: Snort is single-threaded, whereas Suricata is multi-threaded. The experimental evaluation states that Suricata requires higher processing power compared to Snort. The paper also drew conclusions about the detection accuracy of both IDSs in a real-time environment [16]. Distinct types of port scanning approaches, categorized by type, condition, and detection mechanism, are described using various datasets in [17]. An extensive survey of DDoS flooding attacks and their detection and prevention mechanisms, along with countermeasures, is given in [18]. A semi-supervised approach using a Snort-based statistical algorithm was proposed on the KDD99 dataset to improve the detection rate in [19].

3 Research Gap

The Snort IDS was configured and implemented on a Linux-based system, and the performance of the system was analyzed using data mining techniques [19]. The alerts were generated through the Basic Analysis and Security Engine. The system was configured with the



WinPcap packet capturing tool. However, the Snort rules were not effective. Khamphakdee et al. [20] analyzed the MIT-DARPA99 dataset to improve the detection of network probe attacks during weeks four and five. The Wireshark tool was used for the analysis of the dataset, and the detection performance for network probe attacks was correlated with the detection scoring truth. The analysis of the data took additional time to generate the patterns. Several pattern matching algorithms were compared between malicious traffic and the standard dataset in [21]; the performance criteria were CPU utilization, throughput, and memory utilization. The algorithms do not give satisfactory results when applied to the dataset; however, they perform well on malicious traffic. The authors in [22] performed stealth port scanning in the network and designed Snort rules to identify the attacks and trigger alerts. However, an evaluation of the performance of Snort when the number of systems in the network increases is missing.

4 Research Methodology

In this paper, we have installed Snort on one machine, M1, and monitored the network packets passing over the other machines. Figure 2 represents the proposed methodology. We performed the variants of the DoS attack, including the ping of death attack, the TCP-SYN flood attack, and the UDP flood attack, in the network lab and observed the network patterns in Snort. The details of the network traffic are captured in the Snort log file and also captured by the Wireshark tool. The algorithm used for capturing and filtering packets based on the protocol is given below:
Variables:
  Pkt_i(flag) = return the flags of packet i
  Pkt_i(prot) = return the protocol of packet i
Inputs: arriving packets
Outputs: Correlated PacketVector
Step 1. Initialize: Correlated PacketVector [pv1, pv2, ..., pvn] → [0, 0, ..., 0]
Step 2. Process the arriving packets
Step 3. if (Pkt_i(prot) equals TCP) go to Step 4, else go to Step 2
Step 4. if (Pkt_i(flag) equals ACK or RST) go to Step 2, else go to Step 5
Step 5. Correlated PacketVector → Pkt_i  // summate the packet to the vector; go to Step 2
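A runnable sketch of this filtering step is shown below. It is our own illustration (not the authors' code) using scapy; it keeps only TCP packets whose flags are neither a plain ACK nor RST, which matches the algorithm's intent of retaining connection-opening packets for later analysis.

```python
from scapy.all import TCP, sniff

correlated_packet_vector = []   # Step 1: start with an empty vector

def filter_packet(pkt):
    """Steps 3-5: keep TCP packets that are not plain ACK or RST."""
    if not pkt.haslayer(TCP):           # Step 3: non-TCP packets are skipped
        return
    flags = pkt[TCP].flags
    if flags & 0x04 or flags == 0x10:   # Step 4: RST bit set, or flags are ACK only
        return
    correlated_packet_vector.append(pkt)  # Step 5: summate the packet to the vector

# Step 2: process arriving packets from the default interface (requires privileges).
sniff(prn=filter_packet, count=50)
print(f"retained {len(correlated_packet_vector)} packets for analysis")
```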

5 Types of Port Scanning

Many services, including TCP and UDP services, are running on a machine when it connects to the network. The TCP and UDP ports are used for communication between machines. There are a total of 65,536 ports available on a machine [23]. Attackers use these ports to gain access to the system. Table 1 shows the types of ports with their ranges.
5.1 Denial of Service Attack

These types of attacks are very harmful to legitimate users: the attackers try to block the services by sending excessive requests to the server or the network. These attacks intend to slow down the services and the bandwidth as well as the network. To implement these attacks, the sender sends millions of requests containing a large number of packets with invalid data, flooding the target system

Fig. 2. Research methodology



Table 1. Types of port scanning
Serial no. | Type of port       | Range of port
1          | Conventional ports | 0–1023
2          | Cataloged ports    | 1024–49151
3          | Private ports      | 49152–65535

in an attempt to slow down the network. The most intense form of this type of attack is the Distributed Denial of Service (DDoS) attack, which maliciously makes the services unavailable to the users. Ping of death, TCP SYN flood attacks, UDP floods, GET/POST floods, and fragmented packet attacks are the variants of these attacks [24,25]. DDoS is a form of attack where a single victim is targeted by multiple attackers (systems), causing a denial of service on the victim system. The target of a DDoS attack is to exhaust the availability of the service provider in an attempt to make the systems unavailable to legitimate users. DDoS attacks are divided into three categories:
1. Volume based attacks: The bandwidth of the network is saturated by sending a packet storm, and the magnitude is measured in bits per second.
2. Protocol attacks: This type of attack exhausts the resources of servers and communication devices such as load balancers, firewalls, routers, and switches, and is measured in packets per second.
3. Application layer attacks: The target of this type of attack is to crash the web server, and the magnitude is measured in requests per second.
5.2 Implementation of DoS Attacks

The working mechanisms of the DoS attack variants are discussed and implemented below.
TCP-SYN Flood Attack: The attacker takes advantage of the three-way handshake to make the victim machine allocate memory that is never used, so that legitimate users are denied access. Whenever a TCP connection is established, a session needs to be created by the host for communication; this is the starting phase of the three-way handshake. The SYN (synchronize sequence number) flag is set to 1 whenever the source node sends TCP packets to the destination. The packet contains the source IP address and associated port number, the destination IP address and associated port number, and many other fields required in a TCP packet. The destination node replies with the SYN and ACK flags of the TCP connection set to 1. One more TCP packet is then dispatched by the source machine to the destination machine with the ACK flag set to 1. These steps complete the three-way handshake, and the transfer of data takes place after this. The TCP-SYN flood attack occurs when the sender never completes the last step of the communication. The following command is used to launch the TCP-SYN attack:



hping3 -S -p 80 --flood --rand-source 192.168.40.66
Here -S indicates that the SYN flag is set and -p specifies the destination port. The attack is launched from the machine 192.168.40.22 against the machine 192.168.40.66.
Ping of Death Attack: A large number of ping requests with the maximum packet size are sent to the target machine in order to keep the target system busy responding to the ICMP echo replies. The attacker deliberately sends IP packets larger than 65,536 bytes to the opponent. The command to perform the ping of death is: ping 192.168.40.66 -t -l 65500, where -t indicates that packets are sent to the destination until the program ends and -l is the size of the packet.
UDP Flood Attack: This type of attack is performed by the attacker by sending floods of UDP packets to the victim machine. The command for performing the UDP flood attack is: hping3 -2 -S -p 80 --flood 192.168.40.66.
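For reference, the scapy sketch below is an illustration we add here (it is not part of the original experiments): it builds single packets of the same shape as the commands above, namely a TCP SYN to port 80, an oversized ICMP echo that must be fragmented, and a UDP datagram. The addresses mirror the lab machines mentioned in the text, and the packets are only constructed and displayed, not sent.

```python
from scapy.all import IP, TCP, UDP, ICMP, Raw, fragment

victim = "192.168.40.66"    # target machine from the lab setup
attacker = "192.168.40.22"  # machine generating the traffic

# TCP-SYN flood packet: SYN flag set, destination port 80 (as in hping3 -S -p 80).
syn_pkt = IP(src=attacker, dst=victim) / TCP(dport=80, flags="S")

# Ping-of-death style packet: ICMP echo with an oversized payload,
# which has to be split into IP fragments before transmission.
pod_pkt = IP(dst=victim) / ICMP() / Raw(load=b"X" * 65500)
pod_frags = fragment(pod_pkt)

# UDP flood packet: plain UDP datagram to port 80 (as in hping3 -2 -p 80).
udp_pkt = IP(src=attacker, dst=victim) / UDP(dport=80) / Raw(load=b"Y" * 512)

for p in (syn_pkt, pod_frags[0], udp_pkt):
    p.show2()  # display the computed packet fields instead of sending them
```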

Fig. 3. TCP-SYN flood attack with random source

Fig. 4. Ping of death attack

Figures 3, 4, and 5 represent the TCP-SYN flood attack, the ping of death attack, and the UDP flood attack, respectively, captured by the Wireshark tool, which is an open-source packet sniffing and analysis tool for networks [26]. Figure 6 represents the observed network bandwidth during the attacks.



Fig. 5. UDP flood attack

Fig. 6. Increase in consumption of network bandwidth during the DoS attacks

6 Snort and Its Components

The Snort IDS is configured and deployed in the network to capture the packets passing through it. Snort is an open-source network intrusion detection system developed by Martin Roesch; it has the capability to capture real-time network traffic, notify of any intrusions, and alert the administrator. The Snort IDS can perform protocol analysis and detect various types of attacks, including buffer overflows, denial of service attacks, port scans, OS fingerprinting, and many more probes. The Snort IDS can be configured in the following ways:
1. Packet Sniffer: In this mode, the incoming and outgoing packets passing across the network are captured by Snort, and all the details of the packets are displayed on a console.
2. Packet Logger: In this mode, the packet details are logged and captured in a text file.
3. Honeypot Monitor: Snort has the ability to deceive the malevolent party.
4. Network Intrusion Detection: Snort analyzes the network traffic based on the signature rules to detect intrusions and suspicious activities in the network.
The primary purpose of Snort is to analyze the incoming and outgoing packets passing across the network, drop packets that do not match the signature rules, and generate a report that includes information on packet drops, packet



analyses, the packets received, and other alerts, including attacks and intrusions in the network. The architecture of Snort is represented in Fig. 7. The major components of Snort are described as follows:
Packet Decoder: The task of the packet decoder is to capture the packets passing across the network from the different network interfaces and prepare them for preprocessing.
Preprocessor: The arrangement and modification of the packets take place in the preprocessing phase before they are dispatched to the detection engine for analysis.
Detection Engine: The function of this engine is to identify intrusions based on predefined definitions of the attacks. The packets are compared with the signature rules; if a match is found, appropriate actions are suggested to discard or drop the packets.
Log and Alert System: The log records are generated based on the results of the detection engine in the form of a text file or TCP-dump format. The alerts and logs can be modified using the -l command.
Output Modules: This module includes functions like log report generation, database logging (MySQL), and reporting to the server log.
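To make the rule-matching idea of the detection engine concrete, the toy sketch below (our illustration, not Snort's actual implementation) checks captured packets against a small list of signature-like rules similar to those shown later in Sect. 7 and prints an alert message on the first match.

```python
from scapy.all import IP, TCP, UDP, ICMP, sniff

# Simplified "rules": (protocol layer, required TCP flags or None, alert message).
RULES = [
    (ICMP, None, "ICMP packet alert"),
    (TCP,  "S",  "SYN Messages"),
    (TCP,  None, "TCP packet alert"),
    (UDP,  None, "UDP packet alert"),
]

def detection_engine(pkt):
    """Compare one packet against the rule list and report the first match."""
    if not pkt.haslayer(IP):
        return
    for proto, flags, msg in RULES:
        if pkt.haslayer(proto):
            if flags is not None and pkt[TCP].flags != flags:
                continue                      # flag condition not satisfied
            print(f"[ALERT] {msg}: {pkt[IP].src} -> {pkt[IP].dst}")
            break

# Sniff a handful of packets on the default interface and run the toy engine.
sniff(prn=detection_engine, count=20)
```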

Fig. 7. Snort architecture

6.1 Experimental Setup and Demonstration of Snort IDS

Snort is an open source network intrusion detection system which can be deployed on any platform (Windows and Linux). To set up the Snort IDS, we



need the WinPcap, Nmap, and Wireshark tools to be installed in the system. Snort can be operated in the following modes:
1. Snort as sniffer mode: This form of Snort generates a summary of the network traffic captured during packet transmission through the network. The network administrator can use this command at the command line prompt with the following syntax: # snort -v -d -e. Here -v displays the packet headers on standard output, -d displays the packet payload information, including UDP, TCP and ICMP packets, and -e displays the link layer information.
2. Snort as packet logger mode: Once the packets are captured, the next step is to make a log record of these packets, which is performed by the packet logger using the -l option in the command. The log details are stored in the /snort/log directory by default: # snort -l. Logging the records of the subnet IP 192.168.68.121 can be achieved with the following syntax: # snort -vde -l C:\snort\log -h 192.168.68.121.
3. Snort as network intrusion detection mode: In this form, Snort does not capture a log file; instead, it performs detection based on the signature definitions of the rules and generates an alert for any match found in the network. The command to start Snort in NIDS mode is: # snort -c C:\Snort\etc\snort.conf
We observed the performance of Snort for TCP, UDP, and ICMP packets. Snort triggers an alert whenever any TCP packet passes across the network; alerts are also generated for ICMP and UDP packets. We have examined the collection of network packets passing through the network lab and observed the behaviour of the Snort IDS: whenever a suspicious activity occurs, an alert is generated based on the signature definitions in the rules files. The network topology represents the number of machines used for implementing the variants of the DDoS attacks.

7 Results and Analysis

Port scanning is performed in the network from any machine, and the Snort IDS is installed on machine M1 to capture the traffic, as represented in Fig. 8. The malicious traffic is launched from machine 2 towards the other systems, and all the TCP, UDP, ICMP and other supported protocol packets are captured in the log records. Snort generates an alert when any malicious traffic passes through the network, based on the definitions of the rules specified in the rule file. Some of the rule definitions configured are mentioned below.
Rule 1: alert icmp any any -> any any (msg: "ICMP packet alert"; sid:1000001;).


Table 2. Packets I/O totals
Received:     25056729
Analyzed:      1786013  (7.128%)
Dropped:      23270716  (48.152%)
Filtered:            0  (0.000%)
Outstanding:  23270716  (92.872%)
Injected:            0  (0%)

Fig. 8. Network of the lab

Rule 2: alert tcp any any -> any any (msg: "TCP packet alert"; sid:1000002;).
Rule 3: alert udp any any -> any any (msg: "UDP packet alert"; sid:1000003;).
Rule 4: alert udp any any -> any any (msg: "FTP File access alert"; sid:1000004;).
Rule 5: alert tcp any any -> any any (msg: "SYN Messages"; flags: S; sid:1000005;).
Rule 6: alert tcp any any -> any any (msg: "Scan Attack"; flow: to_server, not_established; threshold: type threshold, track by_src, count 15, seconds 30; flags: S; sid:1000006;).
The rule file can be configured based on these definitions, and the system triggers alerts whenever a packet matches the schema. Every packet transmitted through the network is compared with the rule sets; if any match is found, the alert is stored in the log file. The parameters of the log file include a timestamp, the alert message, the source IP, the destination IP, the source port, and the destination port. The main objective of creating the network lab with the Snort IDS is to collect the data packets



Table 3. Details of intrusion with corresponding machines
SN | Alert | Src IP        | Dst IP         | S Port | D Port | Intrusion
1  | TCP   | 192.168.40.22 | 192.168.40.66  | 445    | 3389   | Y
2  | TCP   | 192.168.40.22 | 192.168.40.66  | 80     | 139    | N
3  | FTP   | 192.168.40.22 | 192.168.40.45  | 1025   | 135    | Y
4  | UDP   | 192.168.40.22 | 192.168.40.45  | 1028   | 1029   | Y
5  | FTP   | 192.168.40.22 | 192.168.40.144 | 1026   | 1030   | Y
6  | UDP   | 192.168.40.22 | 192.168.40.118 | 3107   | 3106   | N
7  | UDP   | 192.168.40.22 | 192.168.40.25  | 500    | 138    | N
8  | ICMP  | 192.168.40.22 | 192.168.40.66  | 85401  | 3094   | Y
9  | SYN   | 192.168.40.22 | 192.168.40.45  | 1027   | 21     | Y

that pass across the network, which include malicious and normal traffic packets of TCP, UDP, ICMP and other relevant formats. The malicious traffic is generated from machine 2, and Snort triggers alerts for this activity. Table 2 presents the statistics of the packets captured by Snort, and Table 3 summarizes the corresponding intrusion details.
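As an illustration of how the log-file fields listed above (timestamp, alert message, source/destination IP and ports) can be collected into a table such as Table 3, the sketch below parses lines written in Snort's "fast" alert format; the sample pattern and file name are assumptions for the example, not output from the actual experiment.

```python
import re

# Fast-alert lines look roughly like:
# 08/28-12:01:33.456789  [**] [1:1000002:0] TCP packet alert [**] [Priority: 0] {TCP} 192.168.40.22:80 -> 192.168.40.66:139
ALERT_RE = re.compile(
    r"(?P<ts>\S+)\s+\[\*\*\]\s+\[\d+:\d+:\d+\]\s+(?P<msg>.*?)\s+\[\*\*\].*?"
    r"\{(?P<proto>\w+)\}\s+(?P<src>[\d.]+):?(?P<sport>\d+)?\s+->\s+(?P<dst>[\d.]+):?(?P<dport>\d+)?"
)

def parse_alert_file(path):
    """Extract (timestamp, message, src, sport, dst, dport) tuples from a fast-alert log."""
    rows = []
    with open(path) as fh:
        for line in fh:
            m = ALERT_RE.search(line)
            if m:
                rows.append((m["ts"], m["msg"], m["src"], m["sport"], m["dst"], m["dport"]))
    return rows

# Hypothetical log path; adjust it to the actual Snort log directory.
for row in parse_alert_file("alert.fast"):
    print(row)
```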

8 Conclusion and Future Work

In this paper, we have discussed the port scanning techniques used by attackers in order to access information about the machines connected to the network. This paper demonstrates the denial of service attack and its variants, and simulates DoS attacks in a real-time environment, including the TCP-SYN flood attack and the ping of death attack. We also discussed the architecture of the Snort IDS and the installation and configuration of rule sets for the detection of intrusions in the network. Malicious activities and normal traffic in real-time systems are captured by the Snort IDS; alerts are triggered based on the signature definitions and stored in the log file, which can be used as a dataset containing information on TCP, UDP, ICMP and other relevant packet formats, including the alerts for suspicious activities. Future work includes the categorization of machine learning algorithms in the Snort IDS to identify the detection rate and precision of the system.

References 1. Umer, M.F., Sher, M., Bi, Y.: Flow-based intrusion detection: techniques and challenges. Comput. Secur. 70, 238–254 (2017) 2. William, S.: Cryptography and Network Security: Principles and Practice, pp. 23– 50. Prentice-Hall, Inc., New York (1999) 3. Stallings, W.: Network Security Essentials: Applications and Standards, 4edn. Pearson Education India, New Delhi (2000)



4. Inayat, Z., Gani, A., Anuar, N.B., Khan, M.K., Anwar, S.: Intrusion response systems: foundations, design, and challenges. J. Netw. Comput. Appl. 62, 53–74 (2016) 5. Guillen, E., Padilla, D., Colorado, Y.: Weaknesses and strengths analysis over network-based intrusion detection and prevention systems. In: IEEE LatinAmerican Conference on Communications, LATINCOM 2009, pp. 1–5. IEEE (2009) 6. Schaelicke, L., Slabach, T., Moore, B., Freeland, C.: Characterizing the performance of network intrusion detection sensors. In: International Workshop on Recent Advances in Intrusion Detection, pp. 155–172. Springer (2003) 7. Hoque, N., Bhuyan, M.H., Baishya, R.C., Bhattacharyya, D.K., Kalita, J.K.: Network attacks: taxonomy, tools and systems. J. Netw. Comput. Appl. 40, 307–324 (2014) 8. Baker, A.R., Esler, J.: Snort intrusion detection and prevention toolkit, vol. 1. Andrew Williams, Norwich (2007) 9. Bul’ajoul, W., James, A., Pannu, M.: Improving network intrusion detection system performance through quality of service configuration and parallel technology. J. Comput. Syst. Sci. 81(6), 981–999 (2015) 10. Salah, K., Kahtani, A.: Performance evaluation comparison of snort NIDS under linux and windows server. J. Netw. Comput. Appl. 33(1), 6–15 (2010) 11. Meng, Y., Kwok, L.-F.: Adaptive blacklist-based packet filter with a statistic-based approach in network intrusion detection. J. Netw. Comput. Appl. 39, 83–92 (2014) 12. Kim, I., Oh, D., Yoon, M.K., Yi, K., Ro, W.W.: A distributed signature detection method for detecting intrusions in sensor systems. Sensors 13(4), 3998–4016 (2013) 13. Aho, A.V., Corasick, M.J.: Efficient string matching: an aid to bibliographic search. Commun. ACM 18(6), 333–340 (1975) 14. Bulajoul, W., James, A., Pannu, M.: Network intrusion detection systems in highspeed traffic in computer networks. In: 2013 IEEE 10th International Conference on e-Business Engineering (ICEBE), pp. 168–175. IEEE (2013) 15. Shah, S.A.R., Issac, B.: Performance comparison of intrusion detection systems and application of machine learning to snort system. Future Gener. Comput. Syst. 80, 157–170 (2018) 16. Albin, E., Rowe, N.C.: A realistic experimental comparison of the suricata and snort intrusion-detection systems. In: 2012 26th International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 122–127. IEEE (2012) 17. Bhuyan, M.H., Bhattacharyya, D.K., Kalita, J.K.: Surveying port scans and their detection methodologies. Comput. J. 54(10), 1565–1581 (2011) 18. Zargar, S.T., Joshi, J., Tipper, D.: A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Commun. Surv. Tutorials 15(4), 2046–2069 (2013) 19. Nadiammai, G., Hemalatha, M.: Handling intrusion detection system using snort based statistical algorithm and semi-supervised approach. Res. J. Appl. Sci. Eng. Technol. 6(16), 2914–2922 (2013) 20. Khamphakdee, N., Benjamas, N., Saiyod, S.: Improving intrusion detection system based on snort rules for network probe attack detection. In: 2014 2nd International Conference on Information and Communication Technology (ICoICT), pp. 69–74. IEEE (2014) 21. Mahajan, A., Gupta, A., Sharma, L.S.: Performance evaluation of different pattern matching algorithms of snort. Int. J. Adv. Netw. Appl. 10(2), 3776–3781 (2018)



22. Singh, R.R., Tomar, D.S.: Network forensics: detection and analysis of stealth port scanning attack. Scanning 4, 8 (2015) 23. Bhuyan, M.H., Bhattacharyya, D.K., Kalita, J.K.: Network anomaly detection: methods, systems and tools. IEEE Commun. Surv. Tutorials 16(1), 303–336 (2013) 24. Bhuyan, M.H., Bhattacharyya, D., Kalita, J.K.: An empirical evaluation of information metrics for low-rate and high-rate ddos attack detection. Pattern Recogn. Lett. 51, 1–7 (2015) 25. Liao, H.-J., Lin, C.-H.R., Lin, Y.-C., Tung, K.-Y.: Intrusion detection system: a comprehensive review. J. Netw. Comput. Appl. 36(1), 16–24 (2013) 26. Orebaugh, A., Ramirez, G., Beale, J.: Wireshark & Ethereal Network Protocol Analyzer Toolkit. Elsevier (2006)

Securing Trustworthy Evidences for Robust Forensic Cloud in Spite of Multi-stakeholder Collusion Problem
Sagar Rane, Sanjeev Wagh, and Arati Dixit

Department of Technology, Savitribai Phule Pune University, Pune, MH, India [email protected]
Information Technology, Government College of Engineering, Karad, MH, India [email protected]
Applied Research Associates Inc. and NC State University, Raleigh, USA [email protected]
Abstract. Many organizations are widely using the cloud for their day-to-day business activities, but several attackers and malicious users are targeting the cloud for their personal benefit. It is very important to securely collect and preserve admissible evidence of the various activities that happen in the cloud, in spite of the multi-stakeholder collusion problem. Logs are one of the most vital elements for tracing malicious activities in a cloud computing environment. Thus, forensic investigations involving logs face the grave challenge of making sure that the logs being investigated are consistent and not tampered with. A lot of research has been performed in this field; however, with the advent of blockchain and the Interplanetary File System (IPFS), new innovative approaches can be applied to secure trustworthy evidence in the cloud. In this paper, we use blockchain and IPFS to build a system which stores the logs of cloud users' activities and assures the trustworthiness and recoverability of such logs to aid in forensic investigation. The integrity of the trustworthy log evidence is assured with the help of blockchain. Using the versioning nature of IPFS, our system can track modifications of log files. In earlier work, systems could attest whether a log had been tampered with or not, but none provided a mechanism to recover the metadata of tampered logs to their original state. With the help of IPFS, our proposed technique extends the existing work by providing the original logs for tampered logs.
Keywords: Cloud forensics · Forensic investigation · Cloud security · Blockchain · Interplanetary File System (IPFS)

1 Introduction

Cloud computing is a widespread technology nowadays due to its tremendous cost benefits over traditional storage services. Worldwide, the cloud-based data storage market is growing due to the rising adoption of the cloud in small and medium scale enterprises. According to NASSCOM, the cloud computing market of India will cross 7



billion dollars by 2022 [15,16]. However, shifting from traditional storage to the cloud is challenging due to various data security issues [1,10,14,19,26]. Some malicious users can use the cloud to store illegal content such as stolen IPR documents, pornographic material, or bootleg documents, or they can target other cloud users by hosting DDoS attacks, SQL injection attacks, SPAM email servers, or side-channel attacks on the cloud [24]. After completing the alleged activities, they can remove the traces and remain clean [4,7]. Thus, there is a tremendous need for forensic investigations in cloud computing. This new branch of forensics has become known as cloud forensics. According to the 2017 Internet Crime Report of the Federal Bureau of Investigation (FBI), more than 300,000 digital crime complaints were registered, accounting for a loss of 1,418 million dollars in 2017 alone [15]. The size of digital forensic cases is growing day by day [25]. Currently, a number of traditional forensic techniques are available, but they need to be updated to be applicable and useful in the cloud environment [12]. Unfortunately, cloud virtual machines (VMs) may contain volatile data and may be located outside the jurisdictional area, unlike traditional data storage, which makes forensic investigations more difficult from a technical and legal point of view [6,7]. The activity logs of Cloud Service Consumers (CSCs) can tell what events happened in the cloud [12]. Thus, logs are a vital element for accusing a suspect. Presently, after any malicious activity, a cloud forensic investigator (CFI) has to depend on and trust the logs provided by cloud service providers (CSPs). Many attackers first target the logging services to remove the traces of malicious activities [13]. Thus, the security of trustworthy log evidence is a prime concern while performing forensic investigations in the cloud. Any attacker or malicious user can use the cloud to host distributed DoS attacks on other applications running on the cloud and then terminate their VMs or try to alter the logs after the attack to remove the traces of events. The following is the specific multi-stakeholder collusion problem that we aim to solve.
Hypothesis: Alice is the owner of a prosperous multinational company who does her business through a popular online food website. Bob, an old entrant in this field, hired some virtual machines from a cloud service provider and launched a distributed DoS attack on Alice's food website using these hired virtual machines. Consequently, Alice's food website was inactive for some time, which caused a huge financial loss to Alice's company. Therefore, Alice engaged a cloud forensic investigator to find the culprit. While investigating the events that happened on Alice's web server machine, the cloud forensic investigator found that Alice's online food website server was targeted using flooding tools operated from the same cloud service provider. Finally, the cloud forensic investigator ordered the cloud service provider to make the activity log files of that specific attack period available. The probable results of this situation are: (R1) Bob and the CSP conspired together to modify the activity log files; thus, the CFI has no method to check the integrity of the log files, and Bob would remain untraceable. (R2) Bob stopped his hired virtual machines and removed the attack footprints.



Thus, the cloud service provider will not be able to deliver logs to the cloud forensic investigator. (R3) Bob could claim that the CFI conspired with the cloud service provider and modified the log files to frame him.

Fig. 1. A cloud multi-stakeholder collusion problem

In Fig. 1 above, initially only the CSP knows the secret of the proofs, as the CSP has custody of all data. But, as per our hypothesis, the CSP can collude with a malicious attacker or with the CFI to share the secrets.
Contributions: The contributions of this paper are: (1) We address the cloud multi-stakeholder collusion (MSC) problem and develop a log integrity preservation mechanism in spite of the MSC problem using blockchain and IPFS. (2) We develop a cloud logging model with non-repudiation (digital signatures) and provable integrity verification (blockchain). (3) We propose a method employing IPFS to recover the metadata of tampered logs to their original state in order to comply with acts like HIPAA and SOX.

2 Background of Forensics

In this section, we present a brief background of digital forensics along with cloud forensics and cloud log forensics. We also describe blockchain and IPFS and their applicability to our problem.
2.1 Digital Forensics

According to NIST, digital forensics is the process of uncovering and interpreting digital data using identification, collection, examination, and analysis steps. While doing this, the goal is to preserve the data in such a way that it will be useful for reconstructing past events. Data residing on computers, such as images, audio, video, and other files, are potential items of digital evidence. This digital data can act as evidence while investigating most crimes. To acquire, store, and analyze digital data, strong methods and techniques are needed [6,22].

2.2 Cloud Forensics

Cloud forensics is an emerging area at the intersection of cloud computing and digital forensics. It is a kind of application of digital forensics in which only the environment changes, but that change triggers many challenges for doing forensics in the cloud environment. The ultimate aim is the same: the reconstruction and security of past events [12,24,25].
2.3 Cloud Log Forensics

Cloud log forensics is quite difficult because of the accessibility attributes of cloud logs. Accessibility refers to the issues of accessing cloud logs and maintaining the trustworthiness of cloud log files. Until now, the various stakeholders of a cloud system have depended on the CSP to get the various logs generated in the system and, as mentioned in our hypothesis, there is no guarantee that the CSP will provide valid logs. Due to the widespread adoption of the cloud, cloud attacks have also increased, and there is a need for log forensics in the cloud [12,24].
2.4 Blockchain and IPFS

In simple terms, a blockchain is a timestamped chain of immutable data records. It is completely decentralized in nature and has no single point of control. Every block of this chain is secured and linked to the others using cryptographic primitives. It is a shared and immutable series of blocks. In any blockchain-based application, each and every party is accountable for its day-to-day actions, because the information in the blockchain is completely transparent and verifiable [9]. IPFS is a file system with a versioning nature. Similar to Git, it can store files and track versions of those files over time. It is a protocol used to store and share files using content-addressing and hashing in a distributed system [2].
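To illustrate the content-addressing idea, the short sketch below is our own example (it assumes a locally running IPFS daemon and the ipfshttpclient Python package, neither of which is prescribed by the paper): it adds a log file to IPFS and shows that the returned hash changes whenever the file content changes, which is what makes version tracking of logs possible.

```python
import ipfshttpclient

# Assumes a local IPFS daemon is running on the default API port.
client = ipfshttpclient.connect()

# Write and add a first version of a hypothetical activity log.
with open("activity.log", "w") as fh:
    fh.write("e1: read object o1 at t1\n")
cid_v1 = client.add("activity.log")["Hash"]   # CID derived from the file bytes

# Modify the log and add it again: the content-addressed CID changes,
# so the earlier version stays retrievable by its own hash.
with open("activity.log", "a") as fh:
    fh.write("e2: write object o2 at t2\n")
cid_v2 = client.add("activity.log")["Hash"]

print(cid_v1 != cid_v2)             # True: the content changed
print(client.cat(cid_v1).decode())  # fetch the untouched first version
```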

3 Related Work

A lot of research effort has been devoted to the various dimensions (technical, legal, and architectural) of cloud forensics by many researchers [7]. Marty explained how to enable logging on possible sources, how to set up secure log transport, and how to tune logging configurations. He also gave guidelines for what, when, and how to log; using these guidelines, logging tasks can be accomplished successfully. A reliable and secure transport layer is necessary for cloud application logging [13]. To procure items of potential evidence, trust between the different cloud layers is a must. Tools and techniques for cloud forensics have been reviewed and assessed [6]. Dykstra developed FROST, a forensic-enabled tool [6,7], to accumulate API and firewall logs. Along the same line, read-only application programming interfaces have been proposed to deliver different logs to CSCs [3]. DDoS attack detection using syslog on Eucalyptus is shown in [21]. Monitoring the internal and external



behavior of the Eucalyptus software with the help of bandwidth and processor usage is possible [21]. A management layer on top of IaaS is very important; its task is to acquire logs from network blocks, VFS, and system call interfaces [17]. However, the management layer itself will be a point of vulnerability for attackers [20,25]. Many researchers have also proposed the Trusted Platform Module (TPM) to perform digital forensics when the CSP is a trusted stakeholder in cloud computing [17]. These research works specifically focused on the efficient availability of logs without taking into consideration the problem of multi-stakeholder collusion. Some of them focused on the analysis of logs for the detection of several attacks. Delegating log management instead of creating log management for the cloud is a cost-effective way; an anonymous network like TOR can be used for simulations [18]. Nowadays, organizations can afford the charges of secure logging services. Although a lot of research is going on in this area, securing trustworthy evidence in the cloud in spite of multi-stakeholder collusion and making verifiable proofs available to all is attracting attention [25]. The cloud is a collection of complex virtual networks, and that is the reason it is vulnerable to many incidents [19,26]. Integrity and confidentiality preservation mechanisms in the cloud are still an under-explored research area [11,13]. To maintain the consistency of virtual machine events, happened-before relationships are used [20], but they are not useful in every forensic investigation. Zawoad and Hasan developed SecLaaS, an integrity preservation and verification mechanism for cloud computing using a few probabilistic data structures, but with some false positives [23,24]. Thus, there is a need for forensic-enabled security techniques and methods in cloud computing.

4 Threat Model and Security Properties
4.1 Summary of Notations

Notation                    | Meaning
E = {e1, e2, e3, ..., en}   | Events
O = {o1, o2, o3, ..., on}   | Objects
L = {l1, l2, l3, ..., ln}   | Logs
A = {r, w}                  | Access attributes
RA = {a, m, d}              | Add, modify, delete
R = {E × RA × O × X}        | Request by Event to Object
I = {1, 2, 3, ..., i}       | Indices
D = {yes, no, error, ?}     | Decision
X → R^I                     | Request sequence
Y → D^I                     | Decision sequence


4.2 Threat Model

• Confidentiality violation: the confidentiality of cloud users' logs is violated when attackers or unauthorized stakeholders get access to them.
• Integrity violation: the integrity of cloud users' logs is violated when a dishonest stakeholder of the cloud system, by itself or by colluding with other parties, tampers with the logs.
• Availability violation: the availability of cloud users' logs is violated when the CFI stops getting the useful logs for investigations.
• Repudiation by CSP: a dishonest CSP can deny the proof of logs.
• Repudiation by CSC: a CSC can claim that the logs are not its own but belong to other users, given the co-mingled nature of cloud data.

4.3 Security Properties

• Correctness (C1): this property maintains the quality of being free from error, so that the correctness of evidence is accepted in forensic investigations.
• Tamper Resistance (TR): this property is resistance to tampering with logs or any trustworthy evidence in the cloud.
• Verifiability (V): each and every secured piece of evidence must be verifiable with good accuracy and performance.
• Confidentiality (C2): only authorized users of the cloud computing system can access the cloud data, and no one else.
• Admissibility (A): potential items of evidence must be secured such that they are admissible in a court of law for any forensic investigation.

5 Proposed Technique

5.1 System Details

Fig. 2. Proposed technique to secure trustworthy evidences in cloud


Figure 2 shows cloud consumers using the cloud for their day-to-day business activities. For every activity in the cloud, the generated activity log is captured. After capture, these logs are collected, encrypted and stored on the IPFS network to remove the storage burden from the blockchain. At the end of every day, the hash of all the logs is stored on the blockchain, where it can be verified at any time. We assume that no cloud stakeholder (CSP, CSC, CFI) is trusted. We also assume that all stakeholders set up and distribute their encryption/decryption keys correctly. PK_CSP and SK_CSP are the public and private keys of the CSP, respectively; PK_CSC and SK_CSC are the public and private keys of the CSC; PK_CFI and SK_CFI are the public and private keys of the CFI. H(M) is a collision-resistant one-way hash of message M; Encrypt_PK(M) is the encryption of message M using public key PK; Sign_SK(M) is the signature of message M using private key SK; M1 ∥ M2 denotes consistency between two messages/proofs.

5.2 Proof Creation

Cloud service consumers CSC = {CSC1, CSC2, CSC3, ..., CSCn} send events E = {e1, e2, e3, ..., en} as requests R = {E × RA × O × X}, i.e. X → R^I, with the help of request attributes RA = {a, m, d} and access attributes A = {r, w}, to data file objects O = {o1, o2, o3, ..., on} in the cloud in order to perform their day-to-day business activities, which generates logs L = {l1, l2, l3, ..., ln}.

Step 1: Log File Creation. Ei → Li ⟨time, ...⟩: the first event in the cloud system gets recorded into a log, say L1 (Ei → Li with i = 1 for the first event of the day, and Ei → Li+1 where i is the index of the previous log). Each time, the CSP assigns an index i to an event E and appends it to the log file. Thus the log sequencer L.Seq{Li, Li+1, ..., Li+n} → SP maintains a sequence and generates a proof of sequence between Li and Li+1, and so on, in the log file LF, where i ≤ i + 1.

Step 2: Partial Proof Generation. In this step, proofs of ten events are created in one file: LF.insert{E0, E1, ..., E10} → PP1, where PP1 is a partial proof of these events; along the same lines, LF.insert{E11, E12, ..., E20} → PP2 and LF.insert{E21, E22, ..., E30} → PP3 are generated.

Step 3: Partial Proof Encryption. Encrypt_CSC(PP1) → EP1, and along the same lines Encrypt_CSC(PP2) → EP2 and Encrypt_CSC(PP3) → EP3, where each EP (Encrypted Proof) is bound to its version number i and signed. For one epoch of an EP covering multiple events E = {E0, E1, ..., E10}, the CSP adds it to the IPFS network. Finally, we get {L10i+1, L10i+2, ..., L10i+10} → Pi; thus PP1 ∥ PP2, PP2 ∥ PP3, and so on.

Step 4: Add Encrypted Proofs to the IPFS Network. Finally, the versions of proofs P0, ..., Pi, Pi+1, Pi+2, ..., which are mutually consistent and encrypted, are added to the IPFS network. So Pi → E0...Ei, and a past proof of evidence is denoted as Pj → E0...Ej. For each proof, IPFS returns a hash that can be used later to access the same proof file. At the end of every day, the hash of all proofs is stored on the blockchain.
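A minimal sketch of this proof-creation flow is given below. It is not the authors' implementation: the batch size of ten follows the text, but the hash chaining, the stubbed encrypt_with_pk_csc() and ipfs_add() helpers, and the daily root construction are illustrative assumptions.

```python
# Hedged sketch: batch log events into partial proofs, "encrypt" and "add to IPFS"
# via placeholders, then derive a single daily hash for the blockchain.
import hashlib, json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def encrypt_with_pk_csc(data: bytes) -> bytes:
    # Placeholder for Encrypt_PK_CSC(.); a real system would use the consumer's public key.
    return data[::-1]

def ipfs_add(blob: bytes) -> str:
    # Placeholder for adding the encrypted proof to IPFS; returns a content hash.
    return h(blob)

logs = [f"event-{i}: consumer1 writes object{i % 3}" for i in range(1, 31)]

partial_proofs = []                       # PP1, PP2, PP3, ...
for start in range(0, len(logs), 10):     # ten events per partial proof
    batch = logs[start:start + 10]
    chain = ""
    for entry in batch:                   # sequence proof: each entry chained to the previous
        chain = h((chain + entry).encode())
    partial_proofs.append({"entries": batch, "seq_proof": chain})

ipfs_hashes = []
for i, pp in enumerate(partial_proofs, 1):
    ep = encrypt_with_pk_csc(json.dumps(pp).encode())   # EP_i, bound to version i
    ipfs_hashes.append(ipfs_add(ep))

daily_root = h("".join(ipfs_hashes).encode())            # hash stored on the blockchain
print(daily_root)
```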

5.3 Proof Verification

This is the second phase of our technique; it provides a mechanism to verify the integrity of items of potential and trustworthy evidence, as shown in Fig. 3.

Fig. 3. Proof integrity verification

Timestamp is the time of block creation; Prev Hash is the hash of the previous block, which binds a block to its parent; P Root is the top-level (Merkle root) hash of all the logs of that particular block; and Nonce is an arbitrary number used to add entropy to a block header conveniently without rebuilding the Merkle tree. In this work, we have proposed a two-step integrity verification mechanism: one step employing IPFS and the second using the blockchain. Owing to its versioning nature, IPFS generates a different file (with a different hash) for a tampered copy, while the hash value of the original file remains the same; thus, we can easily track the hash values of modified files. In the second step, the final hash P Root is offloaded to the blockchain. Any small change in the log files will result in a different hash, and thus we can take a decision D in terms of yes, no, error or something else, the meaning of decision D being whether the integrity of the proofs is preserved or not. We have denoted the sequence of decisions as Y → D^I. Ultimately, the subsequent hash values of the tree will change, and thus we can easily verify the integrity of the log files and their proofs.
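The role of P Root can be illustrated with a small Merkle-root sketch; the padding rule and helper names are assumptions, not the paper's exact construction.

```python
# Minimal Merkle-root sketch: the per-day proof hashes are leaves, P_Root is the
# top of the tree, and any change to a proof changes P_Root.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # assumed padding: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

proofs = [b"proof-1", b"proof-2", b"proof-3", b"proof-4"]
stored_root = merkle_root(proofs)          # P_Root offloaded to the blockchain

proofs[2] = b"proof-3-tampered"
print(merkle_root(proofs) == stored_root)  # False: verification detects the change
```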

6 System Setup, Security Analysis and Results

We used OpenStack to build the cloud computing platform, on an Intel i7 with 16 GB RAM, a 1 TB hard disk, the 64-bit Ubuntu 16.04 LTS operating system and VirtualBox version 5.2.18. The RSA and SHA-256 algorithms have been employed for encryption and signature generation, respectively. We also set up Hyperledger for the blockchain implementation. Table 1 shows the cloud multi-stakeholder collusion model, the different possible attacks that can be carried out on the system, and the security properties required for each of them. This analysis shows the importance of the security properties mentioned in our threat model. Our technique enables these security properties while preserving and verifying the integrity and confidentiality of trustworthy evidence in the cloud system, and thus the cloud system becomes more secure and auditable. Our results show the time required for integrity and signature verification.

Table 1. Cloud stakeholder collusion model and security requirement

Is honest? (CSP, CSC, CFI) | Notation | Possible attacks | Required security properties
Y, Y, Y | PCI | Attack free | None
N, Y, Y | P̄CI | Consumers' activity disclosure from logs | C2
Y, N, Y | PC̄I | Other consumers' log recovery from proofs | C2
Y, Y, N | PCĪ | Add, modify, delete logs | C1, TR, V, A
Y, N, N | PC̄Ī | Add, update, delete logs; other consumers' log recovery | C1, C2, TR, V, A
N, Y, N | P̄CĪ | Add, modify, delete logs; repudiate proofs; disclose consumer activity | C1, C2, TR, V, A
N, N, Y | P̄C̄I | Add, modify, delete logs; repudiate proofs; other consumers' log recovery and consumers' activity disclosure | C1, C2, TR, V, A
N, N, N | P̄C̄Ī | Add, modify, delete logs; repudiate proofs; other consumers' log recovery and consumers' activity disclosure | C1, C2, TR, V, A

(In the Notation column, an overbar marks the dishonest stakeholder.)

Fig. 4. Integrity and signature proof verification

From our setup and results, we can say that providing security for trustworthy evidence in the cloud is feasible. We have verified the integrity and signatures of logs in our technique and have also reported the overall time required to verify both. In Fig. 4, the X-axis shows the number of events (in thousands) and the Y-axis shows the time required to complete the verification, in seconds.

7 Conclusion

In this paper, we proposed a technique to secure trustworthy evidence for building a robust forensic cloud in spite of the multi-stakeholder collusion problem. Our technique preserves the confidentiality of cloud consumers' data by encrypting their logs with the respective cloud user's public key. Moreover, it preserves the integrity of cloud consumers' logs with the help of the InterPlanetary File System (IPFS) and blockchain. In our technique, no one can add/modify/delete cloud consumers' logs; if someone does so, the original logs can be retrieved in place of the tampered ones. Our implementation and results demonstrate the feasibility of the proposed technique. Thus, our technique makes the cloud more secure and transparent, in order to comply with acts like HIPAA and SOX [5,8].

References 1. Balduzzi, M., Loureiro, S.: A security analysis of amazon’s elastic compute cloud service. In: Symposium on Applied Computing, pp. 1427–1434. ACM (2012) 2. Benet, J.: IPFS - Content Addressed, Versioned, P2P File Sys. Draft 3 (2014) 3. Birk, D., Wegener, C.: Technical issues of forensic investigations in cloud computing environments. In: SADFE, pp. 1–10. IEEE (2011) 4. Cohen, F.: Challenges to Digital Forensic Evidence in the Cloud. In: Cybercrime & Cloud Forensics: Applied for Investigation Process, pp. 59–78. IGI Global (2012) 5. Congress of the United States. Sarbanes-Oxley Act. Accessed 20 Mar 2017 6. Dykstra, J., Sherman, A.: Acquiring forensic evidence from infrastructure-as-aservice cloud computing. J. Dig. Invest. 9, S90–S98 (2016) 7. Dykstra, J., Sherman, A.: Understanding issues in cloud forensics: two hypothetical case studies. Cyber Defense Lab, Department of CSEE (UMBC) (2011) 8. Health Information Privacy. http://goo.gl/NxgkMi, Accessed 20 Mar 2017 9. Hyperledger FabricDocs Documentation, Hyperledger. Accessed March 2018 10. Infosecurity, Ddos-ers launch attacks from amazon ec2. Accessed Jan 2018 11. Kent, K., Souppaya, M.: Guide to computer security log management. Technical Report 800-92, NIST Special Publication (2006) 12. Khan, S., Gani, A., et al.: Cloud log forensics: foundations, state of the art, and future directions. ACM Comput. Surv. 49(1), 1–42 (2016). Article 7 13. Marty, R.: Cloud application logging for forensics. In: Proceedings of the: ACM Symposium on Applied Computing (SAC11), Taichung, Taiwan., pp. 178–184. ACM (2011) 14. Melland, P., Grance, T.: Nist Cloud Computing Forensic Science Challenges. NIST Cloud Forensic Science WG, IT Laboratory, Draft NISTIR 8006 (2014) 15. MRM.: Market Research Media, Global Cloud Computing Market Forecast 2019– 2024. https://www.marketresearchmedia.com/?p=839, Accessed 26 Apr 2018 16. Nasscom. India’s cloud market to cross $7 billion by 2022. Accessed July 2018 17. Patrascu, A.: Logging system for cloud computing forensic environments. J. Control Eng. Appl. Inf. 16(1), 80–88 (2014) 18. Ray, I., Belyaev, K., Rajaram, M.: Secure logging as a service delegating log management to the cloud. IEEE Syst. J. 7(2), 323–334 (2013) 19. Subashini, S.: A survey on security issues in service delivery models of cloud computing. J. Netw. Comput. Appl. 34, 1–11 (2011)


20. Thorpe, S., Ray, I.: Detecting temporal inconsistency in virtual machine activity timelines. J. Inf. Assur. Secur. 7, 24–31 (2012) 21. Zafarullah, Z., et al.: Digital forensics for eucalyptus. In: FIT. IEEE (2011) 22. Zawoad, S., Hasan, R.: Digital forensics in the cloud. J. Defen. Softw. Eng. 26(5), 17–20 (2013) 23. Zawoad, S., Dutta, A.K., Hasan, R.: SecLaaS: Secure logging-as-a service for cloud forensics. In: ASIACCS, pp. 219–230. ACM (2013) 24. Zawoad, S., et al.: Towards building forensics enabled cloud through secure loggingas-a-service. IEEE Trans. Depend. Sec. Comput. 13(2), 148–162 (2016) 25. Zawoad, S., Hasan, R.: Cloud forensics: a meta-study of challenges, approaches, and open problems (2013). arXiv: 1302.6312v1 [cs.DC] 26. Zissis, D., Lekkas, D.: Addressing cloud computing security issues. Fut. Gener. Comput. Syst. 28(3), 583–592 (2012)

Threat-Driven Approach for Security Analysis: A Case Study with a Telemedicine System Raj kamal Kaur1(B) , Lalit Kumar Singh2 , Babita Pandey3 , and Aditya Khamparia1 1 Department of Computer Science and Engineering, Lovely Professional University,

Phagwara, Punjab, India [email protected], [email protected] 2 Department of Computer Science and Enginnering, IIT, Varanasi, Varanasi, India [email protected] 3 Department of Computer Applications, Babasaheb Bhimrao Ambedkar University, Lucknow, India [email protected]

Abstract. The advancement of control systems in modern critical applications, ranging from gaming systems to safety-critical systems, has opened up new possibilities in the industrial and social sectors. This technology provides remote control of the operational process of the system and delivers on-demand services to the user, which saves time and effort. However, the failure of these systems may lead to catastrophic accidents. Thus, there is a strong need to analyze the security of these safety-critical systems (SCSs) at the design level by using state-space modeling techniques. Many researchers have worked on the security analysis of SCSs; however, they have not considered such critical aspects as liveness and starvation when analyzing the security of SCSs. In this work, we propose an innovative method to analyze the security of SCSs with these missing metrics, liveness and starvation, using the mathematical modeling technique of Petri nets (PNs). The proposed methodology is validated by applying it to a telemedicine system. Keywords: Safety critical systems · Security · Telemedicine system · Petri nets

1 Introduction

The advancement of digital technology can be found in all domains, such as medical, transport, power systems and information systems. This intelligent technology provides effective automatic management of rapidly developing situations. However, improper design of these systems may influence the working of critical systems and may lead to catastrophic accidents. In this work, we have considered the telemedicine system as a case study to analyze the security metric of dependability, because dependability cannot be compromised in the medical domain at any risk. The Telemedicine System (TMS) is an emergent medical health care technology. The enhancement of Information Technology (IT) in telecommunication technology has


brought up many possibilities and dimensions in health care environments. This system consists of three prime domains: a hospital service or data center, a transmission system, and a patient home environment [1]. The service center provides medical instructions and assignments to the individual patient; meanwhile, it stores the personal information and medical records of the patient in the data center. The transmission and distribution system helps to create interaction between the doctors at the hospital and the patient at home. Patients use the telemedicine applications at their home so that they can remotely communicate with doctors and gain the required medical guidance. This technology provides health care services remotely to patients and saves time, effort, and cost in diagnosing and treating them. It transfers real-time medical data over the network for consultation with experts, check-ups or treatment [4]. Thus, all the communication between the doctor and patient is based on the network and on accurate data. To provide proper treatment to patients, there is a need to transfer accurate and complete (without any wrong modification) information to the experts and patients [1]. Even though telemedicine can be an effective technology for decreasing medical costs and providing high-quality medical facilities, a security attack can modify the confidential patient data, which could result in loss of human life. Thus, there is a need to analyze this system at the design level by using a state-space modeling technique which deeply analyzes the structural and dynamic behavioral properties of the system. The Petri net modeling technique is a good solution to rigorously model and dynamically analyze the system behavior, as compared to Fault Tree Analysis (FTA) [2] and Failure Mode and Effects Analysis (FMEA) [3]. These (FTA and FMEA) techniques are only able to model deterministic processes. The PN technique systematically and rigorously models the system and dynamically analyzes its behavior. It is required to ensure the quality (security) of the communication protocols of the TMS. Many other rigorous modeling and analysis approaches are available which do not graphically and mathematically demonstrate the functionality of the system [4, 5]. Most of the traditional analysis approaches are useful at the development or implementation level, but are not applicable at the design level [6]. Besides, important critical attributes such as starvation, liveness, and invariant metrics are not considered when evaluating the security of the system. Starvation is a state where components in concurrent computing do not get the resources to process their work, because the resources they need are allocated to another component. Their operation could be an important data transfer, such as passing sensor data and stored process information, on which secure actions need to be taken to ensure the safety of the system. In this case, a security threat may have the potential to send manipulated data. The liveness metric assures the complete absence of deadlock or starvation in critical systems. It refers to the property of safety-critical and control systems that requires the system to make progress even though its concurrently executing components ("processes") might have to "take turns" in critical regions. Thus, these are key metrics of distributed SCSs, which need to be ensured.
To reduce the existing gap and challenges, this paper introduces a threat-driven modeling and analysis approach to increase the quality (security) of the software system by using formal and mathematical techniques. The prime contribution of this paper is to present very essential and critical missing aspects of security, namely starvation, liveness, and boundedness. This analysis will help to develop and analyze a secure system. The


graphical abstract of this paper is presented in Fig. 1. This paper is laid out as follows. Section 2 presents the review of the literature along with its limitations. Our proposed threat-driven approach is discussed in Sect. 3. The conclusion of this paper is presented in the last section, Sect. 4.

Fig. 1. Graphical abstract of proposed approach

2 Review of Literature

Pendergrass et al. [4] proposed a table-driven approach to analyze the information security of a telemedicine application used at a Midwestern college of medicine, which provides remote clinical care for hepatitis-C and HIV patients. The proposed approach is simple and flexible; however, it cannot graphically visualize the internal behavior of the system or find its risky components. Braber et al. [5] proposed the CORAS (Risk Assessment Platform) method for model-based security analysis. They validated their methodology on a telecardiology services system. The proposed approach is simple and provides systematic guidance for security risk/vulnerability analysis. However, this work has some limits: i) absence of a formal representation; ii) the resulting model is unable to define how vulnerabilities are conveyed to the system; and iii) it is difficult to use this approach for large critical systems. Besides, the very critical metrics of security evaluation, starvation and liveness, are not considered in this work. Ding and Zhang [6] deployed the PN technique to model the communication protocols of a tele-audiometer and analyze the communication quality (reliability and correctness) between the workstation and the remote audiometer by using the MISTA tool.


However, this paper remains silent on a very critical metric of dependability, namely security, which is a pre-requisite of the other dependability metrics (i.e. reliability and safety). If the system is not secure, then it cannot become a fully reliable and safe system. Liu et al. [7] modeled the telemedicine system by using a component-based architecture and utilized an extension of timed automata to present the behavior of components and their interaction. They outline distributed and real-time dynamic process systems from different perspectives. However, they have not analyzed the non-functional requirements (i.e., safety, security, reliability, and availability) of the system. Alves et al. [8] proposed the AdEQUATE model for the quality evaluation of the telemedicine system. However, it is an empirical analysis; there is a need for a quantitative analysis approach to evaluate the non-functional requirements of a real-time system at the design level. Venkataraman and Ramakrishnan [12] elaborated the impacts of tele-ICU on safety and quality metrics. However, they have not identified the security issues in this system that can influence system safety; the malfunctions of such a system can cause serious accidents resulting in death, injury, and large financial losses. Guo et al. [13] designed and tested the communication protocol of wireless sensor networks for the safe transmission of telemedicine data in complex topographic environments. However, this work has not evaluated the starvation and liveness of the system, which can also help to assure its safe functioning. Abaimov and Martellini [14] outlined the types of cyber-security attacks and physical attacks relevant to cyber-attacks on chemical, biological, radioactive, nuclear (CBRN) industrial control systems. In addition, they throw light on attack-protection techniques by network layer of attack and explore security testing methods. They have defined the security metrics of availability, integrity and confidentiality. However, this work does not consider the critical security metrics of liveness and starvation, which ensure the security of the system; if the system is secure, then we are confident that it will perform its functions securely without any livelock or conflict. Lin et al. [15] proposed a formal modeling and analysis technique to find out the impacts of data integrity attacks on route guidance schemes. To handle this issue, they proposed a forged-data filtering scheme. However, they have missed the impacts of cyber-attacks on system dependability (i.e., reliability, availability, performance, and safety). It is required to use a state-space modeling technique to visualize the system functions, which helps to find the targeted place of cyber-attacks. Kumar et al. [16] proposed the reliability analysis of SCSs by using an optimized Markov chain. They used the shut-down system-2 of a Nuclear Power Plant (NPP) to validate the proposed methodology. However, they have not emphasized the security aspect, which is an essential pre-requisite of system reliability. From the above literature, it is found that no studies have yet considered the very important critical aspects of security, starvation and liveness, to evaluate SCSs. The existing methodologies for the considered case studies lack the state-space modeling that visualizes the complete operational flow of the system effectively.


3 Methodology

In this section, we present the proposed threat-driven approach and its validation with the case study of the TMS (see Fig. 2).

Fig. 2. Proposed framework

Step 1: Functional and Technical Requirements Analysis of the System

In this phase, the functional requirements of the safety-critical system are analyzed. The telemedicine system is comprised of sensors, a processing unit (a hand-held unit), a communication network, and a hospital server (health service center), as shown in Fig. 3 [11]. The telemedicine system integrates these units (sensors, processing unit, and communication unit) in a chip bound to the patient's body. This improves the patients' mobility and does not influence their daily life during testing and treatment. In this system, medical sensor nodes are utilized to gather physiological signals comprising bio-signals, voice signals, and medical images of the patients, and to transfer these data to the processing unit (devices). In the next step, the processing unit acquires the signal data, processes it, and sends it on to the communication layer. The processing unit can be a computer, a cell phone, a DSP processor, or a microcontroller-based embedded system. Various researchers have constructed and employed patient diagnosis algorithms in the telemedicine domain, such as diagnosis of stress level at an initial stage [17] and a real-time Electrocardiogram (ECG) classification algorithm [18]. In the store-and-forward operating mode of the system, the care unit records and transfers the patients' essential parameters (signs) to the server through the internet. In the real-time mode, when an abnormal condition (e.g. an abnormal heartbeat) of the patient is detected, the health care unit immediately transmits it to the server


Fig. 3. General architecture of the telemedicine instrument for home monitoring for patient [11]


through the General Packet Radio Service (GPRS) network. On the server side, the doctors can communicate with their patients via SMS if needed. This system also consists of a web interface that enables doctors to remotely monitor the patients' status and provide treatment accordingly [11].

Step 2: Construct the System Model

In this phase, PN is adopted to model and analyze the SCS. A PN is a graphical and mathematical modeling tool used for specifying software systems. It is a five-tuple (P, T, F, W, M0), where P and T are finite sets of places and transitions, respectively. Graphically, the places and transitions are symbolized by circles and bars, correspondingly. Places represent the states of the system and transitions represent the events of the system. Tokens in the places show the activation of the current state of the system; a token (symbolized by a black dot) in any place of a PN model represents the state of the net (system) [9]. The PN model executes by firing transitions, which removes tokens from the source (input) places and transfers tokens to the output places. The quantity and location of tokens can change during the execution of the PN model. Further, F denotes the flow relation F ⊆ (P × T) ∪ (T × P), W : F → {1, 2, 3, ...} is a weight function, and M0 signifies the initial or current marking [9]. The constructed PN model of the mentioned case study is shown in Fig. 4, and its places and transitions are described in Table 1 and Table 2, respectively.

Fig. 4. Petri Nets model for Telemedicine system
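To make the five-tuple and the firing rule concrete, the following sketch encodes a simplified three-place cycle loosely inspired by the sensor → processing → communication flow; the structure shown is an assumption for illustration, not the full published model of Fig. 4.

```python
# Minimal behavioural sketch of (P, T, F, W, M0): weighted pre/post arcs and a firing rule.
PRE =  {"T1": {"P1": 1}, "T2": {"P2": 1}, "T3": {"P3": 1}}   # tokens consumed by each transition
POST = {"T1": {"P2": 1}, "T2": {"P3": 1}, "T3": {"P1": 1}}   # tokens produced by each transition
M0 = {"P1": 1, "P2": 0, "P3": 0}                              # initial marking

def enabled(marking, t):
    return all(marking.get(p, 0) >= w for p, w in PRE[t].items())

def fire(marking, t):
    if not enabled(marking, t):
        raise ValueError(f"{t} is not enabled")
    m = dict(marking)
    for p, w in PRE[t].items():
        m[p] -= w
    for p, w in POST[t].items():
        m[p] = m.get(p, 0) + w
    return m

m = M0
for t in ["T1", "T2", "T3"]:        # one pass around the cycle returns to M0
    m = fire(m, t)
print(m == M0)                      # True
```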

Step 3: Structural and Behavioral Analysis

After modeling the system operation with a PN model, it is necessary to ensure the structural and dynamic correctness of the constructed PN model (Fig. 4).

3.1 Structural Properties

These properties are based on the topological structure of the PN model, namely invariants and siphons.


Table 1. Places description for the telemedicine system

Place | Description
P1 | The telemedicine system is in the active state
P2 | The sensor has acquired the physiological data
P3 | The processing unit receives, digitizes and processes the acquired vital-sign signal
P4 | The data communication unit processes the obtained patients' data using different conditional logic and processing techniques
P5 | The remote server stores and forwards the received data
P6 | A call/message for emergency services has been received
P7 | The doctor analyses and examines the patients' data

Table 2. Transitions description for the telemedicine system

Transition | Description
T1 | The patient deviates from his normal condition
T2 | Transfer the obtained patient's vital signs to the signal processing unit
T3 | Transfer the processed signal to the communication unit
T4/T5 | Upload the critical-sign data to the remote medical server through the web-based interface and the cellular network, respectively
T6 | An emergency status is detected and an alarm message is sent to the emergency service center so that immediate treatment can be provided to the patient
T7 | Provide emergency services to the patient; the system returns to its initial state
T8 | Transfer the patient's vital signs to the expert doctor
T9 | The doctors acquire the patient data from the remote server
T10 | Transfer the treatment guidance to the patient; the system returns to its initial state

Invariants (P-invariant and T-invariant): These invariants are used to verify the liveness and boundedness properties of the system, which ensure its safe and secure operation. A P-invariant (resp. T-invariant) of a PN model is defined as a vector A over the places (resp. a vector B over the transitions) such that A · C = 0 (resp. C · B = 0), where C is the incidence matrix (Eq. (1)) of the PN (Fig. 4), i.e., cij = c+ij − c−ij.

C = [cij] — the incidence matrix of the PN model in Fig. 4    (1)

The support of a P-invariant A (resp. T-invariant B) is the set of places (resp. transitions) represented as:

‖Ai‖ = {pj | Ai(pj) ≠ 0},   ‖Bj‖ = {ti | Bj(ti) ≠ 0}    (2)

P-invariants, which can be obtained from the state equation of the PN, are marking invariants: the total number of tokens remains constant over their corresponding places. T-invariants describe invariant properties relating to the firing sequences of the PN model. The following P-invariant and T-invariants are obtained from the incidence matrix (Eq. (1)) of the PN model in Fig. 4:

P-invariant = (1 1 1 1 1 1 1)    (3)

T-invariant =
[1 1 1 1 0 0 0 1 0 1]
[1 1 1 0 1 0 0 1 0 1]
[0 0 0 0 0 0 0 1 1 0]
[1 1 1 1 0 1 1 0 0 0]
[1 1 1 0 1 1 1 0 0 0]    (4)

It can be verified from the computed P-invariant (Eq. (3)) that the corresponding set of places has a constant number of tokens and satisfies Eq. (2). The net is covered by positive P-invariants; therefore it is bounded, which signifies the safeness property of the system. In the T-invariants of Eq. (4), each column is positive, because every transition occurs at least once in the firing sequences σ1...σn. Thus, the net is also covered by positive T-invariants, which signifies the liveness and boundedness properties of the net.

Siphon and Trap: The siphon and trap are used to analyze behavioral properties of the modeled system (Fig. 4) such as starvation and liveness. In a PN (N, M0), a non-empty subset of places S is called a siphon (resp. trap) if •S ⊆ S• (resp. S• ⊆ •S), i.e., every transition having an output (resp. input) place in S has an input (resp. output) place in S [7]. If M0(S) = Σp∈S M0(p) = 0, then S is called an empty siphon. If a siphon becomes empty, it is known to be a cause of non-liveness and deadlock, and deadlock leads to a starvation situation. Liveness signifies the starvation-free operation of the critical system. The computed siphon (Eq. (5)) and trap (Eq. (6)) of the constructed PN model of Fig. 4 are shown below:

Siphon = {P1, P2, P3, P4, P5, P6, P7}    (5)

Trap = {P1, P2, P3, P4, P5, P6, P7}    (6)

The above siphon and trap (Eqs. (5), (6)) show that the modeled system (Fig. 4) is controlled by the P-invariant and T-invariants; therefore, it leads to live and starvation-free operation.
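How such invariants can be computed is illustrated below with a toy incidence matrix; the 3 × 3 matrix is an assumed example (the full matrix of Fig. 4 is not reproduced in this text), and the sympy calls simply extract null-space vectors.

```python
# Hedged sketch: P-invariants satisfy A*C = 0 (left null space of C),
# T-invariants satisfy C*B = 0 (right null space of C).
from sympy import Matrix

# Toy cyclic net: 3 places (rows), 3 transitions (columns), C[p][t] = post - pre
C = Matrix([[-1,  0,  1],
            [ 1, -1,  0],
            [ 0,  1, -1]])

p_invariants = C.T.nullspace()   # vectors A with A*C = 0
t_invariants = C.nullspace()     # vectors B with C*B = 0
print([list(v) for v in p_invariants])  # [[1, 1, 1]] -> the token sum over all places is constant
print([list(v) for v in t_invariants])  # [[1, 1, 1]] -> firing T1, T2, T3 reproduces the marking
```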


3.2 Behavior Property

After ensuring the liveness and starvation properties of the PN model structurally, it is required to verify these properties by analyzing the running behavior of the net. To ensure the liveness and starvation properties of the PN model, it is necessary to find all the possible reachable states, which can be derived from the reachability graph. The reachability graph is the simplest method to analyze the behavior of a PN model; it determines whether the system is live or not. Liveness corresponds to a starvation-free and blocking-free system. It is clear from the reachability graph in Fig. 5, obtained from the PN of Fig. 4, that: a) the constructed PN (N, M0) is live, since every transition can eventually be enabled (∀m ∈ R(m0), ∀t ∈ T, ∃m′ ∈ R(m) in which t is enabled); b) the net is bounded: R(m0) is finite, and the number of tokens in any place does not exceed 1, which signifies the safeness property of the system; and c) all transitions can be fired, so there is no dead/starved transition. A PN model is starvation free if an infinite sequence of markings M0, M1, ..., Mn exists such that all transitions fire infinitely often during the evolution. On the other hand, a PN has a starving transition if that transition is never fireable from some reachable marking. Thus, the constructed PN (see Fig. 4) of the telemedicine system is starvation free.

Fig. 5. Reachability graph of PN
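A reachability graph such as Fig. 5 can be generated by breadth-first exploration of the markings; the sketch below does this for the same assumed three-place cycle used earlier and then checks boundedness, 1-safeness and the absence of dead transitions.

```python
# Sketch of reachability-graph construction by BFS over markings.
from collections import deque

PRE =  {"T1": {"P1": 1}, "T2": {"P2": 1}, "T3": {"P3": 1}}
POST = {"T1": {"P2": 1}, "T2": {"P3": 1}, "T3": {"P1": 1}}
PLACES = ("P1", "P2", "P3")
M0 = (1, 0, 0)                       # marking as a tuple (P1, P2, P3)

def successors(m):
    marking = dict(zip(PLACES, m))
    for t in PRE:
        if all(marking[p] >= w for p, w in PRE[t].items()):
            nxt = dict(marking)
            for p, w in PRE[t].items():
                nxt[p] -= w
            for p, w in POST[t].items():
                nxt[p] += w
            yield t, tuple(nxt[p] for p in PLACES)

reachable, edges, queue = {M0}, [], deque([M0])
while queue:
    m = queue.popleft()
    for t, m2 in successors(m):
        edges.append((m, t, m2))
        if m2 not in reachable:
            reachable.add(m2)
            queue.append(m2)

fired = {t for _, t, _ in edges}
print(len(reachable))                          # finite set of markings -> bounded
print(max(max(m) for m in reachable) <= 1)     # 1-safe
print(fired == set(PRE))                       # every transition fires -> no dead transition
```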

Step 4: Identification of Security Threats

Use cases and the STRIDE method are used to identify the security threats and their categorization: spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege [10]. Use cases are used to acquire the intended functional and technical requirements, which facilitates identifying the security threats and malfunctions of the system through misuse cases [10]. The TMS involves the utilization of the Internet to connect the patient with doctors and provides on-demand care to the patients. On the other hand, this technology is vulnerable, and these vulnerabilities are exploited by security attacks to perform unwanted actions. A security attack can also become the cause of a starvation condition and, conversely, a starvation condition can cause a security violation.


In the TMS, various kinds of threats can occur, which have to be mitigated: (i) when the communication unit (P3) sends/uploads secured patient data to the remote server (P4), a spoofing attack can occur which can view the sensitive information; (ii) an attacker can tamper with the data while they are being uploaded to the server; these attacks compromise confidentiality and integrity, respectively. Such attacks can be mitigated by using digital watermarking, which protects data integrity and the authentication of information resources; (iii) an attack can occur on the remote/medical server (P4) where the medical (patient) records are stored. In this situation, an unauthorized user tries to access and modify the patient data (an identity attack occurs). Additionally, such security threats can be mitigated with biometric authentication and security-token authentication. In this case, only identified stations are permitted to send and retrieve information from the system; even if a user is authenticated and authorized, he cannot carry out any transaction with these systems without being on a permitted node. At this point, a starvation condition can violate the security properties that are important for system safety; (iv) a Denial of Service (DoS) attack can occur when the doctor sends treatment information to the patients and accesses the stored records from the remote medical server. This attack can block the channel with large amounts of traffic; thus, the doctor cannot send the treatment information to the patients, which affects system availability. This attack also causes a starving condition in the system, because the patient cannot obtain the resources while the doctor is unable to transfer information to the patients. Firewalls and intrusion detection systems can be used to block unauthorized traffic from entering the protected network. Besides, starvation may be caused by a scheduling mistake at the design level or deliberately produced by security attacks such as denial-of-service attacks.

4 Conclusion

This paper presents a threat-driven approach to analyze the security of safety-critical systems. The PN mathematical modeling technique is adopted to model and analyze the structural and behavioral properties of the system, which ensure the security of the critical system. The dynamic visualization of the system behavior helps to find the critical targeted components in the system. Our approach addresses the limits of the existing methods described in Sect. 2, and it can be applied to analyze the dependability metrics of the system. Besides, new critical metrics (liveness and starvation) for security analysis have been defined, and the possible security threats in the system model have been identified. The proposed framework is applied to the TMS and the validation results show its effectiveness.

References 1. Yan, Y., Dittmann, L.: Security challenges and solutions for telemedicine over EPON. In: Sixth International Conference on eHealth, Telemedicine, and Social Medicine eTELEMED, pp. 22–27 (2014) 2. Kornecki, A., Liu, M.: Fault tree analysis for safety/security verification in aviation software. Electronics 2(1), 41–56 (2013)


3. Babeshko, E., Kharchenko, V., Gorbenko, A.: Applying F (I) MEA-technique for SCADAbased industrial control systems dependability assessment and ensuring. In: IEEE Third International Conference on Dependability of Computer Systems DepCoS-RELCOMEX, pp. 309–315. IEEE (2008) 4. Pendergrass, J.C., Heart, K., Ranganathan, C., Venkatakrishnan, V.N.: A threat table based approach to telemedicine security. In: Transactions of the International Conference on Health Information Technology Advancement, Western Michigan University, vol. 2, no. 1, pp. 104– 111 (2013) 5. Braber, F., Hogganvik, I., Lund, M.S., Stølen, K., Vraalsen, F.: Model-based security analysis in seven steps—a guided tour to the CORAS method. BT Technol. J. 25(1), 101–117 (2007) 6. Ding, J., Zhang, D.: An approach for modeling and analyzing the communication protocols in a telemedicine system. In: 6th IEEE International Conference on Biomedical Engineering and Informatics, pp. 699–704. IEEE (2013) 7. Liu, J., Xiong, X., Ding, Z., He, J.: Modeling and analysis of interactive telemedicine systems. Innov. Syst. Softw. Eng. 11(1), 55–69 (2015) 8. Joýo, M., Savaris, A., von Wangenheim, C.G., Wangenheim, A.: Software quality evaluation of the laboratory information system used in the Santa Catarina state integrated telemedicine and telehealth system. In: 2016 IEEE 29th International Symposium on Computer-Based Medical Systems (CBMS), pp. 76–81 (2016) 9. Murata, T.: Petri nets: Properties, analysis and applications. Proc. IEEE 77(4), 541–580 (1989) 10. Xu, D., Nygard, K.E.: Threat-driven modeling and verification of secure software using aspect-oriented Petri nets. IEEE Trans. Software Eng. 32(4), 265–278 (2006) 11. Abo-Zahhad, M., Ahmed, S.M., Elnahas, O.: A wireless emergency telemedicine system for patients monitoring and diagnosis. Int. J. Telemed. Appl. 2014, 1–12 (2014) 12. Venkataraman, R., Ramakrishnan, N.: Safety and quality metrics for ICU telemedicine: measuring success. Telemedicine in the ICU Springer, pp. 145–154. Springer, Cham (2019) 13. Guo, G., Sun, G., Bin, S., Shao, F.: Design and analysis of field telemedicine information communication protocol based on wireless sensor network. IEEE Access 7, 50630–50635 (2019) 14. Abaimov, S., Martellini, M.: Selected issues of cyber security practices in CBRNeCy critical infrastructure. In: Cyber and Chemical, Biological, Radiological, Nuclear, Explosives Challenges, pp. 11–34. Springer, Cham ( (2017)) 15. Lin, J., Yu, W., Zhang, N., Yang, X., Ge, L.: Data integrity attacks against dynamic route guidance in transportation-based cyber-physical systems: Modeling, analysis, and defense. IEEE Trans. Veh. Technol. 67(9), 8738–8753 (2018) 16. Kumar, P., Singh, L.K., Kumar, C.: An optimized technique for reliability analysis of safetycritical systems: a case study of nuclear power plant. Qual. Reliab. Eng. Int. 35(1), 461–469 (2019) 17. Tartarisco, G., Baldus, G., Corda, D., Raso, R., Arnao, A., Ferro, M., Gaggioli, A., Pioggia, G.: Personal health system architecture for stress monitoring and support to clinical decisions. Comput. Commun. 35(11), 1296–1305 (2012). Elsevier 18. Wen, C., Yeh, M.F., Chang, K.C., Lee, R.G.: Real-time ECG telemonitoring system design with mobile phone platform. Measurement 41(4), 463–470 (2008). Elsevier

Key-Based Obfuscation Using Strong Physical Unclonable Function: A Secure Implementation Surbhi Chhabra(B) and Kusum Lata Department of ECE, The LNM Institute of Information Technology, Rupa Ki Nangal, Post-Sumel, via-Jamdoli, Jaipur, India [email protected], [email protected] Abstract. The proliferation of the emerging Internet of Things (IoT) devices demands significantly enhanced design targets, viz. cost, energy efficiency, noise, etc. However, assurance of security for the above design targets becomes very critical. Therefore, some of the cryptographic algorithms like Advanced Encryption Standard (AES) are applied to IoT devices to achieve secure transmission and reception. But, there are various threats of piracy and reverse engineering which affect the security of secret key employing in the AES Intellectual Property (IP) core. At present, key-based hardware obfuscated AES IP core has emerged as a viable solution for diminishing the effects of threats and attacks. Physical Unclonable Functions (PUFs) are one of the innovative circuit primitives in the field of hardware security used for cryptographic key generation. In this work, we present the key-based obfuscation technique using delay based PUFs. Simulation results show that the quality metrics such as uniqueness, hamming distance, and reliability are maintained with the minimum area and power overhead. Keywords: Key-based obfuscation · IP protection · Hardware Trojan · Xilinx Vivado · AES IP core · BASYS-3 FPGA · Hamming distance

1 Introduction

The worldwide integration of the semiconductor industry and outsourced offshore production have elevated the important issue of the security of Integrated Circuits (ICs). It is highly probable that chip designers have system-on-chip (SoC) designs in which various Intellectual Properties (IPs) are integrated from distinct IP vendors. Unfortunately, untrusted third-party IP vendors in the global supply chain may be able to manipulate the original design or introduce malicious components through reverse engineering attempts [1]. Hardware obfuscation has been one of the most promising anti-tamper techniques in the recent past. It is prominently used against various hardware attacks/threats such as IP piracy, reverse engineering, and Hardware Trojans


(HT) [2,3]. In hardware obfuscation, the original meaning of a message or the functionality of a design is obscured so as to protect the IP. The idea is primarily to hide a part of the design and replace it at the design stage with a configurable module, so that none of the manufactured chips can function correctly without being "activated" by the designer. Such activation is accomplished by securely restoring the secret key function into the chips. An attacker cannot retrieve the entire design or overbuild illegal ICs without direct access to the contents of the enabled chip's configurable module [2,3]. The obfuscation schemes attempt to guarantee that the effort required for an attacker to acquire the right key is computationally infeasible. If a common key is used across all manufactured chips, such a key becomes the most vulnerable part of the entire obfuscation mechanism. In the field of cryptography, secret keys are stored in volatile or non-volatile memory; keys held in non-volatile memory can be retrieved even when the power is switched off, and volatile memory is also susceptible to attacks if one has physical access to it. Therefore, the traditional approaches to key storage are not preferred, especially in high-security applications. To diminish the above problem, a new approach, known as Physical Unclonable Functions (PUFs), has lately been studied. A PUF is a well-known hardware security primitive used for device authentication and generation of the secret keys required for cryptographic operations, without the need for non-volatile memories [4]. It is a physical system constructed in the manufacturing phase on the basis of the intrinsic process variations of chips, which can be used as a distinctive signature for each chip. It is a unique platform feature that generates an output response determined by the behavior of a complex, unclonable physical system when provided with an input challenge. PUFs are easy to implement, but their random nature makes it extremely difficult for an attacker to predict or model their behavior [5,6].
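The challenge-response behaviour of a delay-based PUF can be imitated in software for intuition; the model below (random per-chip stage delays standing in for process variation, with the sign of the accumulated delay difference as the response bit) is a rough behavioural sketch, not a faithful arbiter-PUF or hardware model.

```python
# Behavioural sketch of a delay-based PUF generating challenge-response pairs (CRPs).
import random

class ArbiterPUFModel:
    def __init__(self, n_stages=64, seed=None):
        rng = random.Random(seed)                     # "process variation" of this chip
        self.delays = [rng.gauss(0.0, 1.0) for _ in range(n_stages)]

    def response(self, challenge):
        """challenge: iterable of 0/1 bits, one per stage -> single response bit."""
        diff = 0.0
        for bit, d in zip(challenge, self.delays):
            diff += d if bit else -d                  # challenge bit swaps the two paths
        return 1 if diff > 0 else 0

chip_a, chip_b = ArbiterPUFModel(seed=1), ArbiterPUFModel(seed=2)
rng_c = random.Random(42)
challenge = [rng_c.randint(0, 1) for _ in range(64)]
print(chip_a.response(challenge), chip_b.response(challenge))  # CRPs differ between chips
```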

1.1 Related Work

Many hardware obfuscation schemes have been proposed using PUF to modify the original design. A PUF has inputs known as challenges and generates corresponding outputs, called response. The Challenge-Response Pairs (CRPs) are unique for each chip. Since it is very hard to regulate the manufacturing process, therefore, it is impossible to construct identical PUFs with same CRPs [4]. Authors in [7] presented a signal path hardware obfuscation technique with the help of PUFs. In this, the functionality of the circuit has been replaced by PUFs and Lookup Tables (LUTs). Since the PUF of each chip has unique and unpredictable functionality, these schemes closely combine the PUF with a configurable module (constituting the key), which the designer will program individually during the post-fabrication activation process. An issue with this technique was that during the manufacturing phase an untrusted manufacturer might have access to the PUF. The manufacturer was able to obtain CRPs very easily and store the results of weak PUF CRPs. In [8], another hardware obfuscation technique includes a strong PUF with large CRPs to avoid IC piracy even when the key is leaked. In order to allow the use of a strong PUF, the designer’s


characterization of the PUF was limited to only one subset of the input set. For instance, if the PUF input is p bits long, then only n bits were used by the designer to characterize where n a. Then, we integrate them into the same database. In this way, it will be a heavy task to observe this dataset to discern the true information from the fake ones. Besides, it supplies formal measures of probabilities that conclude information concerning individuals.

Towards a Better Security in Public Cloud Computing


This method uses subsets, and not the entire database, in order to answer a specific request such as a statistical analysis (Table 2).

Table 2. Comparison table between the methods of anonymisation

Method | Strengths | Weaknesses
k-anonymity | Data analysis proceeds with accurate results; safeguards against link inference and avoids re-identification of people | Does not adapt to large volumes of data without loss of information; a human expert familiar with the domain must settle which generalizations are to be made
L-diversity | Carries more generalization to sensitive and delicate fields | It is possible to conclude information by inference
T-proximity | Mainly enables data to be classified with reference to the equivalence classes | Reduces correlations; leads to less specific analyses
Differential confidentiality | Provides formal guarantees about the possibility to restrict the information one can learn about other individuals; mainly applied when one attempts to protect geolocation data | The major trouble appears in the likelihood of fake data

• Comparative study between the methods of anonymisation (Table 2).
• Evaluation of anonymisation methods: we check the performance of the anonymisation methods based on their resistance to individualization, correlation and inference (Table 3).

Tokenization
Tokenization [14] is an encryption method aiming to replace sensitive data with substitute values (tokens), from which it is impossible to recover their original value. The token, considered as a reference to the cipher text, is stored in a repository or data vault. There is no mathematical relationship between a token and the data value; the relationship is entirely referential.
• Tokenization process: when using tokens in applications, a company holding sensitive data tokenizes it before sending the token values to the cloud application for processing and storage.
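A minimal sketch of this tokenization process is shown below; the class and storage names are illustrative assumptions, and a production vault would add access control, encryption of the table and collision handling.

```python
# Vault-based tokenization sketch: a random, format-free token replaces the sensitive
# value, and only a protected lookup table (the vault) links the two.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}          # the "data vault"; must itself be protected

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)      # no mathematical relation to the value
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4970-1234-5678-9010")   # what the cloud application stores
print(token)
print(vault.detokenize(token))                  # only the vault holder can reverse it
```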

Table 3. Evaluation of the methods of anonymisation

Method | Resistance to individualization | Resistance to correlation | Resistance to inference
k-anonymity | Prevents recognizing an individual within a group of k individuals | Still likely to make connections, but on a group of people | If we know which group the target individual belongs to, we can gather additional information about him
l-diversity and t-proximity | Prohibit individual relative values from being isolated | No enhancement over k-anonymity | We are in doubt about our reasoning
Noise addition | It is possible to isolate, but with less reliable information | It remains possible to connect records, even if with a fictitious individual | Remains possible, but with differing chances
Permutation | Remains possible to isolate, with less steady information | Avoids connecting correctly, because one can link to the non-corresponding person | We can extract information every time there are logical links between the attributes or if they are compatible
Differential confidentiality | Prohibited if only statistical statements are used | Remains possible to connect with other records if multiple enquiries are used | Remains possible to conclude information about individuals

The company saves the original data in a safety-deposit box (data vault), itself encrypted and kept locally or at an IaaS provider.
• Vault-based tokenization: tokens are randomly generated. For each data item we associate a token, from which a mapping stored in a lookup table is established. All these lookup tables are stored on a token server.
• Vaultless tokenization: tokens are generated from generic tables. Accordingly, a single lookup table for numbers and another one for characters can be created. Both of these lookup tables are calculated in advance and randomized (Table 4).
• Comparative study of tokenization methods

Table 4. Comparative table of tokenization methods

Method | Strengths | Weaknesses
Vault-based tokenization | Rather used for a data set that is not too large and that does not vary or grow much | The bigger the lookup tables are, the bigger the token server becomes; it is difficult to replicate a very large token server
Vaultless tokenization | Suitable when tokenizing large and dynamic datasets | Involves management as well as protection of the generic tables

Encryption
Encryption is a set of techniques intended to make a message unintelligible to unauthorized parties; only those having the key can access it. Two families of encryption emerge: symmetric encryption and asymmetric encryption [15].
• Symmetric key encryption: the sender and the recipient share the same secret key for encryption and decryption.
• Stream cipher: in this encryption system [16], the sender has a long secret key and encryption proceeds bit by bit. To transform a message M composed of n bits, the sender proceeds as follows: 1) take the first n bits of the key, building a bit sequence k; 2) calculate C = k XOR M, which yields the encrypted message C; 3) discard the part of the key that was used. The recipient, who also has the same secret key, calculates M = k XOR C and recovers the original message. Another encryption can be done by repeating the same procedure with the rest of the key k.
• Block ciphering: the concept of block ciphering is to cut the message into blocks of the same size (between 32 and 512 bits), then apply transposition or substitution operations block by block. Among these block cipher algorithms, we can cite: Blowfish, DES/3DES, AES.
• DES (Data Encryption Standard): DES [17] became public in 1977. It followed work carried out by a cryptographic group at IBM. It was used in commerce and by private and federal organizations.


First, the text must be converted to bits and cut into 64-bit blocks. For each block of the message, the following algorithm is applied: 1) a permutation is carried out according to a predetermined order, giving two half-blocks, right D0 and left G0; 2) repeat, for i = 1 to 16: Gi = Di−1 and Di = Gi−1 XOR f(Di−1, Ki); 3) a block B′16 is recomposed by gathering D16 and G16 in this order; 4) the inverse of the initial permutation is performed. DES showed weaknesses against modern computing power, as it has been broken in 22 hours by brute force.
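The Feistel iteration Gi = Di−1, Di = Gi−1 XOR f(Di−1, Ki) can be demonstrated with a toy sketch; the round function, the key values and the reduced number of rounds are assumptions for illustration and bear no relation to real DES S-boxes or key schedules.

```python
# Toy Feistel-structure sketch: the same network with reversed round keys decrypts.
def f(half: int, round_key: int) -> int:
    return ((half * 31) ^ round_key) & 0xFFFFFFFF     # stand-in for expansion/S-boxes

def feistel(left: int, right: int, round_keys, decrypt=False):
    keys = list(reversed(round_keys)) if decrypt else round_keys
    g, d = left, right
    for k in keys:
        g, d = d, g ^ f(d, k)          # one round: G_i = D_{i-1}, D_i = G_{i-1} XOR f(D_{i-1}, K_i)
    return d, g                        # final swap

keys = [0x0F0F0F0F, 0x12345678, 0x0A0B0C0D, 0xDEADBEEF]   # 4 toy rounds instead of 16
cipher = feistel(0x01234567, 0x89ABCDEF, keys)
plain = feistel(*cipher, keys, decrypt=True)
print(plain == (0x01234567, 0x89ABCDEF))                  # True: the structure is invertible
```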

• AES (Advanced Encryption Standard): AES is the evolution of the DES algorithm, which had become weak. It was established by NIST and approved by the NSA. Its basis is to split the message into 128-bit blocks and to apply complex transformations over 10 iterations. The key used for this encryption is 128 bits. Each round comprises: 1) addition of the secret key; 2) byte transformation; 3) line (row) offset; 4) column scrambling; 5) addition of the round key.

The asset of AES is based on its speed, its economy of resources and the difficulty of breaking it.
• Asymmetric key encryption: the encryption key is different from the decryption key, because the first is publicly broadcast and the other has to remain secret. Asymmetric key encryption algorithms include: RSA (encryption and signature) and Diffie-Hellman (key exchange).
• RSA: the RSA algorithm [18] is based on the factorization of large prime numbers, according to the following steps: 1. Select 2 large prime numbers p and q (of 1024 bits, for example). 2. Calculate n = pq and z = (p − 1)(q − 1). 3. Choose an e that has no common factor with z (e and z are co-prime).


4. Find a d such that ed − 1 is exactly divisible by z (ed mod z = 1). 5. The public key is (n, e) and the private key is (n, d). Alice sends to Bob c = m^e mod n; Bob receives the message c and computes m = c^d mod n.
• Comparison between symmetric and asymmetric encryption (Table 5).
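A toy numeric walk-through of these RSA steps, with deliberately tiny primes, is shown below; real deployments use primes of at least 1024 bits and padding schemes such as OAEP.

```python
# Toy RSA walk-through (illustration only, insecure parameters).
p, q = 61, 53
n = p * q                      # 3233
z = (p - 1) * (q - 1)          # 3120
e = 17                         # co-prime with z  -> public key (n, e)
d = pow(e, -1, z)              # 2753, modular inverse (Python 3.8+): e*d mod z == 1 -> private key (n, d)

m = 65                         # message, must be < n
c = pow(m, e, n)               # Alice sends c = m^e mod n
print(pow(c, d, n) == m)       # Bob recovers m = c^d mod n -> True
```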

Table 5. Comparison between symmetric and asymmetric encryption

Encryption | Advantages | Disadvantages
Symmetric encryption (DES, AES) | Short key; fast processing | The number of keys increases with the number of user pairs: for n persons, n(n − 1)/2 keys are necessary
Asymmetric encryption (RSA) | No need to transmit keys securely; the number of keys is minimized relative to symmetric encryption: for n people, 2n keys suffice | Risk of recovering the secret key; slow processing

Data monitoring
Monitoring can recognize and limit any risk of error and malice. In other words, data, applications, or the cloud infrastructure are monitored regularly using tools. There are several monitoring approaches: either by agents integrated into the application itself, with external tools, or via a third party. Among the monitoring features:
• Likelihood verification: the data is compared to "likely" data in the context (ranges of values, levels) [19].
• Detection of activity peaks over predefined levels.
• Logs of operations performed on the data.

3 Proposed Approach Inspired from previous related work and the different levels of data deployment, we suggest in this section our approach in order to protect public cloud data.

1. Encryption and anonymization: as a means of protecting the file before it is sent to the cloud, symmetric encryption is chosen for bulk data. Anonymization provides stronger protection for personal or sensitive data: in case of a leak, it will be impossible for the attacker to identify the people in a group or to obtain their personal information by inference.
2. Fragmentation: fragmenting is used to ease transfer and storage in the case of a bulky file or multiple files to transfer.
3. Replication: each fragment is replicated on multiple physical locations, which makes it possible to find copies of the fragments in the event of server outages or service disruption (a sketch of steps 1-3 is given after this list).
4. Sampling: a sample is kept for comparison with the original data. This sample must be renewed regularly through updates.
5. Monitoring: the saved sample is monitored. Monitoring acts on two axes:
• detecting errors as soon as possible and making the needed corrective actions
• taking the most appropriate security measures

6. Storage: the information is stored in the database
7. Retrieval: the client requests the retrieval of the information
8. Aggregation: the information fragmented in step 2 is gathered
9. Decryption: the information encrypted in step 1 is decrypted
10. Sampling: a sample is kept for comparison with the original data
11. Sample destruction: the sample is deleted
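As a rough illustration of steps 1-3, the sketch below (the fragment size, server count and replica count are hypothetical values, not taken from the paper) encrypts a byte array with AES, splits the ciphertext into fixed-size fragments, and assigns each fragment to several servers for replication.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ProtectThenDistributeDemo {
    static final int FRAGMENT_SIZE = 16;   // bytes per fragment (illustrative)
    static final int SERVERS = 4;          // physical locations
    static final int REPLICAS = 2;         // copies kept of each fragment

    public static void main(String[] args) throws Exception {
        // Step 1: symmetric encryption of the bulk data (ECB used only to keep the sketch short).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("a bulky file destined for the public cloud".getBytes());

        // Step 2: fragmentation into fixed-size pieces.
        List<byte[]> fragments = new ArrayList<>();
        for (int off = 0; off < ciphertext.length; off += FRAGMENT_SIZE) {
            int end = Math.min(off + FRAGMENT_SIZE, ciphertext.length);
            fragments.add(Arrays.copyOfRange(ciphertext, off, end));
        }

        // Step 3: replication of each fragment on several servers (round-robin placement).
        for (int i = 0; i < fragments.size(); i++) {
            for (int r = 0; r < REPLICAS; r++) {
                int server = (i + r) % SERVERS;
                System.out.println("fragment " + i + " -> server " + server);
            }
        }
    }
}
```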

4 Implementation

In order to reduce the risk of malicious attacks, we evaluate the security of our model using the CloudSim simulator, taking into consideration three attacks performed by an intruder. We start with the prosecutor attack, in which the intruder knows that a particular sensitive record exists in an anonymized dataset and wants to detect which record belongs to it. We then consider the journalist attack, where the intruder does not care which specific record is re-identified; the only interest is to be able to claim that re-identification can be done. Finally, in the marketer attack the intruder wants to re-identify as many records as possible in the disclosed dataset. Our cloud consists of 4 servers, each hosting 3 virtual machines (Fig. 1).
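To make the attack models concrete, the following sketch uses a standard formulation from the k-anonymity literature (it is not code from the paper): each record's prosecutor risk is 1 divided by the size of its equivalence class, the highest risk comes from the smallest class, and "records at risk" are those whose risk exceeds a chosen threshold. The class sizes and threshold below are illustrative assumptions.

```java
public class ReidentificationRiskDemo {
    public static void main(String[] args) {
        // Sizes of the equivalence classes of an anonymized dataset (illustrative values).
        int[] classSizes = {1, 2, 5, 10, 20};
        double threshold = 0.2; // a record is "at risk" if its risk exceeds 20%

        int total = 0, atRisk = 0;
        double highestRisk = 0.0;
        for (int size : classSizes) {
            double risk = 1.0 / size;                  // prosecutor risk of every record in this class
            highestRisk = Math.max(highestRisk, risk);
            total += size;
            if (risk > threshold) {
                atRisk += size;
            }
        }

        System.out.printf("records at risk: %.1f%%%n", 100.0 * atRisk / total);
        System.out.printf("highest risk:    %.1f%%%n", 100.0 * highestRisk);
    }
}
```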


Fig. 1. Public cloud data storage and retrieval security model

4.1 CloudSim

CloudSim is a framework that allows simulating a cloud with its various components such as data centers, clients and applications. It can be integrated with development environments like Eclipse and NetBeans. In order to evaluate our model, we used two scenarios. In the first one, we considered a cloud without malicious hosts, that is, without any attacks on the files stored in the data center (Fig. 2).
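As a sketch of how such a topology can be declared, the fragment below sets up one data center with 4 hosts and 12 VMs (3 per host) using what we believe is the CloudSim 3.x API; the entity names, resource sizes and the broker are illustrative assumptions, not the exact configuration used in the paper.

```java
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

public class CloudTopologySketch {
    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);

        // 4 physical hosts, each with one 3000-MIPS core.
        List<Host> hosts = new ArrayList<>();
        for (int id = 0; id < 4; id++) {
            List<Pe> pes = new ArrayList<>();
            pes.add(new Pe(0, new PeProvisionerSimple(3000)));
            hosts.add(new Host(id, new RamProvisionerSimple(8192),
                    new BwProvisionerSimple(100000), 1_000_000, pes,
                    new VmSchedulerTimeShared(pes)));
        }

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hosts, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hosts), new LinkedList<Storage>(), 0);

        // A broker and 12 VMs (3 per host on average).
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        List<Vm> vms = new ArrayList<>();
        for (int id = 0; id < 12; id++) {
            vms.add(new Vm(id, broker.getId(), 1000, 1, 512, 1000, 10000,
                    "Xen", new CloudletSchedulerTimeShared()));
        }
        broker.submitVmList(vms);

        // Cloudlets would be submitted here before calling CloudSim.startSimulation().
    }
}
```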

Fig. 2. Data risks without defense mechanism

In the second scenario, we considered a cloud with malicious hosts. That is, we have attacks that target files stored in the data center (Fig. 3).


Fig. 3. Data risks with defense mechanism

5 Discussion

The final results of our simulation demonstrate the reliability of our approach in terms of data protection. First, concerning the prosecutor attack, the percentage of records at risk diminished from 87% to 2.5%, the highest risk decreased from 100% to 20%, and the success rate dropped from 80% to 2%. Regarding the journalist attack, the percentage of records at risk declined from 85% to 5%, the highest risk dropped from 100% to 20%, and the success rate went from 80% to 2.5%. Finally, with regard to the marketer attack, the percentage of records at risk declined from 80% to 3%, the highest risk was reduced from 100% to 20%, and the success rate went from 80% to 3%.

6 Conclusion

Cloud computing is a dematerialized IT model that enables its users to outsource their data and resources to a third-party supplier. This shift to a new environment can trouble the customer because of several factors, including trust in the provider, data geo-location, resource collocation and security. Several methods involving encryption, anonymization and tokenization have been proposed to protect the data transferred to the cloud. In this context, and building on previous work, we have proposed a model addressing these security issues based on encryption, anonymization and monitoring, complemented by sampling.


References
1. Aloqaily, M., Kantarci, B., Mouftah, H.T.: Vehicular clouds: state of the art, challenges and future directions. In: 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT). IEEE (2015)
2. Kaul, S., Sood, K., Jain, A.: Cloud computing and its emerging need: advantages and issues. Int. J. Adv. Res. Comput. Sci. 8(3), 618–624 (2017)
3. Liu, F., et al.: NIST cloud computing reference architecture. NIST Spec. Publ. 500-292, 1–28 (2011)
4. Dinh, H.T., et al.: A survey of mobile cloud computing: architecture, applications, and approaches. Wireless Commun. Mob. Comput. 13(18), 1587–1611 (2013)
5. Jadeja, Y., Modi, K.: Cloud computing - concepts, architecture and challenges. In: 2012 International Conference on Computing, Electronics and Electrical Technologies (ICCEET). IEEE (2012)
6. Zhang, L.-J., Zhou, Q.: CCOA: cloud computing open architecture. In: 2009 IEEE International Conference on Web Services. IEEE (2009)
7. Li, N., Mahalik, N.P.: A big data and cloud computing specification, standards and architecture: agricultural and food informatics. Int. J. Inf. Commun. Technol. 14(2), 159–174 (2019)
8. Ali, M., Khan, S.U., Vasilakos, A.V.: Security in cloud computing: opportunities and challenges. Inf. Sci. 305, 357–383 (2015)
9. Singh, S., Jeong, Y.-S., Park, J.H.: A survey on cloud computing security: issues, threats, and solutions. J. Netw. Comput. Appl. 75, 200–222 (2016)
10. Murthy, G., Srinivas, R.: Achieving multidimensional k-anonymity by a greedy approach. In: Proceedings of the International Conference on Web Services Computing (2011)
11. Kumar, P.M.V., Karthikeyan, M.: L-diversity on k-anonymity with external database for improving privacy preserving data publishing. Int. J. Comput. Appl. 54(14) (2012)
12. Sweeney, L.: Achieving k-anonymity privacy protection using generalization and suppression. Int. J. Uncertainty Fuzziness Knowl. Based Syst. 10, 571–588 (2002)
13. Machanavajjhala, A., Gehrke, J., Kifer, D.: ℓ-diversity: privacy beyond k-anonymity. Cornell University, p. 7 (2007)
14. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., Zaharia, M.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010)
15. Palgon, G.: Tokenization and enterprise data security. Information Systems Security Association (ISSA), p. 23 (2009)
16. Hu, L., Zhang, Y., Li, H., Yu, Y., Wu, F., Chu, J.: Challenges and trends on predicate encryption—a better searchable encryption in cloud. J. Commun. 9(12), 908–915 (2014)
17. Babbage, S.: Improved exhaustive search attacks on stream ciphers. In: European Convention on Security and Detection, IEE Conference Publication, vol. 408, pp. 161–166. IEE (1995)
18. Zadiraka, V.K., Kudin, A.M.: Cloud computing in cryptography and steganography. Cybern. Syst. Anal. 49(4) (2013)
19. Kalpana, P., Singaraju, S.: Data security in cloud computing using RSA algorithm. Int. J. Res. Comput. Commun. Technol. 1(4) (2012)
20. Aceto, G., Botta, A., De Donato, W., Pescapè, A.: Cloud monitoring: a survey. Comput. Netw. 57(9), 2093–2115 (2013)

Author Index

A Abraham, Ajith, 72 Amamou, Sonia, 441 Aswani Kumar, Ch., 249 B Bacanin, Nebojsa, 328 Barmaiya, Bhavana, 299 Ben Ayed, Yassine, 31 Bhagat, Diksha, 210 Bisen, Dhananjay, 291, 299 Bose, P. S. C., 41 Boujelben, Ines, 31 C Chakraborty, Shounak, 188 Chauhan, Aakash, 210 Cherukuri, Aswani Kumar, 309 Chhabra, Surbhi, 398 Chikmurge, Diptee, 319 Chopkar, Ankit, 210 Choubey, Dilip Kumar, 165 Claudiano, Luis Andre, 10 D Damak, Alima, 176 Das, Monidipa, 52 Dhar, Joydip, 362 Dighore, Nitin, 210 Dixit, Arati, 376 Dwivedi, Vijay Kumar, 134 E El Bakkali, Hanan, 350 El Kandoussi, Asmaa, 350

Elleuch, Mohamed, 103, 240 Elouedi, Zied, 145 Exposito, Ernesto, 429 F Febrianto, Rahmad Tirta, 1 Feki, Wiem, 176 G Gargouri, Norhene, 176 Ghosh, Soumya K., 52 Ghuge, Suyash, 230 Giripunje, Lokesh M., 258 H Hassine, Motaz Ben, 429 Hellani, Hussein, 429 I Iskanderov, Yury, 83 J Jaidhar, C. D., 230 Jain, Ankur, 21, 63 Jaiswal, Ayshwarya, 134 Jambak, Ahmad Ikrom Izzuddin, 1 Jambak, Muhammad Ihsan, 1 Jambak, Muhammad Irfan, 1 Jamoussi, Salma, 31 Jonnalagadda, Annapurna, 309 K Kalita, Indrajit, 188 Kane, Lalit, 291 Kaur, Raj kamal, 387


Khalifa, Malika Ben, 145 Khamparia, Aditya, 387 Khemakhem, Mariem, 240 Kherallah, Monji, 103, 123, 240 Khmakhem, Maher, 441 Kmimech, Mourad, 429 Kumar, Alok, 277 Kumar, Anoj, 277 Kumar, Nishant, 230 Kunhare, Nilesh, 362
L Lata, Kusum, 398 Lefèvre, Eric, 145 Lekhraj, 277 Li, Gang, 249
M Manupati, Vijayakumar, 41 Masand, Deepika, 258 Mezghani, Anis, 123, 240 Mishra, Krishn Kumar, 156 Mnif, Zaineb, 176 Mokni, Raouia, 176 Mukherjee, Abhishek, 291 Mundotiya, Rajesh Kumar, 113
N Neupane, Prasanga, 92 Nicoletti, Maria Do Carmo, 10 Noubigh, Zouhaira, 123
P Pal, Sukomal, 113 Pandey, Amritanshu, 249 Pandey, Babita, 387 Panigrahi, Suraj, 41 Patel, Atul, 52 Pautov, Mikhail, 83 Prasad, Ritu, 299 Putnik, Goran, 41
Q Qureshi, Shahana Gajala, 268
R Ramakurthi, Veera Babu, 41 Rane, Sagar, 376 Ranvijay, 156 Rauti, Sampsa, 409, 419 Reddy, V. Krishna, 220 Rekha, Gillala, 220 Rekik, Rim, 197 Roy, Binoy Krishna, 21 Roy, Moumita, 188
S Saikia, Prangshu, 63 Saini, Ravi, 339 Samhat, Abed Ellatif, 429 Saputra, Danny Matthew, 1 Saurabh, Praneet, 291, 299 Saurav, Sumeet, 339 Sellami, Dorra, 176 Shandilya, Shishir Kumar, 258, 268 Sharma, Shreeniwas, 92 Shetty, Adhiraj, 309 Shriram, R., 319 Singh, Anil Kumar, 113 Singh, Avjeet, 277 Singh, Lalit Kumar, 387 Singh, Sanjay, 339 Singh, Tribhuvan, 156 Sliman, Layth, 429 Snášel, Václav, 72 Srivastava, Keshav, 165 Strumberger, Ivana, 328
T Tamang, Ravi, 92 Thakare, Pratik, 210 Thaseen, Sumaiya, 249 Tiwari, Ritu, 362 Tote, Milind, 210 Trifa, Zied, 441 Trigui, Sana, 31 Tuba, Eva, 328 Tuba, Milan, 328 Tuladhar, Archana, 92 Tyagi, Amit Kumar, 220
V Varela, M. L. R., 41
W Wagh, Sanjeev, 376
Y Yadav, Naina, 113 Yadav, Om. Prakash, 134
Z Zivkovic, Miodrag, 328 Zjavka, Ladislav, 72