English · Pages: 1332 [1341] · Year: 2021
Advances in Intelligent Systems and Computing 1324
Pandian Vasant · Ivan Zelinka · Gerhard-Wilhelm Weber (Editors)
Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020)
Advances in Intelligent Systems and Computing Volume 1324
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of intelligent systems and intelligent computing. Virtually all disciplines are covered, such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and life science.

The list of topics spans all the areas of modern intelligent systems and computing, such as: computational intelligence; soft computing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms; social intelligence; ambient intelligence; computational neuroscience; artificial life; virtual worlds and society; cognitive science and systems; perception and vision; DNA and immune-based systems; self-organizing and adaptive systems; e-learning and teaching; human-centered and human-centric computing; recommender systems; intelligent control; robotics and mechatronics including human-machine teaming; knowledge-based paradigms; learning paradigms; machine ethics; intelligent data analysis; knowledge management; intelligent agents; intelligent decision making and support; intelligent network security; trust management; interactive entertainment; and Web intelligence and multimedia.

The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, of both a foundational and applicable character. An important characteristic of the series is the short publication time and worldwide distribution, which permit a rapid and broad dissemination of research results.

Indexed by SCOPUS, DBLP, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), and SCImago. All books published in the series are submitted for consideration in Web of Science.
More information about this series at http://www.springer.com/series/11156
Editors

Pandian Vasant
Department of Fundamental and Applied Sciences
Universiti Teknologi Petronas
Tronoh, Perak, Malaysia

Ivan Zelinka
Faculty of Electrical Engineering and Computer Science
VŠB TU Ostrava
Ostrava, Czech Republic

Gerhard-Wilhelm Weber
Faculty of Engineering Management
Poznan University of Technology
Poznan, Poland
ISSN 2194-5357 ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-68153-1 ISBN 978-3-030-68154-8 (eBook)
https://doi.org/10.1007/978-3-030-68154-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface
The third edition of the International Conference on Intelligent Computing and Optimization, ICO'2020, was held on an online platform due to the COVID-19 pandemic; the physical conference at G Hua Hin Resort & Mall, Hua Hin, Thailand, will take place once the pandemic has subsided. The objective of the international conference is to bring together global research leaders, experts, and scientists in the areas of intelligent computing and optimization to share their knowledge and experience of current research achievements in these fields. The conference provides a golden opportunity for the global research community to interact and share novel research results, findings, and innovative discoveries among colleagues and friends. The proceedings of ICO'2020 are published by SPRINGER (Advances in Intelligent Systems and Computing). Almost 100 authors submitted full papers for ICO'2020, representing more than 50 countries, such as Bangladesh, Canada, China, Croatia, France, Greece, Hong Kong, Italy, India, Indonesia, Iraq, Japan, Malaysia, Mauritius, Mexico, Myanmar, Namibia, Nigeria, Oman, Poland, Russia, Slovenia, South Africa, Sweden, Taiwan, Thailand, Turkmenistan, Ukraine, USA, UK, Vietnam, and others. This worldwide representation clearly demonstrates the growing interest of the research community in our conference. For this edition, the conference proceedings cover innovative, original, and creative research in sustainability, smart cities, metaheuristic optimization, cybersecurity, blockchain, big data analytics, IoT, renewable energy, artificial intelligence, Industry 4.0, and modeling and simulation. The organizing committee sincerely thanks all the authors and reviewers for their wonderful contributions to this conference.
The best, high-quality papers were selected and reviewed by the International Program Committee for publication in Advances in Intelligent Systems and Computing by SPRINGER. ICO'2020 presents enlightening contributions for research scholars across the planet in the areas of innovative computing and novel optimization techniques, together with cutting-edge methodologies and applications. This conference could not have been organized without the strong support and help from the
committee members of ICO’2020. We would like to sincerely thank Prof. Igor Litvinchev (Nuevo Leon State University (UANL), Mexico), Prof. Rustem Popa (“Dunarea de Jos” University in Galati, Romania), Prof. Jose Antonio Marmolejo (Universidad Panamericana, Mexico), and Dr. J. Joshua Thomas (UOW Malaysia KDU Penang University College, Malaysia) for their great help and support in organizing the conference. We also appreciate the valuable guidance and great contributions from Dr. J. Joshua Thomas (UOW Malaysia KDU Penang University College, Malaysia), Prof. Gerhard-Wilhelm Weber (Poznan University of Technology, Poland; Middle East Technical University, Turkey), Prof. Rustem Popa (“Dunarea de Jos” University in Galati, Romania), Prof. Valeriy Kharchenko (Federal Scientific Agro-engineering Center VIM, Russia), Dr. Vladimir Panchenko (Russian University of Transport, Russia), Prof. Ivan Zelinka (VSB-TU Ostrava, Czech Republic), Prof. Jose Antonio Marmolejo (Universidad Anahuac Mexico Norte, Mexico), Prof. Roman Rodriguez-Aguilar (Universidad Panamericana, Mexico), Prof. Ugo Fiore (Federico II University, Italy), Dr. Mukhdeep Singh Manshahia (Punjabi University Patiala, India), Mr. K. C. Choo (CO2 Networks, Malaysia), Prof. Celso C. Ribeiro (Brazilian Academy of Sciences, Brazil), Prof. Sergei Senkevich (Federal Scientific Agro-engineering Center VIM, Russia), Prof. Mingcong Deng (Tokyo University of Agriculture and Technology, Japan), Dr. Kwok Tai Chui (Open University of Hong Kong, Hong Kong), Prof. Hui Ming Wee (Chung Yuan Christian University, Taiwan), Prof. Elias Munapo (North West University, South Africa), Prof. M. Moshiul Hoque (Chittagong University of Engineering & Technology, Bangladesh), and Prof. Mohammad Shamsul Arefin (Chittagong University of Engineering and Technology, Bangladesh). Finally, we would like to convey our sincerest thanks to Prof. Dr. Janusz Kacprzyk, Dr. Thomas Ditzinger, and Ms.
Jayarani Premkumar of SPRINGER NATURE for their wonderful help and support in publishing the ICO'2020 conference proceedings in Advances in Intelligent Systems and Computing.

December 2020
Pandian Vasant
Gerhard-Wilhelm Weber
Ivan Zelinka
Contents
Sustainable Clean Energy System

Adaptive Neuro-Fuzzy Inference Based Modeling of Wind Energy Harvesting System for Remote Areas
Tigilu Mitiku and Mukhdeep Singh Manshahia

Features of Distributed Energy Integration in Agriculture
Alexander V. Vinogradov, Dmitry A. Tikhomirov, Vadim E. Bolshev, Alina V. Vinogradova, Nikolay S. Sorokin, Maksim V. Borodin, Vadim A. Chernishov, Igor O. Golikov, and Alexey V. Bukreev

Concept of Multi-contact Switching System
Alexander V. Vinogradov, Dmitry A. Tikhomirov, Alina V. Vinogradova, Alexander A. Lansberg, Nikolay S. Sorokin, Roman P. Belikov, Vadim E. Bolshev, Igor O. Golikov, and Maksim V. Borodin

The Design of Optimum Modes of Grain Drying in Microwave–Convective Effect
Dmitry Budnikov

Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology and Energy Supply of Livestock Farming
Aleksey N. Vasiliev, Gennady N. Samarin, Aleksey Al. Vasiliev, and Aleksandr A. Belov

Laboratory-Scale Implementation of Ethereum Based Decentralized Application for Solar Energy Trading
Patiphan Thupphae and Weerakorn Ongsakul

Solar Module with Photoreceiver Combined with Concentrator
Vladimir Panchenko and Andrey Kovalev
Modeling of Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module
Vladimir Panchenko, Sergey Chirskiy, Andrey Kovalev, and Anirban Banik

Formation of Surface of the Paraboloid Type Concentrator of Solar Radiation by the Method of Orthogonal Parquetting
Vladimir Panchenko and Sergey Sinitsyn

Determination of the Efficiency of Photovoltaic Converters Adequate to Solar Radiation by Using Their Spectral Characteristics
Valeriy Kharchenko, Boris Nikitin, Vladimir Panchenko, Shavkat Klychev, and Baba Babaev
Modeling of the Thermal State of Systems of Solar-Thermal Regeneration of Adsorbents
Gulom Uzakov, Saydulla Khujakulov, Valeriy Kharchenko, Zokir Pardayev, and Vladimir Panchenko

Economic Aspects and Factors of Solar Energy Development in Ukraine
Volodymyr Kozyrsky, Svitlana Makarevych, Semen Voloshyn, Tetiana Kozyrska, Vitaliy Savchenko, Anton Vorushylo, and Diana Sobolenko

A Method for Ensuring Technical Feasibility of Distributed Balancing in Power Systems, Considering Peer-to-Peer Balancing Energy Trade
Mariusz Drabecki

Sustainable Optimization, Metaheuristics and Computing for Expert System

The Results of a Compromise Solution, Which Were Obtained on the Basis of the Method of Uncertain Lagrange Multipliers to Determine the Influence of Design Factors of the Elastic-Damping Mechanism in the Tractor Transmission
Sergey Senkevich, Ekaterina Ilchenko, Aleksandr Prilukov, and Mikhail Chaplygin

Multiobjective Lévy-Flight Firefly Algorithm for Multiobjective Optimization
Somchai Sumpunsri, Chaiyo Thammarat, and Deacha Puangdownreong

Cooperative FPA-ATS Algorithm for Global Optimization
Thitipong Niyomsat, Sarot Hlangnamthip, and Deacha Puangdownreong
Bayesian Optimization for Reverse Stress Testing
Peter Mitic

Modified Flower Pollination Algorithm for Function Optimization
Noppadol Pringsakul and Deacha Puangdownreong

Improved Nature-Inspired Algorithms for Numeric Association Rule Mining
Iztok Fister Jr., Vili Podgorelec, and Iztok Fister

Verification of the Adequacy of the Topological Optimization Method of the Connecting Rod Shaping by the BESO Method in ANSYS APDL System
Sergey Chirskiy and Vladimir Panchenko

Method for Optimizing the Maintenance Process of Complex Technical Systems of the Railway Transport
Vladimir Apatsev, Victor Bugreev, Evgeniy Novikov, Vladimir Panchenko, Anton Chekhov, and Pavel Chekhov

Optimization of Power Supply System of Agricultural Enterprise with Solar Distributed Generation
Yu. V. Daus, I. V. Yudaev, V. V. Kharchenko, and V. A. Panchenko

Crack Detection of Iron and Steel Bar Using Natural Frequencies: A CFD Approach
Rajib Karmaker and Ujjwal Kumar Deb

Belief Rule-Based Expert System to Identify the Crime Zones
Abhijit Pathak, Abrar Hossain Tasin, Sanjida Nusrat Sania, Md. Adil, and Ashibur Rahman Munna

Parameter Tuning of Nature-Inspired Meta-Heuristic Algorithms for PID Control of a Stabilized Gimbal
S. Baartman and L. Cheng

Solving an Integer Program by Using the Nonfeasible Basis Method Combined with the Cutting Plane Method
Kasitinart Sangngern and Aua-aree Boonperm

A New Technique for Solving a 2-Dimensional Linear Program by Considering the Coefficient of Constraints
Panthira Jamrunroj and Aua-aree Boonperm

A New Integer Programming Model for Solving a School Bus Routing Problem with the Student Assignment
Anthika Lekburapa, Aua-aree Boonperm, and Wutiphol Sintunavarat

Distributed Optimisation of Perfect Preventive Maintenance and Component Replacement Schedules Using SPEA2
Anthony O. Ikechukwu, Shawulu H. Nggada, and José G. Quenum
A Framework for Traffic Sign Detection Based on Fuzzy Image Processing and Hu Features
Zainal Abedin and Kaushik Deb

Developing a Framework for Vehicle Detection, Tracking and Classification in Traffic Video Surveillance
Rumi Saha, Tanusree Debi, and Mohammad Shamsul Arefin

Advances in Algorithms, Modeling and Simulation for Intelligent Systems

Modeling and Simulation of Rectangular Sheet Membrane Using Computational Fluid Dynamics (CFD)
Anirban Banik, Sushant Kumar Biswal, Tarun Kanti Bandyopadhyay, Vladimir Panchenko, and J. Joshua Thomas

End-to-End Supply Chain Costs Optimization Based on Material Touches Reduction
César Pedrero-Izquierdo, Víctor Manuel López-Sánchez, and José Antonio Marmolejo-Saucedo

Computer Modeling Selection of Optimal Width of Rod Grip Header to the Combine Harvester
Mikhail Chaplygin, Sergey Senkevich, and Aleksandr Prilukov

An Integrated CNN-LSTM Model for Micro Hand Gesture Recognition
Nanziba Basnin, Lutfun Nahar, and Mohammad Shahadat Hossain

Analysis of the Cost of Varying Levels of User Perceived Quality for Internet Access
Ali Adib Arnab, Sheikh Md. Razibul Hasan Raj, John Schormans, Sultana Jahan Mukta, and Nafi Ahmad

Application of Customized Term Frequency-Inverse Document Frequency for Vietnamese Document Classification in Place of Lemmatization
Do Viet Quan and Phan Duy Hung

A New Topological Sorting Algorithm with Reduced Time Complexity
Tanzin Ahammad, Mohammad Hasan, and Md. Zahid Hassan

Modeling and Analysis of Framework for the Implementation of a Virtual Workplace in Nigerian Universities Using Coloured Petri Nets
James Okpor and Simon T. Apeh
Modeling and Experimental Verification of Air-Thermal and Microwave-Convective Presowing Seed Treatment
Alexey A. Vasiliev, Alexey N. Vasiliev, Dmitry A. Budnikov, and Anton A. Sharko

Modeling of Aluminum Profile Extrusion Yield: Pre-cut Billet Sizes
Jaramporn Hassamontr and Theera Leephaicharoen

Models for Forming Knowledge Databases for Decision Support Systems for Recognizing Cyberattacks
Valery Lakhno, Bakhytzhan Akhmetov, Moldyr Ydyryshbayeva, Bohdan Bebeshko, Alona Desiatko, and Karyna Khorolska

Developing an Intelligent System for Recommending Products
Md. Shariful Islam, Md. Shafiul Alam Forhad, Md. Ashraf Uddin, Mohammad Shamsul Arefin, Syed Md. Galib, and Md. Akib Khan

Branch Cut and Free Algorithm for the General Linear Integer Problem
Elias Munapo

Resilience in Healthcare Supply Chains
Jose Antonio Marmolejo-Saucedo and Mariana Scarlett Hartmann-González

A Comprehensive Evaluation of Environmental Projects Through a Multiparadigm Modeling Approach
Roman Rodriguez-Aguilar, Luz María Adriana Reyes Ortega, and Jose-Antonio Marmolejo-Saucedo

Plant Leaf Disease Recognition Using Histogram Based Gradient Boosting Classifier
Syed Md. Minhaz Hossain and Kaushik Deb

Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Breast Cancer and Its Recurrence
Anika Islam Aishwarja, Nusrat Jahan Eva, Shakira Mushtary, Zarin Tasnim, Nafiz Imtiaz Khan, and Muhammad Nazrul Islam

Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Risk of Cardiovascular Diseases
Mostafa Mohiuddin Jalal, Zarin Tasnim, and Muhammad Nazrul Islam

Searching Process Using Boyer Moore Algorithm in Digital Library
Laet Laet Lin and Myat Thuzar Soe
Application of Machine Learning and Artificial Intelligence Technology

Gender Classification from Inertial Sensor-Based Gait Dataset
Refat Khan Pathan, Mohammad Amaz Uddin, Nazmun Nahar, Ferdous Ara, Mohammad Shahadat Hossain, and Karl Andersson

Lévy-Flight Intensified Current Search for Multimodal Function Minimization
Wattanawong Romsai, Prarot Leeart, and Auttarat Nawikavatan

Cancer Cell Segmentation Based on Unsupervised Clustering and Deep Learning
Juel Sikder, Utpol Kanti Das, and A. M. Shahed Anwar

Automated Student Attendance Monitoring System Using Face Recognition
Bakul Chandra Roy, Imran Hossen, Md. Golam Rashed, and Dipankar Das

Machine Learning Approach to Predict the Second-Life Capacity of Discarded EV Batteries for Microgrid Applications
Ankit Bhatt, Weerakorn Ongsakul, and Nimal Madhu

Classification of Cultural Heritage Mosque of Bangladesh Using CNN and Keras Model
Mohammad Amir Saadat, Mohammad Shahadat Hossain, Rezaul Karim, and Rashed Mustafa

Classification of Orthopedic Patients Using Supervised Machine Learning Techniques
Nasrin Jahan, Rashed Mustafa, Rezaul Karim, and Mohammad Shahadat Hossain

Long Short-Term Memory Networks for Driver Drowsiness and Stress Prediction
Kwok Tai Chui, Mingbo Zhao, and Brij B. Gupta

Optimal Generation Mix of Hybrid Renewable Energy System Employing Hybrid Optimization Algorithm
Md. Arif Hossain, Saad Mohammad Abdullah, Ashik Ahmed, Quazi Nafees Ul Islam, and S. R. Tito

Activity Identification from Natural Images Using Deep CNN
Md. Anwar Hossain and Mirza A. F. M. Rashidul Hasan

Learning Success Prediction Model for Early Age Children Using Educational Games and Advanced Data Analytics
Antonio Tolic, Leo Mrsic, and Hrvoje Jerkovic
Advanced Analytics Techniques for Customer Activation and Retention in Online Retail
Igor Matic, Leo Mrsic, and Joachim Keppler

An Approach for Detecting Pneumonia from Chest X-Ray Image Using Convolution Neural Network
Susmita Kar, Nasim Akhtar, and Mostafijur Rahman

An Analytical Study of Influencing Factors on Consumers’ Behaviors in Facebook Using ANN and RF
Shahadat Hossain, Md. Manzurul Hasan, and Tanvir Hossain

Autism Spectrum Disorder Prognosis Using Machine Learning Algorithms: A Comparative Study
Oishi Jyoti, Nazmin Islam, Md. Omaer Faruq, Md. Abu Ismail Siddique, and Md. Habibur Rahaman

Multidimensional Failure Analysis Based on Data Fusion from Various Sources Using TextMining Techniques
Maria Stachowiak, Artur Skoczylas, Paweł Stefaniak, and Paweł Śliwiński

Road Quality Classification Adaptive to Vehicle Speed Based on Driving Data from Heavy Duty Mining Vehicles
Artur Skoczylas, Paweł Stefaniak, Sergii Anufriiev, and Bartosz Jachnik

Fabric Defect Detection System
Tanjim Mahmud, Juel Sikder, Rana Jyoti Chakma, and Jannat Fardoush

Alzheimer’s Disease Detection Using CNN Based on Effective Dimensionality Reduction Approach
Abu Saleh Musa Miah, Md. Mamunur Rashid, Md. Redwanur Rahman, Md. Tofayel Hossain, Md. Shahidujjaman Sujon, Nafisa Nawal, Mohammad Hasan, and Jungpil Shin

An Analytical Intelligence Model to Discontinue Products in a Transnational Company
Gabriel Loy-García, Román Rodríguez-Aguilar, and Jose-Antonio Marmolejo-Saucedo

Graph Neural Networks in Cheminformatics
H. N. Tran Tran, J. Joshua Thomas, Nurul Hashimah Ahamed Hassain Malim, Abdalla M. Ali, and Son Bach Huynh

Academic and Uncertainty Attributes in Predicting Student Performance
Abdalla M. Ali, J. Joshua Thomas, and Gomesh Nair
Captivating Profitable Applications of Artificial Intelligence in Agriculture Management
R. Sivarethinamohan, D. Yuvaraj, S. Shanmuga Priya, and S. Sujatha

Holistic IoT, Deep Learning and Information Technology

Mosquito Classification Using Convolutional Neural Network with Data Augmentation
Mehenika Akter, Mohammad Shahadat Hossain, Tawsin Uddin Ahmed, and Karl Andersson

Recommendation System for E-commerce Using Alternating Least Squares (ALS) on Apache Spark
Subasish Gosh, Nazmun Nahar, Mohammad Abdul Wahab, Munmun Biswas, Mohammad Shahadat Hossain, and Karl Andersson

An Interactive Computer System with Gesture-Based Mouse and Keyboard
Dipankar Gupta, Emam Hossain, Mohammed Sazzad Hossain, Mohammad Shahadat Hossain, and Karl Andersson

Surface Water Quality Assessment and Determination of Drinking Water Quality Index by Adopting Multi Criteria Decision Analysis Techniques
Deepjyoti Deb, Mrinmoy Majumder, Tilottama Chakraborty, Prachi D. Khobragade, and Khakachang Tripura

An Approach for Multi-human Pose Recognition and Classification Using Multiclass SVM
Sheikh Md. Razibul Hasan Raj, Sultana Jahan Mukta, Tapan Kumar Godder, and Md. Zahidul Islam

Privacy Violation Issues in Re-publication of Modification Datasets
Noppamas Riyana, Surapon Riyana, Srikul Nanthachumphu, Suphannika Sittisung, and Dussadee Duangban

Using Non-straight Line Updates in Shuffled Frog Leaping Algorithm
Kanchana Daoden and Trasapong Thaiupathump
Efficient Advertisement Slogan Detection and Classification Using a Hierarchical BERT and BiLSTM-BERT Ensemble Model
Md. Akib Zabed Khan, Saif Mahmud Parvez, Md. Mahbubur Rahman, and Md Musfique Anwar

Chronic Kidney Disease (CKD) Prediction Using Data Mining Techniques
Abhijit Pathak, Most. Asma Gani, Abrar Hossain Tasin, Sanjida Nusrat Sania, Md. Adil, and Suraiya Akter
Multi-classification of Brain Tumor Images Based on Hybrid Feature Extraction Method
Khaleda Akhter Sathi and Md. Saiful Islam

An Evolutionary Population Census Application Through Mobile Crowdsourcing
Ismail Hossain Mukul, Mohammad Hasan, and Md. Zahid Hassan

IoT-Enabled Lifelogging Architecture Model to Leverage Healthcare Systems
Saika Zaman, Ahmed Imteaj, Muhammad Kamal Hossen, and Mohammad Shamsul Arefin

An Improved Boolean Load Matrix-Based Frequent Pattern Mining
Shaishab Roy, Mohammad Nasim Akhtar, and Mostafijur Rahman

Exploring CTC Based End-To-End Techniques for Myanmar Speech Recognition
Khin Me Me Chit and Laet Laet Lin

IoT Based Bidirectional Speed Control and Monitoring of Single Phase Induction Motors
Ataur Rahman, Mohammad Rubaiyat Tanvir Hossain, and Md. Saifullah Siddiquee

Missing Image Data Reconstruction Based on Least-Squares Approach with Randomized SVD
Siriwan Intawichai and Saifon Chaturantabut

An Automated Candidate Selection System Using Bangla Language Processing
Md. Moinul Islam, Farzana Yasmin, Mohammad Shamsul Arefin, Zaber Al Hassan Ayon, and Rony Chowdhury Ripan

AutoMove: An End-to-End Deep Learning System for Self-driving Vehicles
Sriram Ramasamy and J. Joshua Thomas

An Efficient Machine Learning-Based Decision-Level Fusion Model to Predict Cardiovascular Disease
Hafsa Binte Kibria and Abdul Matin

Towards POS Tagging Methods for Bengali Language: A Comparative Analysis
Fatima Jahara, Adrita Barua, MD. Asif Iqbal, Avishek Das, Omar Sharif, Mohammed Moshiul Hoque, and Iqbal H. Sarker
BEmoD: Development of Bengali Emotion Dataset for Classifying Expressions of Emotion in Texts
Avishek Das, MD. Asif Iqbal, Omar Sharif, and Mohammed Moshiul Hoque

Advances in Engineering and Technology

Study of the Distribution Uniformity Coefficient of Microwave Field of 6 Sources in the Area of Microwave-Convective Impact
Dmitry Budnikov, Alexey N. Vasilyev, and Alexey A. Vasilyev

Floor-Mounted Heating of Piglets with the Use of Thermoelectricity
Dmitry Tikhomirov, Stanislav Trunov, Alexey Kuzmichev, Sergey Rastimeshin, and Victoria Ukhanova

The Rationale for Using Improved Flame Cultivator for Weed Control
Mavludin Abdulgalimov, Fakhretdin Magomedov, Izzet Melikov, Sergey Senkevich, Hasan Dogeev, Shamil Minatullaev, Batyr Dzhaparov, and Aleksandr Prilukov

The Lighting Plan: From a Sector-Specific Urbanistic Instrument to an Opportunity of Enhancement of the Urban Space for Improving Quality of Life
Cinzia B. Bellone and Riccardo Ottavi

PID Controller Design for BLDC Motor Speed Control System by Lévy-Flight Intensified Current Search
Prarot Leeart, Wattanawong Romsai, and Auttarat Nawikavatan

Intellectualized Control System of Technological Processes of an Experimental Biogas Plant with Improved System for Preliminary Preparation of Initial Waste
Andrey Kovalev, Dmitriy Kovalev, Vladimir Panchenko, Valeriy Kharchenko, and Pandian Vasant

Way for Intensifying the Process of Anaerobic Bioconversion by Preliminary Hydrolysis and Increasing Solid Retention Time
Andrey Kovalev, Dmitriy Kovalev, Vladimir Panchenko, Valeriy Kharchenko, and Pandian Vasant

Evaluation of Technical Damage Caused by Failures of Electric Motors
Anton Nekrasov, Alexey Nekrasov, and Vladimir Panchenko

Development of a Prototype Dry Heat Sterilizer for Pharmaceuticals Industry
Md. Raju Ahmed, Md. Niaz Marshed, and Ashish Kumar Karmaker
Optimization of Parameters of Pre-sowing Seed Treatment in Magnetic Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222 Volodymyr Kozyrsky, Vitaliy Savchenko, Oleksandr Sinyavsky, Andriy Nesvidomin, and Vasyl Bunko Development of a Fast Response Combustion Performance Monitoring, Prediction, and Optimization Tool for Power Plants . . . . . 1232 Mohammad Nurizat Rahman, Noor Akma Watie Binti Mohd Noor, Ahmad Zulazlan Shah b. Zulkifli, and Mohd Shiraz Aris Industry 4.0 Approaches for Supply Chains Facing COVID-19: A Brief Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242 Samuel Reong, Hui-Ming Wee, Yu-Lin Hsiao, and Chin Yee Whah Ontological Aspects of Developing Robust Control Systems for Technological Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252 Nataliia Lutskaya, Lidiia Vlasenko, Nataliia Zaiets, and Volodimir Shtepa A New Initial Basis for Solving the Blending Problem Without Using Artificial Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262 Chinchet Boonmalert, Aua-aree Boonperm, and Wutiphol Sintunavarat Review of the Information that is Previously Needed to Include Traceability in a Global Supply Chain . . . . . . . . . . . . . . . . . . . . . . . . . . 1272 Zayra M. Reyna Guevara, Jania A. Saucedo Martínez, and José A. Marmolejo Online Technology: Effective Contributor to Academic Writing . . . . . . 1281 Md. Hafiz Iqbal, Md Masumur Rahaman, Tanusree Debi, and Mohammad Shamsul Arefin A Secured Electronic Voting System Using Blockchain . . . . . . . . . . . . . 1295 Md. Rashadur Rahman, Md. Billal Hossain, Mohammad Shamsul Arefin, and Mohammad Ibrahim Khan Preconditions for Optimizing Primary Milk Processing . . . . . . . . . . . . . 1310 Gennady N. Samarin, Alexander A. Kudryavtsev, Alexander G. Khristenko, Dmitry N. Ignatenko, and Egor A. 
Krishtanov Optimization of Compost Production Technology . . . . . . . . . . . . . . . . . 1319 Gennady N. Samarin, Irina V. Kokunova, Alexey N. Vasilyev, Alexander A. Kudryavtsev, and Dmitry A. Normov Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
About the Editors
Pandian Vasant is a senior lecturer at University of Technology Petronas, Malaysia, and editor-in-chief of the International Journal of Energy Optimization and Engineering (IJEOE). He holds a PhD in Computational Intelligence (UNEM, Costa Rica), an MSc in Engineering Mathematics (University Malaysia Sabah, Malaysia), and a BSc (Hons, Second Class Upper) in Mathematics (University of Malaya, Malaysia). His research interests include soft computing, hybrid optimization, and innovative computing and applications. He has co-authored research articles in journals, conference proceedings, and book chapters (257 publications indexed in SCOPUS), has served as guest editor of special issues, and was General Chair of the EAI International Conference on Computer Science and Engineering in Penang, Malaysia (2016), and Bangkok, Thailand (2018). In 2009 and 2015, Dr. Pandian Vasant was awarded top reviewer and outstanding reviewer for the journal Applied Soft Computing (Elsevier). He has 30 years of working experience at universities. Currently, he is editor-in-chief of the International Journal of Energy Optimization and Engineering and a member of AMS (USA), the NAVY Research Group (TUO, Czech Republic), and the MERLIN Research Group (TDTU, Vietnam). H-index (Google Scholar) = 33; i10-index = 107. Ivan Zelinka is currently working at the Technical University of Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated successively from the Technical University in Brno (1995, MSc.), UTB in Zlin (2001, PhD), again the Technical University in Brno (2004, assoc. prof.), and VSB-TU (2010, professor). Before his academic career, he was employed as a TELECOM technician, a computer specialist (HW+SW), and a computer and LAN supervisor at a commercial bank. During his career at UTB, he proposed and opened seven different lecture courses.
He has also been invited to lecture at numerous universities in various EU countries and has served as keynote speaker at the Global Conference on Power, Control and Optimization in Bali, Indonesia (2009), the Interdisciplinary Symposium on Complex Systems in Halkidiki, Greece (2011), and IWCFTA 2012 in Dalian, China. His expertise is mainly in unconventional algorithms and cybersecurity. He is or has been the responsible supervisor of three grants of fundamental
research of the Czech grant agency GAČR and co-supervisor of the FRVŠ grant Laboratory of Parallel Computing. He has also worked on numerous grants and two EU projects, as a team member (FP5 RESTORM) and as supervisor of the Czech team (FP7 PROMOEVO), and supervised international research (funded by the TACR agency) on the security of mobile devices (Czech-Vietnam). Currently, he is a professor at the Department of Computer Science and has supervised more than 40 MSc. and 25 BSc. diploma theses in total. Ivan Zelinka also supervises doctoral students, including students from abroad. He received the Siemens Award for his PhD thesis, and an award from the journal Software News for his book on artificial intelligence. Ivan Zelinka is a member of the British Computer Society, editor-in-chief of the Springer book series Emergence, Complexity and Computation (http://www.springer.com/series/10624), a member of the editorial board of Saint Petersburg State University Studies in Mathematics, and a member of several international program committees of various conferences and international journals. He is the author of journal articles as well as books in Czech and English, and one of the three founders of the IEEE Technical Committee on Big Data (http://ieeesmc.org/about-smcs/history/2014-archives/44-about-smcs/history/2014/technical-committees/204-bigdata-computing/). He is also head of the research group NAVY (http://navy.cs.vsb.cz). G.-W. Weber is a professor at Poznan University of Technology, Poznan, Poland, at the Faculty of Engineering Management, Chair of Marketing and Economic Engineering. His research covers OR, financial mathematics, optimization and control, neuro- and bio-sciences, data mining, and education and development; he is involved in the organization of scientific life internationally. He received his Diploma and Doctorate in mathematics and economics/business administration at RWTH Aachen, and his Habilitation at TU Darmstadt.
He has held professorships by proxy at the University of Cologne and TU Chemnitz, Germany. At IAM, METU, Ankara, Turkey, he was a professor in the programs of Financial Mathematics and Scientific Computing and Assistant to the Director, and he has been a member of further graduate schools, institutes, and departments of METU. Further, he has affiliations at the universities of Siegen, Ballarat, Aveiro, and North Sumatra, and at Malaysia University of Technology, and he is "Advisor to EURO Conferences".
Sustainable Clean Energy System
Adaptive Neuro-Fuzzy Inference Based Modeling of Wind Energy Harvesting System for Remote Areas

Tigilu Mitiku¹ and Mukhdeep Singh Manshahia²

¹ Department of Mathematics, Bule Hora University, Bule Hora, Ethiopia
[email protected]
² Department of Mathematics, Punjabi University Patiala, Patiala, Punjab, India
[email protected]
Abstract. Wind speed has a great impact on the overall performance of a wind energy harvesting system. Due to the variable nature of wind speed, the system is controlled to work only in a specified range of wind speeds, to protect both the generator and the turbine from damage. This article presents an adaptive neuro-fuzzy inference system-based control scheme for operating the system between the cut-in and rated wind speeds. By controlling the generator speed to its optimum value, the generator power and speed fluctuations can be reduced. A Matlab/Simulink tool is used for the simulation and analysis of the system. The obtained results indicate that the adaptive neuro-fuzzy inference system is an effective method to control the rotor speed.

Keywords: Wind energy harvesting system · Adaptive neuro-fuzzy inference system · Wind speed
1 Introduction

Wind energy technology has shown rapid growth among renewable energy sources in most parts of the world, due to the depletion of fossil fuel reserves, rising pollution levels, and worrying changes in the global climate caused by conventional energy sources [1]. According to the Global Wind Energy Council (GWEC) report, around 50 GW of new wind power capacity was added in 2018, slightly less than in 2017, bringing the total global wind power generation capacity to 597 GW [2]. The report indicates that 2018 was the second year in a row with a growing number of new installations for energy generation. A Wind Energy Harvesting System (WEHS) can operate in either fixed-speed or variable-speed mode. In fixed-speed wind turbines, the generator rotates at the almost constant speed and frequency for which it is designed, regardless of variations in wind speed [1]. As the turbine is forced to operate at constant speed, it must be extremely robust to withstand the mechanical stress created by wind-speed fluctuations. In variable-speed wind turbines, on the other hand, the rotor of the generator is allowed to rotate freely over a wide range of wind speeds. The generator is directly
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 3–18, 2021. https://doi.org/10.1007/978-3-030-68154-8_1
connected to the grid in fixed-speed operation, whereas it is connected through power electronic equipment in a variable-speed system [3]. Thus, it is possible to control the rotor speed by means of power electronics to maintain the optimum tip speed ratio at all times under fluctuating wind speeds and so produce maximum power. Different types of AC generators can be used with modern wind turbines; researchers classify them into synchronous and asynchronous (induction) generators. Examples are the squirrel-cage rotor induction generator, wound-rotor induction generator, doubly-fed induction generator, synchronous generator (with external field excitation), and permanent magnet synchronous generator [3]. A Variable Speed Wind Turbine with Permanent Magnet Synchronous Generator (VSWT-PMSG) is an attractive choice for villages, settlements, and remote areas far from the grid [4]. A VSWT-PMSG can operate close to the optimal speed using maximum power point tracking (MPPT) at various wind speeds. This paper has seven sections: the first contains a general introduction to WEHS; the purpose of this work is described in Sect. 2; Sect. 3 reviews the control methods applied to WEHS; Sect. 4 describes the WEHS in detail; Sect. 5 describes the MPPT controller used in this work; and the model of the system, simulation results, and discussion are given in Sect. 6, followed by the conclusion and future scope in the final section.
2 Motivation and Objective of Research

The control system in a WEHS is used to produce maximum power and supply the load with constant voltage and frequency under changes in wind speed and load. A WEHS is controlled to work only in a specified range of wind speeds, limited by the cut-in and cut-out speeds; outside these limits, the turbine should be stopped to protect both the generator and the turbine from damage [5, 6]. Recent advancements in power electronics and control strategies have made it possible to regulate the voltage of the PMSG in many different ways. The main factor affecting the performance of the system is the variation in wind speed, which affects power system stability and power quality. On the machine side, a torque/speed controller is applied to improve the performance of the system at variable wind speeds between the cut-in and rated speeds. The grid-side inverter is also controlled to keep the DC-link voltage constant and to inject current into the grid at unity power factor, achieving maximum power delivery to the grid as the desired operating condition. Conventional control methods such as proportional-integral (PI) control have been proposed by different researchers for wind energy systems [7]. However, these methods need exact mathematical knowledge of the dynamic system, which is often difficult to derive for complex systems. To overcome this problem, researchers now tend to use intelligent control methods such as fuzzy logic control (FLC), neural networks (NN), adaptive neuro-fuzzy inference systems (ANFIS), and genetic algorithms (GA) to handle wind speed fluctuations. In this paper, an ANFIS controller is proposed to control the rotor speed of a WEHS. The objective of the ANFIS controller is to maximize energy production and ensure a continuous supply of energy to the grid by regulating the turbine speed so that the optimal tip speed ratio is maintained [8]. Modelling and simulation of the system is
developed using the Matlab/Simulink tool to enhance the performance of the system with the proposed speed controller. In this paper, the VSWT-PMSG system is equipped with AC/DC/AC power electronic converters through which the active and reactive power produced by the PMSG can be controlled. The present research analyzes the model of a variable-speed wind turbine equipped with a 1.5 MW PMSG.
3 Related Work

Studies on the modelling of WEHS using adaptive MPPT networks are limited, due to the complexity of the system and the sophistication of the control techniques. Researchers have applied different MPPT techniques for control applications. Boobalan et al. [9] proposed a fuzzy-PI control strategy to model a WEHS using a PMSG to provide optimum power to the grid. The proposed vector control provides the desired voltage at the output of the generator-side converter so as to control the generator speed. With the vector control scheme, the magnitude, phase voltage, and frequency of the generator currents are controlled, enabling acceptable results in steady state under variation in wind speed. However, if the wind speed changes significantly from time to time, this method may require a long search time to locate the maximum power point. Sarvi et al. [10] proposed an MPPT control algorithm based on particle swarm optimization (PSO) and fuzzy logic control for a variable-speed PMSG-based WEHS. The PSO algorithm determines the optimum rotor speed of the turbine generator and its maximum power for different wind speeds, whereas the FLC adjusts the duty cycle of the boost converter. Ahmed et al. [11] applied a fuzzy-PID controller-based MPPT to track the maximum power available from the WEHS and obtain AC voltage with constant amplitude and frequency. The error between the actual rotor speed of the PMSG and an estimated rotor speed, which depends on the DC current and voltage at the terminals of the boost converter and on the PMSG parameters, is used to obtain maximum power. The rotor speed error is given as input to the fuzzy-PID controller, which controls the duty cycle of the pulse width modulation (PWM) generator whose output is connected to the boost converter. Torki et al. [12] applied a vector control strategy using a PI controller to model and control a direct-drive WEHS with a PMSG.
The actual speed of the generator is compared with its reference value, which is obtained by an MPPT method using optimal tip speed ratio control and gives the reference current. Marmouh et al. [13] applied two controllers to produce maximum active power from a WEHS equipped with a PMSG whose stator is connected to the grid through a back-to-back AC-DC-AC converter. The first is an FLC-based MPPT algorithm for stator-side control, using hysteresis control and an optimal generator speed reference estimated from different wind speeds to generate maximum active power; the second FLC is applied on the grid side to keep the DC voltage between the two converters smooth at its reference value. Asghar et al. [5] proposed a hybrid intelligent learning-based ANFIS for online estimation of the effective wind speed from instantaneous values of the wind turbine TSR, rotor speed, and mechanical power; the estimated wind speed is used to design an optimal rotor speed estimator for MPPT of a VSWT. Gencer [14] demonstrated a fuzzy logic control system for a variable-speed WEHS to investigate the power flow efficiency of a
PMSG. The fuzzy logic controller was structured for MPPT to drive the WEHS at the optimum speed corresponding to maximum power at any wind speed. Kesraoui et al. [15] examined fuzzy logic control of the aerodynamic power of a variable-speed wind turbine to manage the excess power generated at high wind speeds, for a PMSG connected to the grid through a back-to-back power converter; the pitch angle was also controlled using fuzzy logic to limit the aerodynamic power and the DC bus voltage. Nadoushan et al. [16] presented optimal torque control (OTC) of a stand-alone variable-speed small-scale wind turbine equipped with a PMSG and a switch-mode rectifier. This control method is used to extract optimal power from the wind, keep the voltage at 400 V with 50 Hz frequency, and produce maximum power output. The simulation model of the system was developed in MATLAB/Simulink.
4 Wind Energy Harvesting System

The raw output voltage of the generator is not directly usable because its amplitude and frequency vary with the fluctuating wind speed throughout the day. The system considered in this paper consists of a 1.5 MW wind turbine and a PMSG connected to an AC/DC converter and a DC/AC converter modelled by voltage sources. The AC/DC converter converts the AC voltage of variable amplitude and frequency on the generator side to DC voltage at the DC link. The DC-link voltage should be constant for direct use, for storage, and for conversion from DC to AC by the inverter. The DC voltage is then converted back to AC voltage with constant amplitude and frequency on the load side for electrical utilization. A back-to-back PWM converter based power electronic interface is a suitable option for wind-power applications with a PMSG.

4.1 Modelling of Wind Turbine
The mechanical power captured by a wind turbine is given by [17–19] (Fig. 1)

P_m = \frac{1}{2} C_P(\lambda, \beta) \rho \pi R^2 V_w^3    (1)

The mechanical torque generated from the wind power, which is transferred through the generator shaft to the rotor of the generator, is given by

T_m = \frac{C_P(\lambda, \beta) \rho \pi R^2 V_w^3}{2 \omega_m}    (2)

where ρ is the density of air, A = πR² is the area swept by the blades, V_w is the wind speed, C_P is the power extraction efficiency coefficient of the wind turbine, β is the pitch angle of the blade, and λ is the tip speed ratio. The power coefficient C_P is a non-linear function of
Fig. 1. PMSG based wind turbine model [21, 24]
the tip speed ratio λ, which depends on the wind velocity and the rotational speed of the shaft, ω_m in rad/s, and is given by [20–22]

C_P(\lambda, \beta) = 0.5176 \left( \frac{116}{\lambda_i} - 0.4\beta - 5 \right) e^{-21/\lambda_i} + 0.0068\,\lambda    (3)

with

\frac{1}{\lambda_i} = \frac{1}{\lambda + 0.08\beta} - \frac{0.035}{\beta^3 + 1}    (4)

The tip speed ratio λ is defined by

\lambda = \frac{R \omega_m}{V_w} = \frac{\pi R n}{30 V_w}    (5)
where R is the blade radius and n is the wind turbine rotor speed in revolutions per minute (rpm). Since there is no gearbox, the shaft speed of the rotor and the mechanical generator speed are the same, and the mechanical torque transferred to the generator equals the aerodynamic torque. The theoretical maximum value of C_P is 0.59 (the Betz limit), which means the power extracted from the wind is at all times less than 59% of the available power, because of various aerodynamic losses that depend on rotor construction [3]. For a VSWT the pitch angle is nearly zero; at β = 0° the maximum power coefficient of this turbine is 0.4412.
To maintain the tip speed ratio at its optimum value, ω_m must change with the wind speed. Therefore, to extract maximum power from the wind, the tip speed ratio should be kept at its optimum value at any wind speed [23]. Hence, the wind turbine produces maximum power when it operates at the optimum value of C_P, denoted C_P-opt. It is therefore necessary to adjust the rotor speed to track the optimum tip speed ratio λ_opt, as shown in Fig. 2 below.
Fig. 2. C_P(λ, β) characteristic for different values of the pitch angle β [10]
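The aerodynamic relations in Eqs. (1)–(5) can be checked numerically. The sketch below is written in Python rather than the paper's Matlab/Simulink; the constants in the C_P fit are the widely used heuristic approximation assumed here, not measured data from this turbine, and the rotor radius and air density are illustrative placeholders.

```python
import numpy as np

def cp(lam, beta):
    # Eqs. (3)-(4): heuristic power-coefficient fit (assumed constants)
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0)
    return (0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0) * np.exp(-21.0 * inv_li)
            + 0.0068 * lam)

# Scan lambda at beta = 0 to locate the optimum tip speed ratio
lams = np.linspace(1.0, 14.0, 1301)
cps = cp(lams, 0.0)
lam_opt = float(lams[np.argmax(cps)])
cp_max = float(cps.max())
print(lam_opt, cp_max)   # lambda_opt near 8; C_P stays below the Betz limit 0.59

# Eq. (1): mechanical power at the optimum point for an assumed rotor
rho, radius, v_w = 1.225, 35.0, 11.0   # air density, blade radius, wind speed
p_m = 0.5 * cp_max * rho * np.pi * radius ** 2 * v_w ** 3
```

Holding the rotor at λ_opt for every wind speed in this way is exactly what the MPPT loop described in Sect. 6 aims to achieve.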
4.2 Modeling of the PMSG
For the dynamic modeling of the PMSG, the magnetomotive force (mmf) is considered to be sinusoidally distributed, and hysteresis and eddy current effects are neglected. Transforming the three-phase stator voltages into the d-q reference frame aligned with the electrical rotor position θ, the stator voltage equations of a salient-pole PMSG are given by Eqs. (6) and (7) [21, 22]:

\frac{d i_d}{dt} = \frac{1}{L_d} V_d - \frac{R_s}{L_d} i_d + \frac{L_q}{L_d} P \omega_m i_q    (6)

\frac{d i_q}{dt} = \frac{1}{L_q} V_q - \frac{R_s}{L_q} i_q - \frac{P \omega_m}{L_q} (L_d i_d + \psi_f)    (7)

where L_q and L_d are the q- and d-axis components of the stator inductance of the generator, R_s is the resistance of the stator windings, i_q, i_d and V_q, V_d are the q- and d-axis components of the stator current and voltage respectively, ψ_f is the flux linkage induced by the permanent magnets, and P is the number of pole pairs.
The generator produces an electrical torque, and the difference between the mechanical torque and the electrical torque determines whether the mechanical system accelerates, decelerates, or remains at constant speed. The electric torque produced by the generator is given by [21, 25, 26]

T_e = \frac{3}{2} P \left[ \psi_f i_q + (L_d - L_q) i_d i_q \right]    (8)

For a surface-mounted PMSG, L_q = L_d and Eq. (8) becomes

T_e = \frac{3}{2} P \psi_f i_q    (9)
The active and reactive powers of the PMSG in steady state are given by

P_s = V_d i_d + V_q i_q    (10)

Q_s = V_q i_d - V_d i_q    (11)
Since the wind turbine and generator shafts are directly coupled without a gearbox, there is only one state variable. The mechanical equation of the PMSG and wind turbine is given by

\frac{d \omega_m}{dt} = \frac{1}{J} \left( T_m - T_e - f \omega_m \right)    (12)

\omega_e = P \omega_m, \qquad \frac{d\theta}{dt} = \omega_m    (13)

where T_e is the electromagnetic torque of the generator in Nm, J is the inertia of the rotor and generator in kg·m², f is the coefficient of viscous friction, which can be neglected in a small-scale wind turbine, ω_e is the electrical rotational speed of the rotor, and θ is the rotor (electrical) angle required for the abc ↔ d-q transformation [21].
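Equations (9) and (12) can be illustrated with a minimal one-mass simulation. This is a hedged sketch: the flux linkage, inertia, and torque values below are illustrative placeholders rather than parameters of the 1.5 MW machine studied in the paper, and a simple forward-Euler step stands in for the Simulink solver.

```python
# Illustrative (assumed) parameters, not the paper's machine data
P_PAIRS = 44   # pole pairs
PSI_F = 8.0    # permanent-magnet flux linkage [Wb]
J = 5.0e5      # combined rotor + generator inertia [kg*m^2]

def te(iq):
    # Eq. (9): surface-mounted PMSG (Ld = Lq), Te = (3/2) * P * psi_f * iq
    return 1.5 * P_PAIRS * PSI_F * iq

def step_speed(omega_m, t_m, iq, dt, f=0.0):
    # Eq. (12): J * d(omega_m)/dt = Tm - Te - f*omega_m (friction f ~ 0 here)
    return omega_m + dt * (t_m - te(iq) - f * omega_m) / J

# With iq chosen so that Te balances Tm, the rotor speed stays constant
t_m = 8.0e5                                  # mechanical torque [Nm]
iq_balance = t_m / (1.5 * P_PAIRS * PSI_F)   # q-axis current for Te = Tm
omega = 1.8                                  # initial mechanical speed [rad/s]
for _ in range(1000):
    omega = step_speed(omega, t_m, iq_balance, dt=0.01)
print(omega)
```

Setting i_q above or below `iq_balance` makes the rotor decelerate or accelerate, which is the lever the speed controller in Sect. 6 uses.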
5 Adaptive Neuro-Fuzzy Inference System

Fuzzy logic control currently plays a significant role in the development and design of many real-time control applications [4]. An FLC consists of three important functional blocks: fuzzification, which assigns fuzzy variables to the crisp data using membership functions; the inference engine, which maps inputs to outputs through fuzzy rules using its knowledge base; and defuzzification, the reverse of fuzzification, which provides the final output to the plant to be controlled [18]. Different adaptive techniques can easily be implemented to increase the performance of the network. The ability of the network depends on the quality of the signals used for training and on the performance of the training algorithms and their parameters. ANFIS is one of the neuro-fuzzy networks that gives the best results in the control of wind energy systems. ANFIS is a fuzzy inference system (FIS) whose membership functions
and rule base are appropriately tuned (adjusted) by an ANN. It takes advantage of both the learning capability of ANNs and the human-knowledge-based decision-making power of a FIS. It uses a combination of least-squares estimation and backpropagation for membership function parameter estimation.
Fig. 3. (a) A first-order Sugeno fuzzy model with two inputs and two rules; (b) equivalent ANFIS architecture [16]
The sugeno’s fuzzy logic model and the corresponding ANFIS architecture is shown in Fig. 3. Assume the FIS under consideration has two inputs x, y and one output fi as shown in Fig. 3 above. Square node indicates adaptive node whereas circle node indicates fixed (non-adaptive) nodes. For a first-order Sugeno fuzzy model, a common rule set with two fuzzy if-then rule is: Rule 1: If x is A1 and y is B1 ; then f1 ¼ p1 x þ q1 y þ r1 ;
ð14Þ
Rule 2: If x is A1 and y is B1 ; then f2 ¼ p2 x þ q2 y þ r2 ;
ð15Þ
Here O_i^n denotes the output of the i-th node (neuron) in layer n, and p_i, q_i, and r_i are consequent parameters of the first-order polynomial, updated during the forward pass of learning by the least-squares method [27].

Layer 1. Every node i in this layer is an adaptive node with node function

O_i^1 = \mu_{A_i}(x) \ \text{for} \ i = 1, 2 \quad \text{and} \quad O_i^1 = \mu_{B_{i-2}}(y) \ \text{for} \ i = 3, 4    (16)

where A_i is the linguistic label (such as big, very small, large) associated with this node. μ_{A_i}(x) is the membership function of A_i and specifies the degree to which the given x satisfies the quantifier A_i. It is typically chosen to be a triangular or bell-shaped membership function; the generalized bell function, with maximum equal to 1 and minimum equal to 0, is selected for this study.

Layer 2. Every node in this layer is a fixed (non-adaptive) node labeled Π, which multiplies the incoming signals and sends the product to the next layer:

O_i^2 = w_i = \mu_{A_i}(x) \cdot \mu_{B_i}(y), \quad i = 1, 2    (17)

The result represents the firing strength of a rule. Other T-norm operators that perform a generalized AND can also be used as the node function in this layer.

Layer 3. Every node in this layer is a fixed node labeled N that computes the normalized firing strength, i.e., the ratio of a rule's firing strength to the sum of the firing strengths of all rules:

O_i^3 = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2    (18)

For convenience, the outputs of this layer are called normalized firing strengths.

Layer 4. Every node i in this layer is an adaptive (square) node that multiplies the normalized firing strength of a rule by the corresponding first-order polynomial, producing a crisp output:

O_i^4 = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)    (19)

where \bar{w}_i is the output of layer 3 and p_i, q_i, and r_i are the consequent parameters. This layer is called the defuzzification layer.

Layer 5. The single node in this layer is a fixed node labeled Σ that computes the overall output as the summation of all incoming signals, transforming the fuzzy results into a single crisp output; it is called the output layer:

O^5 = f = \sum_i \bar{w}_i f_i = \frac{w_1 f_1 + w_2 f_2}{w_1 + w_2}    (20)
Therefore, when the values of the premise parameters in layer 1 are fixed, the overall output can be expressed as a linear combination of the consequent parameters in layer 4. An adaptive network functionally equivalent to a first-order Sugeno fuzzy model is constructed this way. The ANFIS learning (training) algorithm adjusts all the adjustable parameters so that the ANFIS output matches the training data. Each training epoch is divided into two phases. In the first phase (forward pass), functional signals go forward up to layer 4 and the consequent parameters are adjusted with the least-squares method. In the second phase (backward pass), the error rates propagate backwards and the premise parameters are updated with the gradient descent (backpropagation) method [28]. When these parameters are fixed, the ANFIS output is the summation of all incoming signals, producing a single crisp output. Thus, a combination of gradient descent and least-squares methods can easily find optimal values for the consequent parameters p_i, q_i, and r_i. The ANFIS-based MPPT controller computes the optimum speed for the maximum power point using information on the magnitude and direction of the change in power output due to the change in command speed.
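The five layers above amount to a short forward pass. The sketch below is a minimal, self-contained illustration of Eqs. (14)–(20) for two inputs and two rules with generalized-bell memberships; all parameter values are made up for the example, and the hybrid training passes (least squares plus backpropagation) are omitted.

```python
def gbell(x, a, b, c):
    # Generalized bell membership: 1 / (1 + |(x - c)/a|^(2b)), max 1 at x = c
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, mfs_x, mfs_y, consequents):
    # Layer 1: fuzzify each input (Eq. 16)
    mu_x = [gbell(x, *p) for p in mfs_x]
    mu_y = [gbell(y, *p) for p in mfs_y]
    # Layer 2: rule firing strengths via the product T-norm (Eq. 17)
    w = [mu_x[i] * mu_y[i] for i in range(2)]
    # Layer 3: normalized firing strengths (Eq. 18)
    s = w[0] + w[1]
    wn = [wi / s for wi in w]
    # Layers 4-5: weighted first-order polynomials, summed (Eqs. 19-20)
    return sum(wn[i] * (p * x + q * y + r)
               for i, (p, q, r) in enumerate(consequents))

# Made-up premise (a, b, c) and consequent (p, q, r) parameters
out = anfis_forward(0.5, 0.2,
                    mfs_x=[(1.0, 2, 0.0), (1.0, 2, 1.0)],
                    mfs_y=[(1.0, 2, 0.0), (1.0, 2, 1.0)],
                    consequents=[(1.0, 1.0, 0.0), (2.0, 0.0, 1.0)])
print(out)   # a convex combination of the rule outputs f1 = 0.7 and f2 = 2.0
```

Because layer 3 normalizes the firing strengths, the overall output always lies between the smallest and largest individual rule outputs.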
6 Simulation Results and Discussion

The generator-side converter is controlled to capture maximum power from the available wind power. According to Eq. (9), in order to obtain maximum electromagnetic torque T_e with minimum current, this study controls only the q-axis current i_q, under the assumption that the d-axis current i_d = 0. Moreover, according to [21, 29], to produce maximum power, the optimum value of the rotation speed is adjusted using a fuzzy logic
control technique. The relation between the blade angular velocity reference ω_ref and the wind speed V_w, for constant R and λ_opt, is given by

\omega_{mref} = \frac{\lambda_{opt} V_w}{R}    (21)
First, the wind speed is estimated by the proposed ANFIS-based MPPT controller to generate the reference speed ω_mref for the speed control loop of the rotor-side converter, which tracks the maximum power points by dynamically changing the turbine torque so that the system operates at λ_opt, as shown in Fig. 4. The PI controller drives the actual rotor speed to the desired value by varying the switching ratio of the PWM inverter. In the speed control loop, the actual speed of the generator is compared with its reference value ω_ref, obtained from the ANFIS-based MPPT control defined above; the speed controller then outputs the reference q-axis current i_qref. The control target of the inverter is the output power delivered to the load [30, 31].
Fig. 4. Speed control block diagram
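Equation (21) reduces the speed reference to a single line of code. In the sketch below, λ_opt and the blade radius are illustrative assumptions (the paper does not list the rotor radius), so the numbers only demonstrate the linear scaling of ω_mref with wind speed:

```python
def omega_mref(v_w, lam_opt=8.1, radius=35.0):
    # Eq. (21): speed reference that keeps the tip speed ratio at lambda_opt
    # lam_opt and radius are illustrative values, not taken from Table 1
    return lam_opt * v_w / radius

for v_w in (3.0, 7.0, 11.0):   # cut-in, mid-range, and rated wind speed
    print(v_w, omega_mref(v_w))
```

The speed loop in Fig. 4 compares this reference with the measured generator speed to produce the q-axis current reference.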
The block diagram of the ANFIS-based MPPT controller module is shown in Fig. 5. The inputs to the ANFIS network are the mechanical power and the speed of the turbine. The network estimates the effective wind speed, which is used to find the reference speed of the rotor. Table 1 shows the parameters of the wind turbine and generator used for the model.
Fig. 5. ANFIS-based MPPT control module of turbine rotor speed.
The simulated generator output voltage at an average wind speed of 12 m/s is shown in Fig. 6 below. The voltage is close to the rated voltage of the generator, the speed fluctuations are reduced, and the obtained voltage is practically sinusoidal.
Fig. 6. Inverter output voltage
The advantages of ANFIS over the two parts of this hybrid system are as follows. ANFIS uses the neural network's ability to classify data and find patterns, and then develops a fuzzy expert system that is more transparent to the user and less likely to produce memorization errors than a neural network. ANFIS removes (or at least reduces) the need for an expert. Furthermore, ANFIS can divide the data into groups and adapt these groups to arrange the membership functions that best cluster the data, deducing the desired output with a minimum number of epochs [32]. The learning mechanism fine-tunes the underlying fuzzy inference system. Using a given input/output data set,
ANFIS constructs a fuzzy inference system (FIS) whose membership function parameters are tuned (adjusted) using either the backpropagation algorithm alone or in combination with a least-squares method, which allows the fuzzy system to learn from the data [33, 34]. However, ANFIS is restricted to Sugeno-type inference: there can be only one output, and the defuzzification method is the weighted mean value. ANFIS can replace the anemometer in a small wind energy system, reducing the size of the turbine as well as the cost; moreover, it increases the power production of the system. Sensorless estimation with the help of an ANFIS network is a very good opportunity for small and medium wind energy generation systems [35, 36].

Table 1. Parameters of wind turbine and PMSG

Wind turbine                          PMSG
Rated power          1.5 MW           Rated power             1.5 MW
Cut-in wind speed    3 m/s            Rated rotational speed  17.3 rpm
Rated wind speed     11 m/s           P (pole pairs)          44
Cut-out wind speed   22 m/s           Frequency               50 Hz
                                      Rated voltage           200 V

6.1 Limitation of Research
The results presented in this paper have some limitations. The data set used in this study was collected from a 1.5 MW variable-speed PMSG-based wind turbine. It would have been preferable to validate the results in a laboratory. However, the actual system is very expensive and large, and is not usually available in laboratories except those that are highly equipped and dedicated solely to wind energy research. The practical option is therefore to collect data samples from an operational wind turbine system and then use the collected data to design the estimation and control mechanism.
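To illustrate what such a data-driven estimator learns, the effective wind speed can be recovered from measured mechanical power and rotor speed by numerically inverting the turbine power curve. The sketch below uses the widely cited heuristic Cp(λ, β) approximation and bisection; the rotor radius and coefficient values are illustrative assumptions, not parameters from this paper, and in the paper's approach the trained ANFIS network replaces this analytic inversion:

```python
import math

RHO, R = 1.225, 40.0  # air density [kg/m^3]; assumed rotor radius [m]

def cp(lam, beta=0.0):
    """Heuristic power-coefficient model Cp(lambda, beta)."""
    inv_li = 1.0 / (lam + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0)
    return (0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0)
            * math.exp(-21.0 * inv_li) + 0.0068 * lam)

def mech_power(v, omega):
    """Mechanical power [W] at wind speed v [m/s], rotor speed omega [rad/s]."""
    return 0.5 * RHO * math.pi * R ** 2 * cp(omega * R / v) * v ** 3

def estimate_wind_speed(p_meas, omega, lo=2.0, hi=15.0, tol=1e-4):
    """Bisection for v such that mech_power(v, omega) == p_meas,
    assuming the power curve is monotone in v over the bracket."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mech_power(mid, omega) < p_meas:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: synthesize a measurement at 8 m/s, then recover the wind speed.
omega = 8.1 * 8.0 / R                 # rotor running at the optimal tip-speed ratio
p = mech_power(8.0, omega)
v_hat = estimate_wind_speed(p, omega)
```

An ANFIS trained on (power, rotor speed) → wind speed samples from an operational turbine performs the same mapping without requiring an explicit Cp model.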
7 Conclusion and Future Scope
As wind speed changes throughout the day, variable-speed wind power generation is useful for optimizing the power output of a wind energy harvesting system using MPPT methods. This paper presented the modeling and simulation of a variable-speed PMSG-based system using an ANFIS network. The network is applied to control the rotor speed, adjusting it to the value that gives maximum power output. Results have shown that the voltage is close to the rated voltage of the generator and that speed fluctuations are reduced. Future work is to control the pitch angle at wind speeds above the rated speed so as to produce rated power.
Acknowledgments. The authors acknowledge Punjabi University Patiala for providing the internet facilities and the necessary library resources.
References
1. Zhang, J., Xu, S.: Application of fuzzy logic control for grid-connected wind energy conversion system. In: Dadios, E.P. (ed.) Fuzzy Logic – Tool for Getting Accurate Solutions, pp. 50–77. IntechOpen (2015). https://doi.org/10.5772/59923
2. WWEA: Wind power capacity worldwide reaches 597 GW, 50.1 GW added in 2018. World Wind Energy Association, Brazil (2019)
3. Husain, M.A., Tariq, A.: Modeling and study of a standalone PMSG wind generation system using MATLAB/SIMULINK. Univ. J. Electr. Electron. Eng. 2(7), 270–277 (2014)
4. Ali, A., Moussa, A., Abdelatif, K., Eissa, M., Wasfy, S., Malik, O.P.: ANFIS based controller for rectifier of PMSG wind energy conversion system. In: Proceedings of Electrical Power and Energy Conference (EPEC), pp. 99–103. IEEE, Calgary (2014)
5. Asghar, A.B., Liu, X.: Adaptive neuro-fuzzy algorithm to estimate effective wind speed and optimal rotor speed for variable-speed wind turbine. Neurocomputing 272, 495–504 (2017)
6. Pindoriya, R.M., Usman, A., Rajpurohit, B.S., Srivastava, K.N.: PMSG based wind energy generation system: energy maximization and its control. In: 7th International Conference on Power Systems (ICPS), pp. 376–381. IEEE, Pune (2017)
7. Khaing, T.Z., Kyin, L.Z.: Control scheme of stand-alone wind power supply system with battery energy storage system. Int. J. Electr. Electron. Data Commun. 3(2), 19–25 (2015)
8. El-Tamaly, H.H., Nassef, A.Y.: Tip speed ratio and pitch angle control based on ANN for putting variable speed WTG on MPP. In: 18th International Middle-East Power Systems Conference (MEPCON), pp. 625–632. IEEE Power & Energy Society, Cairo (2016)
9. Boobalan, M., Vijayalakshmi, S., Brindha, R.: A fuzzy-PI based power control of wind energy conversion system using PMSG. In: International Conference on Energy Efficient Technologies for Sustainability, pp. 577–583. IEEE, Nagercoil (2013)
10. Sarvi, M., Abdi, S., Ahmadi, S.: A new method for rapid maximum power point tracking of PMSG wind generator using PSO-fuzzy logic. Tech. J. Eng. Appl. Sci. 3(17), 1984–1995 (2013)
11. Ahmed, O.A., Ahmed, A.A.: Control of wind turbine for variable speed based on fuzzy-PID controller. J. Eng. Comput. Sci. (JECS) 18(1), 40–51 (2017)
12. Torki, W., Grouz, F., Sbita, L.: Vector control of a PMSG direct-drive wind turbine. In: International Conference on Green Energy Conversion Systems (GECS), pp. 1–6. IEEE, Hammamet (2017)
13. Marmouh, S., Boutoubat, M., Mokrani, L.: MPPT fuzzy logic controller of a wind energy conversion system based on a PMSG. In: 8th International Conference on Modelling, Identification and Control, Algiers, pp. 296–302 (2016)
14. Gencer, A.: Modelling of operation PMSG based on fuzzy logic control under different load conditions. In: 10th International Symposium on Advanced Topics in Electrical Engineering (ATEE), pp. 736–739. IEEE, Bucharest (2017)
15. Kesraoui, M., Lagraf, S.A., Chaib, A.: Aerodynamic power control of wind turbine using fuzzy logic. In: 3rd International Renewable and Sustainable Energy Conference (IRSEC), Algeria, pp. 1–6 (2015)
16. Nadoushan, M.H.J., Akhbari, M.: Optimal torque control of PMSG-based stand-alone wind turbine with energy storage system. J. Electr. Power Energy Convers. Syst. 1(2), 52–59 (2016)
17. Aymen, J., Ons, Z., Nejib, M.M.: Performance assessment of a wind turbine with variable speed wind using artificial neural network and neuro-fuzzy controllers. Int. J. Syst. Appl. Eng. Dev. 11(3), 167–172 (2017)
18. Sahoo, S., Subudhi, B., Panda, G.: Torque and pitch angle control of a wind turbine using multiple adaptive neuro-fuzzy control. Wind Eng. 44(2), 125–141 (2019)
19. Mitiku, T., Manshahia, M.S.: Fuzzy logic controller for modeling of wind energy harvesting system for remote areas. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization, ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33585-4_4
20. Heidari, M.: Maximum wind energy extraction by using neural network estimation and predictive control of boost converter. Int. J. Ind. Electron. Control Optim. 1(2), 115–120 (2018)
21. Slah, H., Mehdi, D., Lassaad, S.: Advanced control of a PMSG wind turbine. Int. J. Mod. Nonlinear Theory Appl. 5, 1–10 (2016)
22. Medjber, A., Guessoumb, A., Belmili, H., Mellit, A.: New neural network and fuzzy logic controllers to monitor maximum power for wind energy conversion system. Energy 106, 137–146 (2016)
23. Sahu, S., Panda, G., Yadav, S.P.: Dynamic modelling and control of PMSG based stand-alone wind energy conversion system. In: Recent Advances on Engineering, Technology and Computational Sciences (RAETCS), pp. 1–6. IEEE, Allahabad (2018)
24. Jain, A., Shankar, S., Vanitha, V.: Power generation using permanent magnet synchronous generator (PMSG) based variable speed wind energy conversion system (WECS): an overview. J. Green Eng. 7(4), 477–504 (2018)
25. Elbeji, O., Mouna, B.H., Lassaad, S.: Modeling and control of a variable speed wind turbine. In: The Fifth International Renewable Energy Congress, pp. 425–429. IEEE, Hammamet (2014)
26. Sagiraju, D.K.V., Obulesu, Y.P., Choppavarapu, S.B.: Dynamic performance improvement of standalone battery integrated PMSG wind energy system using proportional resonant controller. Int. J. Eng. Sci. Technol. 20(4), 1–3 (2017)
27. Petkovic, D., Shamshirband, S.: Soft methodology selection of wind turbine parameters to large affect. Electr. Power Energy Syst. 69, 98–103 (2015)
28. Oguz, Y., Guney, I.: Adaptive neuro-fuzzy inference system to improve the power quality of variable-speed wind power generation system. Turk. J. Electr. Eng. Comput. Sci. 18(4), 625–645 (2010)
29. Farh, H.M., Eltamaly, A.M.: Fuzzy logic control of wind energy conversion system. J. Renew. Sustain. Energy 5(2), 1–3 (2013)
30. Sekhar, V.: Modified fuzzy logic based control strategy for grid connected wind energy conversion system. J. Green Eng. 6(4), 369–384 (2016)
31. Gupta, J., Kumar, A.: Fixed pitch wind turbine-based permanent magnet synchronous machine model for wind energy conversion systems. J. Eng. Technol. 2(1), 52–62 (2012)
32. Thongam, J.S., Bouchard, P., Ezzaidi, H., Ouhrouche, M.: Artificial neural network-based maximum power point tracking control for variable speed wind energy conversion systems. In: 18th IEEE International Conference on Control Applications, Saint Petersburg (2009)
33. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent Computing & Optimization. Conference Proceedings ICO 2018. Springer, Cham (2018). ISBN 978-3-030-00978-6
34. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent computing & optimization. In: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization. Springer (2019). ISBN 978-3-030-33585-4
35. Mitiku, T., Manshahia, M.S.: Artificial neural networks based green energy harvesting for smart world. In: Somani, A., Shekhawat, R., Mundra, A., Srivastava, S., Verma, V. (eds.) Smart Systems and IoT: Innovations in Computing. Smart Innovation, Systems and Technologies, vol. 141. Springer, Singapore. https://doi.org/10.1007/978-981-13-8406-6_4
36. Mitiku, T., Manshahia, M.S.: Fuzzy inference based green energy harvesting for smart world. In: IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pp. 1–4. IEEE, Madurai (2018)
Features of Distributed Energy Integration in Agriculture

Alexander V. Vinogradov1, Dmitry A. Tikhomirov1, Vadim E. Bolshev1(&), Alina V. Vinogradova1, Nikolay S. Sorokin2, Maksim V. Borodin2, Vadim A. Chernishov3, Igor O. Golikov2, and Alexey V. Bukreev1

1 Federal Scientific Agroengineering Center VIM, 1-st Institutsky proezd, 5, 109428 Moscow, Russia
[email protected], [email protected], [email protected], [email protected], [email protected]
2 Orel State Agrarian University named after N.V. Parahin, Generala Rodina St., 69, 302019 Orel, Russia
[email protected], [email protected], [email protected]
3 Orel State University named after I.S. Turgenev, Komsomolskaya St., 95, 302026 Orel, Russia
[email protected]
Abstract. Agriculture is a guarantor of the sustainable development of the state, as it supplies the population with basic necessities. Agricultural holdings and similar agricultural associations are distributed agricultural enterprises, each part of which requires a reliable supply of the necessary types of energy. The article gives an understanding of agroholdings and of the problems that exist in the implementation of distributed energy projects in agriculture. It also considers options for the structure of small generation facilities, using the example of biogas plants, and their ownership issues. Various promising options for the use of individual and local small generation facilities in agroholdings are given, listing facility types, the scope of perspective application in the agroholding structure and the expected key results.
Keywords: Agricultural enterprises · Agricultural holdings · Distributed generation · Energy consumption analysis · Energy resources · Waste recycling
1 Introduction
An agroholding is a group of legal entities engaged in agricultural activities and the sale of agricultural products [1]. It is thus in fact a distributed agricultural enterprise that can be engaged in different types of agricultural production in different territories. A single holding company may include units engaged in crop production, livestock raising and agricultural product processing. The energy needs of an agricultural holding are formed depending on the specialization of its divisions, production volumes and applied production technologies.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 19–27, 2021. https://doi.org/10.1007/978-3-030-68154-8_2
20
A. V. Vinogradov et al.
Usually, the sources of energy for all structural units of holdings are, for electricity, the electric grids on the balance of power grid companies and, for heat, their own boiler houses or boiler houses on the balance of heat supply companies, or in some cases centralized heat networks. The main problems of energy supply to agricultural holdings, as for all agricultural enterprises, especially those with facilities far from developed infrastructure, are:
• low reliability of electricity supply, including to heat supply facilities (boiler houses), which leads to a decrease in product quality, underproduction, livestock losses and interruptions of heat supply to agricultural processes when the electricity supply of boiler rooms is interrupted [2–4];
• low energy efficiency of the boiler houses used, which increases the cost of the holding's production;
• the lack of sustainable communication channels for organizing systems to monitor the activity of energy supply facilities and energy consumption, which does not allow efficient monitoring of energy consumption by facilities [5–7];
• the lack of qualified specialists who can effectively organize interaction with power sales and power grid companies. Above all, the low qualification of electrical engineering and other engineering personnel makes it difficult to forecast consumption and to meet the requirements that energy sales companies set as conditions for more favorable price categories, which in turn does not allow the enterprises within agricultural holdings to reduce the cost of electricity and hence the cost of their products [8].
2 Problems of Integration of Distributed Energy Projects in Agriculture
The features of distributed generation make it possible, in certain cases, to solve a number of these problems. Advanced designs have been developed for installations producing energy from renewable sources [9–11]. Small generation offers an opportunity to increase independence from centralized energy supply systems and to utilize enterprise waste, which can serve as raw material for small generation sources or heat generators. The need for distributed generation in agroholdings exists objectively, but its justification faces a number of constraining factors given below.
The problem of the lack of effective methods for assessing the potential of using small generation facilities (SGF) in the structure of agricultural holdings covers a full range of issues, from the availability of land for siting to the availability of human and raw material resources. The existing methods focus only on the effects of replacing purchased electricity and heat and on the effect of waste processing, and do not explore all the issues relevant to the prospects of using small generation facilities.
There is an insufficient supply of ready-made integrated solutions on the SGF market, including in the ownership variant where the SGF is on the balance of the relevant
companies. Basically, the proposals concern only the SGF itself and do not consider its effective integration into the infrastructure of the enterprise taking its features into account. Combined with the absence of a developed system of SGF operational services, this makes it difficult to decide on the use of distributed generation.
Significant investments are needed in the pre-project preparation of SGF implementation in agroholdings, while the effectiveness of SGF use is not guaranteed. Understanding this, enterprises are reluctant to carry out such work. In addition, the very small number of examples of successful application of SGF in the agricultural sector also scares agroholdings away from their use.
There is a problem of consistency in legislation on SGF use in terms of connecting them to general-use electrical networks and using them to supply power to third-party consumers. The processes of connection and tariff determination are overly bureaucratic. This hampers the effective use of SGF generating capacities: it becomes necessary either to unreasonably overestimate the power of the generators or to use the SGF to cover only part of the enterprise's capacity needs. The transmission of electricity to third-party consumers requires creating one's own network and distribution company or involving an external one, which increases the cost of the electricity sold.
It is necessary to harmonize legislation and regulatory documents regarding the possibility of using renewable energy sources and installations for processing agricultural waste.
There is instability in the supply of raw materials for use as fuel for SGF. This primarily concerns biogas plants and plants for the incineration of crop and forestry waste. The market dictates the need to adapt production to demand; therefore, the structure and volume of production waste may change and become insufficient for the SGF.
The use of biomass specifically cultivated for energy needs as a raw material requires justification, since it means setting aside farmland for this purpose.
3 Discussion and Results
It should be noted that SGF configurations can vary greatly depending on the type, quantity and quality of raw materials as well as the needs of the customer. Consider the options for the structure of an SGF using the example of biogas plants (BGP), one of the most promising SGF options [12, 13].
Individual BGP. An individual BGP is designed to partially or fully meet the needs of a small agricultural enterprise for electricity, fertilizer and cheap heat, and in some cases to provide complete energy autonomy. The structural diagram of an individual BGP is presented in Fig. 1.
Fig. 1. Structural diagram of the use of individual BGP
At the same time, the property issue of a BGP has two solutions:
• the BGP is owned by the company and supplies it with: a) gas and fertilizers; b) electricity, heat and fertilizers; or c) gas, electricity, heat and fertilizers;
• the BGP is an independent enterprise buying raw materials and selling the same products in any combination.
Local BGP. A local BGP is a biogas plant designed to fully or partially cover the required capacities of several enterprises connected to a common energy network. The structural diagram of a local BGP is presented in Fig. 2. The end products of such a BGP are electricity, biogas and bio-fertilizers used by the agricultural enterprises for technological purposes.
Fig. 2. Structural diagram of the use of local BGP
At the same time, the property issue of a BGP has two solutions:
• the BGP is jointly owned by the enterprises and supplies them with the products of biomass processing in the combination they need, taking into account the characteristics of the enterprises;
• the BGP is an independent enterprise that buys raw materials from the enterprises and sells its products to them.
Network BGP. A network BGP is a biogas plant intended for the sale of energy resources (gas, electricity, fertilizers, heat) to an external network. One or more agricultural enterprises supply raw materials to the network BGP, and its final product is gas (electricity). The block diagram of the network BGP is presented in Fig. 3.
Fig. 3. Structural diagram of the use of network BGP
At the same time, the property issue of a network BGP has two solutions:
• the BGP is jointly owned by the enterprises and supplies its products to the network in the form of electricity, heat, fertilizer or gas;
• the BGP is an independent enterprise purchasing raw materials from the enterprises and selling its products to the appropriate networks.
A BGP can be specialized and sell only gas to the gas network, which significantly reduces the equipment required and, accordingly, the cost of its products. An auxiliary product in this case can be fertilizer. In the case of an agricultural holding, the BGP can also become one of its structural units, tasked with supplying the divisions with the products they need.
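To give a feel for the scale of such a plant, a back-of-the-envelope sizing calculation relates feedstock to average electrical capacity. All figures below (daily manure intake, specific biogas yield, biogas energy content, electrical efficiency) are illustrative assumptions, not data from the article:

```python
def bgp_electric_capacity(manure_t_per_day, yield_m3_per_t=30.0,
                          biogas_kwh_per_m3=6.0, eta_el=0.35):
    """Average electrical capacity [kW] of a biogas plant fed with
    livestock manure, under the assumed yield and efficiency figures."""
    biogas_m3 = manure_t_per_day * yield_m3_per_t      # daily biogas volume
    energy_kwh = biogas_m3 * biogas_kwh_per_m3         # chemical energy per day
    electric_kwh = energy_kwh * eta_el                 # after the gas engine
    return electric_kwh / 24.0                         # averaged over 24 h

# A farm handling 100 t of manure per day:
print(round(bgp_electric_capacity(100.0), 1))  # → 262.5 kW average
```

Such an estimate is only a first screening step; the assessment methodologies discussed above would additionally cover land, logistics, heat use and seasonal variation of the feedstock.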
As for the whole range of SGF, not only BGP, it is first of all necessary to conduct research aimed at determining the scope of application of different types of SGF. For example, if the structure of an agricultural holding includes enterprises engaged in pasture cattle breeding, it is rational to use mobile technological points (shepherds' houses on wheels, milking points, shearing points, etc.) equipped with solar batteries as an energy source. Pumping installations on pastures, or stationary ones in greenhouse areas and livestock farms, can be equipped with both solar and wind power plants. There are promising SGF options for enterprises that have access to a natural gas network and use gas to generate heat and electricity. But all these options require careful scientific study with the development of practical recommendations on the required SGF capacities and their choice in different climatic zones and for different farm specializations. Various promising options for the use of individual and local SGF in agroholdings are listed in Table 1.

Table 1. Options for the use of individual and local SGF in agroholdings

SGF type: Wind power plant
Scope of perspective application in agroholding structure:
• power supply and (or) mechanization of pumping stations;
• power supply of individual objects remote from the infrastructure.
Expected key results:
• reducing the cost of pumping water;
• reducing the cost of power supply to individual objects (including by reducing the cost of connecting to the network), for example, at stationary points on remote pastures.

SGF type: Biogas plant
Scope of perspective application in agroholding structure:
• gas supply to facilities of different purposes;
• heating of livestock facilities located near buildings of various purposes;
• recycling of enterprise waste;
• in some cases, power supply of individual objects.
Expected key results:
• reducing the cost of gas supply;
• reducing heating costs;
• reducing the cost of power supply to individual objects (including by reducing the cost of connecting to the networks);
• reducing the cost of fertilizers;
• effective use of fallow lands (when growing energy plants) or use of land for BGP raw materials in crop rotation;
• reduction of fines for environmental impact.

SGF type: Solar power plant
Scope of perspective application in agroholding structure:
• power supply to pumping stations;
• power supply (electricity and heat) to individual objects remote from the infrastructure;
• power supply to mobile technological points (housing for shepherds on wheels, mobile milking stations, etc.);
• power supply to individual loads in enterprises (emergency lighting, back-up power for critical electrical receivers, etc.);
• drying of grain, energy supply to storage facilities.
Expected key results:
• reducing the cost of pumping water;
• reducing the cost of electricity supply to individual objects (including by reducing the cost of connecting to the network), for example, at stationary points on remote pastures;
• improving the comfort of workplaces at mobile points.

SGF type: Power plant on natural gas (gas turbine, gas piston, etc.)
Scope of perspective application in agroholding structure:
• heating of enterprise facilities located near buildings for various purposes;
• power supply to enterprise facilities;
• use of heat in technological processes.
Expected key results:
• reducing the cost of creating backup power lines;
• reducing the cost of electricity supply to individual objects (including by reducing the cost of connecting to the network);
• reducing the cost of thermal energy.

SGF type: Heat generator on enterprise waste
Scope of perspective application in agroholding structure:
• heating of livestock facilities;
• use of heat in technological processes.
Expected key results:
• reducing the cost of thermal energy.
In addition to the options indicated in Table 1, there can be other SGF as well as other effects of their use. The final assessment should in any case take into account the characteristics of a particular farm. At the national level, it is rational to formulate a program to investigate the potential of SGF use in the structure of agriculture. It should provide for the solution of legislative and regulatory issues as well as technical solutions, solutions for the SGF service infrastructure, the organization of advisory services and other aspects of SGF application, for example, the use of the principles of intelligent electrical networks and microgrids described in [14–16] when creating distributed generation projects.
4 Conclusions
1. Agricultural holdings are objectively interested in implementing distributed generation projects when such projects deliver results, namely a reduction in the cost of production.
2. For the implementation of distributed generation projects in agroholdings, it is necessary to carry out pre-project preparation aimed at determining the potential for using particular types of SGF. For this, methodologies are needed for assessing this potential that cover all implementation issues, from land acquisition to the expected effects and the availability of an effective SGF service system.
3. It is rational to formulate a program to study the potential of SGF application in the structure of agriculture, providing for the solution of legislative and regulatory issues as well as technical solutions, decisions regarding the service and maintenance infrastructure of SGF, the organization of an advisory service and other aspects of SGF application.
References
1. Epshtein, D., Hahlbrock, K., Wandel, J.: Why are agroholdings so pervasive in Russia's Belgorod oblast'? Evidence from case studies and farm-level data. Post-Communist Econ. 25(1), 59–81 (2013)
2. Vinogradov, A., Vinogradova, A., Bolshev, V., Psarev, A.I.: Sectionalizing and redundancy of the 0.38 kV ring electrical network: mathematical modeling schematic solutions. Int. J. Energy Optim. Eng. (IJEOE) 8(4), 15–38 (2019)
3. Vinogradov, A., Vasiliev, A., Bolshev, V., Vinogradova, A., Kudinova, T., Sorokin, N., Hruntovich, N.: Methods of reducing the power supply outage time of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 370–392. IGI Global (2019)
4. Tikhomirov, D.A., Kopylov, S.I.: An energy-efficient electric plant for hot steam and water supply of agricultural enterprises. Russ. Electr. Eng. 89(7), 437–440 (2018)
5. Bolshev, V.E., Vinogradov, A.V.: Obzor zarubezhnyh istochnikov po infrastrukture intellektual'nyh schyotchikov [Overview of foreign sources on the infrastructure of smart meters]. Bull. South Ural State Univ. Ser. Energy 18(3), 5–13 (2018)
6. Sharma, K., Saini, L.M.: Performance analysis of smart metering for smart grid: an overview. Renew. Sustain. Energy Rev. 49, 720–735 (2015)
7. Kabalci, Y.: A survey on smart metering and smart grid communication. Renew. Sustain. Energy Rev. 57, 302–318 (2016)
8. Vinogradov, A.V., Anikutina, A.V.: Features in calculations for electric energy of consumers with a maximum power over 670 kW and a computer program for selecting the optimal price category [Osobennosti v raschetah za elektroenergiyu potrebitelej s maksimal'noj moshchnost'yu svyshe 670 kVt i komp'yuternaya programma dlya vybora optimal'noj cenovoj kategorii]. Innov. Agric. 2, 161–169 (2016)
9. Litti, Y., Kovalev, D., Kovalev, A., Katraeva, I., Russkova, Y., Nozhevnikova, A.: Increasing the efficiency of organic waste conversion into biogas by mechanical pretreatment in an electromagnetic mill. In: Journal of Physics: Conference Series, vol. 1111, no. 1 (2018)
10. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116. Springer, Cham (2019)
11. Daus, Y., Kharchenko, V.V., Yudaev, I.V.: Managing spatial orientation of photovoltaic module to obtain the maximum of electric power generation at preset point of time. Appl. Solar Energy 54(6), 400–405 (2018)
12. Gladchenko, M.A., Kovalev, D.A., Kovalev, A.A., Litti, Y.V., Nozhevnikova, A.N.: Methane production by anaerobic digestion of organic waste from vegetable processing facilities. Appl. Biochem. Microbiol. 53(2), 242–249 (2017)
13. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: System of optimization of the combustion process of biogas for the biogas plant heat supply. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019)
14. Kharchenko, V., Gusarov, V., Bolshev, V.: Reliable electricity generation in RES-based microgrids. In: Alhelou, H.H., Hayek, G. (eds.) Handbook of Research on Smart Power System Operation and Control, pp. 162–187. IGI Global (2019)
15. Vinogradov, A., Bolshev, V., Vinogradova, A., Kudinova, T., Borodin, M., Selesneva, A., Sorokin, N.: A system for monitoring the number and duration of power outages and power quality in 0.38 kV electrical networks. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, p. 10. Springer, Cham (2019)
16. Vinogradov, A., Vasiliev, A., Bolshev, V., Semenov, A., Borodin, M.: Time factor for determination of power supply system efficiency of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 394–420. IGI Global (2018)
Concept of Multi-contact Switching System

Alexander V. Vinogradov1, Dmitry A. Tikhomirov1, Alina V. Vinogradova1, Alexander A. Lansberg2, Nikolay S. Sorokin2, Roman P. Belikov2, Vadim E. Bolshev1(&), Igor O. Golikov2, and Maksim V. Borodin2

1 Federal Scientific Agroengineering Center VIM, 1-st Institutsky proezd, 5, 109428 Moscow, Russia
[email protected], [email protected], [email protected], [email protected]
2 Orel State Agrarian University named after N.V. Parahin, Generala Rodina St., 69, 302019 Orel, Russia
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. The paper proposes a new approach to the construction of intelligent electrical networks based on multi-contact switching systems: switching devices that have three or more contact groups, each of which is controlled independently. The paper states the main provisions of this approach and gives an example of a distribution electrical network built on multi-contact switching systems that contains a transformer substation, several renewable energy sources and various types of switching equipment. The basic functions of the network are listed, based on the processing of received data and the interaction of network equipment with systems of monitoring, control, metering and management.

Keywords: Agricultural enterprises · Agricultural holdings · Distributed generation · Energy consumption analysis · Energy resources · Waste recycling · Smart electric networks · Multicontact switching systems · Sectionalizing · Redundancy
1 Introduction
At present, the development of "smart power grids" is a global trend, driven by the fact that existing 0.4 kV distribution power grids are characterized by low reliability, significant energy losses and insufficient power quality. The main reasons for this are the excessive length of power lines, the insufficient degree of network automation and the construction of networks using radial schemes, which do not allow backup power supply to consumers. An effective way to increase power supply reliability is to connect distributed generation, for example biogas plants, solar panels, etc. [1–3], enabling these networks to work as microgrids [4]. The joint work of distributed generation sources within a microgrid requires complex automation: equipping the electric network with intelligent devices that analyze operating modes and automatically reconfigure the network to
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 28–35, 2021. https://doi.org/10.1007/978-3-030-68154-8_3
Concept of Multi-contact Switching System
localize the place of damage and restore power to consumers connected to undamaged network sections. Distributed automation of 0.4 kV electric networks should take into account the fact that these networks can have a large number of power lines outgoing from the main transmission lines, and that small generation sources can be connected directly to 0.4 kV networks. Accordingly, this paper proposes a concept for the development of electric networks using multi-contact switching systems (MCSS).
2 MCSS

A new approach to the construction of intelligent electrical networks is proposed. Its feature is the use of multi-contact switching systems in these networks. Multi-contact switching systems (MCSS) are switching devices that have three or more contact groups, each of which is controlled independently [5, 6]. The use of MCSS in electrical networks makes it possible to change the network configuration automatically when the situation changes or on the instructions of the operator. To this end, the MCSS are equipped with monitoring, accounting and control devices that allow data exchange with a single network information center, which in turn makes it possible to implement the principles of SMART GRID. The developed concept of intelligent electrical networks contains the following main provisions:
• Application of distribution electric networks containing multi-contact switching systems.
• Equipping electrical networks with means of redundancy and sectionalizing.
• Equipping electrical networks and power line systems [7] with:
– A control system of network equipment, designed to monitor the actual state of network elements such as switching devices and power elements. The system is based on SCADA systems and can be combined with the monitoring and accounting systems.
– The monitoring system performs the following functions: monitoring the technical condition of all network elements (the state of power lines and their elements, for example, supports; the overgrowth of transmission line routes; the state of equipment insulation, etc.); monitoring the operation modes of the network and its individual elements (loading, direction of energy flow, etc.); monitoring the fulfillment of contractual obligations of consumers and energy supplying companies (power grid, retail, generating, etc.); monitoring the reliability of power
A. V. Vinogradov et al.
supply in terms of the number of power outages and damage to network elements; monitoring the power quality; monitoring the energy efficiency of the network (energy losses and other energy efficiency indicators); monitoring other parameters of the network and the overall power supply system, depending on the available capabilities. The monitoring system can be combined with the control and accounting systems.
– The accounting system performs the following functions: accounting for the amount of electricity; adjusting the electricity cost depending on its quality or on the reliability indicators of the power supply (for example, if the contractual terms for reliability are violated, the cost decreases); accounting for the number and duration of power supply interruptions; accounting for the number of network equipment operations; accounting of other data. The accounting system can be combined with the control and monitoring systems.
– The control system manages all network elements depending on their operating modes, the specified switching requirements, and the data received from the control, monitoring and accounting systems, in order to improve the energy efficiency of the network, increase power supply reliability, etc.
• Creation of database and information processing systems comprising:
– A database of consumers connected to the network, with their characteristics and operating mode parameters;
– A database of equipment installed in the network, with its characteristics and operating mode parameters;
– A database of power supply reliability, indicating the characteristics of power supply interruptions and equipment damageability;
– A database of power quality, indicating the quality parameters at different points of the network;
– A database of technical connections, indicating the connection characteristics, terms of implementation, etc.
The information collected in the databases should serve as the basis for making forecasts of electricity consumption, accident rates, etc.
It is also used by control systems that change the network configuration in various modes, switching, shutting down equipment, etc. • Ensuring the possibility of using various energy sources including renewable ones (RES) and energy storage devices both in parallel with centralized systems and without being switched on for parallel operation. • Providing the ability to automatically change the network scheme in the event of various situations. • The possibility of organizing microgrids with different energy sources, operating in parallel or in isolation from each other for different consumers. Figure 1 shows an example of a distribution smart grid containing a transformer substation (TS) and several renewable energy sources (RES), various types of switching equipment. It is shown that the action of each device installed in the grid should be monitored by monitoring, accounting and control systems. Each unit must be remotely
controlled. Communication channels in the network can be organized with different data transfer technologies (GPS, GPRS, radio frequencies, PLC modems, etc. [8–10]). The following monitoring, accounting and control systems are installed at transformer substations and renewable energy sources:
• Systems (sets of equipment) for determining the places of damage in power transmission lines and transformer substations – DDP [11];
• Systems (sets of equipment) for regulating the power quality indexes (including means of adaptive automatic voltage regulation, for example [12]) – RPQI;
• Systems (sets of equipment) of substation automation (automatic load transfer, automatic reclosing, automatic frequency load shedding, relay protection, etc.) – SAU (substation automation unit) [13];
• Systems (sets of equipment) of automatic reactive power compensation (regulation) – RPC [14, 15];
• Advanced metering infrastructure – AMI. The AMI of the TS takes into account the operation of the automation systems of the TS, its operating modes and transformer loading, and carries out electricity metering for each outgoing line and at the inputs of the transformer on the high and low sides [16–18];
• Other systems as they are developed.
Information from all the specified and prospective systems should be transmitted to information processing and control units (IPCU) having a communication channel between themselves as well as to a control center (it can be the dispatching office of the electric grid companies). This allows remote monitoring of the TS and RES and, if necessary, remote control. The inputs of all consumers are equipped with an AMI system that allows determining the values of electric power losses and interruptions in power supply, monitoring the operation of protective equipment and automation of the consumer network, detecting unauthorized electrical equipment (for example, a welding transformer), detecting unauthorized connection of generators without compliance with safety rules, etc.
In addition, the consumer AMI allows for electricity cost adjustment depending on power quality, on the number and duration of interruptions in power supply, and on other factors [19]. Prepayment for electricity and other promising functions must also be possible. Power lines, regardless of their design (cable or overhead), are equipped with remote monitoring systems for their technical condition (insulation condition, overgrowth, inclination of supports, etc.), transmitting data on the state of power transmission line elements to the IPCU and, respectively, to the SCC. In the event of an emergency at one of the transmission line sections, automatic control of the network switching equipment is performed based on the data received from the control, monitoring and accounting systems. That is, the network configuration is changed by switching the contacts of the MCSS, sectionalizing points (SP), universal sectionalizing points (USP) and points of automatic switching on the reserve (ASR) in such a way as to isolate the damaged area and provide power to consumers in the backed-up areas. The example shown in Fig. 1 does not exhaust all the possibilities of using the presented systems depending on the required reliability parameters. The principles of network construction and the types of switching devices presented in Fig. 1 can also be used in networks of other voltage classes.
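The automatic reconfiguration described above can be viewed as a graph problem: network sections are nodes, switching devices (MCSS contacts, SP, ASR) are edges, and after a fault the damaged section is isolated while healthy sections are re-fed from an alternative source. The following sketch illustrates the idea only; the toy topology, the switch names and the function are illustrative assumptions, not the authors' implementation:

```python
# Toy model: sections are graph nodes, switching devices are edges.
from collections import deque

edges = {  # switch name -> (section_a, section_b)
    "SP1": ("TS", "S1"), "SP2": ("S1", "S2"), "ASR1": ("S2", "RES"),
}

def energized(sources, closed, faulted):
    """Sections reachable from any source through closed switches,
    never passing through a faulted section."""
    adj = {}
    for sw, (a, b) in edges.items():
        if sw in closed:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, queue = set(), deque(s for s in sources if s not in faulted)
    while queue:
        node = queue.popleft()
        if node in seen or node in faulted:
            continue
        seen.add(node)
        queue.extend(adj.get(node, []))
    return seen

# Fault on S1: open SP1 and SP2 to isolate it, close ASR1
# so that S2 is back-fed from the renewable source.
alive = energized({"TS", "RES"}, {"ASR1"}, faulted={"S1"})
```

A real controller would additionally check load limits and switching sequences before issuing commands; the sketch only shows the reachability step of isolating a damaged area.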
Fig. 1. An example of the use of multi-contact switching systems for the development of microgrids containing renewable energy sources.
3 Discussion

The intelligent electrical network based on the use of multi-contact switching systems makes it possible to perform the following basic functions:
• To implement switching algorithms and automatic commands for switching in networks (to perform network configuration changes, isolation of damaged areas, etc.);
• To “see” electricity losses in real time, ranked into technological and commercial losses, and, for power transformers, to “see” the no-load and short-circuit losses;
• To develop recommendations for reducing losses and for the optimal network configuration;
• To automatically change the electricity cost depending on its quality, the power supply interruption time and other parameters;
• To develop recommendations for changing the technological connection conditions (change in capacity, change in the scheme, etc.);
• To obtain the calculated parameters of power lines and equipment, and to build mathematical models of the network;
• To receive aggregate daily, monthly and annual load schedules;
• To develop recommendations on the volumes and parameters of electricity storage and on the modes of operation of balancing power plants;
• To obtain reliability indicators of the network and the equipment installed in it, with an analysis of the most and least reliable network elements, brands and types of equipment, issuing recommendations for selection, maintenance and repair;
• To develop recommendations for the selection and configuration of protection and automation;
• To receive a mapping of the state of switching devices and recommendations for the development and installation of new devices;
• To receive diagnostic parameters of the equipment and recommendations on the timing of maintenance, repair and replacement, and to forecast accidents;
• To get other results based on the system capabilities.
The implementation of such electrical network intellectualization systems will make it possible to use technical and economic mechanisms to improve the efficiency of power networks and power supply systems in general.
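One of the functions listed above, automatic adjustment of the electricity cost depending on power quality and interruption time, can be sketched as a simple billing rule. All coefficients below are illustrative assumptions for the sketch; the paper does not specify concrete values:

```python
def adjusted_cost(base_tariff, kwh, quality_ok, outage_hours,
                  quality_discount=0.10, outage_discount_per_hour=0.02,
                  max_discount=0.30):
    """Reduce the bill when power quality or reliability terms were
    violated. All discount coefficients are illustrative, not from
    the paper."""
    discount = 0.0 if quality_ok else quality_discount
    discount += outage_discount_per_hour * outage_hours
    discount = min(discount, max_discount)  # cap the total reduction
    return base_tariff * kwh * (1.0 - discount)
```

In a real AMI deployment the inputs would come from the power quality and reliability monitoring systems described earlier, and the rule itself would be fixed in the supply contract.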
It becomes possible to automate the justification of the use of microgrids and networks with varying degrees of automation, to form multi-year databases on all parameters of power supply system functioning, and to predict the operation modes of electrical networks and generating facilities, the current trends in the development of networks, and the electricity market. All this allows integrating power supply systems and intelligent electrical networks into the digital economy, creating new markets for equipment, communications equipment and software for intelligent electrical networks and microgrids, and markets for electricity services. The creation of new Internet equipment and Internet services based on the capabilities of managing network equipment, regulating power quality and power supply reliability, storing electricity, and using renewable energy sources are also elements of the digital economy involving the capabilities of intelligent electrical networks. One of the main advantages of such a concept of building autonomous networks and networks connected to centralized systems is the possibility of flexible changes in the power supply scheme. This means that if there is a shortage of power from one of the energy sources, or if one of the network sections is damaged, it is possible to automatically switch to another power source. It is also possible to transfer surplus energy to the centralized network during peaks of energy production by renewable energy sources, using the developed solutions for coding situations in the electrical network [4]. Solutions have been
developed for sectionalizing and backing up electrical networks [20] and for improving the quality of electricity and regulating the voltage in electrical networks [21]. In addition, the implementation of these systems will significantly improve electrical safety in the operation of electrical networks. Receiving information about the state of the power supply system and emergency conditions in it makes it possible to quickly and remotely localize the accident site or a section of the network where there is a danger of electric shock, for example, in the event of a wire break.
4 Conclusions

The developed concept of building intelligent electrical networks based on the use of multi-contact switching systems makes it possible to automatically change the network configuration when using different energy sources; to monitor the network situation, including the parameters of the equipment technical condition and of the network operating modes, highlighting emergency modes; and to manage switching equipment. This makes it possible to use several power sources in the network, including renewable ones, which can work both in parallel and in isolation from each other.
References 1. Litti, Y., Kovalev, D., Kovalev, A., Katraeva, I., Russkova, Y., Nozhevnikova, A.: Increasing the efficiency of organic waste conversion into biogas by mechanical pretreatment in an electromagnetic mill. In: Journal of Physics: Conference Series, vol. 1111, no. 1 (2018) 2. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. pp. 108–116. Springer, Cham (2019) 3. Daus, Y., Kharchenko, V.V., Yudaev, I.V.: Managing spatial orientation of photovoltaic module to obtain the maximum of electric power generation at preset point of time. Appl. Solar Energy 54(6), 400–405 (2018) 4. Kharchenko, V., Gusarov, V., Bolshev, V.: Reliable electricity generation in RES-based microgrids. In: Alhelou, H.H., Hayek, G. (eds.) Handbook of Research on Smart Power System Operation and Control, pp. 162–187. IGI Global (2019) 5. Vinogradov, A.V., Vinogradova, A.V., Bolshev, V.Ye., Lansberg, A.A.: Sposob kodirovaniya situacij v elektricheskoj seti, soderzhashchej mul'tikontaktnye kommutacionnye sistemy i vozobnovlyaemye istochniki energii [A way of coding situations in an electric network containing multi-contact switching systems and renewable energy sources]. Bull. Agric. Sci. Don 2(46), 68–76 (2019) 6. Vinogradov, A.V., Vinogradova, A.V., Marin, A.A.: Primenenie mul’tikontaktnyh kommutacionnyh sistem s mostovoj skhemoj i chetyr’mya vyvodami v skhemah elektrosnabzheniya potrebitelej i kodirovanie voznikayushchih pri etom situacij [Application of multi-contact switching systems with a bridge circuit and four outputs in consumer power supply circuits and coding of situations arising from this]. Bull. NIIEI 3(94), 41–50 (2019)
7. Vinogradov, A., Vasiliev, A., Bolshev, V., Vinogradova, A., Kudinova, T., Sorokin, N., Hruntovich, N.: Methods of reducing the power supply outage time of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 370–392. IGI Global (2019) 8. Bolshev, V.E., Vinogradov, A.V.: Perspektivnye kommunikacionnye tekhnologii dlya avtomatizacii setej elektrosnabzheniya [Promising communication technologies for automation of power supply networks]. Bull. Kazan State Power Eng. Univ. 11(2), 65–82 (2019) 9. Ancillotti, E., Bruno, R., Conti, M.: The role of communication systems in smart grids: architectures, technical solutions and research challenges. Commun. Technol. 36, 1665– 1697 (2013) 10. Khan, R.H., Khan, J.Y.: A comprehensive review of the application characteristics and traffic requirements of a smart grid communications network. Comput. Netw. 57, 825–845 (2013) 11. Vinogradov, A., Bolshev, V., Vinogradova, A., Kudinova, T., Borodin, M., Selesneva, A., Sorokin, N.: A system for monitoring the number and duration of power outages and power quality in 0.38 kV electrical networks. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, p. 10. Springer, Cham (2019) 12. Vinogradov, A., Vasiliev, A., Bolshev, V., Semenov, A., Borodin, M.: Time factor for determination of power supply system efficiency of rural consumers. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 394–420. IGI Global (2018) 13. Abiri-Jahromi, A., Fotuhi-Firuzabad, M., Parvania, M., Mosleh, M.: Optimized sectionalizing switch placement strategy in distribution systems. IEEE Trans. Power Deliv. 27(1), 362–370 (2011) 14. Singh, B., Saha, R., Chandra, A., Al-Haddad, K.: Static synchronous compensators (STATCOM): a review. IET Power Electron. 
2(4), 297–324 (2009)
15. Haque, M.H.: Compensation of distribution system voltage sag by DVR and D-STATCOM. In: 2001 IEEE Porto Power Tech Proceedings (Cat. No. 01EX502), vol. 1. IEEE (2001)
16. Bolshev, V.E., Vinogradov, A.V.: Obzor zarubezhnyh istochnikov po infrastrukture intellektual'nyh schyotchikov [Overview of foreign sources on the infrastructure of smart meters]. Bull. South Ural State Univ. Ser. Energy 18(3), 5–13 (2018)
17. Sharma, K., Saini, L.M.: Performance analysis of smart metering for smart grid: an overview. Renew. Sustain. Energy Rev. 49, 720–735 (2015)
18. Kabalci, Y.: A survey on smart metering and smart grid communication. Renew. Sustain. Energy Rev. 57, 302–318 (2016)
19. Vinogradov, A., Borodin, M., Bolshev, V., Makhiyanova, N., Hruntovich, N.: Improving the power quality of rural consumers by means of electricity cost adjustment. In: Kharchenko, V., Vasant, P. (eds.) Renewable Energy and Power Supply Challenges for Rural Regions, pp. 31–341. IGI Global (2019)
20. Vinogradov, A., Vinogradova, A., Bolshev, V., Psarev, A.I.: Sectionalizing and redundancy of the 0.38 kV ring electrical network: mathematical modeling schematic solutions. Int. J. Energy Optim. Eng. (IJEOE) 8(4), 15–38 (2019)
21. Vinogradov, A., Vinogradova, A., Golikov, I., Bolshev, V.: Adaptive automatic voltage regulation in rural 0.38 kV electrical networks. Int. J. Emerg. Electric Power Syst. 20(3) (2019)
The Design of Optimum Modes of Grain Drying in Microwave–Convective Effect

Dmitry Budnikov

Federal State Budgetary Scientific Institution “Federal Scientific Agroengineering Center VIM” (FSAC VIM), 1-st Institutskij 5, Moscow 109428, Russia
[email protected]
Abstract. The development of processing modes using electrical technologies and electromagnetic fields can reduce the energy intensity and cost of grain heat treatment processes. During development, it is necessary to consider the technological requirements of the processed material, the types of technology used, and the mode of operation of the mixing equipment (continuous, pulsed, etc.). This paper presents the results of experimental studies on the basis of which systems for optimal control of post-harvest grain processing equipment using electrophysical effects can be built. At the same time, energy consumption can be reduced by 20–30% compared to classic mine dryers, and the process can be intensified by 35–40%.

Keywords: Electrophysical effects · Post-harvest treatment · Microwave field · Optimum modes
1 Introduction The high energy intensity of post-harvest grain processing dictates the need to develop new equipment that reduces energy costs for post-harvest processing [2, 5, 7]. Many researchers note the possibility of reducing these energy costs due to the use of electrophysical factors [1, 3, 4, 6, 8, 9]. These factors include ultrasound, ozone, aeroions, infrared field, microwave exposure and others. In order to reduce costs while ensuring the quality indicators, it is also necessary to develop modes and equipment for optimal management of post-harvest processing plants. At the same time, it is worth considering the possibility of managing according to the criteria of minimum energy consumption and minimum processing time.
2 Main Part

2.1 Research Method
A number of experimental studies are required to obtain the desired data. In this work, the research was carried out on installations of two types. In the first case, the installation contained one microwave power source (a 900 W magnetron) acting on a stationary grain layer with a volume of 0.015 m3. In the second case, the installation © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 36–42, 2021. https://doi.org/10.1007/978-3-030-68154-8_4
contained six similar sources arranged at three height levels, with two opposing magnetrons per level. In this case, the grain layer was moved vertically, and the volume of simultaneously processed material was 0.1 m3. At this stage, the planning of the screening experiment is aimed at searching for energy-efficient grain drying modes using electrophysical factors, such as microwave fields, for different states of the grain layer, taking into account the form of moisture coupling in the grain. It is obvious that natural drying modes will have the lowest energy consumption, but in that case the final material may be damaged because of the limited time of safe storage. Both constant and pulsed microwave exposure modes are used. Table 1 shows the factors of the screening experiment and their variation levels. The response function is the cost of electrical energy for the drying process.
Table 1. Values of factors for obtaining drying curves.

Pos. | Density, ρ, kg/m3 | Initial moisture, W, % | Microwave operation mode | Air speed, v, m/s | Air temperature, T, °C
1 | 800 | 17.2 | 1 | 1.0 | 20
2 | 600 | 17.2 | 1 | 1.0 | 20
3 | 400 | 16.9 | 1 | 1.0 | 20
4 | 800 | 17.1 | 2/3 | 1.0 | 20
5 | 600 | 16.7 | 2/3 | 1.0 | 20
6 | 400 | 17.0 | 2/3 | 1.0 | 20
7 | 800 | 16.9 | 1/3 | 1.0 | 20
8 | 600 | 16.8 | 1/3 | 1.0 | 20
9 | 400 | 16.8 | 1/3 | 1.0 | 20
Since the screening experiment was carried out on an installation that allows processing only a small volume of grain mass, it makes sense only to evaluate the relative deviations of energy intensity from the assumed initial one. According to the results of the screening experiment, the mode with a thin layer of grain mass and constant microwave exposure has the lowest energy consumption. At the same time, the highest energy intensity is observed in the mode with pulsed microwave exposure of the suspended layer; it is 2.4 times higher than the cost in the least energy-intensive mode. Nevertheless, it is worth considering that a significant amount of energy (about 50%) was spent on creating the fluidized layer. Thus, at the subsequent stages, it is worth separating the total energy spent on the process from the energy spent on the impact factor. In addition, the volume of the processed material must be increased while maintaining the uniformity of the field distribution in the processed material. The measurement of grain material humidity by rapid assessment devices during microwave exposure gives false results associated with changes in the form of the moisture coupling and the properties of the material. For this reason, the reduction of humidity is measured by weighing the processed volume of the material.
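Determining moisture from weighing relies on the dry matter being conserved while only water leaves the sample. A minimal sketch of that calculation (the function name and the numeric example are ours, not from the paper):

```python
def moisture_after_drying(m0_g, w0_pct, m1_g):
    """Wet-basis moisture content (%) after the sample mass drops
    from m0_g to m1_g, assuming only water is removed."""
    dry_matter = m0_g * (1.0 - w0_pct / 100.0)  # dry matter is conserved
    return (1.0 - dry_matter / m1_g) * 100.0
```

For example, a 1000 g sample at 20% initial moisture contains 800 g of dry matter; once its mass falls to 800/0.86 ≈ 930.2 g, its wet-basis moisture has reached 14%.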
The drying curves were taken to obtain the required data. After the data is approximated by a standard parametric model or a user-defined model, the approximation quality can be evaluated both graphically and using various approximation fitness criteria: SSE (least squares), R-square (R-square criterion), Adjusted R-square (adjusted R-square), RMSE (root mean square error). In addition, one can calculate confidence intervals for the found values of model parameters that correspond to different probability levels, and confidence bands for the approximation and the data that also correspond to different probability levels. The dependence of the grain moisture change, W, %, on the drying time, t, min, can be represented by the following equation:

W = a·e^(−b·t) + c,   (1)
where a, b, c are proportionality coefficients. Table 2 shows the data of statistical processing of the drying curves.

Table 2. Experimental data processing results.

Pos. | a | b | c | R2
1 | 3.438 | 0.0911 | 13.9 | 0.9819
2 | 3.769 | 0.07893 | 13.48 | 0.9836
3 | 3.293 | 0.09729 | 13.57 | 0.9996
4 | 3.035 | 0.05973 | 14.07 | 0.9911
5 | 3.013 | 0.04815 | 13.79 | 0.9864
6 | 3.332 | 0.07748 | 13.81 | 0.9745
7 | 2.959 | 0.02602 | 13.95 | 0.9879
8 | 2.99 | 0.04361 | 13.96 | 0.9727
9 | 2.963 | 0.05021 | 13.92 | 0.9667
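Equation (1) can also be inverted to estimate the time needed to reach a target moisture from the fitted coefficients. The sketch below uses the coefficients of mode 1 from Table 2; the target moisture of 15% is an assumed example, not a value from the paper:

```python
import math

def drying_time(w_target, a, b, c):
    """Invert W(t) = a*exp(-b*t) + c for t (minutes).
    Valid only for c < w_target < a + c."""
    return -math.log((w_target - c) / a) / b

# Mode 1 from Table 2: a = 3.438, b = 0.0911 1/min, c = 13.9 %
t = drying_time(15.0, 3.438, 0.0911, 13.9)  # ≈ 12.5 min
```

Note that Eq. (1) never reaches the asymptote c, so target moistures must lie strictly between c and the initial moisture a + c.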
The data logged during the experiments were processed directly in the SCADA system. In addition, data downloaded from log files can be loaded into application software packages such as Statistica, Matlab, Excel, etc., for further statistical and regression processing. Table 3 partially presents the results of experimental studies of the efficiency of a laboratory installation containing a single source of microwave power according to the selected optimality criteria.

Table 3. The results of experimental studies.

# | Culture | Moisture, W, % | Moisture removal, ΔW, % | q, MJ/kg of evaporated moisture | Optimality criterion
1 | Wheat | 16 | 1.5 | 7.3 | Maximum performance
2 | Wheat | 20 | 2 | 4.6 | Maximum performance
3 | Wheat | 16 | 1 | 6.2 | Minimum energy consumption
4 | Wheat | 20 | 1 | 3.7 | Minimum energy consumption
(continued)
Table 3. (continued)

# | Culture | Moisture, W, % | Moisture removal, ΔW, % | q, MJ/kg of evaporated moisture | Optimality criterion
5 | Barley | 16 | 1.5 | 7.6 | Maximum performance
6 | Barley | 20 | 2 | 4.8 | Maximum performance
7 | Barley | 16 | 1 | 6.3 | Minimum energy consumption
8 | Barley | 20 | 1 | 3.7 | Minimum energy consumption
9 | Sunflower | 9 | 1.5 | 5.8 | Maximum performance
10 | Sunflower | 12 | 2 | 3.9 | Maximum performance
11 | Sunflower | 9 | 1 | 5.2 | Minimum energy consumption
12 | Sunflower | 12 | 1 | 3.6 | Minimum energy consumption
Analysis of the experimental results shows a decrease in energy costs of up to 30% relative to drying in mine grain dryers, although the costs remain higher than the similar indicators for active ventilation bins. The best indicators for reducing energy intensity and intensifying drying were obtained when processing sunflower; this is due to the lower values of its standard humidity and the associated values of the dielectric parameters. A new module and equipment structure were designed after evaluating the equipment structure and the temperature field parameters in the microwave grain processing chamber. The modification of the design involves reducing the uneven heating of grain during processing, which will increase productivity due to the intensification of heat and moisture exchange. In addition, modes with pulsed switching of microwave power sources were considered. At the next stage, research was conducted for a unit containing six microwave power sources.

2.2 Experiment
The modes presented in Table 4 were considered in the research work on the microwave-convective module. The operating modes of the sources of microwave power are: 0 – without using the microwave; 1/2 – the sources of microwave power work intermittently (this mode was implemented as 10 s with microwave treating and 10 s without it); 1/4 – the sources of microwave power work intermittently (this mode was implemented as 5 s with microwave treating and 15 s without it). In the course of the experimental studies, wheat drying curves were taken from the initial humidity of 20% [10, 11]. Further, according to these data, the dependence of the rate of moisture removal in the drying process on the tested operating mode of the unit was studied. Figure 1 shows the drying curves obtained as a result of the experimental studies. The current rate of moisture removal was also calculated from the current humidity of wheat for the tested operating mode of the plant. As a result, dependences of the energy consumption for drying (evaporation of 1 kg of moisture) of wheat were obtained depending on the implemented mode of heat treatment.
Table 4. Experimental data processing results.

Pos. fig. 1–4 | Microwave operation mode | Tair, °C
1 | 0 | 20
2 | 0 | 30
3 | 0 | 40
4 | 1/4 | 20
5 | 1/4 | 30
6 | 1/4 | 40
7 | 1/2 | 20
8 | 1/2 | 30
9 | 1/2 | 40
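The pulsed modes described in the text are duty cycles of the magnetron on-time. A small sketch of the time-averaged power per source, assuming the 900 W magnetron rating mentioned for the single-source installation (the function and variable names are ours):

```python
def avg_microwave_power(p_rated_w, t_on_s, t_off_s):
    """Time-averaged power of one pulsed microwave source, W."""
    duty = t_on_s / (t_on_s + t_off_s)
    return p_rated_w * duty

half = avg_microwave_power(900, 10, 10)    # mode 1/2: 10 s on / 10 s off
quarter = avg_microwave_power(900, 5, 15)  # mode 1/4: 5 s on / 15 s off
```

So, per source, mode 1/2 delivers an average of 450 W and mode 1/4 delivers 225 W; for the six-source unit these figures scale accordingly.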
Fig. 1. Drying curves (moisture, W, %, versus drying time, τ, min, for the modes of Table 4).
The rate of moisture removal and the energy costs both depend significantly on the operating mode of the equipment and on the current humidity.

2.3 Results and Discussion
Despite the fact that the instantaneous energy consumption for removing moisture in the modes using microwave power may exceed that of the modes using heated air as a drying agent, the total cost of these modes over the entire drying process is lower. Table 5 shows the average energy consumption for drying wheat from 20 to 14%, obtained as a result of laboratory tests under these conditions. It is also worth taking into account that the use of pulsed modes of microwave exposure is similar in effect to heating the drying agent.

Table 5. Average energy consumption for drying wheat from 20 to 14%.

Mode | Energy consumption for evaporation of 1 kg moisture, MJ/kg
1 | 6.17
2 | 6.8
3 | 8.57
4 | 6.1
5 | 7.59
6 | 4.64
7 | 4.36
8 | 3.74
9 | 3.6
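The data of Table 5 can be compared directly: ranking the modes and computing the relative saving of the best mode against the worst one (a sketch over the table's published values; the variable names are ours):

```python
# Average energy consumption per mode, MJ/kg (Table 5)
energy = {1: 6.17, 2: 6.8, 3: 8.57, 4: 6.1, 5: 7.59,
          6: 4.64, 7: 4.36, 8: 3.74, 9: 3.6}

best = min(energy, key=energy.get)    # least energy-intensive mode
worst = max(energy, key=energy.get)   # most energy-intensive mode
saving_pct = (1 - energy[best] / energy[worst]) * 100
```

By this measure, mode 9 (duty 1/2, 40 °C air) requires about 58% less energy per kilogram of evaporated moisture than mode 3 (convective drying with 40 °C air only).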
These results make it possible to refine the developed models of equipment management and to implement processing control according to the specified optimality criteria.
3 Conclusions

The following conclusions can be drawn from the results of the experiment:
1. The highest energy intensity is observed in the drying mode with a pulsed effect of the microwave field on the suspended grain layer; it is 2.4 times higher than the cost in the least energy-intensive mode.
2. It should be taken into account that drying in the fluidized layer has a very limited application in the processing of cereals, and significant energy (about 50%) is spent on creating the fluidized layer.
3. When considering the energy intensity of drying from the point of view of profitability, it is necessary to take into account the cost of energy carriers and the sources of generation (conversion) of energy.
4. The use of microwave fields makes it possible to intensify the drying process by 3–4 times in the humidity ranges close to the standard one.
5. It is advisable to apply this technology for drying grain at moisture levels close to the standard one (from 17 to 14% for wheat).
References
1. Agrawal, S., Raigar, R.K., Mishra, H.N.: Effect of combined microwave, hot air, and vacuum treatments on cooking characteristics of rice. J. Food Process Eng. e13038 (2019). https://doi.org/10.1111/jfpe.13038
2. Ames, N., Storsley, J., Thandapilly, S.J.: Functionality of beta-glucan from oat and barley and its relation with human health. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 141–166. Royal Society of Chemistry, Cambridge (2019)
3. Basak, T., Bhattacharya, M., Panda, S.: A generalized approach on microwave processing for the lateral and radial irradiations of various groups of food materials. Innov. Food Sci. Emerg. Technol. 33, 333–347 (2016)
4. Dueck, C., Cenkowski, S., Izydorczyk, M.S.: Effects of drying methods (hot air, microwave, and superheated steam) on physicochemical and nutritional properties of bulgur prepared from high-amylose and waxy hull-less barley. Cereal Chem. 97, 483–495 (2020). https://doi.org/10.1002/cche.10263
5. Izydorczyk, M.S.: Dietary arabinoxylans in grains and grain products. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 167–203. Royal Society of Chemistry, Cambridge (2019)
6. Pallai-Varsányi, E., Neményi, M., Kovács, A.J., Szijjártó, E.: Selective heating of different grain parts of wheat by microwave energy. In: Advances in Microwave and Radio Frequency Processing, pp. 312–320 (2007)
7. Ranjbaran, M., Zare, D.: Simulation of energetic- and exergetic performance of microwave-assisted fluidized bed drying of soybeans. Energy 59, 484–493 (2013). https://doi.org/10.1016/j.energy.2013.06.057
42
D. Budnikov
8. Smith, D.L., Atungulu, G.G., Sadaka, S., Rogers, S.: Implications of microwave drying using 915 MHz frequency on rice physicochemical properties. Cereal Chem. 95, 211–225 (2018). https://doi.org/10.1002/cche.10012
9. Intelligent Computing and Optimization. Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019). Springer (2019). ISBN 978-3-030-33585-4
10. Vasilev, A.N., Budnikov, D.A., Ospanov, A.B., Karmanov, D.K., Karmanova, G.K., Shalginbayev, D.B., Vasilev, A.A.: Controlling reactions of biological objects of agricultural production with the use of electrotechnology. Int. J. Pharm. Technol. (IJPT) 8(4), 26855–26869 (2016)
11. Vasiliev, A.N., Goryachkina, V.P., Budnikov, D.: Research methodology for microwave-convective processing of grain. Int. J. Energy Optim. Eng. (IJEOE) 9(2), Article 1, 11 p. (2020). https://doi.org/10.4018/IJEOE.2020040101
Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology and Energy Supply of Livestock Farming

Aleksey N. Vasiliev, Gennady N. Samarin, Aleksey Al. Vasiliev, and Aleksandr A. Belov

Federal Scientific Agroengineering Center VIM, 1-st Institutsky passage, 5, Moscow, Russian Federation
[email protected], [email protected], [email protected], [email protected]
Abstract. Livestock farming is one of the main energy consumers in agriculture, accounting for about 20% of all energy consumed; half of this amount is consumed by cattle farms. The energy efficiency of livestock production is about 16%. More than 75% of the energy is consumed to produce feed and is subsequently concentrated in animal waste. Improving the energy efficiency of livestock farming is therefore an important scientific and economic issue. The intensification of food production results in the development of monocultures, and this applies to all spheres of human husbandry. To resolve the contradiction between agricultural production and the laws of nature, great attention has been paid to the organization of agroecosystems. Maintaining the permanent functioning of agroecosystems requires significant energy consumption. Since the study of energy flows in agroecosystems is one of the main research methods in ecology, the current paper considers the energy and environmental problems of livestock farming from this point of view. The study, carried out on the basis of the energy balance of a livestock complex, has shown that the energy efficiency of livestock production is not more than 16%, a factor that greatly aggravates environmental issues. It is proposed to reduce the "energy capacity" of animal waste in order to increase the environmental safety and energy independence of livestock farming.

Keywords: Agroecological system · Ecology · Energy efficiency · Energy consumption · Livestock farming
1 Introduction

Livestock farming is one of the main energy consumers in agriculture, accounting for about 20% of all energy consumed; half of this amount is consumed by cattle farms. According to Mindrin's study [1], producing 100 cal of product required 48 cal of total energy in 1928, 57 cal in 1950, 70 cal in 1960, and 86 cal in 1990. Over a century, the energy consumed to produce a unit of product has almost doubled. The current
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 43–52, 2021. https://doi.org/10.1007/978-3-030-68154-8_5
44
A. N. Vasiliev et al.
study considers the efficiency of this growing energy consumption and possible ways to reduce it. These issues are analyzed, and variants of production organization are considered.
2 Materials and Methods

When assessing the energy efficiency of production and determining the directions of its improvement, it is expedient to use the methodology of energy evaluation of products, i.e. to apply a universal physical unit whose "exchange rate" is constant and clear. Many researchers agree that this unit is the unit of energy. Energy is the only objective and universal measure of the value of any type of product, manufactured not only by man but also by nature; this measure depends neither on supply and demand nor on price. The energy approach to the study of natural and production processes has a number of significant advantages and wider opportunities compared to other methods [2]. The approach is based on the need to take into account the objective laws of energy conversion within the system, with energy acting as a universal measure that allows us to estimate both the value of material objects and the efficiency of production processes. Without energy analysis, effective production management is considered impossible [3, 4]. One way to implement this methodology is the environmental and energy analysis of farm functioning. The energy analysis of the functioning of geosystems has been studied since the beginning of the new century. In particular, an algorithm has been developed to study the economic activities of agroecosystems according to their energy assessment [5]. Other researchers have carried out thorough work in this direction [6], among them N.P. Mishurov [7]. In F.S. Sibagatullin's work [8], a methodology is presented for calculating the energy content of each element included in livestock production. The energy content (E) was calculated on the basis of consumption in physical units, taking into account its energy equivalent, according to the formula [8]:

E = R_n · I_n,   (1)

where R_n is the consumption of the resource in natural units (kg, hwt of feed units, person·h, kWt·h); I_n is the energy equivalent of the resource [8]. An example of the calculation is given in Table 1. From the presented data, it is clear that the largest amount of energy is consumed on feed production.
Isolated Agroecosystems as a Way to Solve the Problems of Feed, Ecology
45
Table 1. The energy consumption to produce a unit of product [8]. The last three columns give the total energy consumption per unit of product (per hwt), MJ.

Index                              | Energy equivalent of a resource, MJ | Milk   | Cattle live weight | Pig live weight
Feed, hwt of feed units            | from 717 to 12995                   | 1540.8 | 19226.0            | 17160.0
Electroenergy, kWt·h               | 8.7                                 | 256.6  | 3241.6             | 358.4
Fuel, kg                           | 10                                  | 49.0   | 850.0              | 937.0
Specific quantity of equipment, kg | 105                                 | 367.5  | 2520.0             | 2940.0
Litter, hwt                        | 17.1                                | 102.6  | 81.9               | –
Consumption of labour, person·h    | 43.3                                | 73.6   | 1095.5             | 982.9
Total                              | –                                   | 2390.1 | 27015.0            | 22378.4
If we analyze the share of each resource in the total energy consumed to produce the various types of products, it is clear that feed accounts for 64.5% of the energy required to produce milk. According to [9], the energy consumed in the milking process itself is about 24% of the total. For the production of cattle live weight, the share of energy consumed on feed is 71.2% of the total; in the production of pig live weight, this share rises to 76.6%. At the same time, direct consumption of electricity and fuel amounts to 5.8% of the total energy consumption for pig weight gain and 15.1% for cattle weight gain. Feed production is thus the main consumer of energy. In the work of T.Sh. Fuzella [10], energy efficiency is defined as the ratio of the energy flow at the system output to the energy flow at the system input. Using this approach [10], the energy consumption of the technological processes producing various products was estimated for the SPK "Nelyubino" farm. Some results of the calculation are presented in Table 2. The energy flow at the system output was determined on the basis of the energy equivalent. As in Table 1, the energy consumption structure is shifted towards feed production, which accounts for 72.5% of the total energy.
Table 2. The estimation results of energy efficiency of livestock farming on the example of SPK "Nelyubino", TJ (10^10 J).

Initial data (input):
Type of consumption | 1991 | 1999 | 2004
Feed                | 60.2 | 45.5 | 42.6
Infrastructure      |  8.1 |  2.1 |  8.8
Electroenergy       | 12.3 | 11.3 | 12.3
Agromachinery       |  2.0 |  1.2 |  1.4
Fuel                |  0.5 |  0.6 |  0.7
Total               | 83.1 | 60.7 | 65.8

Output:
Type of products | 1991 | 1999 | 2004
Milk             |  8.1 |  6.6 |  6.5
Beef             |  4.8 |  3.2 |  2.7
Total            | 12.9 |  9.8 |  9.2
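Fuzella's efficiency measure, the ratio of output to input energy flow, can be applied directly to the totals in Table 2. A short sketch reproducing the 0.16 / 0.16 / 0.14 figures discussed in the text:

```python
# Energy efficiency of SPK "Nelyubino" as output/input energy flow (Table 2).
inputs  = {1991: 83.1, 1999: 60.7, 2004: 65.8}  # total consumed energy
outputs = {1991: 12.9, 1999: 9.8,  2004: 9.2}   # total product energy equivalent
for year in sorted(inputs):
    eff = outputs[year] / inputs[year]
    print(year, round(eff, 2))  # 0.16, 0.16, 0.14
```

In other words, even in the best year only about 16% of the energy put into the farm came back out as product.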
In feed production, the main share of energy is consumed on the production of crop products and is automatically transferred to livestock products [11]. According to the data presented in the table, in 1991 there were 0.16 units of product in energy equivalent per unit of consumed energy; in 1999, 0.16 units; and in 2004, 0.14 units. The calculation results show that the highest energy efficiency of livestock production was 16%. For beef production, energy efficiency was not higher than 6%; that is, only 6% of the energy used for beef production was consumed efficiently, and 94% was lost. The problem of this lost energy is far more pressing than a simple calculation of energy consumption figures. Consuming 16% of energy to produce livestock products, the farmer uses almost 84% of the energy inefficiently. The principle of energy conservation must be taken into account in all sectors of agricultural activity, and it is necessary to find out what this energy is wasted on. At the current stage of technological development, this part of the energy begins to transform the natural environment into which it has fallen, and the issue of energy inefficiency in production automatically becomes an environmental concern [12]. The issue is becoming acute for global agricultural production management. Among the requirements of nature to biological objects, the provision of maximum stability is of great importance. The community most resistant to environmental deviations is a biological community with a maximum variety of individual characteristics. This requirement is practically inconsistent with the task of maximum productivity: maximum productivity leads to minimum diversity. The most productive option is to cultivate a monoculture; however, with a monoculture, diversity is at zero, and a monoculture is absolutely unstable.
To ensure the stability of a monoculture, people must make additional efforts, i.e. increase energy consumption. The natural development and modification of biological objects in nature results in an increase of their diversity, which in turn results in energy dissipation. The organized use of a biological community, the purpose of which is to seize the obtained product, reduces dissipation and hence the stability of the biological system. Thus we can conclude that any human activity that damages the balance of diversity conflicts with the laws of nature and results in environmental problems. The development of agroecosystems [13] has turned out to be one of the ways to resolve this global contradiction between humanity and the biological community. An agroecosystem is a territory organized by man in which a balance is ensured between obtaining products and returning them to the fields, so as to provide an organized circulation of organic substances. Rationally organized agroecosystems necessarily include pastures, livestock complexes and arable lands, though even a perfectly organized agroecosystem cannot provide a complete cycle of substances, because part of the mineral substances is removed with the yield. The imbalance is eliminated by applying the necessary fertilizers. Because of this, agroecosystems are extremely unstable and incapable of self-regulation; the additional energy resources introduced by man must ensure the stability of the agroecosystem. Experience shows that agroecosystems in which grain crops prevail cannot exist for more than one year. In the case of a monoculture or perennial grasses, the agroecosystem can break down after 3–4 years.
There are several ways to organize livestock agroecosystems. With free ranging of cattle, an extensive option is implemented, in which the consumption of anthropogenic energy is at a minimum level: energy is spent on the support of cattlemen and on the primary processing of livestock products. An intensive way to implement an agroecosystem involves production at large complexes, while feed is obtained at the cost of high energy input in the fields of the agroecosystem. Part of the manure is brought to the fields, but before application it must be processed, which requires additional energy. The amount of manure obtained can be larger than is necessary to be laid into the soil, and the environmental problem then remains unsolved.

Results and discussion. Using the example of livestock farming, the management of such a system and the effect of the components of the management system on the environmental friendliness of production have been considered [14, 15]. Figure 1 presents a structural diagram that allows discussing this issue in detail.
Fig. 1. Structural management scheme of agroecosystem ‘livestock farming’.
The main object of management is the livestock complex. The species of animals is not important, since the control scheme has the same structure in all cases. Several parameters are taken as controlled ones, namely: volume of output (P), given in tons; cost of production (C), in dollars per ton; volume of production given in energy equivalent (Ee), J; and production waste (W), given in units of energy, J. The use of the energy equivalent Ee as a controlled quantity is logical, since the study of energy flows in ecosystems is one of the main research methods in ecology. To implement the technological production processes (Fig. 1), complexes are used that need general energy supply (Et) and energy supply for feed production (Ef). The sources of energy I provide the common energy supply, which is divided into indirect (Ei) and direct (Ed) sources. Indirect energy supply is the energy, given in units of energy, consumed on buildings and structures, equipment and machinery.
Direct energy consumption includes the cost of fuel and electricity used directly in the technological processes to produce finished products. The sources of energy II are represented by feed (Efe), which is obtained in the second element of the agroecosystem, i.e. plant growing. Initially, the amount of livestock production fully conforms to the existing arable land and is provided with balanced crop rotation. The system of mechanisms changes the energy flow (Efl) to implement the technological processes in the livestock complex. Production results are evaluated using a monitoring system; the obtained data are compared with a given indicator (Pp). Depending on the comparison result (ΔP), the management of the system is corrected and updated. Deviations of the controlled values from the given values can occur due to such disturbing effects as changes in climate parameters, changes in feeding rations, changes in animal age and weight, and other factors affecting the physiological state and productivity of animals. The livestock agroecosystem presented in the form of a structural diagram is unstable, like any agroecosystem. From the point of view of automatic control theory, a system is considered unstable if, under such disturbing effects, it deviates from the given value and cannot return to its original state by itself. We have considered the consequences of a loss of stability of this system for each controlled value. For the controlled value volume of production (P), loss of stability means an irrevocable loss in animal productivity. This can occur due to a significant disturbing effect (e.g. an epidemic) combined with a lack of regulatory measures (Efl), i.e. when appropriate sanitary measures and vaccinations were not provided during the epidemic. To ensure the sustainability of the controlled value cost price (C), restrictions on energy consumption are required.
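The monitoring step described above, comparing the measured output P with the setpoint Pp and correcting the energy flow Efl by the deviation ΔP, can be sketched as a simple proportional feedback step. This is only an illustration of the loop in Fig. 1: the gain k, the units, and all numbers are hypothetical, not taken from the paper.

```python
# Minimal sketch of one feedback step of the Fig. 1 control loop:
# the deviation dP = Pp - P drives a corrective change of the energy
# flow Efl supplied to the livestock complex. Gain k is hypothetical.
def control_step(P, Pp, Efl, k=0.5):
    dP = Pp - P            # deviation of monitored output from the target
    return Efl + k * dP    # corrected energy flow for the next period

# e.g. an output of 95 t against a 100 t target raises the energy flow
print(control_step(P=95.0, Pp=100.0, Efl=40.0))  # 42.5
```

An unstable agroecosystem, in these terms, is one where no admissible correction of Efl can bring P back to Pp after a large enough disturbance.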
According to the data in Table 1 and Table 2, it is necessary to reduce the energy consumption on feed production (Efe). However, with present-day technologies this can result in a decrease of the production volume (P); therefore, at present it is only possible to introduce restrictions on the amount of consumed energy [16]. It turns out that the stability of each controlled value is largely ensured by the energy supply of the technological processes, although even for such a controlled value as volume of production this thesis is not fully correct. The current study has paid particular attention to the analysis of the system through equivalent energy consumption. As for equivalent energy consumption (Ee), the energy efficiency of the system should strive for 1. The control system must reduce the input energy flows Et and Ef while simultaneously increasing the volume of output and its energy "value". In this case, the "energy cycle" occurs without significant, uncontrolled accumulation of energy in production waste. The innovative plans that are being actively discussed and adopted take exactly the opposite position: indirect energy consumption is to be significantly increased (e.g. to develop and use satellite systems for precision farming, to use robotic systems, etc.). Moreover, the increase in such expenses can be justified only by an improved product quality, with a reduced content of chemicals and impurities in the final product. At the same time, direct energy consumption is expected to decrease. As a result, another contradiction is created: to solve environmental problems, the indirect costs of production are increased and the energy efficiency of production is
reduced, which increases the amount of energy locked in waste and makes the environmental problems more urgent. Thus, on the one hand, the problem of the sustainability of the livestock agroecological system is being solved by increasing the energy of regulatory effects; on the other hand, serious prerequisites for an environmental disaster are simultaneously being created. One way to address this concern is to use waste as an energy carrier. There is so much energy in agricultural waste that, if it were recovered in the form of heat, fuel and electricity, it would be more than enough for several cycles of animal production [17]. Obtaining energy from organic and solid domestic waste is currently being successfully implemented in various types of bioenergy installations (biogas plants, generators operating on mixed fuels obtained from organic waste, pyrolysis units, etc.). This issue has been successfully studied worldwide by a wide range of researchers [18]. Technologies and equipment are being steadily improved, which allows stating that bioenergy is able to ensure the energy independence of the agro-industrial sector. Bioenergy is commonly considered an element of renewable energy, but its main task here is to prevent an environmental disaster caused by the energy stored in waste. It should be noted that simple waste burning with thermal energy release does not improve the environmental situation: this energy remains unused, only its form changes, and it does not return to the livestock production process. The use of energy stored in waste reduces the share of hydrocarbons and gas in the direct energy consumption of livestock production, which increases the stability of the livestock agroecosystem with respect to cost price. However, the energy efficiency of the system can thereby increase by only 10–15%; still more than 80% of the energy consumed to produce livestock products is wasted.
The necessity to increase livestock production will require new energy costs for feed, which will result in an even greater decrease of energy efficiency and will lead to instability of the system with respect to cost price. The issue could be solved by the use of non-standard solutions in the energy supply of livestock farming. Another way to increase the energy efficiency of livestock farming is to change the structure of agroecosystems. In Fig. 1, "the sources of energy I" and "the sources of energy II" are external to the agroecosystem 'livestock farming'. Livestock farming largely (more than 75%) depends on the 'external' energy of crop production, and current trends in agriculture (precision farming) show that the share of indirect energy consumption for feed production will grow constantly and significantly. It is exactly here that the energy cycle must be broken: the share of energy consumption for feed production must be reduced. Therefore, technologies should not be developed to supply feed to the livestock agroecosystem from the plant-farming agroecosystem. This is possible by establishing so-called "isolated" agroecosystems, in which the external energy influx into the agroecosystem is minimal and animal husbandry provides the maximum of its feed requirements from its own resources. Most of the necessary technologies already exist and are undergoing serious production testing. One such technology is the production and use of microalgae feed [19]. Microalgae are actively used as feed additives in livestock and poultry farming, as they increase animal immunity, weight, fertility and the survival of young animals. In poultry, this results
in an increase in egg production and egg size. To implement this technology, farms specializing in cattle and poultry provide themselves with algal water pools in which livestock waste is utilized. As a result, 40% of the nitrogen from the wastewater re-enters the algae biomass and is eaten by the animals. Another technology that realizes the potential of livestock farming to produce feed is the growing of fly larvae on organic waste, as they multiply quickly: the larvae's weight increases 300–500 times within a week. The biomass from a pair of flies and their progeny, with full realization of their genetic potential, would exceed 87 tons by the end of a year, i.e. the weight of six elephants [20]. The biomass of housefly larvae is a complete protein feed for pigs, calves, poultry, fur animals and fish. It contains 48–52% protein, 7–14% fat, 7–10% fiber, 7% nitrogen-free extractives (BEV), 11–17% ash, as well as biologically active substances such as vitamins, ecdysone, etc. The processing of organic waste, which is environmentally unfriendly and unsuitable for use in crop cultivation technologies, can produce fertile humus; its application as a soil for growing feed on protected ground will meet the needs of livestock farming. Organic waste of livestock farming should therefore not be considered an energy raw material: the use of waste as an energy carrier would reduce the energy intensity of livestock farming and increase its energy efficiency by no more than 15%. The issue can be radically solved by using organic materials to produce highly nutritious feed in closed systems combined into an isolated agroecological system. In this case, only one restriction must be fulfilled: the volume of the livestock and its productivity (manufactured products and production waste) should maximally provide the necessary amount of feed for the livestock. Ideally, the chain 'feed → animals → technological process of production → waste → feed' should have minimal energy supply from the outside. Renewable energy sources acquire a special role as sources of direct energy supply (Ed), as shown in Fig. 1. Their use can increase the share of capital costs, but significantly reduces the environmental problems of energy supply.
3 Conclusions

Analyzing production efficiency through energy supply and consumption allows us to identify and study the structural and functional correlations between the components of agricultural systems, as well as to study the dynamics of the effect of various energy sources on the operation of these systems. The energy efficiency of livestock production is about 16%. More than 75% of the energy is consumed to produce feed and is subsequently concentrated in animal waste. One of the main ways to increase the energy efficiency of livestock farming can be a change of the agroecosystem structure. It should be implemented by establishing "isolated" agroecosystems, where organic waste technologies for producing livestock products compensate the energy consumed to produce feed for livestock farming. It is inexpedient to use the organic waste of livestock farming as a raw material for energy. The technologies of microalgae production and of using fly larvae for the processing of
livestock waste can be efficiently applied to produce high-protein animal feed and to reduce the energy consumed by feed production. It is reasonable to use renewable energy sources to compensate the direct energy costs of livestock production.
References

1. Mindrin, A.S.: Energoekonomicheskaya ocenka sel'skohozyajstvennoj produkcii [Energy-economic assessment of agricultural products], 187 p. (1987)
2. Bulatkin, G.A.: Analiz potokov energii v agroekosistemah [Analysis of energy flows in agroecosystems]. Vestnik Rossijskoj Akademii Nauk 82(8), 1–9 (2012)
3. Perez-Neira, D., Soler-Montiel, M., Gutierrez-Pena, R., et al.: Energy assessment of pastoral dairy goat husbandry from an agroecological economics perspective. A case study in Andalusia (Spain). Sustainability 10 (2018). Article number: 2838
4. Guzman, G.I., de Gonzalez, M.G.: Energy efficiency in agrarian systems from an agroecological perspective. Agroecology Sustain. Food Syst. 39, 924–952 (2015)
5. Pozdnyakov, A.V., Semenova, K.A., Fuzella, T.Sh.: Energeticheskij analiz funkcionirovaniya agroekosistem v usloviyah estestvennogo nasyshcheniya – pervye rezul'taty [Energy analysis of the functioning of agroecosystems in conditions of natural saturation – first results]. Uspekhi sovremennogo estestvoznaniya (2), 124–128 (2018)
6. da Silva, N.F., da Costa, A.O., Henriques, R.M., Pereira, M.G., Vasconcelos, M.A.F.: Energy planning: Brazilian potential of generation of electric power from urban solid wastes – under "waste production liturgy" point of view. Energy Power Eng. 7(5), 193 (2015)
7. Mishurov, N.P.: Metodologicheskie osnovy energeticheskoj ocenki proizvodstva moloka [Methodological foundations of energy assessment of milk production]. Tekhnika i oborudovanie dlya sela (5), 16–19 (2017)
8. Sibagatullin, F.S., Sharafutdinov, G.S., Shajdullin, R.R., Moskvichyova, A.B.: Bioenergeticheskaya ocenka i osnovnye puti snizheniya energoemkosti proizvodstva produkcii zhivotnovodstva [Bioenergy assessment and main ways to reduce energy intensity of livestock production]. Uchenye zapiski Kazanskoj gosudarstvennoj akademii veterinarnoj mediciny im. N.E. Baumana 216, 295–302 (2013)
9. Rajaniemi, M., Jokiniemi, T., Alakukku, L., et al.: Electric energy consumption of milking process on some Finnish dairy farms. Agric. Food Sci. 26, 160–172 (2017)
10. Fuzella, T.Sh.: Energeticheskaya ocenka funkcionirovaniya agroekosistemy (na primere SPK «Nelyubino») [Energy assessment of the functioning of the agroecosystem (on the example of the SPK "Nelyubino")]. Vestnik Tomskogo gosudarstvennogo universiteta (326), 203–207 (2009)
11. Stavi, I., Lal, R.: Agriculture and greenhouse gases, a common tragedy. A review. Agron. Sustain. Dev. 33, 275–289 (2013)
12. Ghosh, S., Das, T.K., Sharma, D., Gupta, K., et al.: Potential of conservation agriculture for ecosystem services. Indian J. Agric. Sci. 89, 1572–1579 (2019)
13. Marks-Bielska, R., Marks, M., Babuchowska, K., et al.: Influence of progress in sciences and technology on agroecosystems. In: Geographic Information Systems Conference and Exhibition (GIS Odyssey), Trento, Italy, 04–08 September 2017, pp. 254–263 (2017)
14. Lachuga, Yu.F., Vasilyev, A.N.: Napravleniya issledovanij v bioenergetike [Research areas in bioenergy]. Vestnik Rossijskoj sel'skohozyajstvennoj nauki (2), 4–7 (2015)
15. Vasilyev, A.N.: Reshenie energo-ekologicheskih problem zhivotnovodcheskoj agroekosistemy [Solution of energy and environmental problems of a livestock agroecosystem]. Tekhnologii i tekhnicheskie sredstva mekhanizirovannogo proizvodstva produkcii rastenievodstva i zhivotnovodstva (88), 19–25 (2016)
16. Lehtonen, H.: Evaluating adaptation and the production development of Finnish agriculture in climate and global change. Agric. Food Sci. 24, 219–234 (2015)
17. Mancini, F.N., Milano, J., de Araujo, J.G., et al.: Energy potential of waste in the state of Parana (Brazil). Braz. Arch. Biol. Technol. 62 (2019)
18. Aberilla, J.M., Gallego-Schmid, A., Adisa, A.: Environmental sustainability of small-scale biomass power technologies for agricultural communities in developing countries. Renew. Energy 141, 493–506 (2019)
19. Sui, Y., Vlaeminck, S.E.: Dunaliella microalgae for nutritional protein: an undervalued asset. Trends Biotechnol. 38, 10–12 (2020)
20. Kavran, M., Cupina, A.I., Zgomba, M., et al.: Edible insects – safe food for humans and livestock. In: Scientific Meeting on Ecological and Economic Significance of Fauna of Serbia, Belgrade, Serbia, 17 November 2016, vol. 171, pp. 251–300. Serbian Academy of Sciences and Arts Scientific Meetings (2018)
Laboratory-Scale Implementation of Ethereum Based Decentralized Application for Solar Energy Trading

Patiphan Thupphae and Weerakorn Ongsakul

Department of Energy, Environment and Climate Change, Asian Institute of Technology, Khlong Nueng, Thailand
[email protected]
Abstract. A decentralized application (DApp) is an application whose backend operates on a distributed system of computing nodes. DApps are built on decentralized technology such as blockchain; their advantages are security, transparency, and reliability. There are several DApp use cases in many areas, such as the prediction of potential trading gains on Augur, the sharing economy of computing power on Golem, and browsing, chatting, and payment on Status. However, these DApps are presented without details about how to implement them. This paper addresses this issue by presenting an implementation of solar energy trading. Ethereum Blockchain, an open-source platform for DApps, is proposed and applied to solar energy trading. The trading token is created using the ERC20 standard, and the wallet is deployed with Metamask. Transactions, assets, and participants are created with Ganache and tested with Truffle. Moreover, the trading algorithm that checks the match between seller and buyer is shown as a smart contract in Solidity. Lastly, React, a JavaScript library for building user interfaces, is deployed as a front-end to let users interact in solar energy trading.

Keywords: Blockchain
Decentralized application · Solar energy trading
1 Introduction

Blockchain is one of the most progressive distributed technologies today. A blockchain is a decentralized digital database stored on the nodes of a network, known as Distributed Ledger Technology (DLT). The most popular example that uses a blockchain as its back-end operating system is Bitcoin, an innovative payment network and a new kind of money [1]. Moreover, blockchain can be applied in many other areas, for instance in hospitals for information records and payment [2], and in the financial system for cross-border payments. Blockchain also has further applications such as supply chains, voting, and energy supply [3]. The decentralized application (DApp) plays an important role in applying blockchain technology. DApps overcome the limitations of locally running programs, whose performance is limited and which cannot respond to the requirements of many applications [4]. A DApp has four characteristics: open source, internal cryptocurrency
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 53–62, 2021. https://doi.org/10.1007/978-3-030-68154-8_6
54
P. Thupphae and W. Ongsakul
support, decentralized consensus, and no central point of failure [4]. This study represents implement of DApp using the Ethereum open-source platform in Sect. 2. In Sect. 3 shows implement of ERC 20 token. Section 4 shows the implementation of energy trading and Sect. 5 shows the deployment and experimental result.
2 Implementation of DApp Using the Ethereum Open-Source Platform

The proposed energy trading was implemented using the Ethereum blockchain framework [5]. It includes the following three steps: (1) set up the Ethereum environment; (2) set up React; and (3) set up identities with Ganache and wallets with Metamask for participants.

2.1 Set Up the Ethereum Environment
In this study, the Ethereum platform was selected for the experiment. Ethereum has many learning resources, such as CryptoZombies [6], Ethernaut [7], and Remix [8]. A computer running the macOS 10.14.4 operating system with a 2.3 GHz Intel Core i5 was used. Prerequisites such as Node.js, node-gyp, Python 2.7.x, the Truffle framework, Ganache from the Truffle suite, Metamask (a crypto wallet), Web3.js, git, the Chrome web browser, and create-react-app were installed. Several packages were declared in package.json, including 'apexcharts' (candlestick charts), 'babel-polyfill' (emulates a full ES2015+ environment), 'babel-preset-env' (allows using the latest JavaScript without micromanaging syntax transforms), 'babel-preset-es2015', 'babel-preset-stage-2' and 'babel-preset-stage-3' (arrays of Babel plugins), 'babel-register' (automatically compiles files on the fly), 'bootstrap' (front-end open-source toolkit), 'chai' (test-driven/behavior-driven development assertion library for Node), 'chai-as-promised' (asserting facts about promises), 'chai-bignumber' (assertions for comparing arbitrary-precision decimals), 'dotenv' (zero-dependency module that loads environment variables from a .env file into process.env), 'lodash' (takes the hassle out of working with arrays, numbers, etc.), 'moment' (parse, validate, manipulate, and display dates and times in JavaScript), 'openzeppelin-solidity' (library for secure smart contract development), 'react' (JavaScript library for building user interfaces), 'react-apexcharts' (create charts in React), 'react-bootstrap' (front-end framework rebuilt for React), 'react-dom' (provides DOM-specific methods), 'react-redux' (React bindings for Redux), 'react-scripts' (includes scripts and configuration), 'redux' (predictable state container for JS apps), 'redux-logger' (can replay problems in the app), 'reselect' (computes derived data), 'solidity-coverage', 'truffle' (testing framework and asset pipeline), 'truffle-flattener' (verify contracts developed with Truffle), 'truffle-hdwallet-provider' (to sign transactions), and 'truffle-hdwallet-provider-privkey' (to sign transactions for addresses derived from a raw private key string). Finally, 'web3' is a library that allows interacting with local or remote Ethereum nodes.
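For orientation, a trimmed package.json sketch is given below. The package names follow the list above, while the version numbers and script entries are illustrative assumptions, not taken from the paper.

```json
{
  "dependencies": {
    "apexcharts": "^3.18.0",
    "babel-polyfill": "^6.26.0",
    "bootstrap": "^4.3.1",
    "chai": "^4.2.0",
    "chai-as-promised": "^7.1.1",
    "lodash": "^4.17.15",
    "moment": "^2.24.0",
    "openzeppelin-solidity": "^2.1.3",
    "react": "^16.8.4",
    "react-bootstrap": "^1.0.0",
    "redux": "^4.0.5",
    "truffle": "^5.1.39",
    "web3": "^1.2.11"
  },
  "scripts": {
    "start": "react-scripts start",
    "test": "truffle test"
  }
}
```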
Laboratory-Scale Implementation of Ethereum
2.2 Set Up React
This section shows an example of creating a React app, connecting React with the smart contract, and testing by printing to the console of the Chrome web browser. To set up React, run $ npm install to install all of the dependencies from Sect. 2.1 listed in the package.json file. After that, create the React app with $ create-react-app followed by the name of the project, for example $ create-react-app dapp-project, and then make sure Ganache is running. To start React, use $ npm start, which automatically serves the app on port 3000 in the Chrome web browser. After that, go to index.js and import 'bootstrap/dist/css/bootstrap.css' from package.json. Users can experiment with App.js to build their own DApp. In this study, the DApp is designed using cards [9] and navs [10] from Bootstrap and the layout module from CSS flexbox [11]. App.js uses Web3.js [12] very heavily, for example for connecting to the blockchain, loading the account, loading the smart contract and its address, and calling smart contract functions. Inside App.js (see Fig. 1), web3.js is used in the form $ const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545') to connect to a specific provider; in this study, Metamask talks to the local blockchain on port 7545. Moreover, the function componentWillMount [13] is used to load the blockchain data. This function comes with the React Component that the class App extends; componentWillMount is a React lifecycle method, and the lifecycle diagram can be used as a cheat sheet.
class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
  }
}

Fig. 1. Code used to connect to the local blockchain.
After the step above, check that web3 is loaded by printing it to the console with $ console.log("web3", web3). The result is shown in Fig. 2.
Fig. 2. The result from loading web3 inside console.
The next step is detecting the connected network using $ web3.eth.net.getNetworkType() [14] (see Fig. 3), then checking it in the console with $ console.log("network", network); the result is shown in Fig. 4. In addition, you can switch to other networks in Metamask. To fetch the accounts of the network we are connected to, use web3.eth.getAccounts() [15] (see Fig. 3). The result returns an array when checked with $ console.log("accounts", accounts); in this study, the account at index [0] is shown in the console. The token is loaded by importing the Token.sol artifact and fetching the token from the blockchain with web3.eth.Contract [16]. The network ID is obtained with $ web3.eth.net.getId() [17] (see Fig. 3). The result of $ console.log("token", token) then returns both the ABI and the address (see Fig. 4). The next step uses methods.myMethod.call [18] to get the total supply (see Fig. 5).
class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
    const network = await web3.eth.net.getNetworkType()
    const accounts = await web3.eth.getAccounts()
    const networkId = await web3.eth.net.getId()
    const token = new web3.eth.Contract(Token.abi, Token.networks[networkId].address)
  }
}

Fig. 3. Code used to detect the network, accounts, and network ID and to import the token.
Fig. 4. The console output of the code in Fig. 3.
class App extends Component {
  componentWillMount() {
    this.loadBlockchainData()
  }

  async loadBlockchainData() {
    const web3 = new Web3(Web3.givenProvider || 'http://localhost:7545')
    const network = await web3.eth.net.getNetworkType()
    const accounts = await web3.eth.getAccounts()
    const networkId = await web3.eth.net.getId()
    const token = new web3.eth.Contract(Token.abi, Token.networks[networkId].address)
    const totalSupply = await token.methods.totalSupply().call()
  }
}

Fig. 5. Code used to fetch the totalSupply.
The result should then return the total supply, which is defined in base units of 10^18 wei per token, as shown in Fig. 6.
Fig. 6. Return of total supply.
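The 10^18 scaling between human-readable tokens and on-chain base units can be reproduced in plain JavaScript. This is a minimal sketch of the same conversion that web3.utils.toWei/fromWei performs for an 18-decimal token; the helper names are illustrative, not from the paper's code.

```javascript
// How an 18-decimal ERC-20 supply maps between tokens and base units.
const DECIMALS = 18n;

function toBaseUnits(tokens) {
  // e.g. 1 token -> 10^18 base units
  return BigInt(tokens) * 10n ** DECIMALS;
}

function fromBaseUnits(units) {
  return units / 10n ** DECIMALS;
}

const totalSupply = toBaseUnits(1_000_000); // 1,000,000 tokens, as in Sect. 3
console.log(totalSupply.toString());                // "1000000000000000000000000"
console.log(fromBaseUnits(totalSupply).toString()); // "1000000"
```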
2.3 Set Up Identities with Ganache and Wallets with Metamask for Participants
Ganache is a personal blockchain [19] that provides public keys, private keys, and test ether. To set up Ganache, download and install it from [19]. The MNEMONIC tab then shows a passphrase; this mnemonic is entered into Metamask to log the local blockchain into the Metamask wallet. The Metamask wallet is an HD wallet that holds Ethereum accounts and cryptocurrency balances. Metamask [20] is also a wallet interface that allows the user to manage multiple accounts. To set up Metamask, go to [20] in the Chrome web browser, download it, and add it as a Chrome extension. Metamask works as an extension of web browsers such as Chrome and Firefox. When Metamask is running, the user can create a private network, connect to a public network, and import accounts into the Metamask wallet.
3 Implementation of the ERC20 Token

The ERC20 token has a standard API for use in smart contracts [21]. The ERC20 token functions consist of name, symbol, decimals, totalSupply, balanceOf, transfer, transferFrom, approve, and allowance. This study focuses on implementing these functions in the smart contract of the DApp. The flowchart in Fig. 7 represents the flow of the smart contract that creates the ERC20 token and uses it for transfers. To make the ERC20 token, first the name, symbol, decimals, and totalSupply of the token must be set. In this study, Name = Energy Token, Symbol = ET, Decimals = 18 (1 Ether = 10^18 Wei), and totalSupply = 1,000,000. Then the balance of the deployer is set equal to totalSupply to test the transfer function. The transfer function executes only if its checks return true: the balance of the deployer must be at least the transfer value, and the receiver must not be invalid. If both checks pass, the transfer is executed: the balance of the deployer is decreased and the balance of the receiver is increased by the transfer value. Finally, the function emits the information of the transfer.
[Flowchart of Fig. 7: Start → Set (Name: Energy Token; Symbol: ET; Decimals: 18; TotalSupply: 1,000,000 tokens) → BalanceOf (deployer: 1,000,000 tokens; receiver: 0 tokens) → Transfer (Check 1: balanceOfDeployer >= _value; Check 2: valid recipient; if true: balanceOfDeployer -= _value, balanceOfReceiver += _value, emit Transfer(_from, _to, _value); if false: stop) → End]
Fig. 7. Flowchart of transfer function.
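The transfer flow of Fig. 7 can be sketched in plain JavaScript (rather than Solidity) so that the two checks and the balance updates can be followed step by step. The names and the Map-based balance store are illustrative, not from the paper's contract.

```javascript
// Minimal sketch of the Fig. 7 transfer flow.
const balances = new Map();

function deployToken(deployer, totalSupply) {
  balances.set(deployer, totalSupply); // deployer starts with the full supply
}

function transfer(from, to, value) {
  // Check 1: sender must hold at least `value`
  if ((balances.get(from) ?? 0n) < value) return false;
  // Check 2: recipient must be valid (non-empty address stand-in)
  if (!to) return false;
  balances.set(from, balances.get(from) - value);
  balances.set(to, (balances.get(to) ?? 0n) + value);
  // In Solidity, this is where the Transfer event would be emitted
  return true;
}

deployToken("deployer", 1_000_000n);
console.log(transfer("deployer", "receiver", 400n)); // true
console.log(balances.get("receiver"));               // 400n
```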
4 Implementation of Energy Trading

4.1 The User Uses ET Tokens to Place Both Sell Orders and Buy Orders
The user sets the amount of energy to trade, in kilowatt-hours (kWh), in the New Order tab, and then sets the price of each kWh of energy. The DApp calculates the total price and shows it below. Figure 8 presents the functions used to create sell and buy orders. For a buy order, the variable tokenGet is the token to buy, while amountGet converts the amount to Wei [22], the smallest denomination of ether; tokenGive is the ether given for the buy order, and amountGive is the variable that calculates the total price in Wei. The function makeBuyOrder then calls the makeOrder function in the smart contract. The function returns an error and shows the pop-up 'There was an error!' if one of the variables is mistaken. The makeSellOrder function works the opposite way: tokenGet is the ether to receive for the sell order, amountGet calculates the price in Wei, tokenGive is the token to sell, and amountGive converts the amount to Wei.
export const makeBuyOrder = (dispatch, exchange, token, web3, order, account) => {
  const tokenGet = token.options.address
  const amountGet = web3.utils.toWei(order.amount, 'ether')
  const tokenGive = ETHER_ADDRESS
  const amountGive = web3.utils.toWei((order.amount * order.price).toString(), 'ether')
  exchange.methods.makeOrder(tokenGet, amountGet, tokenGive, amountGive).send({ from: account })
    .on('transactionHash', (hash) => {
      dispatch(buyOrderMaking())
    })
    .on('error', (error) => {
      console.error(error)
      window.alert(`There was an error!`)
    })
}

export const makeSellOrder = (dispatch, exchange, token, web3, order, account) => {
  const tokenGet = ETHER_ADDRESS
  const amountGet = web3.utils.toWei((order.amount * order.price).toString(), 'ether')
  const tokenGive = token.options.address
  const amountGive = web3.utils.toWei(order.amount, 'ether')
  exchange.methods.makeOrder(tokenGet, amountGet, tokenGive, amountGive).send({ from: account })
    .on('transactionHash', (hash) => {
      dispatch(sellOrderMaking())
    })
    .on('error', (error) => {
      console.error(error)
      window.alert(`There was an error!`)
    })
}
Fig. 8. The function which creates sell and buy order.
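The amount/price arithmetic behind makeBuyOrder in Fig. 8 can be illustrated without web3: both the token amount (amountGet) and the total ether price (amountGive) end up scaled to wei. The toWei helper below is a simplified stand-in for web3.utils.toWei(x, 'ether'), not the library implementation.

```javascript
// Simplified stand-in for web3.utils.toWei(x, 'ether'): shift by 18 decimals.
function toWei(amountStr) {
  const [whole, frac = ""] = amountStr.split(".");
  return BigInt(whole + frac.padEnd(18, "0").slice(0, 18));
}

const order = { amount: "2", price: "0.5" }; // buy 2 ET at 0.5 ether each
const amountGet = toWei(order.amount);       // tokens wanted, in wei
const amountGive = toWei((Number(order.amount) * Number(order.price)).toString()); // ether offered, in wei

console.log(amountGet.toString());  // "2000000000000000000"
console.log(amountGive.toString()); // "1000000000000000000"
```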
4.2 Trading
In the smart contract, the trading logic lives in the internal function _trade, which is called from the fillOrder function. For _tokenGet, the balance of msg.sender, the person who fills the order, is decreased by amountGet, while the balance of _user, the person who created the order, is increased by amountGet. For _tokenGive, the balance of the user who created the order is decreased by amountGive, and amountGive is added to msg.sender, the person who fills the order. Moreover, the fee is paid by the person who fills the order, in this case msg.sender, on top of amountGet. In this work, the fee percent is set to 10%, and the fee amount is added to the amountGet deducted from msg.sender. The feeAccount balance is increased by the feeAmount (see Fig. 9).
function _trade(uint256 _orderId, address _user, address _tokenGet, uint256 _amountGet, address _tokenGive, uint256 _amountGive) internal {
    uint256 _feeAmount = _amountGive.mul(feePercent).div(100);
    tokens[_tokenGet][msg.sender] = tokens[_tokenGet][msg.sender].sub(_amountGet.add(_feeAmount));
    tokens[_tokenGet][_user] = tokens[_tokenGet][_user].add(_amountGet);
    tokens[_tokenGet][feeAccount] = tokens[_tokenGet][feeAccount].add(_feeAmount);
    tokens[_tokenGive][_user] = tokens[_tokenGive][_user].sub(_amountGive);
    tokens[_tokenGive][msg.sender] = tokens[_tokenGive][msg.sender].add(_amountGive);
    emit Trade(_orderId, _user, _tokenGet, _amountGet, _tokenGive, _amountGive, msg.sender, now);
}
Fig. 9. Trade function in smart contract.
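The balance bookkeeping of the _trade function in Fig. 9 can be mirrored in plain JavaScript to check the settlement arithmetic: the order filler (msg.sender) pays amountGet plus the fee, the order maker receives amountGet, the fee account collects the fee, and the give-side token moves the other way. Account names and the nested-object token store are illustrative; the feeAmount follows the figure (computed from amountGive at feePercent = 10).

```javascript
// Plain-JavaScript mirror of the settlement in Fig. 9.
function trade(tokens, user, filler, feeAccount, tokenGet, amountGet, tokenGive, amountGive, feePercent) {
  const feeAmount = (amountGive * feePercent) / 100n;
  tokens[tokenGet][filler] -= amountGet + feeAmount;      // filler pays amount + fee
  tokens[tokenGet][user] += amountGet;                    // maker receives amount
  tokens[tokenGet][feeAccount] += feeAmount;              // fee account collects fee
  tokens[tokenGive][user] -= amountGive;                  // maker gives the other asset
  tokens[tokenGive][filler] += amountGive;                // filler receives it
  return tokens;
}

const tokens = {
  ET:    { maker: 0n,   filler: 500n, fees: 0n },
  ETHER: { maker: 300n, filler: 0n,   fees: 0n },
};
trade(tokens, "maker", "filler", "fees", "ET", 100n, "ETHER", 200n, 10n);
console.log(tokens.ET.filler);   // 380n (500 - 100 - 20 fee)
console.log(tokens.ET.maker);    // 100n
console.log(tokens.ETHER.maker); // 100n
```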
5 Deployment and Experimental Result

The front-end of the solar energy trading DApp is designed as shown in Fig. 10. This user interface is deployed on localhost:3000. In this study, suppose user 1 makes a buy order with a total payment amount of 1 ether and sends it to user 2. Figure 11 represents the transfer of the total payment amount from user 1 to user 2 in the terminal interface (with 1 Ether = 1·10^18 wei). Figure 12 shows the transaction between users 1 and 2 on the Ganache personal Ethereum blockchain. User 1 and user 2 have the addresses 0xb877dCcB80F27b83E4f863c41f050f18FfAEcb9b and 0x12e622A7c90CEfF482Fc79ADe96a3AcD17C9F282, respectively. The exchange contract address is 0x0FDA0BA4c75c3552A42B9877c9b48fC6fddc022D.
Fig. 10. User interface of Solar energy trading.
Fig. 11. Transferring total amount payment between user 1 to user 2.
Fig. 12. Transaction from user 1 on Ganache personal Ethereum blockchain.
6 Conclusion

The growth of solar PV and the progress of blockchain technology play an important role in the energy sector. The future market will become a peer-to-peer market in the microgrid, where excess energy generates profit for the prosumer. This paper presents a decentralized application based on a blockchain technology framework. Technical details are provided on setting up the Ethereum blockchain, creating the token with the ERC20 standard, useful dependencies for setting up the DApp, and part of the trading logic in the smart contract.
References

1. Bitcoin Project: Bitcoin is an innovative payment network and a new kind of money. MIT license (2009–2020). https://bitcoin.org/en/
2. Tasatanattakool, P.: Blockchain: challenges and applications. In: International Conference on Information Networking (ICOIN), Thailand (2018)
3. Xu, X.: Architecture for Blockchain Applications. Springer (2019)
4. Cai, W., Wang, Z.: Decentralized applications: the blockchain-empowered software system. IEEE Access 6, 53019–53033 (2018)
5. Ethdotorg: Ethereum.org, Stiftung Ethereum. https://ethereum.org/en/
6. Cleverflare: Learn to Code Blockchain DApps By Building Simple Games, Loom. https://CryptoZombies.io
7. Santander, A.: Hello Ethernaut, OpenZeppelin. https://ethernaut.openzeppelin.com
8. Remix - Ethereum IDE. https://remix.ethereum.org
9. Bootstrap Team: Bootstrap, MIT. https://getbootstrap.com/docs/4.0/components/card
10. Bootstrap Team: Bootstrap, MIT. https://getbootstrap.com/docs/4.0/components/navs
11. W3Schools: W3.CSS flexbox. https://www.w3schools.com/Css/css3_flexbox.asp
12. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.11/
13. Facebook Open Source: React.Component, Facebook Inc. (2020). https://reactjs.org/docs/react-component.html
14. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth.html#net
15. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth.html#getaccounts
16. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-contract.html#eth-contract
17. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-net.html?highlight=getId
18. Sphinx: web3.js, Core team (2016). https://web3js.readthedocs.io/en/v1.2.9/web3-eth-contract.html?highlight=method#methods-mymethod-call
19. McVay, W.: Truffle suite, Truffle Blockchain Group (2020). https://www.trufflesuite.com/ganache
20. Davis, A.: Metamask, ConsenSys Formation (2020). https://metamask.io
21. Buterin, V., Vogelsteller, F.: Ethereum improvement proposals, EIP-20 (2015). https://eips.ethereum.org/EIPS/eip-20
22. Sphinx: Ethereum Homestead, Core team (2016). https://ethdocs.org/en/latest/ether.html
Solar Module with Photoreceiver Combined with Concentrator

Vladimir Panchenko1,2 and Andrey Kovalev2

1 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia
[email protected]
2 Federal Scientific Agroengineering Center VIM, 1st Institutsky passage 5, 109428 Moscow, Russia
[email protected]
Abstract. The paper considers the design of a solar photovoltaic air-cooled module with a paraboloid type concentrator. Photovoltaic converters are located on the surface of the concentrator, which ensures their cooling through a metal radiator whose functions are performed by the solar concentrator itself. At the same time, the profile of the solar radiation concentrator provides uniform illumination of the photovoltaic cells, which favorably affects their efficiency. It is advisable to use high-voltage matrix silicon photovoltaic solar cells as photovoltaic converters, since they have high electrical efficiency and maintain it at a high concentration of solar radiation and under heating by concentrated solar radiation.

Keywords: Solar energy · Converters · Solar concentrator · Silicon photovoltaic · Uniform illumination · Profile · Efficiency · Air heat sink
1 Introduction

The use of solar radiation concentrators can reduce the number of photovoltaic converters, which favorably affects the cost of the installation and of the electricity received with its help [1–3]. However, when photovoltaic cells convert concentrated solar radiation they heat up significantly, and as a result their electrical efficiency decreases [4], which indicates the need for cooling [5–10]. When photovoltaic converters operate in a concentrated solar stream, their current-voltage characteristic acquires a triangular shape, which indicates a significant decrease in their efficiency [11]. All planar solar cells have this property, but matrix high-voltage silicon photovoltaic converters do not lose electrical efficiency in a concentrated solar flux, owing to their structure [12, 13]. The use of a paraboloid type concentrator in the form of an air radiator, with such matrix photovoltaic converters located on its surface in its focal region, eliminates the need for an active cooling system. However, for their stable operation it is necessary to ensure uniform illumination of the entire surface of the photovoltaic converters, which requires a special geometric approach and design methods [14] for the profile of the solar radiation concentrator.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 63–72, 2021. https://doi.org/10.1007/978-3-030-68154-8_7
2 Creation of the Geometry of the Working Profile of the Paraboloid Type Solar Radiation Concentrator

The developed method is proposed for calculating the profile of a paraboloid type concentrator that provides uniform illumination of the photovoltaic converters at a relatively high degree of solar radiation concentration and ensures stable electricity generation. On the surface of the developed solar radiation concentrator are photovoltaic converters in thermal contact with its metal surface, which allows the heat generated by solar heating to be removed through the entire surface of the metal concentrator (Fig. 1). Thanks to the efficient heat sink, the photovoltaic converters do not overheat and operate in the nominal operating mode without losing electrical efficiency.
Fig. 1. The design of the solar photovoltaic module, where the concentrator is also an air-cooled radiator for photovoltaic converters
Thanks to the use of silicon high-voltage matrix photovoltaic converters (Fig. 2), it becomes possible to obtain a high-voltage direct current (1000 V or more) at the output of one solar module, as well as an increase in the electrical efficiency of solar radiation conversion and, accordingly, a reduction in the cost of the module (specific electric power) and of the generated electric energy. Due to the use of a sealing technology for the photovoltaic converters with a two-component polysiloxane compound, the period of rated power of the photovoltaic converters also increases.
Fig. 2. Silicon matrix high-voltage photovoltaic converters with a voltage of more than 1000 V
The developed method makes it possible to obtain the necessary concentration of solar radiation on the surface of the photovoltaic converters through the calculation of the working profile of the solar radiation concentrator. The solar photovoltaic module (Fig. 3) consists of the paraboloid type concentrator 1, which in turn consists of the zones a–b, b–c, and c–d. These zones provide a cylindrical focal region of concentrated solar radiation with sufficiently uniform illumination in the focal region, on the surface of the photovoltaic receiver 2 (silicon matrix high-voltage photovoltaic converters). Photovoltaic receiver 2 is made in the form of a truncated cone of commutated silicon high-voltage matrix photovoltaic converters of height d0, located on the reflective (front) surface of the concentrator in zone b–c. The paraboloid type solar radiation concentrator for the silicon high-voltage matrix photovoltaic converters is also the radiator of passive air cooling.
Fig. 3. Principle of operation and the scheme of the concentrator photovoltaic solar module
Solar radiation incident on the reflecting surface of the solar concentrator 1 is reflected from the surfaces of zones a–b and c–d in such a way that sufficiently uniform illumination of the photoelectric receiver 2 by concentrated solar radiation is provided in the focal region, which in turn is located on the profile of the concentrator in zone b–c. The profile X(Y) of the reflecting surface of the solar concentrator under consideration is determined by a system of equations corresponding to the illuminance condition on the surface of the photoelectric receiver (silicon matrix high-voltage photovoltaic converters). The X and Y coordinates of the working profile of the solar radiation concentrator in zone a–b are determined by the following system of equations:
\[ X = 2f\,\frac{1 + \mathrm{tg}\,\beta}{\cos\beta}, \tag{1} \]
\[ Y = \frac{X^2}{4f}, \tag{2} \]
\[ (2R - X)^2 = 4fY, \tag{3} \]
\[ 2R - X_c = (2R - X_b) + d_0 \sin\alpha_0, \tag{4} \]
\[ X_b = 2f\,\mathrm{tg}\,\beta_0 \left\{ \left( 1 + \frac{Y_0}{f\,\mathrm{tg}^2\beta_0} \right)^{1/2} - 1 \right\}, \tag{5} \]
\[ Y_b = \frac{X_b^2}{4f}, \tag{6} \]
\[ X_c = X_b - d_0 \sin\alpha_0, \tag{7} \]
\[ Y_c = Y_b - h, \tag{8} \]
\[ h = d_0 \sin\alpha_0, \tag{9} \]
\[ \frac{d_0}{\sin\alpha\,\sin\delta} = \frac{Y_c - f}{\sin\varphi_0}. \tag{10} \]
The focal length f of the solar concentrator is calculated by the formula:

\[ f = R\,\frac{1 - \mathrm{tg}\,\beta_0}{\cos\beta_0}, \tag{11} \]
where β₀ is the angle between the sunbeam reflected from the surface of the concentrator profile in zone a–b at the point with coordinates (X_a, Y_a), arriving at the focal region of the concentrator (on the surface of the photovoltaic receiver) at the point with coordinates (2R − X_b, Y_b), and the parabola focal length f;
the angle β = β₀·n/N varies from 0° to β₀, the values of the coefficient n being selected from the series of integers n = 0, 1, 2, …, N; γ₀ is the angle between the sunbeam reflected from the surface of the concentrator profile in zone a–b at the point with coordinates (X_b, Y_b), arriving at the focal region of the concentrator (on the surface of the photovoltaic receiver) at the point with coordinates (2R − X_c, Y_c), and the parabola focal length f; α₀ is the angle of inclination of the working surface of the profile of the concentrator in zone b–c; R is the maximum radius of the solar concentrator; the angle δ is determined by the expression δ = π/2 + α₀ − β; the angles α₀ and β₀ are selected in accordance with the boundary conditions. The distribution of the concentration of solar radiation K_ab on the surface of the photovoltaic receiver, reflected from the upper part of the working profile of the solar radiation concentrator (zone a–b), is calculated according to the formulas:

\[ K_{ab} = \frac{R_{n+1}^2 - R_n^2}{\Delta d_n \left( R_{bc,n+1} + R_{bc,n} \right)}, \tag{12} \]
\[ \Delta d_n = d_{n+1} - d_n, \tag{13} \]
\[ R_n = X_n - R, \tag{14} \]
\[ R_{bc,n} = X_{bc,n} - R. \tag{15} \]
The X and Y coordinates of the working profile of the concentrator in zone c–d are determined using the system of equations:

\[ Y_m = \frac{(X_b - X_c) + \dfrac{Y_c}{\mathrm{tg}\,\alpha_{m-1}} - \dfrac{Y_b}{\mathrm{tg}\,\alpha_m}}{\dfrac{1}{\mathrm{tg}\,\alpha_{m-1}} - \dfrac{1}{\mathrm{tg}\,\alpha_m}}, \tag{16} \]
\[ X_m = X_c - \frac{Y_c - Y_m}{\mathrm{tg}\,\alpha'_m}, \tag{17} \]
\[ \mathrm{tg}\,\alpha_{m-1} = \frac{Y_b - Y_{m-1}}{X_{m-1} - (2R - X_c)}, \tag{18} \]
\[ \mathrm{tg}\,\alpha_m = \frac{Y_b - Y_m}{X_m - (2R - X_c)}, \tag{19} \]
\[ X_m = (2R - X_c) + \frac{Y_c - Y_m}{\mathrm{tg}\,\alpha_{m-1}}, \tag{20} \]
\[ \alpha'_m = \alpha_0 + \alpha_m. \tag{21} \]
\[ X_d = 2f\,\mathrm{tg}\,\beta_0 \left\{ \left( 1 + \frac{Y_0}{f\,\mathrm{tg}^2\beta_0} \right)^{1/2} - 1 \right\}, \tag{22} \]
\[ Y_d = \frac{X_d^2}{4f}, \qquad Y_h = Y_d - h, \tag{23} \]
where α′_m is the angle of inclination of the working surface of the profile of the concentrator in zone c–d; α_m is the angle between the sunbeam reflected from the surface of the profile of the concentrator in zone c–d at the point with coordinates (X_m, Y_m), arriving at the focal region of the concentrator (on the surface of the photovoltaic receiver) at the point with coordinates (X_b, Y_b), and the level of the coordinate Y_m; the values of the coefficient m are selected from the series of integers m = 0, …, M. The distribution of the concentration of solar radiation K_cd on the surface of the photovoltaic receiver is calculated according to the formulas:

\[ K_{cd} = \sum_{m=0}^{M} K_m = \sum_{m=0}^{M} \frac{R_{mn}^2 - R_{m(n+1)}^2}{\Delta d_n \left( R_{bc(n+1)} + R_{bcn} \right)}, \tag{24} \]
\[ R_{mn} = R_m - \Delta X_{mn}, \tag{25} \]
\[ R_m = X_m - R, \tag{26} \]
\[ \Delta X_{mn} = \Delta X_m\, d_n, \tag{27} \]
\[ \Delta X_m = X_m - X_{m-1}. \tag{28} \]
The presented systems of equations make it possible to calculate the coordinates of the working surface of the profile of the solar radiation concentrator in various its zones.
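As a worked numeric sketch, the zone-boundary relations of Sect. 2 (the expressions for X_b, Y_b, X_c, Y_c and h) can be evaluated for sample parameters; all numeric values of f, Y_0, beta_0, d_0 and alpha_0 below are illustrative assumptions, not taken from the paper.

```javascript
const f = 100, Y0 = 50;                      // focal length and profile height, mm (assumed)
const beta0 = 30 * Math.PI / 180;            // angle beta_0 (assumed)
const d0 = 20, alpha0 = 60 * Math.PI / 180;  // receiver height and tilt alpha_0 (assumed)

const tg = Math.tan;
// X_b = 2 f tg(beta0) { [1 + Y0 / (f tg^2 beta0)]^(1/2) - 1 }
const Xb = 2 * f * tg(beta0) * (Math.sqrt(1 + Y0 / (f * tg(beta0) ** 2)) - 1);
const Yb = Xb ** 2 / (4 * f);          // parabola relation Y_b = X_b^2 / (4 f)
const h  = d0 * Math.sin(alpha0);      // height of the receiver zone
const Xc = Xb - d0 * Math.sin(alpha0); // X_c = X_b - d0 sin(alpha0)
const Yc = Yb - h;                     // Y_c = Y_b - h

console.log({ Xb, Yb, Xc, Yc });
```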
3 Results of Calculating the Profile of the Paraboloid Type Concentrator

Using the formulas presented above, the coordinates of the working profile of the paraboloid type solar concentrator are calculated, which ensures uniform illumination of the focal region (silicon matrix high-voltage photovoltaic converters) with concentrated solar radiation. Figure 4 shows the profile of the reflecting surface of the paraboloid type solar radiation concentrator with a maximum diameter of about 400 mm.

Fig. 4. Working surface of the concentrator profile of the paraboloid type

Three different zones can be distinguished on the profile of the solar radiation concentrator, one of which has a flat shape and represents the landing surface of the photovoltaic converters. Photovoltaic solar modules of this kind can be installed together on one frame with a system for constant tracking of the position of the Sun. For a better layout and optimal filling of the frame space, the concentrators of the solar modules can be squared and closely mounted to each other. Figure 5 shows, on the left, a three-dimensional model of the solar module with the profile of the solar concentrator calculated according to the developed method, which provides a uniform distribution of concentrated solar radiation over the surface of the photoreceiver (silicon matrix high-voltage photovoltaic converters) in the focal region, as presented on the right of Fig. 5.
Fig. 5. Three-dimensional model of the solar module (on the left) and distribution of concentrated solar radiation over the surface of photoreceiver (on the right)
The distribution of illumination by concentrated solar radiation over the surface of the photoreceiver is relatively uniform, varies from 7 to 11 times and averages 9 times. The presented distribution will favorably affect the operation of photovoltaic converters, providing illumination of the entire surface of the photovoltaic converters with
uniform concentrated solar radiation, due to which the output electric power of the solar module will be at the nominal level. Thanks to the calculated profile of the paraboloid type solar concentrator, it becomes possible to create its three-dimensional solid-state model using a computer-aided design system. Figure 6 shows such a concentrator of solar radiation of paraboloid type and a single component (petal) into which it can be divided for subsequent manufacture from reflective metal material.
Fig. 6. Three-dimensional model of a paraboloid type solar concentrator and its single component made of reflective metal
Based on the developed three-dimensional model, the subsequent manufacture of a paraboloid type solar concentrator from reflective metal material with the help of single components is possible. Number of single components may vary depending on the required manufacturing accuracy and optical efficiency of the solar concentrator itself. The more single elements that make up the solar radiation concentrator, the more accurate its manufacture and the greater its optical efficiency. However, with a large number of components, the complexity of manufacturing also increases, in view of which other methods of manufacturing solar concentrators may be more relevant (centrifugal method, method of electroforming, method of glass bending, manufacture of a reflecting surface from flat curved mirrors) [14].
4 Conclusion Photovoltaic solar module with a paraboloid type concentrator and a photoreceiver, based on silicon high-voltage matrix photovoltaic converters located directly on the surface of the concentrator, which is a passive air-cooled radiator for them, has been developed. Solar radiation concentrator provides a fairly uniform illumination of the surface of the photovoltaic receiver. The use of silicon high-voltage matrix
photovoltaic converters makes it possible to increase the electrical efficiency in the conversion of concentrated solar radiation even when they are overheated. Excessive heat is removed from the photovoltaic converters due to the entire area of the solar radiation concentrator. The use of a two-component polysiloxane compound in the manufacturing process of the photovoltaic receiver allows increasing the term of the rated power of the photovoltaic converters in the concentrated solar radiation flux. Concentrators of solar radiation of the solar modules can be squared to optimally fill the frame of the system for constant tracking the position of the Sun, and be manufactured using various methods and manufacturing technologies. Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.
V. Panchenko and A. Kovalev
Modeling of Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module

Vladimir Panchenko 1,2, Sergey Chirskiy 3, Andrey Kovalev 2, and Anirban Banik 4

1 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]
2 Federal Scientific Agroengineering Center VIM, 1st Institutsky passage 5, 109428 Moscow, Russia, [email protected]
3 Bauman Moscow State Technical University, 2nd Baumanskaya st. 5, 105005 Moscow, Russia, [email protected]
4 Department of Civil Engineering, National Institute of Technology Agartala, Jirania 799046, Tripura (W), India, [email protected]
Abstract. The paper describes the modeling of the thermal state and the visualization of the operating mode of the bilateral photoreceiver of the concentrator photovoltaic thermal solar module. Based on the simulation results, the geometric parameters of the components of the photoreceiver of the solar module are substantiated and selected. The developed design will provide high electrical and thermal efficiency of the solar module. Concentrator photovoltaic thermal solar modules of this type can operate autonomously or in parallel with the existing power grid to supply consumers.

Keywords: Solar energy · Solar concentrator · High-voltage matrix photovoltaic converters · Photovoltaic thermal solar module · Power supply
1 Introduction

Recently, solar energy converters have been developing at a pace that outstrips that of converters of other renewable energy sources [1–6]. Along with raising the efficiency of photovoltaic converters by improving their manufacturing technology [7–9], the cogeneration approach is also used to increase the overall efficiency of the solar module: along with electricity, the consumer receives thermal energy. Such solar modules can be of either planar [10] or concentrator [11] design; in the latter, various concentrators and reflectors of solar energy are used. Such devices must be manufactured with high accuracy, and their design is an important and difficult task [12]. Concentrator thermal photovoltaic solar plants generate electrical and thermal energy, and the temperature of the coolant can reach

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 73–83, 2021. https://doi.org/10.1007/978-3-030-68154-8_8
higher temperatures compared to planar photovoltaic thermal modules. The design of such modules is complicated by the need to simulate the thermal state of the photoreceiver located in the focus of the concentrator, since at high temperature the electrical efficiency of the photovoltaic converters decreases. Along with the design and manufacture of solar concentrators, an important role is played by the design and simulation of the photoreceivers of solar modules, in both planar and concentrator designs [13]. In addition to a method for designing such modules, methods for modeling and visualizing the processes occurring in the photoreceivers of such solar photovoltaic thermal modules are also necessary.
2 Three-Dimensional Modeling of Bilateral Photoreceivers of Photovoltaic Thermal Solar Modules

The developed method for creating three-dimensional models of photoreceivers of solar photovoltaic thermal modules makes it possible to create photoreceivers with unilateral and bilateral photovoltaic converters, as well as with front, rear and two-sided heat removal (Fig. 1).
Fig. 1. Components of solar thermal photovoltaic modules of various designs and the model of the bilateral photoreceiver “Model 4”
Bilateral photovoltaic converters make it possible to create concentrator photovoltaic thermal solar modules in which the photovoltaic converters are illuminated from two sides; the number of photovoltaic converters required is thus halved, and given that the solar concentrator is also part of the installation, the savings are even more significant compared with planar solar modules. Two-sided heat removal cools the photovoltaic converters more efficiently, and the recovered heat increases the overall efficiency of the solar module. When creating a photoreceiver with bilateral photovoltaic converters and two-sided heat removal, the components of “Model 4” are used (Fig. 1), which consists of 7 different components. In concentrator solar photovoltaic thermal modules it is advisable to use not standard silicon planar photovoltaic converters but silicon high-voltage matrix ones, which were originally developed for concentrator modules. The efficiency of high-voltage matrix photovoltaic converters increases when they operate in a concentrated solar flux, and remains high even at solar radiation concentrations of more than 100 times. Moreover, the high electrical efficiency of matrix photovoltaic converters is maintained with increasing temperature, whereas the efficiency of planar photovoltaic converters decreases significantly as temperature rises. Silicon photovoltaic converters of this kind are used in “Model 4” as the bilateral electrogenerating component. By scaling the length of the high-voltage matrix photovoltaic converters, higher module voltages can be achieved, and an increase in the concentration of solar radiation proportionally increases the electric current and electric power of the entire module.
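This proportional scaling can be illustrated with a minimal sketch; the cell pitch, per-cell voltage, one-sun current and concentration below are hypothetical illustrative values, not taken from the paper.

```python
# Hypothetical proportional model of a high-voltage matrix photovoltaic
# converter: voltage scales with converter length (series-connected
# microcells), current scales with the solar radiation concentration.

def module_output(cells_per_mm: float, length_mm: float,
                  cell_voltage: float, current_1sun: float,
                  concentration: float) -> tuple[float, float]:
    """Return (voltage, power) for the simple proportional model."""
    voltage = cells_per_mm * length_mm * cell_voltage
    current = current_1sun * concentration
    return voltage, voltage * current

# Example: 100 mm converter, 2 microcells/mm at 0.5 V each,
# 0.05 A at one sun, concentrated 10 times.
v, p = module_output(cells_per_mm=2, length_mm=100, cell_voltage=0.5,
                     current_1sun=0.05, concentration=10)
```

Doubling the converter length doubles the voltage in this model, while doubling the concentration doubles the current and hence the power at fixed voltage.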
3 Modeling and Visualization of the Thermal State of the Bilateral Photoreceiver of the Concentrator Photovoltaic Thermal Module

After creating a three-dimensional model of the photoreceiver of the solar photovoltaic thermal module in the computer-aided design system, it is advisable to study this model in the Ansys finite element analysis system in order to determine its thermal state. From the results of the thermal-state modeling, a conclusion can be drawn about the operating mode of the photoreceiver, after which layer-by-layer optimization of the model components can be performed to achieve maximum overall module efficiency (maximum thermal efficiency or maximum electrical efficiency). Modeling according to the developed method makes it possible to determine the temperature fields of the model and to visualize the coolant flow [14]. As an example of modeling the thermal state of the bilateral photoreceiver of the concentrator photovoltaic module, we consider a receiver with matrix photovoltaic converters located in the focus of the concentrator and cooled by nitrogen (Fig. 2). The model of the photoreceiver in the Ansys finite element analysis system is presented in Fig. 2. Owing to its symmetry, half of the photoreceiver is modeled to accelerate the computation in the finite element analysis system.
Fig. 2. Model of the bilateral photoreceiver and coolant movement
For calculation in the finite element analysis system, the photoreceiver model was divided into finite elements and broken down into components representing the various physical bodies. The following parameters and module characteristics were adopted for modeling the thermal state: module length 600 mm; thermophysical properties: silicon – thermal conductivity 120 W/(m·K), heat capacity 800 J/(kg·K), density 2300 kg/m³; glass – thermal conductivity 0.46 W/(m·K), heat capacity 800 J/(kg·K), density 2400 kg/m³; rubber – thermal conductivity 0.15 W/(m·K), heat capacity 1800 J/(kg·K), density 1200 kg/m³; nitrogen (at atmospheric pressure 101300 Pa and 20 °C) – dynamic viscosity 17·10⁻⁶ Pa·s, thermal conductivity 0.026 W/(m·K), heat capacity at constant pressure 1041 J/(kg·K), density 1.182 kg/m³. At the next stage of modeling, the following parameters were adopted: specific heat release on the photovoltaic converter 2 W/cm²; gas temperature at the inlet 15 °C; gas inlet speed 10 m/s; gas pressure 10 atm; total gas flow: mass 0.066 kg/s, volumetric at operating pressure and temperature 5.50 l/s, volumetric at atmospheric pressure and 20 °C 56.12 l/s. The thermal state of the photovoltaic converter is shown in Fig. 3 on the left; the temperature of the coolant and other structural components is shown in Fig. 3 on the right.
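The quoted volumetric flow rates follow from the mass flow and an ideal-gas rescaling of the quoted nitrogen density; a short sanity-check sketch (the ideal-gas assumption is ours, and small differences from the paper's 56.12 l/s come from rounding of the gas density):

```python
# Sanity-check of the quoted nitrogen flow rates via the ideal-gas law.
P_ATM = 101_300.0      # Pa, atmospheric pressure quoted in the text
RHO_ATM_20C = 1.182    # kg/m^3, nitrogen density quoted in the text
m_dot = 0.066          # kg/s, total mass flow of the coolant

def density(p_pa: float, t_c: float) -> float:
    """Ideal-gas rescaling of the reference density to (p, T)."""
    return RHO_ATM_20C * (p_pa / P_ATM) * ((20 + 273.15) / (t_c + 273.15))

q_op = m_dot / density(10 * P_ATM, 15) * 1000   # l/s at 10 atm, 15 C
q_atm = m_dot / density(P_ATM, 20) * 1000       # l/s at 1 atm, 20 C
```

The computed values land close to the 5.50 l/s (operating conditions) and 56.12 l/s (atmospheric conditions) stated in the text.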
Fig. 3. Thermal state of the photovoltaic converter (on the left) and temperature of the coolant and other structural components (on the right)
The temperature of the silicon photovoltaic converter ranged from 44 °C at the edges of the cell to 53 °C in the center. The temperature of the coolant in the central section of the model ranged from 15 °C (no different from the inlet coolant temperature) to 34 °C, which indicates heating of the coolant layer adjacent to the photovoltaic converter and removal of heat from it. By varying parameters such
as the concentration value (specific heat release on the photovoltaic converter), the heat carrier velocity and the inlet heat carrier temperature, the required temperatures of the photovoltaic converter and heat carrier can be obtained. Thanks to the developed modeling method [14], a study was made of the bilateral photoreceiver of the solar photovoltaic thermal module (“Model 4”), as a result of which temperature distributions, flows and coolant velocities were obtained, making it possible to analyze and optimize the layer-by-layer structure of the model. The high-voltage bilateral matrix photovoltaic converters are located in the focal region of the concentrator of the module. The bilateral photovoltaic thermal photoreceiver located in the focus of the solar radiation concentrator was simulated. It consists of 7 different components (Fig. 1), whose layered structure is presented in Table 1. Modeling the thermal state in a finite element analysis system, with an array of temperatures as output, gives a more detailed three-dimensional picture of the thermal state of the module than two-dimensional analytical modeling and allows the thermal state of all components to be compared, together and separately, for different parameters.

Table 1. Parameters of the components of the bilateral photoreceiver of the solar photovoltaic thermal module (“Model 4”)

| Component | Thickness, mm | Material | Therm. cond., W/(m·K) | Density, kg/m³ | Dyn. visc., Pa·s | Kin. visc., m²/s | Heat cap., J/(kg·K) | Therm. exp. coeff., 1/K |
|---|---|---|---|---|---|---|---|---|
| 1. Bilateral electrogenerating | 0.2 | Silicon | 148 | 2330 | – | – | 714 | 2.54·10⁻⁶ |
| 2. Transparent sealing | 0.2 | Polysiloxane | 0.167 | 950 | – | – | 1175 | 100·10⁻⁶ |
| 3. Transparent insulating | 0.1 | Polyethylene | 0.14 | 1330 | – | – | 1030 | 60·10⁻⁶ |
| 4. Transparent heat removal | 5 | Water | 0.569 | 1000 | 1788·10⁻⁶ | 1.78·10⁻⁶ | 4182 | – |
| 5. Transparent insulating heat sink | 0.3 | Polyethylene | 0.14 | 1330 | – | – | 1030 | 60·10⁻⁶ |
| 6. Transparent heat-insulating | 3; 5; 7 | Air | 0.0244 | 1.293 | 17.2·10⁻⁶ | 13.2·10⁻⁶ | 1005 | – |
| 7. Transparent protective | 4 | Tempered glass (Optiwhite) | 0.937 | 2530 | – | – | 750 | 8.9·10⁻⁶ |
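The layered structure of Table 1 lends itself to a plain data representation for layer-by-layer analysis; a minimal sketch, with property values transcribed from the table (viscosity columns omitted) and an illustrative per-area conduction-resistance helper that is our addition, not a calculation from the paper:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    thickness_mm: float   # air gap uses one of the 3/5/7 mm variants
    material: str
    conductivity: float   # W/(m*K)
    density: float        # kg/m^3
    heat_capacity: float  # J/(kg*K)

# Layer stack of "Model 4", top to bottom, values from Table 1.
MODEL4 = [
    Layer("bilateral electrogenerating", 0.2, "silicon", 148, 2330, 714),
    Layer("transparent sealing", 0.2, "polysiloxane", 0.167, 950, 1175),
    Layer("transparent insulating", 0.1, "polyethylene", 0.14, 1330, 1030),
    Layer("transparent heat removal", 5.0, "water", 0.569, 1000, 4182),
    Layer("transparent insulating heat sink", 0.3, "polyethylene", 0.14, 1330, 1030),
    Layer("transparent heat-insulating", 3.0, "air", 0.0244, 1.293, 1005),
    Layer("transparent protective", 4.0, "tempered glass", 0.937, 2530, 750),
]

# Illustrative: conductive thermal resistance per unit area of the solid
# layers only (fluid layers need convection treatment), K*m^2/W.
r_solid = sum(l.thickness_mm / 1000 / l.conductivity
              for l in MODEL4 if l.material not in ("water", "air"))
```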
Since the model of the photoreceiver is symmetric, the axis of symmetry is the middle of the bilateral electrogenerating component (silicon matrix high-voltage photovoltaic converter) (Fig. 4, above), shown in longitudinal section (Fig. 4, below). The thicknesses of the components in the drawing are preliminary.
Fig. 4. Bilateral electrogenerating component (silicon matrix high-voltage photoelectric converter) (above), model of bilateral module and module drawing shown in section (below)
When modeling the thermal state of the photoreceiver model, the influence on the thermal state of the coolant of the concentration of solar radiation (3, 6 and 9 times), of the transparent heat-insulating component (air gap of 3, 5 and 7 mm), and of the coolant flow rate (0.5, 5 and 50 g/s) is considered. The temperature of the bilateral electrogenerating component (silicon matrix high-voltage photovoltaic converter) should stay below 60 °C, since the electrical efficiency of the photovoltaic converters gradually decreases as they heat up. As a preliminary calculation, a part of the bilateral electrogenerating component is considered: a strip 10 mm wide, conventionally located in the middle part of the photovoltaic thermal photoreceiver, with length equal to that of the entire module. Since the module is symmetrical with respect to the mid-horizontal plane, only its upper half is considered. The flow of the cooling agent is assumed uniform over the cross section of the cooling cavity. Three variants of the module design are considered, differing in the height of the transparent heat-insulating component (air layer thickness): 3 mm, 5 mm and 7 mm (Fig. 5).
Fig. 5. Three variants of the module design (height of the transparent heat-insulating component 3 mm, 5 mm and 7 mm)
The influence of the concentration of solar radiation on the thermal state of the module was studied for three cases: concentration of 3, 6 and 9 times for each side of the module. Taking into account the fraction of solar radiation converted into electrical energy, the heat flux is taken to be 2400 W/m², 4800 W/m² and 7200 W/m², respectively.
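These heat fluxes are consistent with subtracting an electrical yield from the concentrated irradiance. The sketch below reproduces them under two assumptions of ours (1000 W/m² direct irradiance and a 20% electrical conversion fraction, neither stated in the paper) and enumerates the full 3 × 3 × 3 grid of design variants studied:

```python
from itertools import product

E0 = 1000.0   # W/m^2, assumed direct solar irradiance
ETA_EL = 0.2  # assumed fraction of radiation converted to electricity

def heat_flux(concentration: float) -> float:
    """Heat load on the receiver after subtracting the electrical yield."""
    return concentration * E0 * (1 - ETA_EL)

# Full factorial sweep: 3 concentrations x 3 air gaps x 3 coolant flow
# rates = 27 design variants.
variants = [
    {"concentration": c, "gap_mm": g, "flow_g_s": f, "q_w_m2": heat_flux(c)}
    for c, g, f in product((3, 6, 9), (3, 5, 7), (0.5, 5, 50))
]
```

With these assumptions, concentrations of 3, 6 and 9 give exactly the 2400, 4800 and 7200 W/m² used in the paper.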
For the case of water cooling with an inlet temperature of 293 K, the thermal state was simulated for mass flow rates of 0.05 kg/s, 0.005 kg/s and 0.0005 kg/s. It was noted that at a mass flow rate of 0.05 kg/s the water layer (transparent heat removal component) does not warm up through its entire thickness; therefore, the calculation for this flow rate was performed only for the design variant with a heat-insulator thickness of 3 mm. To assess the effect of non-uniformity in the supply of the cooling agent (transparent heat removal component), a solid-state model of a quarter of the photoreceiver was constructed (Fig. 6). A design variant with a transparent heat-insulating component (air layer) thickness of 3 mm is considered. The transparent heat-insulating and transparent heat removal components are divided through their thickness into 10 layers of elements (Fig. 6); the thin components of the model are divided into 3 layers of elements.
Fig. 6. Solid-state model of a quarter of the photoreceiver
The symmetry planes of the module are the mid-vertical and mid-horizontal planes, marked in green in Fig. 7 on the left. The coolant enters and exits through the central holes, marked in green in Fig. 7 on the right. The flow rate of the cooling agent is set at 0.205 kg/s and 0.0205 kg/s, which is equivalent to flow rates of 0.05 kg/s and 0.005 kg/s for a 10 mm wide part of the module.
Fig. 7. Symmetry planes of the model (on the left) and coolant inlet (on the right)
As a result of the simulation, the temperature distributions of the components of the solar module model are obtained, as well as the velocity and flow lines of the coolant (Figs. 8 and 9).
Fig. 8. Temperature distribution of the components of the model from the side of the inlet and outlet of the coolant (above) and the velocity of the coolant from the side of the inlet and outlet of the coolant (below)
Fig. 9. Temperature distribution of the components of the model along the entire length – top view (above) and coolant velocity along the entire length – top view (below)
Based on the temperature distributions, the temperatures of the various components of the model were determined and entered in Table 2 for further optimization of the component thicknesses. The quality of the flow over the transparent radiator (transparent insulating component), the uniformity of the flow lines of the coolant (transparent heat removal component) at various flow rates, the places of overheating of the photovoltaic converters (bilateral electrogenerating component), and underheating or stagnation of the coolant in the module were also analyzed. The presented figures (the result of three-dimensional modeling of the thermal state) carry the information needed to analyze and optimize the thermal state of the module components and the quality of its cooling. As a result of the analysis of the temperature array of the “Model 4” components obtained while modeling the thermal state of the three-dimensional model with the developed method, optimization was carried out according to the criterion of the change in the coolant temperature (heat removal component). With the same change in the main criterion (coolant temperature), the change in the total criterion is calculated. As a result of design optimization, an air gap (transparent heat-insulating component) of 7 mm, a concentration of 6 times and a coolant flow rate of 0.5 g/s for the selected part of the module were chosen. The set of models can be expanded by widening the ranges of the geometric characteristics of the other components and of the coolant flow.
Table 2. Temperature characteristics of the photoreceiver components and component optimization criteria (columns: air gap of the transparent heat-insulating component and mass flow rate of the coolant through the transparent heat removal component)

| Parameter | 3 mm, 0.5 g/s | 3 mm, 5 g/s | 5 mm, 0.5 g/s | 5 mm, 5 g/s | 7 mm, 0.5 g/s | 7 mm, 5 g/s |
|---|---|---|---|---|---|---|
| Glass temperature at the coolant inlet (transparent protective component), °C | 20 | 20 | 20 | 20 | 20 | 20 |
| Glass temperature at the coolant outlet (transparent protective component), °C | 29 | 20 | 29 | 20 | 29 | 20 |
| Change in criterion, % (third-category priority – optimum minimum) | 45 | 0 | 45 | 0 | 45 | 0 |
| Air temperature in the gap at the coolant inlet (transparent heat-insulating component), °C | 20 | 20 | 20 | 20 | 20 | 20 |
| Air temperature in the gap at the coolant outlet (transparent heat-insulating component), °C | 35 | 20 | 34 | 20 | 34 | 20 |
| Change in criterion, % (fourth-category priority – optimum minimum) | 75 | 0 | 70 | 0 | 70 | 0 |
| Temperature of photovoltaic converters at the coolant inlet (bilateral electrogenerating component), °C | 26 | 26 | 26 | 25 | 26 | 26 |
| Temperature of photovoltaic converters at the coolant outlet (bilateral electrogenerating component), °C | 44 | 32 | 44 | 32 | 44 | 32 |
| Change in criterion, % (second-category priority – optimum minimum) | 69 | 23 | 69 | 28 | 69 | 23 |
| Temperature of the coolant at the outlet of the module (transparent heat removal component), °C | 38 | 22 | 38 | 23 | 38 | 21 |
| Change in criterion, % (first (main) category priority – optimum maximum) | 90 | 10 | 90 | 15 | 90 | 5 |
| Change in the total criterion (maximum priority) | −60.6 | | −57.1 | | −57.1 | |
Optimization of the air gap and the coolant flow rate was carried out according to the criterion of the change in the coolant temperature (first-category priority). The remaining criteria, by which optimization was not performed, were of secondary importance but were prioritized according to their contribution to the heating of the coolant (second category – coefficient 0.9, third – 0.8, fourth – 0.7, fifth – 0.6) (Table 2). For each criterion, Table 2 also indicates whether the optimum is a minimum or a maximum, which is taken into account when summing the secondary criteria: the smaller the change, the better the heating of the coolant.
If a change in the value of a secondary criterion reduces the main criterion (first-category priority – heating of the coolant, against which the suitability of a particular module design is judged), then the magnitude of the change of this criterion enters the sum with a negative sign. With the same change in the main criterion (coolant temperature), the total criterion is calculated, which includes the changes in all secondary criteria; preference is given to the maximum value of the total criterion:

ΣK = 1·K₁ − 0.9·K₂ − 0.8·K₃ − 0.7·K₄ − 0.6·K₅
The total criterion reflects not only the change in the parameter of the main criterion (the change in the coolant temperature) but also the changes in the other criteria (the changes in the temperatures of the remaining components), which plays an important role in the optimization process given their level of contribution to the thermal state of the module and, in particular, to the heating of the coolant.
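Evaluated on the criterion changes listed in Table 2 for the 0.5 g/s columns (where four of the five categories are populated), this weighted sum reproduces the tabulated totals; a short sketch:

```python
# Weighted total criterion: main criterion minus weighted secondary
# criteria (secondary changes that reduce coolant heating enter with a
# negative sign). Coefficients follow the category priorities.
WEIGHTS = (1.0, 0.9, 0.8, 0.7, 0.6)   # first .. fifth category

def total_criterion(k_main: float, *secondary: float) -> float:
    """ΣK = K1 - 0.9*K2 - 0.8*K3 - 0.7*K4 - 0.6*K5 (unused K omitted)."""
    return WEIGHTS[0] * k_main - sum(
        w * k for w, k in zip(WEIGHTS[1:], secondary))

# Criterion changes (%) per air-gap variant at 0.5 g/s, from Table 2:
# (K1 coolant, K2 PV converters, K3 glass, K4 air gap)
table2 = {3: (90, 69, 45, 75), 5: (90, 69, 45, 70), 7: (90, 69, 45, 70)}
totals = {gap: round(total_criterion(*k), 1) for gap, k in table2.items()}
```

The result matches the last row of Table 2: −60.6 for the 3 mm gap and −57.1 for the 5 mm and 7 mm gaps.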
4 Conclusion

Using the developed method for designing solar photovoltaic thermal modules, the designer can develop solar cogeneration modules of various designs, which can then be tested in the finite element analysis system using the developed method for modeling and visualizing thermal processes. As a result of the optimization of the geometric parameters of the bilateral photoreceiver of a concentrator solar photovoltaic thermal module, a module design is proposed that allows high overall efficiency to be obtained: the photovoltaic converters do not overheat while the coolant heats up to high temperatures. Solar photovoltaic thermal modules of this kind can serve as cogeneration modules for the simultaneous generation of electric and thermal energy for the consumer's own needs, operating both autonomously and in parallel with existing energy networks.

Acknowledgment. The research was carried out on the basis of financing of the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, of the grant “Young lecturer of RUT” of the Russian University of Transport, and of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.
References
1. Buonomano, A., Calise, F., Vicidomini, M.: Design, simulation and experimental investigation of a solar system based on PV panels and PVT collectors. Energies 9, 497 (2016)
2. Ibrahim, A., Othman, M.Y., Ruslan, M.H., Mat, S., Sopian, K.: Recent advances in flat plate photovoltaic/thermal (PV/T) solar collectors. Renew. Sustain. Energy Rev. 15, 352–365 (2011)
3. Kemmoku, Y., Araki, K., Oke, S.: Long-term performance estimation of a 500X concentrator photovoltaic system. In: 30th ISES Biennial Solar World Congress 2011, pp. 710–716 (2011)
4. Nesterenkov, P., Kharchenko, V.: Thermo physical principles of cogeneration technology with concentration of solar radiation. Adv. Intell. Syst. Comput. 866, 117–128 (2019). https://doi.org/10.1007/978-3-030-00979-3_12
5. Rawat, P., Debbarma, M., Mehrotra, S., et al.: Design, development and experimental investigation of solar photovoltaic/thermal (PV/T) water collector system. Int. J. Sci. Environ. Technol. 3(3), 1173–1183 (2014)
6. Sevela, P., Olesen, B.W.: Development and benefits of using PVT compared to PV. Sustain. Build. Technol. 4, 90–97 (2013)
7. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-030-00979-3_34
8. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317, 012002 (2019). https://doi.org/10.1088/1755-1315/317/1/012002
9. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
10. Panchenko, V.A.: Solar roof panels for electric and thermal generation. Appl. Solar Energy 54(5), 350–353 (2018). https://doi.org/10.3103/S0003701X18050146
11. Kharchenko, V., Panchenko, V., Tikhonov, P., Vasant, P.: Cogenerative PV thermal modules of different design for autonomous heat and electricity supply. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 86–119. https://doi.org/10.4018/978-1-5225-3867-7.ch004
12. Sinitsyn, S., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of parquetting of the concentrator of photovoltaic thermal module. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 160–169 (2020). https://doi.org/10.1007/978-3-030-33585-4_16
13. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116 (2019). https://doi.org/10.1007/978-3-030-00979-3_11
14. Panchenko, V., Chirskiy, S., Kharchenko, V.V.: Application of the software system of finite element analysis for the simulation and design optimization of solar photovoltaic thermal modules. In: Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering, pp. 106–131. https://doi.org/10.4018/978-1-7998-1216-6.ch005
Formation of Surface of the Paraboloid Type Concentrator of Solar Radiation by the Method of Orthogonal Parquetting

Vladimir Panchenko 1,2 and Sergey Sinitsyn 1

1 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]
2 Federal Scientific Agroengineering Center VIM, 1st Institutskiy passage 5, 109428 Moscow, Russia
Abstract. The paper discusses the geometric aspects of designing the surface of solar radiation concentrators by composing the surface from single elements. The proposed method of orthogonal parquetting makes it possible to optimize the manufacturing accuracy of a solar radiation concentrator and the smoothness of its working profile. The geometric characteristics of paraboloid type concentrators are considered. The article also discusses various methods of manufacturing solar radiation concentrators, including the fan-surface parquetting method, which is related to the orthogonal parquetting method. The number of elements used in the developed method is optimized either toward the least number of curved components of the concentrator or toward the maximum number of flat elements. As examples of the implementation of the developed method, two solar radiation concentrators designed for photovoltaic and photovoltaic thermal photoreceivers are presented. The concentrators provide uniform illumination in the focal areas.

Keywords: Solar energy · Concentrator of solar radiation · Paraboloid · Optimization · Orthogonal parquetting · Concentrator solar module
1 Introduction

Solar plants are the fastest growing stations based on renewable energy converters [1–5]. Most solar stations consist of silicon planar photovoltaic modules that generate exclusively electrical energy. To increase the overall efficiency of solar modules, reduce their nominal power and shorten the payback period, photovoltaic thermal solar modules are relevant: along with electrical energy, they also generate thermal energy in the form of a heated coolant. Such photovoltaic thermal solar modules are divided by the type of construction used: planar [6–9] and concentrator (with different concentrators of solar radiation) [10–12]. Concentrator photovoltaic thermal modules can work with bilateral photovoltaic converters, including silicon matrix high-voltage ones [4, 5, 12] (which reduces the number of photovoltaic converters used), and can deliver a higher-temperature coolant at the output, which increases the overall efficiency of the solar installation; the concentrators of such installations therefore require more complex calculations, related both to the development of a profile for uniform

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 84–94, 2021. https://doi.org/10.1007/978-3-030-68154-8_9
illumination of the focal region, where the photovoltaic converters are located, and to methods of manufacturing it according to geometrically specified requirements. Methods for calculating the thermal state of concentrator solar photovoltaic thermal modules are also necessary [13], since as the temperature of the photovoltaic converters increases, their electrical efficiency decreases [14], which implies the need for effective cooling. The main type of concentrator for concentrator photovoltaic thermal installations is the paraboloid type solar concentrator; methods for calculating its working profile and for manufacturing it are therefore relevant and important research topics.
2 Paraboloid Type Concentrators of the Solar Radiation

An ideal parabolic concentrator of solar radiation (Fig. 1) focuses parallel sunrays to a point, which corresponds to an infinite degree of concentration [15]. This does not allow evaluating the capabilities of a real concentrator of solar radiation, since the Sun has finite dimensions.
Fig. 1. Scheme of the formation of the focal region of the paraboloid concentrator of solar radiation
Figure 1 shows the diagram of the formation of the focal spot of the paraboloid concentrator of solar radiation: an elementary sunray with an angular size 2φ₀ (φ₀ = 0.004654 rad) is reflected from the surface of the concentrator and falls on the focal plane, where the trace of this ray is an elementary ellipse with the semi-axes a = p·φ₀ / ((1 + cos φ)·cos φ) and b = p·φ₀ / (1 + cos φ), where p = 2f is the focal parameter of the parabola and f is the focal length [15].
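As an illustration, the semi-axes of the elementary ellipse can be evaluated numerically from these relations. This is a minimal sketch; the focal length and zone angle below are illustrative assumptions, not data from the paper:

```python
import math

PHI0 = 0.004654  # angular radius of the Sun (half the angular size of the ray), rad

def ellipse_semi_axes(f: float, phi: float, phi0: float = PHI0):
    """Semi-axes of the trace of an elementary reflected ray in the focal plane:
    a = p*phi0 / ((1 + cos(phi)) * cos(phi)), b = p*phi0 / (1 + cos(phi)),
    where p = 2*f is the focal parameter of the parabola."""
    p = 2.0 * f
    a = p * phi0 / ((1.0 + math.cos(phi)) * math.cos(phi))
    b = p * phi0 / (1.0 + math.cos(phi))
    return a, b

# Example: f = 0.5 m, zone angle phi = 45 degrees (illustrative values)
a, b = ellipse_semi_axes(0.5, math.radians(45.0))
print(f"a = {a * 1000:.2f} mm, b = {b * 1000:.2f} mm")
```

As the formulas show, a exceeds b by the factor 1/cos φ, so the ellipses from the outer radial zones are more elongated.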
V. Panchenko and S. Sinitsyn
From different radial zones of the concentrator of solar radiation (with different angles φ), the ellipses have different sizes; overlapping each other, they form the density of the focal radiation. An approximate estimate of the maximum radiation density in the focus is given by the formula [15]:

E_F = ρ · (sin²φ_m / φ₀²) · E₀,   (1)

where ρ is the reflection coefficient of the concentrator of solar radiation; φ₀ is the opening angle of the elementary sunray; φ_m is the largest opening angle of the paraboloid to the side; E₀ is the density of the solar radiation.

The imperfection of the reflecting surface of the concentrator of solar radiation leads to blurring of the spot due to the mismatch of the centers of the elementary ellipses according to a random law. The focal spot illumination is best described by the normal Gaussian distribution curve [15]:

E_T = E_max · e^(−c·r²),   (2)

E_max = (180/π)² · E₀ · ρ · h² · sin²φ,   (3)

c = 3.283·10³ · [2h·(1 + cos φ)² / p]²,   (4)
where r is the radius in the focal plane; h is the measure of the accuracy of the concentrator of solar radiation.

In the calculations, the focal length f and the half-opening angle φ_m are considered known, whence, using the representation of the parabola equation in the polar coordinate system, the diameter of the concentrator of solar radiation is obtained [15]:

D = 4f · sin φ_m / (1 + cos φ_m).   (5)
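Formulas (1) and (5) are straightforward to evaluate numerically. A short sketch follows; the input values of the reflectance ρ, the irradiance E₀ and the focal length f are illustrative assumptions, while φ_m = 45° is the optimum cited below from [16]:

```python
import math

PHI0 = 0.004654  # opening angle of the elementary sunray (angular radius of the Sun), rad

def paraboloid_diameter(f: float, phi_m: float) -> float:
    """Eq. (5): D = 4*f*sin(phi_m) / (1 + cos(phi_m))."""
    return 4.0 * f * math.sin(phi_m) / (1.0 + math.cos(phi_m))

def max_focal_density(rho: float, phi_m: float, e0: float, phi0: float = PHI0) -> float:
    """Eq. (1): approximate maximum density E_F = rho * sin^2(phi_m) / phi0^2 * E0."""
    return rho * math.sin(phi_m) ** 2 / phi0 ** 2 * e0

f = 0.5                       # focal length, m (illustrative)
phi_m = math.radians(45.0)    # half-opening angle at which concentration is maximal [16]
rho, e0 = 0.85, 800.0         # reflectance and direct irradiance, W/m^2 (illustrative)

D = paraboloid_diameter(f, phi_m)
E_F = max_focal_density(rho, phi_m, e0)
print(f"D = {D:.3f} m, E_F = {E_F:.3e} W/m^2")
```

The tiny angular radius of the Sun in the denominator of (1) is what drives the theoretical concentration into the thousands.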
The optimal value of φ_m, at which the average coefficient of concentration of solar radiation is maximal, equals 45° at an achieved concentration K_max of 11300 [16], and the density of solar radiation at the focus of the ideal concentrator takes the form [17]:

E_f^id = R_s · (1.2·sin²φ_m / φ₀²) · E₀,   (6)

where R_s is the integral transmittance of the system; φ₀ is the angular radius of the Sun, equal to 0.004654 rad; E₀ is the density of direct solar radiation.

The curves of the distribution of concentration of solar radiation in the focal plane for four concentrators of solar radiation (spherical, quasiparabolic and parabolotoric) with the same diameters and focal lengths (500 mm each) are shown in Fig. 2 [18].
Fig. 2. Distribution of concentration of solar radiation in the focal plane of spherical, quasiparabolic and parabolotoric concentrators of solar radiation
Solar radiation concentrators whose concentration distributions are shown in Fig. 2 can be successfully used in conjunction with photoreceivers of solar radiation; however, a uniform distribution of concentrated solar radiation over the focal region is necessary for solar photovoltaic converters. Therefore, when designing the working profile of paraboloid type concentrators, considerable attention must be paid to the distribution of illumination in the focal region of the concentrator. For the manufacture of paraboloid type solar radiation concentrators, the following methods are mainly used [15]: the centrifugal method; the method of electroforming; the method of glass bending; the manufacture of a reflective surface from flat curved mirrors. Along with the methods considered, parabolic concentrators of low manufacturing accuracy for cooking and heating water are manufactured in southern, low-income countries; they are presented in Fig. 3 [19–21]. Of great interest is the fan-shaped method of parqueting of the surface of the paraboloid type concentrator of solar radiation [22], with the help of which the concentrators shown in Fig. 3 on the right are manufactured.
Fig. 3. Paraboloid type solar radiation concentrators, the base of which is made of reed, guide ribs, satellite dishes and using the fan-shaped method of parqueting of surface
The term “parqueting” refers to the task of approximation of surfaces, which allows simplifying the technological process of their production while respecting the differential-geometric characteristics of the finished products. The task of selecting the shape of the parquet elements of the shell and their dimensions is solved in two ways: by dividing the structure into small flat elements or by dividing the shell into elements of a curvilinear outline. The parquet elements considered are figures whose planned projections are bounded by straight lines.
3 Method of the Orthogonal Parqueting of Surface of the Paraboloid Type Concentrator of the Solar Radiation

As the initial data for parqueting, a planned projection of the surface of the paraboloid is set (Fig. 4). The method for solving the task is based on the breakdown of this projection into elements, with the subsequent finding on the surface of the third coordinates corresponding to the planned breakdown.
Fig. 4. The scheme of orthogonal parqueting of the surface of the paraboloid of the concentrator
According to the accepted classification of parquet types: a) the planned projection of the elements has the form of closed n-gons with the inclusion of circular arcs; b) the internal and external surfaces are arranged equidistantly [22].
When solving, it is taken into account that the mathematical model of the inner surface is given by the equation x² + y² = 2pz. The surface of the paraboloid S is dissected by one-parameter families of projecting planes (Fig. 5). In this case, the planned projection is divided by a network whose cell shape depends on the relative position of the projecting planes. It is most advisable to split the surface into elements whose planned shape is a rectangle or an arbitrary quadrangle.
Fig. 5. The sectional diagram of the surface of the paraboloid by families of projecting planes
The cell parameters of the surface partition network are determined taking into account the accuracy of the surface approximation (Δn ≤ Δ̄n), where the numerical value of the limiting error parameter is Δ̄n = 0.15%. If the surface of the paraboloid is represented by an approximating set of elements Si having the shape of equal squares in plan (Fig. 4), then their angles on the surface S correspond to the set of points of the skeleton, approximately taken as regular. To determine the side of the square of the unit cell of the partition, a square with side b is inscribed in the circle of the trace of the paraboloid on the plane X0Y (Fig. 4). The parameter b is calculated from the area of a circle of radius r:

b = 1.4·√(S_cir / π).   (7)

The constructed square S is the trace of the considered part of the surface of the paraboloid.
The calculation is based on the relation:

Δ̄n = 4·√( exp{2·[ln S − ln((1 + N_sm)·N_sm·(√N − 1)·C_Nsm)]} + 2Δ_a² ).   (8)
For the regular point skeleton, the dimension parameter N is calculated as

N = f(S, N_sm, Δ̄n, C_Nsm) at Δ_a = 0,   (9)
where N_sm = 1 is the smoothness order of the approximating contour of the surface composed of parquet elements; Δ̄n = 0.15%; C_Nsm is a statistical parameter. A point frame of dimension N = 680 is placed in the grid nodes of the square with side b, including the segments of its sides, so the total number of unit cells of the partition is determined by the formula:

N′ = (√N − 1)² = 625.   (10)
So, for example, for an area S = 250 m² of a piece of the surface of the paraboloid having a square trace on the plane X0Y, one cell has

S_i = S / N′ = 250 / 625 = 0.4 (m²);

therefore, the square cell of the partition of the planned projection has a side l_i equal to

l_i = √S_i = √0.4 ≈ 0.63 (m).

Given the approximations, we finally obtain l_i′ = 0.9·l_i ≈ 0.57 (m). So, the surface of the paraboloid is dissected by a family of horizontally projecting planes k_i (i = 1, 2, …, k + 1) perpendicular to a straight line a selected on the plane X0Y, with a distance between the planes of l_i = 0.57 m. The line a is defined on the plane by two parameters: the angle α and the distance R from the origin of coordinates. The set of elements located between adjacent planes is conventionally called a row or a belt. Of further interest are the cells that partially or fully cover the surface projection. Technological information about the nodal points of all elements of the parquet, including those located outside the “square b”, is presented in the form of a matrix:

      | y¹_min, y¹_max, XN_1,1, XK_1,1, …, XN_1,K1, XK_1,K1 |
‖S‖ = | y²_min, y²_max, XN_2,1, XK_2,1, …, XN_2,K2, XK_2,K2 |   (11)
      | yⁿ_min, yⁿ_max, XN_n,1, XK_n,1, …, XN_n,Kn, XK_n,Kn |
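The numerical chain of this section (a point frame of N = 680 nodes, 625 cells, a corrected cell side of 0.57 m) can be reproduced directly. A sketch under the paper's assumptions; the rounding of √N down to an integer is our assumption about how (10) is applied:

```python
import math

N_frame = 680          # dimension of the point frame placed in the grid nodes
S = 250.0              # area of the square trace of the surface piece, m^2

# Eq. (10): number of unit cells of the partition (sqrt(N) rounded down, an assumption)
n_cells = (math.floor(math.sqrt(N_frame)) - 1) ** 2
# Area and side of one planned square cell
S_i = S / n_cells
l_i = math.sqrt(S_i)
# Corrected side accounting for the approximations (factor 0.9 from the text)
l_i_corr = 0.9 * l_i

print(n_cells, round(S_i, 2), round(l_i, 2), round(l_i_corr, 2))
```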
where K_j is the number of elements in the j-th row (Fig. 4); n is the number of rows; XN and XK are the designations of the X1 coordinates of the boundaries of the planned projection of a parquet element. So, for example, the selected element of the plan projection S_i (Fig. 4) is located in the i-th row with the borders y^i_min, y^i_max and in the j-th column with the X1 coordinates of the borders XN_j,i and XK_j,i. Given the equal width of all belts, y_{j+1} = y_j + l_i, and without taking into account the gaps between the elements of the parquet, the matrix (11) is simplified and takes the form:

      | y_1, XN_1,1, XN_1,2, …, XK_1,K1 |
‖S‖ = | y_2, XN_2,1, XN_2,2, …, XK_2,K2 |   (12)
      | y_n, XN_n,1, XN_n,2, …, XK_n,Kn |

Based on the approximated surface model x² + y² = 2pz, the third coordinates of the corner points of the outer surface of the parquet elements are calculated; taking them into account, a complete information model of the parquet is formed, which is used in the preparation of control programs for the technological cycle of manufacturing the parquet elements.
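The information model (12) together with the surface equation x² + y² = 2pz can be sketched in code: each belt stores its y coordinate and the X boundaries of its elements, and the third coordinate of every corner point is lifted onto the paraboloid. The belt and boundary numbers below are hypothetical sample data, not values from the paper:

```python
# Sketch of the simplified parquet matrix (12) and the lifting of corner
# points onto the inner surface x^2 + y^2 = 2*p*z, i.e. z = (x^2 + y^2)/(2*p).

p = 2.0 * 0.5  # focal parameter p = 2*f for an assumed focal length f = 0.5 m

def z_on_paraboloid(x: float, y: float, p: float) -> float:
    """Third coordinate of a planned point (x, y) on the paraboloid."""
    return (x * x + y * y) / (2.0 * p)

# Matrix (12): one row per belt: [y_j, XN_j,1, XK_j,1, XN_j,2, XK_j,2, ...]
parquet = [
    [0.00, -0.57, 0.00, 0.00, 0.57],
    [0.57, -0.57, 0.00, 0.00, 0.57],
]

# Corner points of each element with their lifted z coordinates
for row in parquet:
    y, bounds = row[0], row[1:]
    corners = [(x, y, round(z_on_paraboloid(x, y, p), 4)) for x in bounds]
    print(corners)
```

In a real workflow these lifted corner coordinates would feed the control programs for manufacturing the individual parquet elements, as the text describes.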
4 Paraboloid Type Concentrators of the Solar Radiation for Various Photoreceivers

The developed method for calculating the surface of a paraboloid profile using orthogonal parqueting is applicable to the manufacture of the working surface of paraboloid type solar radiation concentrators. Such concentrators are used in solar modules, where photovoltaic converters (including high-voltage matrix ones), thermal photoreceivers and also combined photovoltaic thermal photoreceivers can serve as receivers of the concentrated solar radiation. In concentrator solar photovoltaic thermal modules, along with electrical energy, the consumer receives thermal energy in the form of a heated coolant at the output. The profiles of such paraboloid type solar radiation concentrators, providing uniform illumination in the focal region on the surface of the photoreceivers, are presented in Fig. 6 on the left; the upper concentrator is intended for photovoltaic converters located on the solar concentrator itself, which serves as an air-cooled radiator for the photovoltaic converters. The lower part of Fig. 6 shows the profile of a paraboloid type solar radiation concentrator that provides uniform illumination in the focal region of a photovoltaic thermal cylindrical photoreceiver, in which the side surface converts concentrated solar radiation into electrical energy and the upper end surface converts it into thermal energy.
Fig. 6. Profiles of working surfaces of paraboloid type solar radiation concentrators (on the left), partition of their surfaces into unit elements (in the middle) and individual elements of solar radiation concentrators themselves (on the right)
Concentrators of solar radiation can be made of reflective sheet metal. The breakdown of the working surface of the concentrators into individual elements by projecting planes can be performed with cells with a side of 20 mm (Fig. 6 in the middle), or with other sizes depending on the manufacturing requirements, as a result of which the unit cells of the surfaces of the concentrators are formed (Fig. 6 on the right).
5 Conclusion

Thus, it should be noted that the planned development of solar plants will occur at a steadily increasing pace, and the share of hybrid photovoltaic thermal installations will grow, which will significantly increase the need for the design, manufacture and study of solar radiation concentrators, in particular of the paraboloid type. The presented method of orthogonal parqueting of the surface of paraboloid type solar concentrators allows optimizing the manufacturing process of concentrators already at the design stage and makes it possible to obtain almost any working profile of paraboloid type concentrators. The orthogonal scheme of parqueting surfaces of rotation, in particular the paraboloid, is easy to implement in terms of data preparation. The presented method also helps to control and set the manufacturing error at the design stage of such parabolic concentrators in order to ensure the expected characteristics of the focal region of the concentrator. The developed method of parqueting surfaces allows designing and manufacturing surfaces of rotation, including solar radiation concentrators, with specified differential-geometric requirements, which ensures the expected distribution of solar radiation illumination in the focal area. Such solar radiation concentrators provide uniform illumination in the focal region of various types of photoreceivers (photovoltaic, thermal, photovoltaic thermal), which positively affects the efficiency of solar modules and their specific power.
Acknowledgment. The research was carried out on the basis of financing the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, on the basis of funding of the grant “Young lecturer of RUT” of the Russian University of Transport, on the basis of funding of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy, direction of modernization: “Energy efficiency and energy saving, including the development of new fuels”, subject of scientific research: “Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation”.
References
1. Ibrahim, A., Othman, M.Y., Ruslan, M.H., Mat, S., Sopian, K.: Recent advances in flat plate photovoltaic/thermal (PV/T) solar collectors. Renew. Sustain. Energy Rev. 15, 352–365 (2011)
2. Nesterenkov, P., Kharchenko, V.: Thermo physical principles of cogeneration technology with concentration of solar radiation. Adv. Intell. Syst. Comput. 866, 117–128 (2019). https://doi.org/10.1007/978-3-030-00979-3_12
3. Buonomano, A., Calise, F., Vicidimini, M.: Design, simulation and experimental investigation of a solar system based on PV panels and PVT collectors. Energies 9, 497 (2016)
4. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser. Earth Environ. Sci. 317 (2017). 9 p. https://doi.org/10.1088/1755-1315/317/1/012002
5. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
6. Rawat, P., Debbarma, M., Mehrotra, S., et al.: Design, development and experimental investigation of solar photovoltaic/thermal (PV/T) water collector system. Int. J. Sci. Environ. Technol. 3(3), 1173–1183 (2014)
7. Sevela, P., Olesen, B.W.: Development and benefits of using PVT compared to PV. Sustain. Build. Technol. 90–97 (2013)
8. Kharchenko, V., Nikitin, B., Tikhonov, P., Gusarov, V.: Investigation of experimental flat PV thermal module parameters in natural conditions. In: Proceedings of the 5th International Conference TAE 2013, pp. 309–313 (2013)
9. Hosseini, R., Hosseini, N., Khorasanizadeh, H.: An experimental study of combining a photovoltaic system with a heating system. In: World Renewable Energy Congress, pp. 2993–3000 (2011)
10. Kemmoku, Y., Araki, K., Oke, S.: Long-term performance estimation of a 500X concentrator photovoltaic system. In: 30th ISES Biennial Solar World Congress 2011, pp. 710–716 (2011)
11. Rosell, J.I., Vallverdu, X., Lechon, M.A., Ibanez, M.: Design and simulation of a low concentrating photovoltaic/thermal system. Energy Convers. Manage. 46, 3034–3046 (2005)
12. Kharchenko, V., Panchenko, V., Tikhonov, P.V., Vasant, P.: Cogenerative PV thermal modules of different design for autonomous heat and electricity supply. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, pp. 86–119 (2018). https://doi.org/10.4018/978-1-5225-3867-7.ch004
13. Panchenko, V., Kharchenko, V., Vasant, P.: Modeling of solar photovoltaic thermal modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 108–116 (2019). https://doi.org/10.1007/978-3-030-00979-3_11
14. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-030-00979-3_34
15. Strebkov, D.S., Tveryanovich, E.V.: Koncentratory solnechnogo izlucheniya (Concentrators of the Solar Radiation). Moscow, GNU VIESH, pp. 12–30 (2007). (in Russian)
16. Andreev, V.M., Grilikhes, V.A., Rumyantsev, V.D.: Fotoehlektricheskoe preobrazovanie koncentrirovannogo solnechnogo izlucheniya (Photoelectric Conversion of Concentrated Solar Radiation). Leningrad, Nauka (1989). 310 p. (in Russian)
17. Zahidov, R.A., Umarov, G.Y., Weiner, A.A.: Teoriya i raschyot geliotekhnicheskih koncentriruyushchih system (Theory and Calculation of Solar Concentrating Systems). Tashkent, FAN (1977). 144 p. (in Russian)
18. Alimov, A.K., Alavutdinov, J.N., et al.: Opyt sozdaniya koncentratorov dlya modul'nyh fotoehlektricheskih ustanovok (Experience in creating concentrators for modular photovoltaic plants). In: Koncentratory solnechnogo izlucheniya dlya fotoehlektricheskih ustanovok (Concentrators of Solar Radiation for Photovoltaic Plants), pp. 17–18 (1986). (in Russian)
19. Hassen, A.A., Amibe, D.A.: Design, manufacture and experimental investigation of low cost parabolic solar cooker. In: ISES Solar World Congress, 28 August–2 September 2011. 12 p. https://doi.org/10.18086/swc.2011.19.16
20. Chandak, A., Somani, S., Chandak, A.: Development of Prince-40 solar concentrator as do it yourself (DIY) kit. In: ISES Solar World Congress, 28 August–2 September 2011. 8 p. https://doi.org/10.18086/swc.2011.23.02
21. Diz-Bugarin, J.: Design and construction of a low cost offset parabolic solar concentrator for solar cooking in rural areas. In: ISES Solar World Congress, 28 August–2 September 2011. 8 p. https://doi.org/10.18086/swc.2011.30.05
22. Sinitsyn, S., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of parquetting of the concentrator of photovoltaic thermal module. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 160–169 (2020). https://doi.org/10.1007/978-3-030-33585-4_16
Determination of the Efficiency of Photovoltaic Converters Adequate to Solar Radiation by Using Their Spectral Characteristics

Valeriy Kharchenko1, Boris Nikitin1, Vladimir Panchenko2,1, Shavkat Klychev3, and Baba Babaev4

1 Federal Scientific Agroengineering Center VIM, 1st Institutskiy Passage 5, 109428 Moscow, Russia, [email protected]
2 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]
3 Scientific and Technical Center with a Design Bureau and Pilot Production, Academy of Sciences of the Republic of Uzbekistan, 100125 Tashkent, Uzbekistan, [email protected]
4 Dagestan State University, Gadzhieva str. 43-a, 367000 Makhachkala, Russia, [email protected]
Abstract. The paper presents experimental studies of silicon photovoltaic converters of solar radiation. An experimental evaluation of the photoconversion efficiency is considered, the spectral dependences of the current photoresponses of the photoelectric converter are determined, and the spectral densities of the current photoresponses of various photovoltaic converters under an artificial light source are compared. Based on the results obtained, it becomes possible to determine the efficiency of the photovoltaic converter for a given wavelength of monochromatic radiation, as well as to calculate the efficiency of the photovoltaic converter for a given radiation spectrum.

Keywords: Solar energy · Efficiency · Photovoltaic converters · Spectral density · Wavelength · Spectrum of solar radiation · Monochromatic radiation
1 Introduction

The experimental assessment of the spectral values of the efficiency of photovoltaic converters is a very urgent task. From the nature of this dependence it is possible to obtain information on the corresponding values of the efficiency of the studied converter for certain wavelengths of monochromatic radiation, and also to calculate the real value of the photoconversion efficiency for a predetermined radiation spectrum, including the standard solar radiation AM 1.5 [1–11]. The solution of the tasks posed is possible by comparing the spectral (using specially selected light filters) current photoresponses of the reference and studied photoconverters. A reference photoconverter is a semiconductor converter calibrated over the photoactive part of the spectrum for the given standard solar radiation, the calibration being expressed as the spectral density of the short circuit currents of the converter.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 95–102, 2021. https://doi.org/10.1007/978-3-030-68154-8_10
2 Comparison of the Spectral Densities of Current Photoresponses of a Reference Silicon Photovoltaic Converter Under Various Exposure Conditions

A prerequisite for the planned analysis is the adjustment of the equipment used so as to ensure the equality of the integral (without the use of light filters) short circuit currents of the reference converter from standard solar radiation and from the laboratory light source. A comparison of the spectral (with the help of light filters) current photoresponses of the reference converter from the two compared light sources gives an idea of the spectrally averaged (over the bandwidth of the selected filter) power of the monochromatic lines of the laboratory light source. Figure 1 shows the spectral dependences of the current photoresponses of the reference silicon photoconverter when illuminated by standard solar radiation AM 1.5 (according to the rating data for the calibration of this converter) and when illuminated by the laboratory light source (a halogen incandescent lamp).
Fig. 1. Comparison of the spectral densities of current photoresponses of the reference silicon photovoltaic converter when exposed to standard solar radiation AM 1.5 (sunlight exposure) and a laboratory light source (halogen lamp with a water filter h = 42 mm.) for equal integral short circuit currents
From the analysis of these dependences it follows that the spectrum of the laboratory light source is depleted in intensity in its short-wave part but shows increased intensity in the long-wave part. The areas bounded by these curves are equal in size, since their integral values of the short circuit currents are equal.
3 Averaged Spectral Densities of Power of the Photoactive Part of the Spectrum for Silicon

Figure 2 presents the averaged spectral densities of power of the light bands (50 nm wide, in accordance with the calibration of the reference converter) of the photoactive part (for Si) of the standard solar radiation AM 1.5 spectrum. It also shows the numerical values of the power of the light flux of each such band. Figure 2 thus shows the ΔP⁰_λi(AM1.5) values of the standard solar radiation AM 1.5 obtained by dividing the photoactive part of its spectrum for silicon into bands 50 nm wide.
Fig. 2. Averaged spectral densities of power of the bandwidths of 50 nm wide of the standard solar radiation AM 1.5 of the photoactive part of the spectrum for silicon (numerical spectral densities of power of the light bands are shown above the graph and have a dimension of W/m2)
The data in Fig. 1 and Fig. 2 make it possible to determine the power of the light bands of the used laboratory light source for each selected wavelength.
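This comparison amounts to rescaling each AM 1.5 band power by the ratio of the reference converter's photoresponses under the two sources. A minimal sketch; all numeric band values below are illustrative assumptions, not the measured data of the paper:

```python
def lab_band_power(dp_am15: float, i_ref_lab: float, i_ref_am15_cal: float) -> float:
    """Estimate the power density of one 50-nm band of the laboratory source
    from the AM 1.5 band power (Fig. 2) and the ratio of the reference
    converter's spectral photoresponses under the two sources (Fig. 1)."""
    return dp_am15 * i_ref_lab / i_ref_am15_cal

# Illustrative values for one band centered near 0.7 um (assumed, not measured):
dp_am15 = 60.0      # band power density of AM 1.5, W/m^2
i_ref_lab = 0.030   # reference converter response under the halogen lamp, A per band
i_ref_am15 = 0.025  # calibrated response under AM 1.5, A per band

print(f"{lab_band_power(dp_am15, i_ref_lab, i_ref_am15):.1f} W/m^2")
```

A ratio above unity in a given band means the halogen lamp is richer there than AM 1.5, which matches the long-wave enrichment noted for Fig. 1.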
4 Comparison of the Spectral Densities of Current Photoresponses of the Silicon Photovoltaic Converters

Figure 3 shows the experimentally recorded spectral dependences of the current photoresponses of two different photoconverters (the reference photoconverter and the studied one) at the same level and spectrum of illumination from the laboratory light source. From the analysis of the figure it follows that, due to the individual design features and manufacturing technology of the photoconverters, these dependences are not identical.
Comparison of the photoresponses of the converters makes it possible to determine, for the studied photoconverter sample, the current photoresponse per unit power of the light flux for a given wavelength of the spectrum.
Fig. 3. Comparison of the spectral densities of the current photoresponses of the silicon reference and investigated photoconverters under exposure from a laboratory light source
The volt-ampere characteristic of the investigated converter, taken by the traditional method under illumination from the laboratory light source (naturally differing in spectrum from standard solar radiation), contains important data for further calculations: 1) the integral short circuit current of the converter, I₀, which can be represented as the sum of the contributions of all monochromatic lines of the laboratory light source (the same can be said about the standard solar radiation, since the short circuit currents are equal in accordance with the above); 2) the maximum power of the photoelectric converter, denoted as P_max.
5 Dependence of the Efficiency of the Silicon Photovoltaic Converter Versus the Wavelength

According to the well-known definition, the efficiency of the photovoltaic converter is equal to the ratio of the useful power to the total consumed power of the light flux. In our case, the useful power of the photoconverter should be taken as the maximum power drawn from the converter according to its volt-ampere characteristic.
It should be noted that the nature (form) of the volt-ampere characteristic of the converter should not change if the light fluxes of some lines are mentally replaced by fluxes of other lines equivalent in current photoresponse; therefore, the position of the operating point on the volt-ampere characteristic will not change. This circumstance is due to the fact that mobile charge carriers (primarily electrons) that acquire excess kinetic energy by the absorption of high-energy photons almost instantly thermalize, according to the thesis described in [3], which makes the mobile charge carriers indistinguishable by the history of their origin. In accordance with the foregoing, the power of monochromatic radiation with a wavelength λi generating the total short circuit current I₀ is determined by the expression:

P_λi = (ΔP⁰_λi(AM1.5) · S_PVC · i⁰_λi(lab.ref.) · I₀) / (i⁰_λi(AM1.5 cal.) · i⁰_λi(lab.PVC) · Δλ_cal.),   (1)

where ΔP⁰_λi(AM1.5) is the density of power of the light fluxes of standard solar radiation AM 1.5 with a wavelength λi and a bandwidth of 50 nm; S_PVC is the area of the investigated photovoltaic converter; i⁰_λi(lab.ref.) is the spectral density of the short circuit current of the reference photovoltaic converter from the laboratory light source at the wavelength λi; i⁰_λi(AM1.5 cal.) is the spectral density of the short circuit current of the reference converter from the solar radiation AM 1.5 at the wavelength λi; I₀ is the short circuit current of the investigated photovoltaic converter according to its volt-ampere characteristic; i⁰_λi(lab.PVC) is the spectral density of the short circuit current of the investigated photovoltaic converter under illumination from the laboratory light source at the wavelength λi; Δλ_cal. is the step of the calibration of the reference photovoltaic converter (usually 50 nm).

The spectral values of the coefficient of performance of the investigated photovoltaic converter are determined according to the expression:

COP_λi = P_maxPVC / P_λi,   (2)

where P_maxPVC is the maximum power of the photovoltaic converter, calculated from its volt-ampere characteristic taken under illumination from the laboratory light source tuned to standard solar radiation by the integral value of the short circuit current I₀ using the reference photovoltaic converter; P_λi is the power of monochromatic radiation with the wavelength λi that causes the same value of the short circuit current I₀ of the investigated photovoltaic converter as on its current-voltage characteristic. The value of this power is defined by expression (1).
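Under the reading of expressions (1) and (2) given above, the spectral efficiency follows from the volt-ampere data and the three spectral photoresponse curves. A hedged sketch assuming the single-band form of expression (1); every numeric input is purely illustrative, not a measurement from the paper:

```python
def monochromatic_power(dp_am15, s_pvc, i_ref_lab, i_ref_am15_cal,
                        i0, i_pvc_lab, dlam_cal):
    """One reading of expression (1): the AM 1.5 band power is rescaled to
    the lab source via the reference converter, spread over the converter
    area, and scaled by I0 relative to the band current of the
    investigated converter."""
    return dp_am15 * s_pvc * i_ref_lab / i_ref_am15_cal * i0 / (i_pvc_lab * dlam_cal)

def cop_spectral(p_max_pvc, p_lambda):
    """Expression (2): COP_lambda = P_maxPVC / P_lambda."""
    return p_max_pvc / p_lambda

# Illustrative numbers only (assumed):
p = monochromatic_power(dp_am15=60.0, s_pvc=0.01, i_ref_lab=0.030,
                        i_ref_am15_cal=0.025, i0=0.35,
                        i_pvc_lab=0.0006, dlam_cal=50.0)
print(f"P_lambda = {p:.3f} W, COP = {100 * cop_spectral(0.15, p):.1f} %")
```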
Figure 4 shows the dependence of the spectral values of the efficiency of the investigated serial silicon photovoltaic converter [12, 13] on the wavelength. From the analysis of the obtained dependence it follows that COP_λi gradually increases with wavelength, from zero (at λi = 0.4 μm) to 40% at λi = 0.95 μm. An almost zero value of COP_λi is observed under conditions when photons of the corresponding (short) wavelength are completely absorbed by the doped layer of the converter without separation of the electron-hole pairs by the p-n junction.
Fig. 4. The characteristic form of the spectral values of the efficiency of silicon photovoltaic converters
Based on the experimentally recorded spectral dependence of the efficiency of the photoconverter, it is possible to determine the generalized efficiency of the photovoltaic converter for a predetermined emission spectrum. The generalized efficiency can be represented as the sum of the partial contributions of COP_λi for each band of the given spectrum, taking into account the fraction of these bands in the power of the total light flux. The generalized efficiency of the photoconverter, adequate to a given radiation spectrum, is described by the expression:

COP_gen = Σ K_λi · COP_λi,   (3)

where COP_λi are the spectral values of the efficiency of the investigated photoconverter obtained according to expression (2); K_λi is the fraction of the luminous flux power of the wavelength λi in the total luminous flux power of the given spectrum.
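Expression (3) is simply a power-weighted average of the spectral efficiencies. A minimal sketch; the band fractions and efficiencies below are invented for illustration and are not the paper's data:

```python
def cop_generalized(fractions, cops):
    """Expression (3): COP_gen = sum(K_i * COP_i), where the K_i are the
    power fractions of the spectral bands and must sum to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(k * c for k, c in zip(fractions, cops))

# Illustrative spectrum of four bands (assumed values):
K = [0.20, 0.30, 0.30, 0.20]       # power fractions of the bands
COP = [0.05, 0.12, 0.25, 0.38]     # spectral efficiencies of the bands

print(f"COP_gen = {100 * cop_generalized(K, COP):.1f} %")
```

With a real AM 1.5 band decomposition and the measured COP_λi curve of Fig. 4, the same weighted sum yields the single generalized efficiency value discussed next.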
Determination of the Efficiency of Photovoltaic Converters Adequate
The efficiency of the photoconverter adequate to the standard solar radiation AM 1.5 (1000 W/m²), calculated according to expression (3), is presented in Fig. 4 in the form of a large dot with a value of 13%, located on the spectral curve of the efficiency versus wavelength.
6 Conclusion

The considered method makes it possible to estimate experimentally the spectral values of the efficiency of photovoltaic converters. Using the considered provisions, one can obtain the efficiency of photovoltaic converters at specific wavelengths of monochromatic radiation, and also calculate the efficiency of a photovoltaic converter for a given emission spectrum.

Acknowledgment. The research was carried out on the basis of financing of the state assignments of the All-Russian Institute of the Electrification of Agriculture and the Federal Scientific Agroengineering Center VIM, of the grant "Young lecturer of RUT" of the Russian University of Transport, and of the Scholarship of the President of the Russian Federation for young scientists and graduate students engaged in advanced research and development in priority areas of modernization of the Russian economy (direction of modernization: "Energy efficiency and energy saving, including the development of new fuels"; subject of scientific research: "Development and research of solar photovoltaic thermal modules of planar and concentrator structures for stationary and mobile power generation").
References
1. Bird, R.E., Hulstrom, R.L., Lewis, L.J.: Terrestrial solar spectral data sets. Sol. Energy 30(6), 563–573 (1983)
2. Kharchenko, V., Nikitin, B., Tikhonov, P., Panchenko, V., Vasant, P.: Evaluation of the silicon solar cell modules. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 866, pp. 328–336 (2019). https://doi.org/10.1007/978-3-030-00979-3_34
3. Arbuzov, Y.D., Yevdokimov, V.M.: Osnovy fotoelektrichestva (Fundamentals of Photoelectricity). Moscow, GNU VIESH (2007). 292 p. (in Russian)
4. Kharchenko, V.V., Nikitin, B.A., Tikhonov, P.V.: Theoretical method of estimation and prediction of PV cells parameters. Int. Sci. J. Alt. Energy Ecol. 4(108), 74–78 (2012)
5. Kharchenko, V.V., Nikitin, B.A., Tikhonov, P.V.: Estimation and forecasting of PV cells and modules parameters on the basis of the analysis of interaction of a sunlight with a solar cell material. In: Conference Proceedings – 4th International Conference, TAE 2010, pp. 307–310 (2010)
6. Nikitin, B.A., Gusarov, V.A.: Analiz standartnogo spektra nazemnogo solnechnogo izlucheniya intensivnost'yu 1000 Vt/m2 i ocenka na ego osnove ozhidaemyh harakteristik kremnievyh fotoelektricheskih preobrazovatelej (Analysis of the standard spectrum of ground-based solar radiation with an intensity of 1000 W/m2 and assessment based on it of the expected characteristics of silicon photovoltaic converters). Avtonomnaya energetika: tekhnicheskij progress i ekonomika (Auton. Power Eng.: Tech. Progress Econ.), no. 24–25, 50–60 (2009). (in Russian)
7. Strebkov, D.S., Nikitin, B.A., Gusarov, V.A.: K voprosu ocenki effektivnosti raboty fotopreobrazovatelya pri malyh i povyshennyh urovnyah osveshchennosti (On the issue of evaluating the efficiency of the photoconverter at low and high levels of illumination). Vestnik VIESKH (Bulletin of VIESH), 119 (2012). (in Russian)
8. Nikitin, B.A., Mayorov, V.A., Kharchenko, V.V.: Issledovanie spektral'nyh harakteristik solnechnogo izlucheniya dlya razlichnyh velichin atmosfernyh mass (Investigation of the spectral characteristics of solar radiation for various atmospheric masses). Vestnik VIESKH (Bulletin of VIESH), 4(21), 95–105 (2015). (in Russian)
9. Nikitin, B.A., Mayorov, V.A., Kharchenko, V.V.: Vliyanie velichiny atmosfernoj massy na spektral'nuyu intensivnost' solnechnogo izlucheniya (The effect of atmospheric mass on the spectral intensity of solar radiation). Energetika i avtomatika (Energy Autom.), 4(26), 54–65 (2015). (in Russian)
10. Kharchenko, V., Nikitin, B., Tikhonov, P., Adomavicius, V.: Utmost efficiency coefficient of solar cells versus forbidden gap of used semiconductor. In: Proceedings of the 5th International Conference on Electrical and Control Technologies ECT-2010, pp. 289–294 (2010)
11. Strebkov, D.S., Nikitin, B.A., Kharchenko, V.V., Arbuzov, Yu.D., Yevdokimov, V.M., Gusarov, V.A., Tikhonov, P.V.: Metodika analiza spektra issleduemogo istochnika sveta posredstvom tarirovannogo fotopreobrazovatelya i komplekta svetofil'trov (Method of spectrum analysis of the studied light source by means of a calibrated photoconverter and a set of light filters). Vestnik VIESKH (Bulletin of VIESH), 4(9), 54–57 (2012). (in Russian)
12. Panchenko, V.: Photovoltaic solar modules for autonomous heat and power supply. IOP Conf. Ser.: Earth Environ. Sci. 317, 012002 (2019). https://doi.org/10.1088/1755-1315/317/1/012002
13. Panchenko, V., Izmailov, A., Kharchenko, V., Lobachevskiy, Ya.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9(2), 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
Modeling of the Thermal State of Systems of Solar-Thermal Regeneration of Adsorbents

Gulom Uzakov1, Saydulla Khujakulov1, Valeriy Kharchenko2, Zokir Pardayev1, and Vladimir Panchenko3,2

1 Karshi Engineering-Economics Institute, Mustakillik str. 225, Karshi, Uzbekistan, [email protected]
2 Federal Scientific Agroengineering Center VIM, 1st Institutskiy passage 5, 109428 Moscow, Russia, [email protected]
3 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia, [email protected]
Abstract. Energy supply of fruit and vegetable storages using solar energy is especially relevant in places remote from centralized energy supply. The authors of the paper propose systems for the thermal regeneration of adsorbents based on the use of solar energy. The paper presents studies of the heat transfer and thermal regime of the developed system of solar-thermal regeneration of the adsorbent (activated carbon) in non-stationary mode. The paper also offers a mathematical model of the temperature field of the adsorbent layer during solar heating using a solar air collector. The proposed mathematical model of the thermal regime of the solar adsorption installation allows qualitatively controlling the technological process of thermal regeneration of adsorbents and significantly reducing the cost of traditional energy resources.

Keywords: Solar air heater · Air temperature · Thermal efficiency · Solar energy · Adsorbent · Temperature · Air flow
1 Introduction

The leadership of Uzbekistan has set tasks to reduce the energy and resource intensity of the economy, to widely introduce energy-saving technologies in production, and to expand the use of renewable energy sources. The implementation of these provisions, including increasing the efficiency of the use of solar energy in the thermotechnological processes of fruit and vegetable storages, is considered one of the most important tasks. Based on our studies, we developed a solar air-heating system for thermal regeneration of adsorbents and active ventilation of fruit and vegetable chambers [1–3]. Studies have shown that the thermal regime of thermal regeneration of adsorbents depends on the intensity of convective heat transfer between the surface of the adsorbent and the hot air flow. The main heat engineering parameter of convective heat transfer is the heat transfer coefficient, which depends on many factors [4–8]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 103–110, 2021. https://doi.org/10.1007/978-3-030-68154-8_11
G. Uzakov et al.
2 Research Problem Statement

The aim of this work is to study the heat transfer and thermal regime of the developed system of solar-thermal adsorbent regeneration in non-stationary mode. Convective heat transfer during forced air movement through a fixed adsorbent layer in the solar regeneration mode has a pronounced unsteady character: air and adsorbent temperatures change both in time and in space. In order to calculate the cycle of adsorption plants, determine the duration of the solar regeneration regime, and select the optimal thermotechnical parameters for their implementation, it is necessary to calculate the temperature field and the change in the temperature of the adsorbent along the length of the adsorber at a given time.
3 Research Results

The heat transfer coefficient in the granular layer of activated carbon in the processes of thermal regeneration is determined by the method of modeling convective heat transfer using criteria-based similarity equations [9, 10]. The experimental results were processed using the similarity equations of V. N. Timofeev:

at 20 < Re_liq < 200: Nu_liq = 0.106 · Re_liq,   (1)

at Re_liq > 200: Nu_liq = 0.61 · Re_liq^0.67.   (2)
The results of studies to determine the heat transfer coefficient from air to adsorbent are shown in Table 1.

Table 1. Results of studies of heat transfer in a granular adsorbent layer during thermal regeneration by atmospheric air

№   W, m/s   Re_liq   Nu_liq   α, W/(m²·K)
1   0.2      141.2    14.95    35
2   0.3      211.7    22       51.15
3   0.4      282.3    26.7     62.1
4   0.5      352.9    31       72
5   0.6      423.5    35.1     81.6
6   1.0      705.9    49.4     114.6
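The criteria equations can be sketched in Python (illustrative, not the authors' code), assuming Timofeev's correlations Nu = 0.106·Re for 20 < Re ≤ 200 and Nu = 0.61·Re^0.67 for Re > 200; the exponent 0.67 is an assumption, chosen to be consistent with the Nu_liq values of Table 1:

```python
# Illustrative sketch of the Timofeev similarity correlations used above;
# reproduces the Nu_liq column of Table 1 from the Re_liq column.

def nusselt_timofeev(re):
    """Nusselt number for a granular layer by V. N. Timofeev's formulas."""
    if 20 < re <= 200:
        return 0.106 * re
    if re > 200:
        return 0.61 * re ** 0.67
    raise ValueError("correlation is valid only for Re > 20")

for re in (141.2, 211.7, 282.3, 352.9, 423.5, 705.9):
    print(f"Re = {re:6.1f}  Nu = {nusselt_timofeev(re):5.2f}")
```

Running the loop over the Re_liq column yields Nu values matching Table 1 to within rounding.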
The process of unsteady heat transfer in the main apparatus of the adsorption unit, the adsorber, can be represented by the following physical model. The process is one-dimensional; there is no heat exchange with the environment through the side surface of the adsorber; at any time the temperature of an adsorbent particle can be considered constant throughout its volume; convective heat transfer between the air flow and the adsorbent layer is decisive; heat conduction through the air and the layer in the axial direction and heat transfer by radiation are small and can be neglected; the air pressure remains unchanged during movement through the layer; the mass flow rate of air is constant; and all thermal properties of the air and the layer are considered constant and independent of temperature [11–13]. When considering non-stationary temperature fields in adsorbers, in addition to the above assumptions, it is assumed that the adsorption properties of the layer do not affect the heat transfer process: the specific mass air flow in any section remains constant, and the thermal effects associated with adsorption and desorption are insignificant and can be neglected. To derive the basic equations for the air and the particles enclosed in an elementary volume of the layer, heat balance equations are compiled. For air: the change in the enthalpy of air over the considered period of time plus the heat introduced by the flow equals the amount of heat transferred to the layer by convective heat transfer. For particles of the layer: the change in the enthalpy of particles over the considered period of time equals the amount of heat received by convective heat transfer [14–18].
Fig. 1. Design scheme of unsteady heat transfer during heating of the adsorbent layer
The desired functions are the air temperature t_a and the temperature of the particles of the layer t_l; these are functions of two independent variables (Fig. 1). The temperature of the adsorbent layer is determined by the following data: m_ad – mass of the adsorbent, kg; c_l – specific heat capacity of the adsorbent, J/(kg·°C); t_0 – initial temperature of the adsorbent layer, °C; t_l = t(τ) – temperature of the adsorbent at time τ (τ ≥ 0).
The amount of accumulated heat in the adsorbent grain layer at time τ is determined by:

Q = c_l · m_ad · t(τ) = c_l · m_ad · t,   (3)

and at time τ = 0:

Q_0 = c_l · m_ad · t_0.   (4)
Within the time dτ, the amount of heat received by the adsorbent layer increases by:

dQ = c_l · m_ad · dt(τ).   (5)

This amount of heat is transferred to the adsorbent by air at constant temperature during convective heat transfer between the air and the adsorbent:

dQ = α · (t_a − t_l) · dτ,   (6)
where α – the heat transfer coefficient, W/(m²·°C); t_a – air temperature, °C. Equating (5) and (6), the following heat balance equation is obtained:

c_l · m_ad · dt = α · (t_a − t_l) · dτ.   (7)
Separating the variables:

dt / (t_a − t_l) = (α / (c_l · m_ad)) · dτ,   (8)
and integrating Eq. (8):

∫ dt / (t_a − t_l) = (α / (c_l · m_ad)) ∫ dτ,   (9)

the integral is determined by:

∫ dt / (t_a − t_l) = (α / (c_l · m_ad)) · τ + C_1,   (10)
where C_1 – an arbitrary integration constant. To calculate the integral on the left side of Eq. (10), the following substitution is introduced:

t_a − t_l = x.   (11)
It turns out that d(t_a − t_l) = dx and, assuming t_a = const,

dt_l = −dx.   (12)
Taking into account expressions (11), (12) and the presence of the integration constant C_1 in Eq. (10), the integral is calculated as:

∫ dx / x = ln x.   (13)
Replacing the left side of Eq. (10) with regard to (12) gives:

ln x = −(α / (c_l · m_ad)) · τ + ln C_1.   (14)
The resulting Eq. (14), after transformation taking into account (10), takes the following form:

t = t_a − C_1 · e^(−α·τ/(c_l·m_ad)).   (15)
Under the initial condition τ = 0, t(0) = t_0, from (15) we get:

t_0 = t_a − C_1 · e⁰ = t_a − C_1, or C_1 = t_a − t_0.   (16)
Further, for the convenience of calculations, we introduce the following notation:

β = α / (c_l · m_ad);   (17)
then from (15), taking into account (16) and (17), we obtain the final equation:

t = t_a − (t_a − t_0) · e^(−β·τ).   (18)
The obtained Eq. (18) makes it possible to determine the heating temperature of the adsorbent in the solar air-heating installation, taking into account the environmental parameters and the thermal characteristics of the adsorbent itself (α, c_l, m_ad). From the analysis of the obtained dependence, we can conclude that the heating temperature of the adsorbent in the adsorber is determined by its mass and heat capacity, initial temperature, heat transfer coefficient, and duration of heating. Equation (18) makes it possible to solve the following problems:
1) Given the maximum temperature for heating the adsorbent, it is possible to determine the maximum duration of its heat treatment with air from a solar air heater (the duration of the thermal regeneration of adsorbents).
2) Given the maximum duration of heat treatment of the adsorbent, the attainable temperature of its heating can be determined.
The resulting mathematical Eq. (18) is convenient for practical engineering calculations of thermal regeneration systems of adsorbents and does not require a large amount of initial and experimental data. We calculate based on the following source data: W = 0.2 m/s; α = 35 W/(m²·°C); c_l = 0.84 kJ/(kg·°C); m_ad = 28.8 kg; t_a = 60 °C; t_0 = 18 °C;

β = α / (c_l · m_ad) = 35 / (0.84·10³ · 28.8) = 1.44·10⁻³ s⁻¹.
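Equation (18) with these source data can be sketched as follows (illustrative code, not from the paper; the helper names and sample durations are assumptions):

```python
import math

# Sketch of Eq. (18): t(tau) = t_a - (t_a - t_0) * exp(-beta * tau),
# with beta = alpha / (c_l * m_ad), using the paper's source data.
alpha = 35.0           # heat transfer coefficient for W = 0.2 m/s (Table 1)
c_l = 0.84e3           # specific heat capacity of the adsorbent, J/(kg*degC)
m_ad = 28.8            # mass of the adsorbent, kg
t_a, t_0 = 60.0, 18.0  # air temperature and initial adsorbent temperature, degC

beta = alpha / (c_l * m_ad)  # ~1.44e-3 1/s

def adsorbent_temp(tau):
    """Adsorbent temperature (degC) after heating for tau seconds, Eq. (18)."""
    return t_a - (t_a - t_0) * math.exp(-beta * tau)

def heating_time(t_max):
    """Inverse problem: heating duration (s) to reach temperature t_max."""
    return -math.log((t_a - t_max) / (t_a - t_0)) / beta

print(adsorbent_temp(1800))  # temperature after 30 min of heating
print(heating_time(55.0))    # seconds needed to reach 55 degC
```

The `heating_time` inverse corresponds to problem 1) above: given a maximum adsorbent temperature, it returns the required duration of heat treatment.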
Based on the calculation results, a graph of the temperature change of heating the adsorbent in the adsorber was constructed (Fig. 2).
Fig. 2. The graph of the temperature of heating the adsorbent (activated carbon) in the adsorber of a solar air-heating installation
4 Conclusion

1. The proposed mathematical model of the temperature field of the adsorbent layer during solar heating allows one to determine the change in the temperature of the adsorbent along the length of the adsorber and in time, and also takes into account the influence of the thermal characteristics of the adsorbent itself.
2. At the maximum heating temperature of the adsorbent, it is possible to determine the duration of heat treatment with hot air heated in a solar air-heating installation.
3. As can be seen from Fig. 2, with an increasing heat transfer coefficient (α) from the air to the adsorbent, the heating intensity of the adsorbent layer increases.
4. Thus, the obtained results of the study of the thermal regime of solar-thermal adsorbent regeneration and the mathematical model of the temperature field of the adsorbent layer make it possible to qualitatively control the process of solar-thermal regeneration of the adsorbent and to choose the optimal technological parameters of the gas medium regulation systems in fruit storages.
References
1. Khuzhakulov, S.M., Uzakov, G.N., Vardiyashvili, A.B.: Modelirovanie i issledovanie teplomasso- i gazoobmennyh processov v uglublennyh plodoovoshchekhranilishchah (Modeling and investigation of heat and mass and gas exchange processes in in-depth fruit and vegetable storages). Problemy informatiki i energetiki (Probl. Inf. Energy), no. 6, 52–57 (2010). (in Russian)
2. Khujakulov, S.M., Uzakov, G.N., Vardiyashvili, A.B.: Effectiveness of solar heating systems for the regeneration of adsorbents in recessed fruit and vegetable storages. Appl. Solar Energy 49(4), 257–260 (2013)
3. Khujakulov, S.M., Uzakov, G.N.: Research of thermo moisten mode in underground vegetable storehouses in the conditions of hot-arid climate. Eur. Sci. Rev. 11–12, 164–166 (2017)
4. Uzakov, G.N., Khuzhakulov, S.M.: Geliovozduhonagrevatel'naya ustanovka s solnechno-termicheskoj regeneraciej adsorbentov (Helio-air-heating installation with solar-thermal regeneration of adsorbents). Tekhnika. Tekhnologii. Inzheneriya (Tech. Technol. Eng.), no. 2, 7–10 (2016). https://moluch.ru/th/8/archive/40/1339/. (in Russian)
5. Uzakov, G.N., Khuzhakulov, S.M.: Issledovanie teploobmennyh processov v sistemah solnechno-termicheskoj regeneracii adsorbentov (Study of heat exchange processes in systems of solar-thermal regeneration of adsorbents). Tekhnika. Tekhnologii. Inzheneriya (Tech. Technol. Eng.), no. 2, 10–13 (2016). https://moluch.ru/th/8/archive/40/1340/. (in Russian)
6. Uzakov, G.N., Khuzhakulov, S.M.: Issledovanie temperaturnyh rezhimov geliovozduhonagrevatel'noj ustanovki dlya sistem termicheskoj regeneracii adsorbentov (Study of the temperature conditions of a solar air heating installation for thermal regeneration of adsorbents). Geliotekhnika (Solar Eng.), no. 1, 40–43 (2017). (in Russian)
7. Abbasov, E.S., Umurzakova, M.A., Boltaboeva, M.P.: Effektivnost' solnechnyh vozduhonagrevatelej (Efficiency of solar air heaters). Geliotekhnika (Solar Eng.), no. 2, 13–16 (2016). (in Russian)
8. Klychev, Sh.I., Bakhramov, S.A., Ismanzhanov, A.I.: Raspredelennaya nestacionarnaya teplovaya model' dvuhkanal'nogo solnechnogo vozduhonagrevatelya (Distributed nonstationary thermal model of a two-channel solar air heater). Geliotekhnika (Solar Eng.), no. 3, 77–79 (2011). (in Russian)
9. Akulich, P.V.: Raschety sushil'nyh i teploobmennyh ustanovok (Calculations of drying and heat exchange plants). Minsk, Belaruskaya navuka (2010). 443 p. (in Russian)
10. Mikheev, M.A., Mikheeva, I.M.: Osnovy teploperedachi (Fundamentals of heat transfer). Moscow, Energiya (Energy) (1977). 320 p. (in Russian)
11. Yanyuk, V.Ya., Bondarev, V.I.: Holodil'nye kamery dlya hraneniya fruktov i ovoshchej v reguliruemoj gazovoj srede (Refrigerators for storing fruits and vegetables in a controlled gas environment). Moscow, Legkaya i pishchevaya promyshlennost' (Light and Food Industry) (1984). 128 p. (in Russian)
12. Kharitonov, V.P.: Adsorbciya v kondicionirovanii na holodil'nikah dlya plodov i ovoshchej (Adsorption in conditioning on refrigerators for fruits and vegetables). Moscow, Pishchevaya promyshlennost' (Food Industry) (1978). 192 p. (in Russian)
13. Chabane, F.: Design, developing and testing of a solar air collector experimental and review the system with longitudinal fins. Int. J. Environ. Eng. Res. 2(1), 18–26 (2013)
14. Henden, L., Rekstad, J., Meir, M.: Thermal performance of combined solar systems with different collector efficiencies. Sol. Energy 72(4), 299–305 (2002)
15. Kolb, A., Winter, E.R.F., Viskanta, R.: Experimental studies on a solar air collector with metal matrix absorber. Sol. Energy 65(2), 91–98 (1999)
16. Kurtas, I., Turgut, E.: Experimental investigation of solar air heater with free and fixed fins: efficiency and exergy loss. Int. J. Sci. Technol. 1(1), 75–82 (2006)
17. Garg, H.P., Choundghury, C., Datta, G.: Theoretical analysis of a new finned type solar collector. Energy 16, 1231–1238 (1991)
18. Kartashov, A.L., Safonov, E.F., Kartashova, M.A.: Issledovanie skhem, konstrukcij, tekhnicheskih reshenij ploskih solnechnyh termal'nyh kollektorov (Study of circuits, structures, technical solutions of flat solar thermal collectors). Vestnik Yuzhno-Ural'skogo gosudarstvennogo universiteta (Bull. South Ural State Univ.), no. 16, 4–10 (2012). (in Russian)
Economic Aspects and Factors of Solar Energy Development in Ukraine

Volodymyr Kozyrsky, Svitlana Makarevych, Semen Voloshyn, Tetiana Kozyrska, Vitaliy Savchenko, Anton Vorushylo, and Diana Sobolenko

National University of Life and Environmental Sciences of Ukraine, St. Heroiv Oborony, 15, Kyiv 03041, Ukraine
{epafort1,t.kozyrska}@ukr.net, [email protected], [email protected], [email protected], [email protected]
Abstract. The paper is devoted to the development of renewable energy in Ukraine, with particular attention to the directions of state stimulation of this branch. A functional diagram of the interrelations of the factors influencing the efficiency and mass use of solar power plants is given. The concept of MicroGrid is considered as a power supply system for remote territories. The concept of a "dynamic tariff" is proposed as an integrated indicator of the current cost of electricity at the input of the consumer. It is formed on the basis of the real cost of electricity from sources in the MicroGrid system, the cost of electricity losses during transportation, taxes, planned profit, and a number of functional factors that determine the management of the balance of electricity generation and consumption and the impact of consumers and the MicroGrid system on electricity quality.

Keywords: Renewable energy sources · Solar electricity · Solar energy station · Electricity storage · MicroGrid system · Dynamic tariff · Reclosers
Renewable energy sources (RES) are one of the priorities of energy policy and instruments to reduce carbon emissions. Efforts by countries to address this issue under the Kyoto Protocol are not yielding the expected effect. The first steps of the world community to solve this problem began at the 18th Conference of the Parties to the UN Framework Convention and the 8th meeting of the Parties to the Kyoto Protocol, which took place from November 26 to December 7, 2012 in Doha (Qatar) [1]. Under the second period of the Kyoto Protocol (2013–2020), Ukraine has committed itself to reducing greenhouse gas emissions by 20% (from 1990 levels) and has announced a long-term goal by 2050 - to reduce emissions by 50% compared to 1990. Why is the number of RES use projects growing in Ukraine, despite the fact that tariffs are falling? It turns out that at the same time there were several positive factors that determine the development of alternative energy. The first factor is regulatory. In addition to reducing green energy tariffs, the state guarantees that support will last long enough to recoup investment in energy facilities. As in some European countries, where the tariff decreases as the EU targets are met (20% of energy is obtained from renewable sources by 2020), Ukraine also has © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 111–117, 2021. https://doi.org/10.1007/978-3-030-68154-8_12
mechanisms in place to reduce it. The green tariff will be reduced from the 2009 base level by 20% in 2020 and by another 30% in 2025, and abolished in 2030, as it is a temporary incentive for the development of RES. The second factor stimulating the construction of stations is that the conditions for the construction of new facilities have been liberalized. In particular, the Verkhovna Rada changed the rule on the mandatory local component in the equipment. Instead, an incentive mechanism was introduced: the more domestic components in the station, the higher the rate of tariff increase. Another important factor is the rapid worldwide decline in the cost of technology. In recent years, capital expenditures for the construction of SES have decreased significantly, as the cost of equipment has fallen. As of 2017, investments in 1 MW of capacity in Ukraine fluctuate at the level of 0.75–1.05 million euros, and payback periods of projects are 6–7 years. China intends to put into operation 15–20 GW of solar capacity annually; by 2020, its total solar capacity should triple. Such a plan envisages investments of $368 billion in China's energy sector, and this volume of investment in one of the main manufacturers of solar energy station (SES) components will help reduce the cost of SES. Another factor reducing the cost of SES is the increasing efficiency of SES elements (photovoltaic panels (FEP), batteries, etc.). To date, the production of most commercial solar cell modules is based on crystalline Si (I-generation FE) and amorphous thin-film solar cells of large area with η ≈ 5–8% (II-generation FE). The concept of the III generation is the use of nano- and microstructures (microwires). The main characteristic of FEP is the efficiency of photoelectric conversion, which for currently available industrial FEP is in the range from 7% to 18% [1–7], and in laboratory developments reaches 39–43% [4, 8].
Thus, the improvement of technologies and the efficiency of SES elements has made solar generation one of the leaders in terms of capacity growth, not only in Ukraine but also in the world. One of the main elements of SES are batteries for electricity storage. Electricity storage technologies are also developing rapidly: lithium-ion batteries, hydrogen energy storage technologies, supercapacitors. These innovative developments have higher productivity and lower cost. Unfortunately, the current electricity storage technologies in Ukraine have a high cost: one kW of storage capacity costs from $500 to $3,000. It is expected that within 3–5 years the price will drop by a third or more. A factor that negatively affects the domestic segment of the RES market is the poor technical condition of electrical networks. For example, due to the high density of buildings in the Kyiv region, private homes have problems because of the insufficient capacity of electrical networks. Importantly, the capacity of solar panels installed by households increased almost sevenfold: from 2.2 MW (2015) to 16.7 MW at the end of 2016; that is, households began to install more powerful panels. For all those who installed solar panels in 2017, there is a fairly high green tariff of 19 cents per 1 kWh. Gradually, this tariff will be reduced on the same principle that works for industrial stations, down to 14 cents by 2030. An important factor that encourages private individuals to install panels is the reduction in the cost of small SES technologies. Ekotechnik Ukraine Group forecasts a
5–10% drop in prices for solar power plant equipment for private households. In 2017, the average price of the entire set of equipment is about 900–950 euros per 1 kW of SES. The Ukrainian startup SolarGaps has offered a new solution: the world's first solar blinds. The project has already attracted its first investments and is preparing to enter various markets around the world. Equipping a usual apartment window with such blinds, according to the startup, will cost $300. SolarGaps blinds in a three-room apartment with windows facing south will be able to produce up to 600 W·h, or about 4 kW·h per day (about 100 kW·h per month). Thus, solar energy is gradually becoming the cheapest source of energy in many countries in the long run, and Ukraine is no exception. Solar activity in Ukraine is sufficient to ensure a return on investment in 6–7 years using the green tariff and in 13–15 years without it. This term is comparable with the payback of a classic thermal power plant. Ukraine has all the natural and regulatory prerequisites for the development of RES. The share of green generation, while maintaining favorable factors, may reach 30% by 2035 due to the construction of 15 GW of new SES and wind farms. Investments in RES and Ukraine's economy will create tens of thousands of jobs, which is an important social factor. In determining the conditions for the development of SES, it is important to establish the degree of influence of various factors. Figure 1 presents a functional diagram of the relationship of factors for two types of SES, with capacities P > 30 kW and P < 30 kW (private sector). The analytical relationship between the function of SES development and the factors is given by expressions (1) and (2). Determination of the coefficients can be performed by the method of expert assessments.
The direction of Smart Grid technologies is considered a promising concept for the development of energy supply systems, and for remote areas, such as rural areas, the creation of MicroGrid systems [8]. A number of pilot projects of similar systems have already been implemented in the world [9–12]. However, this concept creates a number of scientific, technical and economic problems that need to be solved to ensure the successful operation of MicroGrid systems; the economic ones include, for example, the formation of a tariff policy in a closed MicroGrid system. A MicroGrid system is a local power system containing two or more homogeneous or heterogeneous power sources, means of energy storage, elements of transportation, distribution and switching of electricity, power consumers, and systems for receiving, transmitting and analyzing information. The system is considered as a single integrated
and controlled unit of the subsystems "generation-accumulation-transportation-distribution-consumption" of electricity with an intelligent control system. The MicroGrid system can operate autonomously or as an integrated subsystem in a centralized power system.
Fig. 1. Functional diagram of the relationship between the factors of development of solar electricity for the two types of SES (P > 30 kW and P < 30 kW). The factors shown are: X1 – state regulation (tariff); X2 – state regulation (non-tariff); X3 – state regulation (liberalization of construction); X4 – reduction of the cost of technology; X5 – increase of the efficiency of the elements; X6 – level of reliability and bandwidth of electrical networks; X7 – social factor; X8 – logistics; X9 – climatic factor; X10 – environmental factor.
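The expert-assessment approach mentioned above can be illustrated with a sketch. The paper's own expressions (1)–(2) are not reproduced here; instead, a common expert-assessment form is shown: a normalized weighted sum over the factors X1–X10 of Fig. 1, where all function names, scores and weights are illustrative assumptions:

```python
# Hypothetical sketch of an expert-assessment aggregation over the factors
# X1..X10 of Fig. 1: F = sum_j (c_j / sum(c)) * x_j, with factor scores x_j
# in 0..1 and expert weights c_j. All numbers below are made up.

def development_function(scores, weights):
    """Normalized weighted aggregate of factor scores (each in 0..1)."""
    total = sum(weights)
    if total <= 0:
        raise ValueError("expert weights must have a positive sum")
    return sum(w / total * x for w, x in zip(weights, scores))

x = [0.8, 0.6, 0.7, 0.9, 0.8, 0.4, 0.5, 0.6, 0.7, 0.6]  # scores for X1..X10
c = [10, 6, 6, 9, 8, 5, 4, 3, 7, 2]                     # expert weights

print(round(development_function(x, c), 3))
```

A higher aggregate value would indicate more favorable conditions for SES development; the weights would in practice be elicited from a panel of experts.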
Economic Aspects and Factors of Solar Energy Development in Ukraine
115
Since electricity is a commodity, and the cost of its generation and transportation changes over time, it is advisable to consider a new principle: the formation of the tariff for each consumer in dynamics (during a period Δt in which the components of the tariff are constant). This tariff can be called dynamic. The dynamic tariff (DT) is an integrated indicator of the current cost of electricity at the input of the consumer, which is formed on the basis of the actual cost of electricity from the sources in the MicroGrid system (see Fig. 2), the cost of electricity losses during transportation, taxes, planned profit, and a number of functional factors which determine the management of the balance of generation and consumption of electricity and the impact of consumers and the MicroGrid system on the quality of electricity.
Fig. 2. Example of electrical network diagram in MicroGrid system
The scheme of Fig. 2 shows the main elements of the MicroGrid system: consumers – various objects of the settlement (utility and industrial spheres); SES - solar power plants (can be with electricity storage); reclosers – switching devices with automation and remote control; MicroGrid system control center – a server with a computer center and an intelligent system control algorithm; information transmission system from each element of the MicroGrid system to the control center (elements –
116
V. Kozyrsky et al.
SES, electricity storage, consumer meters, reclosers – equipped with means of transmitting and receiving information). During the day, the electrical load of consumers changes; by closing some of the reclosers and opening others, the configuration of the circuit in Fig. 2 is optimized for the minimum total power losses ΣΔP → min in the MicroGrid system. Thus, the scheme of electricity transportation from SES can change, and the consumer can receive electricity from different sources during the day. The dynamic tariff at the input of the electricity consumer is determined by the functional

T_D(C_i, K_i) = (T_1(C_1) + T_2(C_2) + T_3(C_3) + T_4(C_4) − T_5(C_5)) · K_1(S) · K_2(I_1) · K_3(I_2)     (3)
where the components of the electricity tariff are: T_1(C_1) – the dependence of the tariff for a particular consumer on the cost of electricity produced; T_2(C_2) – the dependence of the tariff on the cost of electricity losses during transportation; T_3(C_3) – the dependence of the tariff on the amount of taxes; T_4(C_4) – a component determined by the planned profit of the MicroGrid system; T_5(C_5) – a component determined by the sale of electricity to the centralized power supply system at a "green tariff"; K_1(S) – a coefficient that takes into account the management of the balance of generation and consumption of electricity; K_2(I_1) – a coefficient that takes into account the negative impact of the consumer on the quality of electricity; K_3(I_2) – a coefficient that takes into account the negative impact of the MicroGrid system on the quality of electricity. The coefficient K_1(S) is determined by the total electrical load S of the MicroGrid system and motivates consumers to use electricity during the hours of minimum electrical load of the system. It can take values within n ≤ K_1(S) ≤ m (for example, n > 0 and m < 2) and can be set on a contractual basis between electricity consumers, if they are co-owners of the MicroGrid system, or between consumers and the owner of the system. The coefficients K_2(I_1) and K_3(I_2) are determined by the deviation of electricity quality parameters from the norms and motivate the consumer and the MicroGrid system, respectively, not to reduce the quality of electricity. Component T_1(C_1) determines the dependence of the tariff on the cost of electricity produced by the generating installation from which the consumer receives electricity during the time Δt. In the case of a closed mode of operation of the electrical networks of the MicroGrid system (see Fig. 2), the daily change of electrical loads of consumers will cause switching changes of the network.
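To make the tariff formation concrete, formula (3) can be evaluated numerically. The sketch below is illustrative only: all component values and coefficient choices are hypothetical assumptions, not figures from the paper.

```python
# Illustrative sketch of the dynamic-tariff formula (3); all numbers are
# hypothetical placeholders, not values from the paper.

def dynamic_tariff(t1, t2, t3, t4, t5, k1, k2, k3):
    """Dynamic tariff T_D at the consumer's input for one period dt.

    t1..t5 -- tariff components (generation cost, transport losses,
              taxes, planned profit, 'green tariff' sales credit)
    k1     -- load-balance coefficient, n <= k1 <= m (e.g. 0 < n, m < 2)
    k2, k3 -- power-quality coefficients (consumer / MicroGrid impact)
    """
    return (t1 + t2 + t3 + t4 - t5) * k1 * k2 * k3

# Off-peak hour: a low load-balance coefficient rewards the consumer.
off_peak = dynamic_tariff(0.08, 0.01, 0.02, 0.01, 0.005, k1=0.8, k2=1.0, k3=1.0)
# Peak hour: the same components, but k1 is close to its upper bound m.
peak = dynamic_tariff(0.08, 0.01, 0.02, 0.01, 0.005, k1=1.9, k2=1.0, k3=1.0)
print(off_peak, peak)
```

The same components yield different tariffs in different periods Δt, which is precisely the motivational mechanism the coefficient K_1(S) is meant to provide.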
In this case, in order to optimize the mode of operation of the network by the criterion ΣΔP → min (minimum total power losses during the time period Δt), the configuration of the circuit changes, and the consumer can receive electricity from one or more generating units. Since the cost of electricity produced by different generating units differs, this component of the tariff for a particular consumer will change over time. Component T_2(C_2) determines the dependence of the tariff on the cost of electricity losses for its transportation to the consumer. Given the conditions of formation of component T_1(C_1) in the MicroGrid system and the periodic changes in the configuration of the electrical network, it can be concluded that the tariff component T_2(C_2) should be calculated for the consumer for each time period Δt.
Component T_3(C_3) determines the dependence of the tariff on the amount of taxes and is constant at constant tax rates. Component T_4(C_4) is determined by the planned profit of the MicroGrid system and can be set on a contractual basis by the consumers as owners, or by the consumers and owners of the MicroGrid system. Component T_5(C_5) is determined by the sale of electricity to the centralized power supply system at a "green tariff" and can be included in the formation of the tariff on a contractual basis by the consumers and owners of the MicroGrid system.
Conclusions
1. The most promising concept for the development of energy supply systems is Smart Grid technology and, for remote areas such as rural areas, the creation of MicroGrid systems.
2. Modern technical means make it possible to create a system of online accounting of the cost of electricity at the consumer's input (in real time), which requires the solution of a number of economic problems.
3. Introduction of the concept of a dynamic tariff, an integral indicator of the current cost of electricity at the consumer's input, will reduce electricity losses during transportation, which will significantly affect the cost of electricity for consumers.
A Method for Ensuring Technical Feasibility of Distributed Balancing in Power Systems, Considering Peer-to-Peer Balancing Energy Trade

Mariusz Drabecki
Institute of Control and Computation Engineering, Warsaw University of Technology, 15/19 Nowowiejska Street, Warsaw, Poland
[email protected]

Abstract. In this paper a method for ensuring network feasibility of power flow (in terms of all network constraints), when energy in a power system is balanced via peer-to-peer contracts, is proposed and analyzed. The method considers subjective benefits (utility functions) of market participants. It is based on two optimization problems originating from standard Optimal Power Flow formulations, which can be solved by the system's operator. It can possibly be used in power systems with high penetration of distributed energy resources (DERs) to give market participants an incentive to build and actively control those resources. The method was tested on a 9-bus test system under three different preference scenarios.

Keywords: Balancing energy market · Peer-to-peer energy trade · Power flow · Network constraints
1 Introduction

Currently, electrical energy is traded on specially designed markets. Their basic architecture is similar worldwide. Depending on when the trade happens relative to the actual delivery date, one can distinguish the following markets: the long-term contracts market, where energy is traded bilaterally long before delivery; the day-ahead market, where trade happens for the next day; the intraday market, where participants trade for the same day, at least one hour prior to delivery; and the balancing market, being the real (or nearly real) time market. Through the balancing market, the system's operator ensures that supplied energy exactly equals its consumption (transmission losses included) and that the power system operates safely and securely, i.e. that all technical constraints are met. This is assured by the fact that the operator participates in every buy/sell transaction and, in consequence, is responsible for the final dispatch of generating units [1]. However, it is believed that due to such a centralized balancing scheme, operations on the market are not optimal from each market participant's subjective perspective. This is true even though the balancing is performed by the operator
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 118–131, 2021. https://doi.org/10.1007/978-3-030-68154-8_13
Ensuring feasibility of power flow
119
aiming to maximize the social welfare function, which usually means minimizing the overall generation cost subject to all power-flow technical constraints. Although least costly for the aggregated set of energy consumers, this approach completely neglects the individual preferences of market participants and the bilateral, peer-to-peer agreements made between them. I argue that allowing them in the balancing process would add more freedom to the market itself. Such a liberated way of energy balancing is especially interesting when considering market mechanisms for power systems integrating many distributed energy resources (DERs). This balancing scheme is important to systems with high DER penetration, as it directly gives market participants incentives both to build DERs and to actively control their power output, maximizing the participants' subjective profits. Distributed power systems, where energy is traded on a peer-to-peer basis, are gaining recognition both theoretically and practically in multiple integration trials performed worldwide [2]. Yet, as described in the following section, such distributed self-balancing and unit self-commitment may cause problems with the technical feasibility of the power flow resulting from so-dispatched units. Some other problems may also arise while trying to optimally balance demand and supply of energy in the system. Addressing these technical issues lies within the scope of this paper and forms its contribution. Some interesting research in this field has already been conducted. The authors of [2] outlined that, depending on whether network constraints are taken into account and whether individual actions of peers are controlled centrally, multiple types of peer-to-peer transactions can be identified, with their associated control models. One more approach towards optimizing distributed p2p trade on the market was given in [3]. Yet, the authors of the latter neglected all network constraints and the arising technical problems.
These were addressed in [4], yet limited to active power only. However, both of these papers, together with [5], addressed the issue of maximizing the subjective benefits of market participants. The authors of [6] have pointed out that p2p trading might be even more beneficial when consumers optimize their power consumption and later trade negawatts. Some other examples of related research on relevant models of trade optimization are given in [7–9]. However, the role of the system's operator in these works remains unclear and possibly suppressed. According to the authors of [2, 7, 10], the technical feasibility issues (network constraints) might be addressed by special control systems at the generation/load bus level – Energy Management Systems. These would need the capability of limiting the possible generation/demand of a given market participant to ensure network feasibility of the dispatch. However, such systems would require correct integration and control, knowing the full picture of the current situation in the grid. Therefore, it is reasonable to assume that, at least in the period of transition from centralized to fully distributed system architectures, the system's operator is the relevant entity to guard the security and stability of power supply to customers, as it is the operator who knows all technical issues of the grid. This paper addresses the problem of possible infeasibility of power flow (both active and reactive) when energy balancing in the system is accomplished through peer-to-peer trading. In other words, the paper proposes a method for finding a feasible generating-unit dispatch when peer-to-peer balancing energy contracts are made.
120
M. Drabecki
For this, a multi-step method based on optimization models is proposed. As an assumption, a typical scenario is considered in which the power system under consideration may be a local grid, or a wider-area sub-network, managed by a system operator striving for system self-balancing.
2 Notion of Power Flow and Its Feasibility

The main goal of any electrical power system is to safely and securely provide the demanded amounts of electrical power to its consumers at each time instant. This consists of two main tasks, namely generation of the power at the system's generation buses and its transmission to the load buses. For the first task it is necessary to produce an amount of apparent power which equals exactly the demanded amount plus transmission losses. Transmission of the power through the system is referred to as the power flow. It results from the combination of generating-unit setpoints (both for active and reactive power), the demand attached to every load bus, and the current technical parameters of the power grid itself. Normally, the power flow is estimated by numerically solving a set of nonlinear equations, as shown in [11]. The power flow can only be technically attainable (in this paper referred to as feasible) if all the variables, i.e. generating-unit setpoints, branch power flows, nodal voltages and nodal voltage angles, fall within their technical limits. As shown in [12], network feasibility of the power flow, considering both active and reactive flow, depends highly on the grid model used for determining the power dispatch of generating units. Thus, depending on the dispatch model, a non-feasible power flow may be obtained even though all generation setpoints lie within the generating units' capabilities.
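The feasibility notion above reduces to a box-constraint check on the solved quantities. The following sketch is a minimal illustration under assumed variable names and limit values; a real implementation would check every unit, bus and branch of the solved power flow.

```python
# Minimal sketch of a network-feasibility check: a power-flow solution is
# treated as feasible only if every monitored quantity lies within its
# limits. Variable names and limit values here are illustrative assumptions.

def is_feasible(values, limits):
    """values: dict name -> solved quantity; limits: dict name -> (lo, hi)."""
    return all(limits[n][0] <= v <= limits[n][1] for n, v in values.items())

limits = {
    "P_g1": (10.0, 250.0),   # active output of unit 1 [MW]
    "U_5": (0.95, 1.05),     # voltage magnitude at bus 5 [p.u.]
    "S_l4": (0.0, 150.0),    # apparent flow on line 4 [MVA]
}
solution = {"P_g1": 89.8, "U_5": 1.01, "S_l4": 162.3}  # line 4 overloaded
print(is_feasible(solution, limits))  # one overloaded branch makes the flow infeasible
```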
3 Proposed Method

In this section the proposed method for ensuring technical feasibility of power flow, when the generating-unit dispatch is obtained via p2p balancing energy trade, is described in detail. It is assumed that the operator is only responsible for checking whether the dispatch resulting from the agreed contracts yields a feasible power flow, and not for any a priori dispatch of generating units. This check is performed at the moment of accepting bilateral contracts. If the resulting power flow is not feasible, the operator proposes a direction in which the market participants should adjust their contractual positions, so as to get as close to a feasible power flow as possible. This is obtained by applying an additional constraint on accepted contracts. We can thus say that the operator serves only as a feasibility guard of the power flow, without imposing anything on the market participants for as long as possible. What is more, it is assumed that only active power balance is the subject of trade between peers. The method is based on specially designed optimization problems. As contracts are signed directly between market players based on their subjective goals, the overall generation cost (social welfare function) is unknown and of little interest to the operator.
The optimization goal is thus to minimize the violations of agreed balancing contractual positions, subject to all power-flow technical constraints. We shall consider two network flow optimization sub-problems formulated further in this paper. The first sub-problem (Formulation (2)) allows for identifying the unit whose total amount of contracted power should be adjusted the most in order to make the power flow feasible. This unit shall be considered most problematic in terms of taken contracts and should be the first to change its contractual position in the search for a network-feasible power dispatch. The second formulation (Formulation (3)), however, arbitrarily imposes changes on generating units' contractual positions to obtain a feasible power flow while maximizing the use of bilateral contracts between suppliers and consumers. I propose that the operator use this formulation when the market participants could not come to a network-feasible dispatch through negotiations in a considerable amount of time/rounds when (2) was applied. Formulation (3) was first proposed in [13] and is cited in this paper as part of the now-proposed method. Both proposed formulations are restrictions of the standard Optimal Power Flow (OPF) problem [14]. Thus, any feasible solution of one of these two sub-problems yields a network-feasible power flow in terms of satisfying all grid technical constraints. The method can be summarized in the following generic steps:

0. Accept contracts between suppliers and consumers that lie within all technical limits of generating units, i.e. which do not exceed their technical maxima/minima. Architectural design of an appropriate IT platform is beyond the scope of this article and will not be elaborated.
1. Check the network feasibility of the power flow that results from the accepted contracts between suppliers and consumers. In case of infeasibility, apply Formulation (2) to identify the most problematic unit and to estimate transmission losses.
2. Calculate the new constraint to be added.
3. Return information on the most problematic unit and on the new constraint to the market participants.
4. Have the market participants adjust their market positions through bilateral negotiations following the information issued in step 3, respecting the additional constraint. After they agree on a new contract distribution, go to step 0.
5. Use Formulation (3) to impose changes on the market positions of the participants if no feasible solution is found through negotiations in a considerable amount of time (or rounds of negotiations).

3.1 Optimal Power Flow Problem (OPF)
The OPF problems are well-known and widely used nonlinear, non-convex optimization problems, solved by system operators to determine a feasible active and reactive generating-unit dispatch. Usually, OPF is the problem of minimizing the total generation cost with respect to all system constraints, such as technical maxima/minima of generating units, line flow constraints, voltage magnitude and angle constraints, and power balance
constraints. However, other cost functions can also be used, such as minimization of transmission losses or re-dispatch of reactive power to enhance the level of system stability, as in [15]. Below, in (1), I cite a simplified formulation of the OPF problem as given in [14], with the standard cost function, i.e. minimization of the overall generation cost:

min f_P     (1a)

subject to:

P_i^inj − P_i + P_i^D = 0,   ∀ i ∈ N     (1b)
Q_i^inj − Q_i + Q_i^D = 0,   ∀ i ∈ N     (1c)
P_i^min ≤ P_i ≤ P_i^max,     ∀ i ∈ N^G   (1d)
Q_i^min ≤ Q_i ≤ Q_i^max,     ∀ i ∈ N^G   (1e)
U_i^min ≤ U_i ≤ U_i^max,     ∀ i ∈ N     (1f)
Θ_i^min ≤ Θ_i ≤ Θ_i^max,     ∀ i ∈ N     (1g)
0 ≤ S_l ≤ S_l^max,           ∀ l ∈ N^f   (1h)

where: f_P – the total cost of generation and transmission; N – set of indices of all buses in the system; N^G – set of indices of all generating units; N^f – set of indices of all branches in the system; P_i^inj / Q_i^inj – active/reactive power injection at bus i, calculated using the standard, highly nonlinear power flow equations [11]; P_i^D / Q_i^D – active/reactive power demand at bus i; P_i / Q_i – active/reactive output of unit i; P_i^min/max, Q_i^min/max – generation limits of unit i; U_i – voltage magnitude at bus i; U_i^min/max – limits on the voltage magnitude of bus i; Θ_i – voltage angle at bus i; Θ_i^min/max – limits on the voltage angle of bus i; S_l – apparent power flow through line l; S_l^max – maximum value of apparent power flow through line l.

The above standard OPF problem formulation provides the basis for the proposed optimization sub-problem Formulations (2) and (3) described below.

3.2 Formulation (2) – Identification of Most Problematic Unit
In step 1 of the method presented in this paper, I propose to identify the unit whose total contractual position (total amount of output power) should be adjusted the most to make the power flow network-feasible. To obtain a Pareto-optimal solution, deviations from the other units' contractual positions should also be minimized. The model is presented below in (2). It assumes that reactive power is not a subject of trade and that market players contract supply of active power only. As the feasible set of the
presented problem is a restriction of the standard OPF's feasible set shown in (1), once a feasible solution to (2) is found, the resulting power flow is guaranteed to be technically feasible.

min  c_1 T + c_2 Σ_{i∈N^G} s^{G−/+}_{P,i},  where c_1 ≫ c_2     (2a)

subject to:

s^{G−}_{P,i} ≤ T,   ∀ i ∈ N^G     (2b)
s^{G+}_{P,i} ≤ T,   ∀ i ∈ N^G     (2c)
Σ_{k∈CN,i} P^k_{C,i} − s^{G−}_{P,i} ≤ P_i ≤ Σ_{k∈CN,i} P^k_{C,i} + s^{G+}_{P,i},   ∀ i ∈ N^G     (2d)
P_i^min ≤ P_i ≤ P_i^max,   ∀ i ∈ N^G     (2e)
s^{G−/+}_{P,i} ≥ 0,   ∀ i ∈ N^G     (2f)
+ constraints (1b)–(1h)     (2g)

where: c_1, c_2 – arbitrarily chosen positive costs of contract violation; s^{G+/−}_{P,i} – slack variables making violation of the limits possible for active power; CN,i – set of contracts signed with generating unit i; P^k_{C,i} – contracted volume of active power for generating unit i under contract k.

This formulation allows for identification of the most problematic generating unit. Once it is identified, one or more new constraints shall be formulated to limit the set of contracts acceptable to the operator, as proposed for step 4 of the method. This procedure is explained in Sect. 3.2.1.

3.2.1 Addition of a New Constraint
In step 4 of the method it is proposed to add a new constraint (calculated in step 2) after each round of negotiations between market participants. This ensures that the power flow obtained in each round is closer to technical network feasibility. The constraint is not added directly to the optimization problem but is imposed on the market participants' negotiations: they simply need to take it into account while agreeing on new contractual positions. Contracts violating this constraint shall not be accepted by the operator. In the proposed method the constraint is formulated based on the identified most problematic unit, i.e. on the result of problem (2). It is thus known by how much the active power output setpoint needs to deviate from the desired contractual position in order to make the power flow network-feasible. The easiest way to get this exact information is to calculate the difference d(r) between the contractual position and the corresponding calculated optimal solution for the most problematic unit p in round r of negotiations
(d(r) = Σ_{k∈CN,p} P^k_{C,p}(r) − P_p(r)). Next, if d(r) ≠ 0, a new constraint on accepted contracts in the next round of negotiations (r + 1) shall be added for the most problematic unit. If d(r) > 0, the constraint takes the form Σ_{k∈CN,p} P^k_{C,p}(r + 1) ≤ P_p(r) − d(r), and Σ_{k∈CN,p} P^k_{C,p}(r + 1) ≥ P_p(r) + d(r) when d(r) < 0. Since it is guaranteed that the deviation for the most problematic unit is minimal, constraining its accepted contracts by d will drive the dispatch closer to feasibility. Once the constraint is added, the method returns to the end of step 4 – negotiations between peers.

3.3 Formulation (3) – Operator's Corrective Actions
Despite adopting a method that helps market participants agree on a dispatch resulting in technically feasible power flow (as in Sects. 3.1–3.2), it is still possible that market participants will be unable to find one in a considerable amount of time or rounds of negotiations. Thus, in this section the optimization problem from [13], which allows the operator to arbitrarily change generating units' setpoints to achieve network feasibility of the power flow while maximizing the use of bilateral balancing contracts, is cited. Similarly to Sect. 3.2, it is assumed that market participants trade and contract supply of active power only. What is more, it is assumed that each generating unit can change its contractual position, either by generating more or by reducing its output within its technical limits. Yet, we assume that such an action can be accomplished at a certain unit cost. This cost is known to the operator through offers submitted by power suppliers, prior to the balancing energy delivery, for changing their contractual positions. The discussed formulation is given in (3).

min  Σ_{i∈N^G} c^{G+/−}_{P,i} s^{G+/−}_{P,i}     (3a)

subject to:

Σ_{k∈CN,i} P^k_{C,i} − s^{G−}_{P,i} ≤ P_i ≤ Σ_{k∈CN,i} P^k_{C,i} + s^{G+}_{P,i},   ∀ i ∈ N^G     (3b)
P_i^min ≤ P_i ≤ P_i^max,   ∀ i ∈ N^G     (3c)
s^{G−/+}_{P,i} ≥ 0,   ∀ i ∈ N^G     (3d)
+ constraints (1b)–(1h)     (3e)

where: c^{G+/−}_{P,i} – positive cost (price) of violation of the upper/lower limit on generation of unit i for active power.
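The negotiation loop of the proposed method (accept contracts, check with Formulation (2), derive a constraint from the deviation d(r), renegotiate, and fall back to Formulation (3) after too many rounds) can be sketched as follows. The solver callbacks and data structures below are hypothetical stand-ins, not implementations of the OPF-based sub-problems; the sketch only illustrates the control flow.

```python
# High-level sketch of the proposed negotiation loop. solve_formulation_2
# and solve_formulation_3 stand in for the restricted OPF sub-problems
# (2) and (3); renegotiate models a round of bilateral negotiations that
# must respect the bound issued by the operator.

MAX_ROUNDS = 10  # "considerable amount" of rounds before corrective actions

def balance(contracts, solve_formulation_2, solve_formulation_3, renegotiate):
    for r in range(MAX_ROUNDS):
        result = solve_formulation_2(contracts)      # restricted OPF, Formulation (2)
        p = result.most_problematic_unit
        d = sum(contracts[p]) - result.dispatch[p]   # d(r): contracted minus feasible
        if abs(d) < 1e-6:
            return result.dispatch                   # contracts are network-feasible
        # Bound on unit p's accepted contracts for round r + 1 (Sect. 3.2.1).
        bound = ("<=", result.dispatch[p] - d) if d > 0 else (">=", result.dispatch[p] + d)
        contracts = renegotiate(contracts, p, bound)
    # Negotiations failed: the operator imposes a dispatch via Formulation (3).
    return solve_formulation_3(contracts)
```

With toy callbacks (e.g. a solver that simply caps a unit's output), the loop terminates as soon as the contracted totals coincide with a feasible dispatch.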
4 Assessment of the Method

In this paper I propose to add more freedom to the balancing energy market. As a result, this should add economic efficiency to the market, which should be maximized. Yet, the freedom is achieved through rounds of bilateral negotiations, whose number should be minimal. Thus, I propose to assess the performance of the method by considering two criteria: a measure of market efficiency and the number of iterations it takes to come to a consensus.

4.1 Measure of Market Effectiveness

The number of rounds of negotiations is fairly straightforward to calculate. However, things are different when it comes to market efficiency. One should keep in mind that efficiency is very subjective for each participant and differs from the simple social welfare function, as discussed previously. Thus, for the sake of assessing performance, let me formulate an effectiveness measure f^E that combines the subjective goals of participants with the objective total cost of generation, and which should be maximized. The measure is given in (4):

f^E = 2 f^{S/L} − c^G − c^A,     (4)
where f^{S/L} is the total amount of subjective benefits from making contracts between suppliers and consumers (sometimes referred to as a utility function), c^G is the total cost of generation, and c^A is the total cost of adjustments of units' operating points when Formulation (3) is used. Assuming that the subjective benefit of consumer j from making a bilateral contract with generating unit i equals the subjective benefit of i from making a contract with j, f^{S/L} is formulated as

f^{S/L} = Σ_{j∈N^L} Σ_{i∈N^G} a_{ji} P^D_{ji} = Σ_{j∈N^L} Σ_{i∈N^G} a_{ij} P^D_{ji},     (5)

where a_{ji} – benefit factor for delivering balancing power from i to j, quantifying how important the benefit is; P^D_{ji} – amount of delivered balancing power from i to j; N^L – set of indices of consumers. For the assessment it is assumed that the total generation cost of each unit is well known. Normally it can take any form, yet most often a quadratic formulation is used, as in (6), where a_i, b_i, c_i are cost coefficients and P_i is the active power output of unit i:

c^G = Σ_{i∈N^G} (a_i P_i² + b_i P_i + c_i)     (6)
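As a toy numerical check of (4)–(6), assume two consumers contract balancing power from unit 2 only. The benefit factors and delivered volumes below are invented for illustration (units simplified), while the cost triple for unit 2 is the one later given in Table 1; no operator corrective actions are assumed, so c_A = 0.

```python
# Toy evaluation of the effectiveness measure (4): f_E = 2*f_SL - c_G - c_A.
# Benefit factors a[j][i] and delivered powers P[j][i] are invented;
# (0.085, 1.2, 600.0) is the unit-2 cost triple from Table 1.

def f_SL(a, P):
    # (5): sum over consumers j and units i of a_ji * P_ji
    return sum(a[j][i] * P[j][i] for j in a for i in a[j])

def c_G(coeffs, Pgen):
    # (6): quadratic generation cost summed over units
    return sum(ai * p ** 2 + bi * p + ci for (ai, bi, ci), p in zip(coeffs, Pgen))

a = {"load5": {"g2": 0.7}, "load7": {"g2": 0.6}}     # hypothetical benefit factors
P = {"load5": {"g2": 90.0}, "load7": {"g2": 100.0}}  # delivered balancing power [MW]
c_A = 0.0  # no corrective actions by the operator, so no adjustment cost
f_E = 2 * f_SL(a, P) - c_G([(0.085, 1.2, 600.0)], [190.0]) - c_A
print(f_E)
```

The doubling of f^{S/L} in (4) reflects that each contract benefits both its supplier and its consumer, while each benefit coefficient is stored only once.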
4.2 Simulation Results

The proposed approach is tested on a nine-bus test system (presented in Fig. 1 and proposed in [16]) over three scenarios. The test system is deliberately chosen small, so that results are more evident and interpretable than on a larger test system.
In the tests we look at the measures described previously, i.e. market efficiency (f^E) and the number of negotiation rounds (max. Round). The resulting value of f^E is benchmarked against the result of the standard OPF problem (1), to check performance compared with currently used dispatching methods. All simulations were coded in Matlab and solved with the MATPOWER MIPS solver [17].
Fig. 1. Test system topology
The test system comprises nine buses, three of which are generator buses and three load buses. All generating units have technical minima equal to 10 MW, and the technical maxima of units 1, 2 and 3 are 250 MW, 300 MW and 270 MW respectively, giving a total generating capacity of 820 MW. The load is assumed constant and attached to buses 5, 7 and 9. The load at each of these buses, given as pairs (active demand, reactive demand), is: bus 5 – (90 MW, 30 MVAR); bus 7 – (100 MW, 35 MVAR); bus 9 – (125 MW, 50 MVAR), giving a total demand of (315 MW, 115 MVAR). The system also contains nine transmission lines; their ratings (maximum flow capacities) can be found in [16]. For the simulations it is assumed that the system topology, the locations and values of the load, and the costs of generation are constant. It is also assumed that the demanded load and the technical limits of generating units are known and guarded in step 0 of the method. What is more, I assume that the subjective benefit for unit i from making a contract with consumer j is symmetrical and equal to the benefit for consumer j, i.e. a_ij = a_ji. Apart from this, I also assume that after estimation of transmission losses, consumers are forced to contract their compensation evenly, i.e. in the discussed test system each consumer contracts exactly one third of the losses. Generation costs for each of the generating units are presented in Table 1; they correspond to the quadratic formulation given in (6). For simplicity it is assumed here that generating units sell energy at cost, meaning that the sell price equals the generation cost. Benefit coefficients are assumed as given in Table 2. These reflect both the benefit of the consumers and of the suppliers, meaning that the benefit should be summed only once into f^E.
Ensuring feasibility of power flow
127
Previous statements yield that the economic surplus of the market is produced only through the benefits. It is assumed here that each consumer wants to contract as much demand as possible with its preferred unit. From a generating unit's perspective, the unit accepts the contracts which bring it the most benefit, until its maximum capacity is reached. What is more, it is also assumed that the operator's IT system guards that the sum of contracts made for each unit falls within the unit's technical capacity, from minimum to maximum. For all of the scenarios, with the load as given previously, the objective of the standard OPF problem (1) is equal to 5,296.69 USD, which in terms of the benefit function equals f^E = −5,296.69 USD. This is the cost value for the optimal solution: P_1 = 89.7986 MW, P_2 = 134.3207 MW, P_3 = 94.1874 MW.

Table 1. Generation cost coefficients

             a_i [USD/MW²]   b_i [USD/MW]   c_i [USD]
Gen. Unit 1  0.1100          5.0000         150.0000
Gen. Unit 2  0.0850          1.2000         600.0000
Gen. Unit 3  0.1225          1.0000         335.0000
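With the Table 1 coefficients one can sanity-check the quoted OPF objective by solving a losses-free economic dispatch in closed form (equal marginal costs, 2 a_i P_i + b_i = λ). This is a deliberate simplification of the full OPF (1), ignoring the network entirely, so it only approximates the MATPOWER result quoted above.

```python
# Simplified economic dispatch with the Table 1 cost coefficients:
# minimize sum(a_i*P_i^2 + b_i*P_i + c_i) subject to sum(P_i) = 315 MW.
# With no binding bounds, the optimum equalizes marginal costs
# 2*a_i*P_i + b_i = lam. Losses and the network constraints of the full
# OPF (1) are ignored, so this only approximates the paper's result.

a = [0.1100, 0.0850, 0.1225]
b = [5.0, 1.2, 1.0]
c = [150.0, 600.0, 335.0]
demand = 315.0  # MW, total active demand of the test system

# Solve sum_i (lam - b_i) / (2*a_i) = demand for the marginal price lam.
lam = (demand + sum(bi / (2 * ai) for ai, bi in zip(a, b))) / sum(
    1 / (2 * ai) for ai in a
)
P = [(lam - bi) / (2 * ai) for ai, bi in zip(a, b)]
total_cost = sum(ai * p * p + bi * p + ci for ai, bi, ci, p in zip(a, b, c, P))
# P is roughly [86.56, 134.38, 94.06] MW, close to the OPF dispatch above;
# the bounds 10 MW <= P_i <= max are not binding here, so they are omitted.
# total_cost is about 5,216 USD, below the paper's 5,296.69 USD because
# losses and line limits are not modeled.
```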
Table 2. Benefit coefficients assumed in test cases [100 USD/MW]

          Consumer 1 (bus 5)          Consumer 2 (bus 7)          Consumer 3 (bus 9)
Scenario  Unit 1  Unit 2  Unit 3      Unit 1  Unit 2  Unit 3      Unit 1  Unit 2  Unit 3
S1        0.2     0.7     0.1         0.3     0.6     0.1         0.1     0.8     0.1
S2        0.2     0.1     0.7         0.3     0.1     0.6         0.1     0.1     0.8
S3        1       0       0           1       0       0           1       0       0
4.2.1 Numerical Results
In this section a brief description of each scenario is given, with the most important results presented in Table 3. In Scenario 1, each of the consumers finds the most benefit in making a bilateral balancing contract with generating unit 2, yet each to a slightly different extent. Thus, it is possible for the units to maximize their benefits and accept the contracts from which they can benefit the most. At first, most of the power was to be supplied by unit 2. Yet, this strongly violated the maximum flow capacity of the line adjacent to it. Therefore, significant changes in dispatch were required. In this scenario market participants agreed on a network-feasible solution in the third round of negotiations. During the course of the method two different units were identified as most problematic: in round 1, unit 2, and in round 2, unit 1. To guide the search, new constraints were imposed on their acceptable contracts. Results are shown in Table 3.
128
M. Drabecki
Scenario 2, however, is slightly different from the previous one. Here the preferred unit for all consumers was unit 3. Similarly to the previously considered example, the benefit coefficients differed, making it easy for the generating unit to decide which contracts to accept. This time, however, most of the problems with network feasibility resulted from transmission losses, which were not considered by the participants in the first round of balancing. The losses were estimated by solving (2), and an additional constraint was added on the most problematic unit's acceptable contracts. Market participants managed to find consensus in two rounds of negotiations. Results are presented in Table 3.

Scenario 3 differs significantly from both previously presented cases. This time all consumers wish to contract all energy supply from unit 1, without any will to compromise. After a considerable number of negotiation rounds (here assumed to be 10), market participants did not agree on a dispatch yielding a network-feasible power flow. Therefore, the operator used formulation (3) to arbitrarily adjust the operating points of the generating units. It was assumed that the prices c_{P,i}^{G+/-} were all equal to 100 for all units. They form the cost c_A, which is considered while calculating the value of f_E.

Table 3. Results of numerical examples (original and feasible contracts in MW)

Scenario 1, f_E = 12,484.40 USD, max. round = 3
              Gen. Unit 1              Gen. Unit 2              Gen. Unit 3
              Original   Feasible      Original   Feasible      Original   Feasible
Customer 1    3.3400     4.8505        83.3200    83.0896       3.3400     4.8505
Customer 2    3.3300     61.5815       93.3400    38.1096       3.3300     3.0996
Customer 3    3.3300     4.8405        118.3400   118.1096      3.3300     4.8405
Total [MW]    10.0000    71.2726       295.0000   239.3087      10.0000    12.7906
Additional    Yes: on Σ_{k∈CN,i} P^k_{C,1},  Yes: on Σ_{k∈CN,i} P^k_{C,2},  No
constraint?   71.28 (in round 2)       249 (in round 1)

Scenario 2, f_E = 9,901.26 USD, max. round = 2
Customer 1    3.3400     4.9968        3.3400     4.9968        83.3200    83.3200
Customer 2    3.3300     4.9868        28.3300    29.9868       68.3400    68.3400
Customer 3    3.3300     4.9868        3.3300     4.9868        118.3400   118.3400
Total [MW]    10.0000    14.9704       35.0000    39.9704       270.0000   270.0000
Additional    No                       Yes: on Σ_{k∈CN,i} P^k_{C,2},  No
constraint?                            14 (in round 2)

Scenario 3, f_E = 7,820.06 USD, max. round = 10+
Customer 1    3.3400     20.5085       83.3200    83.1551       3.3400     3.3333
Customer 2    3.3300     20.5085       93.3400    83.1551       3.3300     3.3333
Customer 3    3.3300     20.5085       118.3400   83.1551       3.3300     3.3333
Total [MW]    10.0000    61.5255       295.0000   249.4652      10.0000    10.0000
Additional    No                       Yes, without             No
constraint?                            satisfactory results
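The round-based procedure used in these scenarios can be sketched as a toy loop. This is a heavily simplified illustration under assumed data: `feasibility_check`, `renegotiate` and the line capacities below are hypothetical stand-ins for the paper's optimization problems (2) and (3) and the real network model.

```python
# Toy sketch of the operator-guided, round-based balancing negotiation.
LINE_CAP = {1: 75.0, 2: 250.0, 3: 300.0}  # assumed per-unit export limits [MW]

def feasibility_check(contracts, caps):
    """Return (is_feasible, most problematic unit); stand-in for problem (2)."""
    totals = {u: sum(c.values()) for u, c in contracts.items()}
    worst = max(totals, key=lambda u: totals[u] - caps[u])
    return totals[worst] <= caps[worst] + 1e-9, worst

def renegotiate(contracts, unit, cap):
    """Scale the problematic unit's contracts down to its cap and shift the
    excess to the least-loaded unit (a stand-in for bilateral renegotiation)."""
    total = sum(contracts[unit].values())
    spare = min(contracts, key=lambda u: sum(contracts[u].values()))
    for cust, volume in contracts[unit].items():
        moved = volume / total * (total - cap)
        contracts[unit][cust] -= moved
        contracts[spare][cust] += moved
    return contracts

def negotiate(contracts, caps, max_rounds=10):
    """Return the round in which consensus was reached, or None when the
    operator must arbitrate via problem (3)."""
    for rnd in range(1, max_rounds + 1):
        ok, unit = feasibility_check(contracts, caps)
        if ok:
            return rnd
        contracts = renegotiate(contracts, unit, caps[unit])
    return None

# Scenario-1-like starting point: every consumer prefers unit 2.
contracts = {1: {"C1": 3.34, "C2": 3.33, "C3": 3.33},
             2: {"C1": 83.32, "C2": 93.34, "C3": 118.34},
             3: {"C1": 3.34, "C2": 3.33, "C3": 3.33}}
print(negotiate(contracts, LINE_CAP))  # prints 2: consensus in round 2
```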
Ensuring feasibility of power flow
129
The results show that, under the assumptions made, adding more freedom to the balancing energy market might bring significant benefit as opposed to the standard dispatching method, e.g. solving the regular Optimal Power Flow. As shown in Table 3, this is the case for all scenarios. Not surprisingly, it is more beneficial to the market when participants manage to decide on contracts alone (as in Scenarios 1 and 2) rather than when these contracts are arbitrarily changed by the operator, as in Scenario 3. Moreover, the results show that using the proposed method it is possible to find a network-feasible unit dispatch for every discussed scenario, one which maximizes the use of bilateral balancing energy contracts made between participants.
5 Conclusions and Discussion
Power systems with high penetration of distributed energy resources are nowadays becoming of interest to both academia and industry. Balancing of energy in such a system should ideally be performed via peer-to-peer trading between energy market participants, as the participants should be given direct incentives both to develop DER infrastructure and to actively control it to maximize their subjective profit. Therefore, in this paper a multi-step method for assuring network feasibility of the power flow resulting from the generating unit dispatch obtained via bilateral, peer-to-peer balancing contracts is presented and analyzed. The proposed method allows market participants to make bilateral contracts between consumers and suppliers for the provision of necessary amounts of balancing energy. The method assumes that market participants do not have sufficient knowledge of the technical situation in the power system and thus do not know a priori what dispatch to agree on. Therefore, it is assumed that it is the system operator's responsibility to guide participants in the search for a network-feasible dispatch. The method returns the necessary information on the direction of this search as a result of optimization problem (2) with heuristically added constraints on accepted bilateral contracts. If, despite having this information, market participants are unable to agree on technically feasible contracts, the method foresees the ability for the operator to arbitrarily adjust those contracts using optimization problem (3). Yet, when such a situation arises, suppliers are to be paid for adjusting their contractual positions. The basis of the proposed method is formed by the two optimization problems given in formulations (2) and (3), which aim to maximize the use of such contracts while always guaranteeing feasibility of the resulting power flow. The proposed approach was tested in simulations over three scenarios.
All tests showed that allowing free trade on the balancing energy market may bring more benefits to the market than the currently used centralized balancing options, namely those in which the system operator is always a party to any transaction made on the balancing market. Moreover, it has been shown in the paper that it is possible to derive methods which can satisfactorily help to achieve technical feasibility of the power flow resulting from the dispatch of generating units so obtained.
Some perspectives for further research can be identified. First of all, one may consider designing the architecture of an operator's IT system for accepting contracts, such as assumed in step 0 of the described method. Another perspective may be the development of new market efficiency measures or new models for assuring technical feasibility of power flow. They could be developed based on different optimization, modelling and simulation problems, possibly including the development of agent systems. When all of the above issues are deeply considered and followed up by academic communities worldwide, it will hopefully be possible to establish highly effective bilateral balancing energy markets that may give incentives for a transition towards high penetration of green energy. The specific design of these markets shall also be given thorough consideration, to make room for all their participants regardless of their size and power supply capabilities.
References
1. Wang, Q., et al.: Review of real-time electricity markets for integrating distributed energy resources and demand response. Appl. Energy 138, 695–706 (2015)
2. Guerrero, J., et al.: Towards a transactive energy system for integration of distributed energy resources: home energy management, distributed optimal power flow, and peer-to-peer energy trading. Renew. Sustain. Energy Rev. 132, 110000 (2020)
3. Lee, W., et al.: Optimal operation strategy for community-based prosumers through cooperative P2P trading. In: 2019 IEEE Milan PowerTech. IEEE (2019)
4. Guerrero, J., Archie, C., Verbič, G.: Decentralized P2P energy trading under network constraints in a low-voltage network. IEEE Trans. Smart Grid 10(5), 5163–5173 (2018)
5. Pasha, A.M., et al.: A utility maximized demand-side management for autonomous microgrid. In: 2018 IEEE Electrical Power and Energy Conference (EPEC). IEEE (2018)
6. Okawa, Y., Toru, N.: Distributed optimal power management via negawatt trading in real-time electricity market. IEEE Trans. Smart Grid 8(6), 3009–3019 (2017)
7. Zhang, Y., Chow, M.Y.: Distributed optimal generation dispatch considering transmission losses. In: 2015 North American Power Symposium (NAPS), Charlotte (2015)
8. Kar, S., Hug, G.: Distributed robust economic dispatch: a consensus + innovations approach. In: 2012 IEEE Power and Energy Society General Meeting, San Diego (2012)
9. Lin, C., Lin, S.: Distributed optimal power flow with discrete control variables of large distributed power systems. IEEE Trans. Power Syst. 23(3) (2008)
10. Garrity, T.F.: Innovation and trends for future electric power systems. In: 2009 Power Systems Conference, Clemson (2009)
11. Machowski, J., Lubosny, Z., Bialek, J.W., Bumby, J.R.: Power System Dynamics: Stability and Control. Wiley (2020)
12. Drabecki, M., Toczyłowski, E.: Comparison of three approaches to the security constrained unit commitment problem. Zeszyty Naukowe Wydziału Elektrotechniki i Automatyki Politechniki Gdańskiej (62) (2019). http://yadda.icm.edu.pl/baztech/element/bwmeta1.element.baztech-0345a0bb-f574-41c1-be5d-71f50b8e060c?q=5838bcf6-a5ee-44ec-93f8-ea79d2cd037b$1&qt=IN_PAGE
13. Drabecki, M., Toczyłowski, E.: Obtaining feasibility of power flows in the deregulated electricity market environment. Przegląd Elektrotechniczny 95 (2019)
14. Zhu, J.: Optimization of Power System Operation. Wiley, Hoboken (2015)
15. Drabecki, M.: A method for enhancing power system's steady-state voltage stability level by considering active power optimal dispatch with linear grid models. Zeszyty Naukowe Wydziału Elektrotechniki i Automatyki Politechniki Gdańskiej (62) (2019). http://yadda.icm.edu.pl/baztech/element/bwmeta1.element.baztech-2eb9eb53-193d-4970-827d-4dba236e0936?q=5838bcf6-a5ee-44ec-93f8-ea79d2cd037b$2&qt=IN_PAGE
16. Chow, J.H. (ed.): Time-Scale Modelling of Dynamic Networks with Applications to Power Systems. Lecture Notes in Control and Information Sciences, vol. 26, pp. 59–93. Springer, Berlin (1982)
17. Zimmerman, R.D., Murillo-Sanchez, C.E., Thomas, R.J.: MATPOWER: steady-state operations, planning and analysis tools for power systems research and education. IEEE Trans. Power Syst. 26(1) (2011)
Sustainable Optimization, Metaheuristics and Computing for Expert System
The Results of a Compromise Solution, Which Were Obtained on the Basis of the Method of Uncertain Lagrange Multipliers to Determine the Influence of Design Factors of the Elastic-Damping Mechanism in the Tractor Transmission Sergey Senkevich(&), Ekaterina Ilchenko, Aleksandr Prilukov, and Mikhail Chaplygin Federal Scientific Agroengineering Center VIM, 1st Institute pas. 5, Moscow 109428, Russia [email protected], [email protected], [email protected], [email protected]
Abstract. The article is devoted to the search for a compromise solution for finding the optimal parameters of the Elastic Damping Mechanism (EDM) in the 14 kN class tractor transmission. The tractor was part of three different machine-tractor units and performed the main agricultural operations: plowing, cultivation and sowing. The task was to define a single function (compromise solution) which can be used to describe the processes in the transmission when performing these operations. The Lagrange multiplier method was used to obtain the compromise solution. It was necessary to create one general mathematical model out of three mathematical models, which should correctly reflect the nature of the ongoing processes. It was necessary to determine the Lagrange multipliers λ1, λ2…λm for this purpose. All calculations were made in the Maple and MATLAB software environments. A solution to the compromise problem was found. The extremum of the «transmission transparency degree» function was found based on Lagrange multipliers. A compromise model was obtained that expresses the influence of the main EDM parameters on the «transmission transparency degree». The values of the factors included in the resulting Lagrange function were found.

Keywords: Method of uncertain Lagrange multipliers · Elastic damping mechanism · Transmission · Optimal parameters
1 Introduction Optimizing the operation of certain elements and replacing outdated mechanisms with new developments can significantly improve performance. A large number of studies exist in the field of improving mobile technology.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 135–144, 2021. https://doi.org/10.1007/978-3-030-68154-8_14
One of the significant elements requiring optimization is the transmission of mobile power equipment. Improvements here can significantly influence the efficiency of resource use and control precision. These questions are being investigated all over the world. For example, some researchers have created a dynamic transmission model for improved gear shifting, analyzed gear-shifting processes, and developed coordinated strategies for managing these processes with the switching of multiple clutches while the tractor operates in the field [1]. Other scientists developed a numerical model of the dynamic characteristics of a new transmission with multistage face gears as the main component [2]. A simulation model of a gear synchronization unit was created which differs from other models in its simplicity and quality, and which can be computed in real time [3]. A large number of analyses of different transmission types exist, for example, the analysis of the control of a hybrid automated manual transmission and a hybrid dual-clutch transmission [4]. The dynamic analysis of a vehicle with three different suspension configurations is also of interest [5]: with a spring buffer; with a spring buffer in combination with stabilizer bars; and with a spring damper in combination with a roll-plane hydraulically linked suspension. Not long ago, linear stability analyses for groups of vehicles were conducted to reduce fuel consumption and increase the fuel reserve [6]. Analysis and modeling of various systems provide the basis for creating new transmissions, for example, the compact 18-speed epicyclic transmission for a small vehicle presented in [7]. Using process modeling, the authors of [8] showed that their optimal power distribution algorithms and optimal-mode determination algorithms are effective.
Other authors presented a new roll-resistant hydraulically linked suspension with two accumulators for each fluid line [9]. Correct computation of roll-resistant hydraulically linked suspensions is a key to a high-quality control system [10]. Reducing dynamic loads on drive shafts has a positive effect on system reliability [11]. Effective interaction of the chassis with the ground also helps to increase productivity and reduce fuel consumption [12, 13]. The review shows the relevance of research on the optimization of mobile machine transmissions. These questions, which are investigated all over the world, allow the parameters of a mobile tractor to be significantly improved.
2 Purpose of Research
Our previous works [14–17] were aimed at finding the optimal parameters of the Elastic Damping Mechanism (EDM) in the 14 kN class tractor transmission; the conducted research is described in detail in those papers. The indicator P («transmission transparency degree») is proposed to assess the protective qualities of the mechanism, defined as the ratio of the current amplitude of the engine speed oscillations to its maximum value [14, 15, 18]. If P = 1, the reduction gear is absolutely «transparent» and the engine is not protected from fluctuations in the traction load (this happens in serial transmissions). If P = 0, the reduction gear is absolutely «not transparent» and will completely absorb vibrations transmitted to the engine. Research has been conducted for the following agricultural operations: plowing, cultivation and seeding. The purpose of this research is to find the conditional extremum of the «transmission transparency degree» function with respect to the restrictions, as a result of a compromise solution based on Lagrange multipliers. The following tasks were solved to achieve this goal: 1. Obtaining a compromise model that expresses the influence of the main EDM parameters on the «degree of transparency of the transmission»; 2. Finding the values of the factors included in the resulting Lagrange function.
3 Materials and Methods
The main restrictions for conducting this research were chosen based on our previous research [14, 15, 17, 18]. The names and designations of the elastic damping mechanism factors are given in Table 1.

Table 1. Names and designations of the factors of the elastic damping mechanism installed in the tractor transmission.

N  Factor name                                       Designation    Code
1  Throttle cross-sectional area                     S_th, m²       x1
2  Volume of the hydropneumatic accumulator (HPA)    V_hpa, m³      x2
3  Air pressure in the HPA                           P_a, Pa        x3
4  Moment of inertia of the additional load          J_th, kg·m²    x4
5  Oscillation frequency of the traction load        f, Hz          x5
The method of Lagrange multipliers was used to create a compromise solution based on the three previously obtained models [14, 15, 18]. The method consists of the following.
First, to find a conditional extremum, the Lagrange function must be composed. For a function of n variables f(x1, x2, …, xn) and m coupling equations (with n > m) it has the general form (1) [19]:

F(x1, x2, …, xn, λ1, λ2, …, λm) = f + λ1·φ1 + λ2·φ2 + … + λm·φm,   (1)

where λ1, λ2, …, λm are the Lagrange multipliers, f is the function selected as the main one, and φ1, φ2, …, φm are the functions selected as constraints.
138
S. Senkevich et al.
Restrictions are presented as equalities in our study; however, the Lagrangian method also allows constraints given by inequalities and other functions.
Second, the necessary conditions for the extremum are set by the system of Eqs. (2), consisting of the partial derivatives of Eq. (1) and the constraints, all equated to zero; stationary points are determined from this system of equations [20]:

∂F/∂xi = 0, (i = 1 … n);   φj = 0, (j = 1 … m).   (2)
The presence of a conditional extremum can be determined on this basis. The sign of d²F provides a sufficient condition that can be used to find out the nature of the extremum: if d²F > 0 at a stationary point, the function f(x1, x2, …, xn) has a conditional minimum at this point; if d²F < 0, a conditional maximum. A compromise solution had to be found in this study: to create one general mathematical model out of three mathematical models, which should correctly reflect the nature of the ongoing processes. It was necessary to determine the Lagrange multipliers λ1, λ2, …, λm for this purpose.
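As a toy illustration of conditions (2) (not from the paper): for f(x, y) = x·y with the single coupling equation φ(x, y) = x + y − 2 = 0, system (2) reads y + λ = 0, x + λ = 0, x + y − 2 = 0, giving the stationary point x = y = 1 with λ = −1. A finite-difference check confirms the stationarity of F there:

```python
# Finite-difference check that (x, y, lam) = (1, 1, -1) is a stationary point
# of the Lagrange function F = f + lam*phi for the toy problem
# f(x, y) = x*y subject to phi(x, y) = x + y - 2 = 0.
def F(x, y, lam):
    return x * y + lam * (x + y - 2)

h = 1e-6
x, y, lam = 1.0, 1.0, -1.0
dFdx = (F(x + h, y, lam) - F(x - h, y, lam)) / (2 * h)  # ~ y + lam = 0
dFdy = (F(x, y + h, lam) - F(x, y - h, lam)) / (2 * h)  # ~ x + lam = 0
print(dFdx, dFdy)  # both approximately 0; phi(1, 1) = 0 also holds
```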
4 Results and Discussion
All models are provided below. Model (3) was taken as the main function f(x1, x2, …, x5):

y1(x1, …, x5) = 0.578 + 2.29×10⁻³x1 − 3.72×10⁻³x2 + 0.053x3 + 4.51×10⁻³x4 + 0.021x5 − 1.34×10⁻³x1x2 − 3.14×10⁻³x1x3 − 2.41×10⁻³x1x4 + 3.32×10⁻⁴x1x5 − 1.25×10⁻³x2x3 + 1.37×10⁻³x2x4 − 0.044×10⁻³x2x5 − 6.04×10⁻⁵x3x4 − 0.035x3x5 + 8.09×10⁻³x4x5 + 0.03x1² + 0.03x2² + 0.031x3² + 0.03x4² + 0.095x5².   (3)

Models (4) and (5) were taken as the conditions and restrictions:

y2(x1, …, x5) = 0.722 − 6.325×10⁻³x1 − 6.657×10⁻³x2 + 0.033x3 − 5.526×10⁻³x4 − 3.072×10⁻³x5 + 6.981×10⁻³x1x2 + 6.851×10⁻³x1x3 + 5.709×10⁻³x1x4 − 7.045×10⁻³x1x5 + 8.462×10⁻³x2x3 + 7.245×10⁻³x2x4 − 5.554×10⁻³x2x5 + 7.115×10⁻³x3x4 + 2.725×10⁻³x3x5 − 6.826×10⁻³x4x5 + 2.905×10⁻³x1² + 2.466×10⁻³x2² − 7.031×10⁻³x3² + 3.445×10⁻³x4² + 0.038x5².   (4)

y3(x1, …, x5) = 0.6212 + 0.06538x3 + 0.06613x5 + 0.04968x3² − 0.06994x1x2 + 0.04868x2x4 − 0.03494x2x5 − 0.04581x3x5.   (5)
All further calculations were made in the Maple and MATLAB software environments for convenience. In this case the Lagrange function is Eq. (6) in general form:

L(x1, …, x5, λ2, λ3) = y1(x1…x5) + λ2·y2(x1…x5) + λ3·y3(x1…x5),   (6)

which takes the form (7) when Eqs. (3), (4) and (5) are substituted into it:

L := 0.578 + 2.29×10⁻³x1 − 3.72×10⁻³x2 + 0.053x3 + 4.51×10⁻³x4 + 0.021x5 − 1.34×10⁻³x1x2 − 3.14×10⁻³x1x3 − 2.41×10⁻³x1x4 + 3.32×10⁻⁴x1x5 − 1.25×10⁻³x2x3 + 1.37×10⁻³x2x4 − 0.044×10⁻³x2x5 − 6.04×10⁻⁵x3x4 − 0.035x3x5 + 8.09×10⁻³x4x5 + 0.03x1² + 0.03x2² + 0.031x3² + 0.03x4² + 0.095x5² + λ2·(0.722 − 6.325×10⁻³x1 − 6.657×10⁻³x2 + 0.033x3 − 5.526×10⁻³x4 − 3.072×10⁻³x5 + 6.981×10⁻³x1x2 + 6.851×10⁻³x1x3 + 5.709×10⁻³x1x4 − 7.045×10⁻³x1x5 + 8.462×10⁻³x2x3 + 7.245×10⁻³x2x4 − 5.554×10⁻³x2x5 + 7.115×10⁻³x3x4 + 2.725×10⁻³x3x5 − 6.826×10⁻³x4x5 + 2.905×10⁻³x1² + 2.466×10⁻³x2² − 7.031×10⁻³x3² + 3.445×10⁻³x4² + 0.038x5²) + λ3·(0.6212 + 0.06538x3 + 0.06613x5 + 0.04968x3² − 0.06994x1x2 + 0.04868x2x4 − 0.03494x2x5 − 0.04581x3x5).   (7)

Partial derivatives of Eq. (7) with respect to the variables x1, x2, x3, x4, x5 were taken to find the extrema of the Lagrange function, and were then equated to zero. The resulting equations, together with the constraint Eqs. (4) and (5) equated to zero, constitute a system of equations; solving it, all the variables x1, …, x5 were expressed in terms of λ2 and λ3 (denoted L2 and L3 in the program text) [21]. Next, the values x1…x5 were inserted into the main equation and the constraint equations, and then into the Lagrange function. This part of the calculations was performed in the Maple software environment (Fig. 1).
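This workflow can be cross-checked numerically (a sketch, with coefficients transcribed from models (3)–(5) and the optimum and multipliers reported in Table 2): evaluating y1, y2, y3 at that point and combining them with λ2 = 1 and λ3 = −1 reproduces the compromise value L ≈ 0.68.

```python
# Evaluate the three response models (Eqs. (3)-(5)) at the optimum reported
# in Table 2 and combine them with lam2 = 1, lam3 = -1 as in Eq. (6).
x1, x2, x3, x4, x5 = -0.1258, 0.1496, 0.4461, 0.0435, 0.1184

y1 = (0.578 + 2.29e-3*x1 - 3.72e-3*x2 + 0.053*x3 + 4.51e-3*x4 + 0.021*x5
      - 1.34e-3*x1*x2 - 3.14e-3*x1*x3 - 2.41e-3*x1*x4 + 3.32e-4*x1*x5
      - 1.25e-3*x2*x3 + 1.37e-3*x2*x4 - 0.044e-3*x2*x5 - 6.04e-5*x3*x4
      - 0.035*x3*x5 + 8.09e-3*x4*x5
      + 0.03*x1**2 + 0.03*x2**2 + 0.031*x3**2 + 0.03*x4**2 + 0.095*x5**2)
y2 = (0.722 - 6.325e-3*x1 - 6.657e-3*x2 + 0.033*x3 - 5.526e-3*x4 - 3.072e-3*x5
      + 6.981e-3*x1*x2 + 6.851e-3*x1*x3 + 5.709e-3*x1*x4 - 7.045e-3*x1*x5
      + 8.462e-3*x2*x3 + 7.245e-3*x2*x4 - 5.554e-3*x2*x5 + 7.115e-3*x3*x4
      + 2.725e-3*x3*x5 - 6.826e-3*x4*x5
      + 2.905e-3*x1**2 + 2.466e-3*x2**2 - 7.031e-3*x3**2 + 3.445e-3*x4**2
      + 0.038*x5**2)
y3 = (0.6212 + 0.06538*x3 + 0.06613*x5 + 0.04968*x3**2
      - 0.06994*x1*x2 + 0.04868*x2*x4 - 0.03494*x2*x5 - 0.04581*x3*x5)

lam2, lam3 = 1.0, -1.0
L = y1 + lam2 * y2 + lam3 * y3
print(round(L, 3))  # close to the reported compromise value 0.68
```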
Fig. 1. Maple software environment.
Two small programs were written to find Lagrange coefficients in the MATLAB software environment (Fig. 2).
Fig. 2. MATLAB software environment.
The results of the calculations of the variables x1…x5, the coefficients λ2 and λ3, and the resulting value of the Lagrange function are presented in Table 2.

Table 2. Results of calculating the variables included in the general Lagrange function.

No.  L     x1       x2       x3       x4       x5       λ2   λ3
1    0.35  −0.0872  0.0528   −1.2677  −0.0067  −1.6901  −1   1
2    0.68  −0.1258  0.1496   0.4461   0.0435   0.1184   1    −1
Analysis of Table 2 shows that the best value of the Lagrange function is given in the first row of the table. However, these values are not applicable because the factors x3 and x5 lie outside the scope of the research. Therefore, the values of row 2 are assumed to be the optimal values. The Lagrange compromise model (8) was obtained by substituting the optimal values:

L = 0.032905x1² + 0.032466x2² − 0.025711x3² + 0.033445x4² + 0.133x5² − 0.004035x1 + 0.075585x1x2 + 0.003715x1x3 + 0.003304x1x4 − 0.0067131x1x5 + 0.007213x2x3 − 0.040062x2x4 + 0.073386x2x5 + 0.00977963x3x4 + 0.01081x3x5 + 0.001259x4x5 + 0.6788 − 0.010373x2 + 0.02062x3 − 0.001013x4 − 0.048202x5.   (8)

Graphical dependencies of the Lagrange function are constructed using Eq. (9), with the factors not shown on the graph fixed at zero in function (8). Figure 3 shows the surface plot of the Lagrange function depending on the variables x1 and x5, obtained using Eq. (9):
Fig. 3. Graph of the Lagrange function dependence on variables x1 and x5
L(x1, x5) = 0.6788 − 0.004035x1 − 0.048202x5 + 0.032905x1² + 0.133x5² − 0.0067131x1x5.   (9)
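Eq. (9) is Eq. (8) restricted to the slice x2 = x3 = x4 = 0, which can be verified mechanically (coefficients transcribed from (8) and (9)):

```python
def L8(x1, x2, x3, x4, x5):
    # Compromise model, Eq. (8)
    return (0.032905*x1**2 + 0.032466*x2**2 - 0.025711*x3**2 + 0.033445*x4**2
            + 0.133*x5**2 - 0.004035*x1 + 0.075585*x1*x2 + 0.003715*x1*x3
            + 0.003304*x1*x4 - 0.0067131*x1*x5 + 0.007213*x2*x3
            - 0.040062*x2*x4 + 0.073386*x2*x5 + 0.00977963*x3*x4
            + 0.01081*x3*x5 + 0.001259*x4*x5 + 0.6788 - 0.010373*x2
            + 0.02062*x3 - 0.001013*x4 - 0.048202*x5)

def L9(x1, x5):
    # Surface plotted in Fig. 3, Eq. (9)
    return (0.6788 - 0.004035*x1 - 0.048202*x5
            + 0.032905*x1**2 + 0.133*x5**2 - 0.0067131*x1*x5)

# The two expressions agree wherever x2 = x3 = x4 = 0:
for u in (-1.0, 0.0, 0.5, 1.0):
    for v in (-1.0, 0.0, 0.5, 1.0):
        assert abs(L8(u, 0.0, 0.0, 0.0, v) - L9(u, v)) < 1e-12
print("Eq. (9) matches Eq. (8) on the slice x2 = x3 = x4 = 0")
```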
5 Conclusions
A compromise solution to this problem was found in this study. The extremum of the «transmission transparency degree» function was found using Lagrange multipliers. A compromise model was obtained that expresses the influence of the main EDM parameters on the «transmission transparency degree», and the values of the factors included in the resulting Lagrange function were found. The value of the desired function as the solution to the compromise problem is L = 0.68; the factor values included in the Lagrange function are x1 = −0.1258, x2 = 0.1496, x3 = 0.4461, x4 = 0.0435, x5 = 0.1184. The natural values are determined by the formulas described in detail in [14]; they are: S_th = 2.285×10⁻⁴ m², V_hpa = 4.030×10⁻³ m³, P_a = 4.446×10⁵ Pa, J_th = 4.740×10⁻³ kg·m², f = 0.940 Hz.
Acknowledgments. The team of authors expresses gratitude to the organizers of the ICO'2020 Conference, Thailand, and personally to Dr. Pandian Vasant. The authors are grateful to the anonymous referees for their helpful comments.
References
1. Li, B., Sun, D., Hu, M., Zhou, X., Liu, J., Wang, D.: Coordinated control of gear shifting process with multiple clutches for power-shift transmission. Mech. Mach. Theory 140, 274–291 (2019). https://doi.org/10.1016/j.mechmachtheory.2019.06.009
2. Chen, X., Hu, Q., Xu, Z., Zhu, C.: Numerical modeling and dynamic characteristics study of coupling vibration of multistage face gears planetary transmission. Mech. Sci. 10, 475–495 (2019). https://doi.org/10.5194/ms-10-475-2019
3. Kirchner, M., Eberhard, P.: Simulation model of a gear synchronisation unit for application in a real-time HiL environment. Veh. Syst. Dyn. 55(5), 668–680 (2017). https://doi.org/10.1080/00423114.2016.1277025
4. Guercioni, G.R., Vigliani, A.: Gearshift control strategies for hybrid electric vehicles: a comparison of powertrains equipped with automated manual transmissions and dual-clutch transmissions. Proc. Inst. Mech. Eng. Part D J. Autom. Eng. 233(11), 2761–2779 (2019). https://doi.org/10.1177/0954407018804120
5. Zhu, S., Xu, G., Tkachev, A., Wang, L., Zhang, N.: Comparison of the road-holding abilities of a roll-plane hydraulically interconnected suspension system and an anti-roll bar system. Proc. Inst. Mech. Eng. Part D J. Autom. Eng. 231(11), 1540–1557 (2016). https://doi.org/10.1177/0954407016675995
6. Sau, J., Monteil, J., Bouroche, M.: State-space linear stability analysis of platoons of cooperative vehicles. Transportmetrica B Transp. Dyn. 1–26 (2017). https://doi.org/10.1080/21680566.2017.1308846
7. Kim, J.: Design of a compact 18-speed epicyclic transmission for a personal mobility vehicle. Int. J. Automot. Technol. 17(6), 977–982 (2016). https://doi.org/10.1007/s12239-016-0095-9
8. Park, T., Lee, H.: Optimal supervisory control strategy for a transmission-mounted electric drive hybrid electric vehicle. Int. J. Automot. Technol. 20(4), 663–677 (2019). https://doi.org/10.1007/s12239-019-0063-2
9. Chen, S., Zhang, B., Li, B., Zhang, N.: Dynamic characteristics analysis of vehicle incorporating hydraulically interconnected suspension system with dual accumulators. Shock Vib. 2018, 1–5 (2018). https://doi.org/10.1155/2018/6901423
10. Ding, F., Zhang, N., Liu, J., Han, X.: Dynamics analysis and design methodology of roll-resistant hydraulically interconnected suspensions for tri-axle straight trucks. J. Franklin Inst. 353(17), 4620–4651 (2016). https://doi.org/10.1016/j.jfranklin.2016.08.016
11. Kuznetsov, N.K., Iov, I.A., Iov, A.A.: Reducing of dynamic loads of excavator actuators. In: Journal of Physics: Conference Series, vol. 1210, no. 1, p. 012075. IOP Publishing (2019). https://doi.org/10.1088/1742-6596/1210/1/012075
12. Ziyadi, M., Ozer, H., Kang, S., Al-Qadi, I.L.: Vehicle energy consumption and an environmental impact calculation model for the transportation infrastructure systems. J. Clean. Prod. 174, 424–436 (2018). https://doi.org/10.1016/j.jclepro.2017.10.292
13. Melikov, I., Kravchenko, V., Senkevich, S., Hasanova, E., Kravchenko, L.: Traction and energy efficiency tests of oligomeric tires for category 3 tractors. In: IOP Conference Series: Earth and Environmental Science, vol. 403, p. 012126 (2019). https://doi.org/10.1088/1755-1315/403/1/012126
14. Senkevich, S., Kravchenko, V., Duriagina, V., Senkevich, A., Vasilev, E.: Optimization of the parameters of the elastic damping mechanism in class 1.4 tractor transmission for work in the main agricultural operations. In: Advances in Intelligent Systems and Computing, pp. 168–177 (2018). https://doi.org/10.1007/978-3-030-00979-3_17
15. Senkevich, S.E., Sergeev, N.V., Vasilev, E.K., Godzhaev, Z.A., Babayev, V.: Use of an elastic-damping mechanism in the tractor transmission of a small class of traction (14 kN): theoretical and experimental substantiation. In: Handbook of Advanced Agro-Engineering Technologies for Rural Business Development, pp. 149–179. IGI Global, Hershey (2019). https://doi.org/10.4018/978-1-5225-7573-3.ch006
16. Senkevich, S., Duriagina, V., Kravchenko, V., Gamolina, I., Pavkin, D.: Improvement of the numerical simulation of the machine-tractor unit functioning with an elastic-damping mechanism in the tractor transmission of a small class of traction (14 kN). In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072, pp. 204–213. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33585-4_20
17. Senkevich, S.E., Lavrukhin, P.V., Senkevich, A.A., Ivanov, P.A., Sergeev, N.V.: Improvement of traction and coupling properties of the small class tractor for grain crop sowing by means of the hydropneumatic damping device. In: Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Energy-Saving Technologies for Environmentally-Friendly Agricultural Development, pp. 1–27. IGI Global, Hershey (2020). https://doi.org/10.4018/978-1-5225-9420-8.ch001
18. Senkevich, S., Kravchenko, V., Lavrukhin, P., Ivanov, P., Senkevich, A.: Theoretical study of the effect of an elastic-damping mechanism in the tractor transmission on a machine-tractor unit performance while sowing. In: Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering, pp. 423–463. IGI Global, Hershey (2020). https://doi.org/10.4018/978-1-7998-1216-6.ch017
19. Nocedal, J., Wright, S.: Numerical Optimization, p. 664. Springer, Heidelberg (2006)
20. Härdle, W., Simar, L.: Applied Multivariate Statistical Analysis, p. 580. Springer, Berlin (2015)
21. Malthe-Sorenssen, A.: Elementary Mechanics Using Matlab: A Modern Course Combining Analytical and Numerical Techniques, p. 590. Springer, Heidelberg (2015)
Multiobjective Lévy-Flight Firefly Algorithm for Multiobjective Optimization Somchai Sumpunsri, Chaiyo Thammarat, and Deacha Puangdownreong(&) Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem, Bangkok 10160, Thailand [email protected], {chaiyot,deachap}@sau.ac.th
Abstract. The firefly algorithm (FA) was first proposed during 2008–2009 as one of the powerful population-based metaheuristic optimization techniques for solving continuous and combinatorial optimization problems. The FA has been proven on, and applied to, various real-world problems, mostly in a single-objective optimization manner. However, many real-world problems are typically formulated as multiobjective optimization problems with complex constraints. In this paper, the multiobjective Lévy-flight firefly algorithm (mLFFA) is developed for multiobjective optimization. The proposed mLFFA is validated against four standard multiobjective test functions to demonstrate its effectiveness. The simulation results show that the proposed mLFFA algorithm is more efficient than well-known algorithms from the literature, including the vector evaluated genetic algorithm (VEGA), the non-dominated sorting genetic algorithm II (NSGA-II), differential evolution for multiobjective optimization (DEMO) and the multiobjective multipath adaptive tabu search (mMATS).

Keywords: Lévy-flight firefly algorithm · Metaheuristic optimization search techniques · Multiobjective optimization
1 Introduction
In the optimization context, many real-world optimization problems usually consist of many objectives which conflict with each other [1, 2]. This leads to a trade-off phenomenon among the objectives of interest, and it makes multiobjective problems much more difficult and complex than single-objective problems. A multiobjective problem often possesses multiple optimal solutions (or non-dominated solutions) forming the so-called Pareto front [1, 2]. The challenge is how to obtain a smooth Pareto front containing a set of optimal solutions for all objective functions. Following the literature, multiobjective optimization problems can be successfully solved by some powerful metaheuristic optimization search techniques, for example, the vector evaluated genetic algorithm (VEGA) [3], the non-dominated sorting genetic algorithm II (NSGA-II) [4] and differential evolution for multiobjective optimization (DEMO) [5].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 145–153, 2021. https://doi.org/10.1007/978-3-030-68154-8_15
S. Sumpunsri et al.
During 2008–2009, the firefly algorithm (FA) was first proposed by Yang [6], based on the flashing behavior of fireflies, with random numbers drawn from the uniform distribution used to generate feasible solutions. The FA has been widely applied to solving real-world problems, such as industrial optimization, image processing, antenna design, business optimization, civil engineering, robotics, the semantic web, chemistry, meteorology, wireless sensor networks and so on [7, 8]. Afterwards, in 2010, the Lévy-flight firefly algorithm (LFFA) was proposed by Yang [9]. The LFFA uses random numbers drawn from the Lévy distribution to generate new solutions. The LFFA was tested, and its results outperformed the genetic algorithm (GA) and particle swarm optimization (PSO). The state of the art and applications of the original FA and LFFA algorithms are reported in [7–9]. In early 2020, the multiobjective Lévy-flight firefly algorithm (mLFFA) was proposed specifically for solving multiobjective optimization problems [10]. In this paper, the mLFFA is applied to solve well-known standard multiobjective optimization problems. After the introduction given in Sect. 1, the original FA and LFFA algorithms are briefly described and the proposed mLFFA algorithm is illustrated in Sect. 2. The four standard multiobjective test functions used in this paper are detailed in Sect. 3. Results and discussions of the performance evaluation are provided in Sect. 4. Conclusions follow in Sect. 5.
2 Multiobjective Lévy-Flight Firefly Algorithm

In this section, the original FA and LFFA algorithms are briefly described. The proposed mLFFA algorithm is then illustrated as follows.

2.1 Original FA Algorithm

The original FA algorithm developed by Yang [6] is based on the flashing behavior of fireflies, generated by a process of bioluminescence for attracting partners and prey. The FA algorithm is built on the three following rules.

Rule-1: All fireflies are assumed to be unisex. A firefly will be attracted to others regardless of their sex.
Rule-2: Attractiveness is proportional to brightness. Both attractiveness and brightness decrease as distance increases. The less bright firefly will move towards the brighter one; if there is no brighter one, it will move randomly.
Rule-3: The brightness of each firefly is evaluated by the objective function of interest.
The light intensity I of the FA varies according to the inverse square law of the distance r, I = I_s/r^2, where I_s is the intensity at the source. For a fixed light absorption coefficient, the light intensity I varies with the distance r as stated in (1), where I_0 is the original light intensity. The attractiveness \beta varies with the distance r as stated in (2), where \beta_0 is the attractiveness at r = 0. In (1) and (2), the scaling factor \gamma is the light absorption coefficient. The distance r_{ij} between two fireflies i and j at locations x_i and x_j is calculated by (3). In the FA algorithm, a new solution x_i^{t+1} is obtained from an old solution x_i^t as stated in (4), where \varepsilon_i is a vector of random numbers drawn from a uniform distribution. The randomization parameter \alpha_t can be adjusted as stated in (5), where \alpha_0 is the initial randomness scaling factor. The original FA algorithm can be described by the pseudo code shown in Fig. 1.

I = I_0 e^{-\gamma r^2}   (1)

\beta = \beta_0 e^{-\gamma r^2}   (2)

r_{ij} = \|x_i - x_j\| = \sqrt{\sum_{k=1}^{d} (x_{i,k} - x_{j,k})^2}   (3)

x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2} (x_j^t - x_i^t) + \alpha_t \varepsilon_i^t   (4)

\alpha_t = \alpha_0 \delta^t, \quad 0 < \delta < 1   (5)

Initialize: The objective function f(x), x = (x1,…,xd)^T
Generate initial population of fireflies xi (i = 1, 2,…,n)
Light intensity Ii at xi is determined by f(x)
Define light absorption coefficient γ
while (Gen ≤ Max_Generation)
  for i = 1 : n (all n fireflies)
    for j = 1 : i (all n fireflies)
      if (Ij > Ii)
        - Move firefly i towards j in d-dimension via (4)
      end if
      - Attractiveness varies with distance r via exp[-γr^2]
      - Evaluate new solutions and update light intensity
    end for j
  end for i
  Rank the fireflies and find the current best x*
end while
Report the best solution found x*

Fig. 1. Pseudo code of the original FA algorithm.
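For illustration, the movement rules (2)–(5) can be sketched in Python (a minimal sketch, not the authors' implementation; the value δ = 0.97 is a placeholder):

```python
import math
import random

def fa_move(xi, xj, t, beta0=1.0, gamma=1.0, alpha0=0.25, delta=0.97):
    """Move firefly i (at xi) towards a brighter firefly j (at xj), Eq. (4)."""
    # Eq. (3): Euclidean distance between the two fireflies
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))
    # Eq. (2): attractiveness decays with the squared distance
    beta = beta0 * math.exp(-gamma * r ** 2)
    # Eq. (5): randomness scaling factor shrinks over the iterations t
    alpha_t = alpha0 * delta ** t
    # Eq. (4): attraction towards j plus a uniform random perturbation
    return [a + beta * (b - a) + alpha_t * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With α_0 = 0 the move is purely deterministic attraction, which makes the decay of β with distance easy to inspect.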
2.2 LFFA Algorithm
The LFFA is one of the modified versions of the original FA [9]. The movement of a firefly in (4) is changed to (6) by using random numbers drawn from the Lévy distribution. The product ⊕ stands for entrywise multiplication, and the term sign[rand − 1/2], where rand ∈ [0, 1], provides a random direction. In (6), the symbol Lévy(λ) represents the Lévy distribution as stated in (7). The step length s can be calculated by (8), where u and v are drawn from normal distributions as stated in (9). The standard deviations of u and v are expressed in (10), where Γ is the standard Gamma function.

x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2} (x_j^t - x_i^t) + \alpha\,\mathrm{sign}[\mathrm{rand} - 1/2] \oplus \mathrm{Lévy}(\lambda)   (6)

\mathrm{Lévy} \sim u = t^{-\lambda}, \quad 1 < \lambda \le 3   (7)

s = u / |v|^{1/\beta}   (8)

u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2)   (9)

\sigma_u = \left[ \frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\!\left((1+\beta)/2\right)\,\beta\,2^{(\beta-1)/2}} \right]^{1/\beta}, \quad \sigma_v = 1   (10)
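The step-length computation in (8)–(10) (Mantegna's procedure) can be sketched in Python; the function names are ours:

```python
import math
import random

def mantegna_sigma(beta=1.5):
    """sigma_u of Eq. (10); sigma_v is fixed at 1."""
    return (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

def levy_step(beta=1.5):
    """One Lévy-distributed step length s = u / |v|^(1/beta), Eqs. (8)-(9)."""
    u = random.gauss(0.0, mantegna_sigma(beta))   # u ~ N(0, sigma_u^2)
    v = random.gauss(0.0, 1.0)                    # v ~ N(0, sigma_v^2), sigma_v = 1
    return u / abs(v) ** (1 / beta)
```

For β = 1.5 the closed form gives σ_u ≈ 0.6966, a value that can be used as a quick sanity check of the implementation.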
2.3 mLFFA Algorithm
The mLFFA is represented by the pseudo code shown in Fig. 2 [10]. The multiobjective function F(x) is formulated in (11), where f_1(x), f_2(x),…, f_n(x) are the objective functions, g_j(x) are the inequality constraints and h_k(x) are the equality constraints. F(x) in (11) is simultaneously minimized by the mLFFA subject to g_j(x) and h_k(x). The best solution is updated in every iteration. If it is a non-dominated solution, it is sorted and stored in the Pareto optimal set P* as stated in (12). After the search terminates, the solutions stored in P* are used to form the Pareto front PF* as expressed in (13), where s denotes the solutions stored in the set S and s* the non-dominated solutions. All non-dominated solutions appearing on PF* are the optimal solutions.

\min F(x) = \{f_1(x), f_2(x), \ldots, f_n(x)\}
subject to \; g_j(x) \le 0, \; j = 1, \ldots, m; \quad h_k(x) = 0, \; k = 1, \ldots, p   (11)

P^* = \{x^* \in F \mid \nexists\, x \in F : F(x) \prec F(x^*)\}   (12)

PF^* = \{s^* \in S \mid \nexists\, s \in S : s \prec s^*\}   (13)
Multiobjective function: F(x) = {f1(x), f2(x),…,fn(x)}, x = (x1,…,xd)^T
Initialize LFFA1,…,LFFAk and the Pareto optimal set P*
Generate initial population of fireflies xi (i = 1, 2,…,n)
Light intensity Ii at xi is determined by F(xi)
Define light absorption coefficient γ
while (Gen ≤ Max_Generation)
  for z = 1 : k (all k LFFAs)
    for i = 1 : n (all n fireflies)
      for j = 1 : i (all n fireflies)
        if (Ij > Ii)
          - Move firefly i towards j in d-dimension via the Lévy-flight random move in (6)
        end if
        - Attractiveness varies with distance r via exp[-γr^2]
        - Evaluate new solutions and update light intensity
      end for j
    end for i
  end for z
  - Rank the fireflies and find the current best x*
  - Sort and find the current Pareto optimal solutions
end while
- Pareto optimal solutions are found and the Pareto front PF* is constructed.

Fig. 2. Pseudo code of the mLFFA algorithm.
3 Multiobjective Test Functions

To demonstrate its effectiveness, the mLFFA is evaluated against several multiobjective test functions. In this work, four widely used standard multiobjective test functions, ZDT1–ZDT4, providing a wide range of diverse properties in terms of the Pareto front and the Pareto optimal set, are employed [11, 12]. ZDT1 has a convex front, as stated in (14), where d is the number of dimensions. ZDT2, stated in (15), has a non-convex front, while ZDT3, with a discontinuous front, is expressed in (16); g and x_i in ZDT2 and ZDT3 are the same as in ZDT1. ZDT4, stated in (17), has a convex front but is highly multimodal. In the evaluation process, the error E_f between the estimated Pareto front PF_e and its corresponding true front PF_t is defined in (18), where N is the number of solution points.

ZDT1:
f_1(x) = x_1,
f_2(x) = g \left( 1 - \sqrt{f_1/g} \right),
g = 1 + \frac{9}{d-1} \sum_{i=2}^{d} x_i,
x_i \in [0, 1], \; i = 1, \ldots, 30   (14)
ZDT2:
f_1(x) = x_1,
f_2(x) = g \left( 1 - (f_1/g)^2 \right)   (15)

ZDT3:
f_1(x) = x_1,
f_2(x) = g \left( 1 - \sqrt{f_1/g} - (f_1/g) \sin(10 \pi f_1) \right)   (16)

ZDT4:
f_1(x) = x_1,
f_2(x) = g \left( 1 - \sqrt{f_1/g} \right),
g = 1 + 10(d-1) + \sum_{i=2}^{d} \left[ x_i^2 - 10 \cos(4 \pi x_i) \right],
x_i \in [0, 1], \; i = 1, \ldots, 30   (17)

E_f = \|PF_e - PF_t\| = \sum_{j=1}^{N} \left( PF_{e_j} - PF_t \right)^2   (18)
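For reference, the test functions (14)–(17) can be written directly in Python (a sketch; here x is a list with x_1 = x[0], and the ZDT4 g-function follows the standard cos(4πx_i) form):

```python
import math

def _g1(x):
    """Shared g of ZDT1-ZDT3, Eq. (14)."""
    return 1 + 9 * sum(x[1:]) / (len(x) - 1)

def zdt1(x):                     # convex front
    f1, g = x[0], _g1(x)
    return f1, g * (1 - math.sqrt(f1 / g))

def zdt2(x):                     # non-convex front
    f1, g = x[0], _g1(x)
    return f1, g * (1 - (f1 / g) ** 2)

def zdt3(x):                     # discontinuous front
    f1, g = x[0], _g1(x)
    return f1, g * (1 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10 * math.pi * f1))

def zdt4(x):                     # convex but highly multimodal
    f1 = x[0]
    g = 1 + 10 * (len(x) - 1) + sum(v ** 2 - 10 * math.cos(4 * math.pi * v)
                                    for v in x[1:])
    return f1, g * (1 - math.sqrt(f1 / g))
```

On a Pareto-optimal point (all x_i = 0 for i ≥ 2) each g reduces to 1, so the front can be traced by sweeping x_1 over [0, 1].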
4 Results and Discussions

The proposed mLFFA algorithm was coded in MATLAB version 2018b and run on an Intel(R) Core(TM) i5-3470 CPU with 4.0 GB RAM. The search parameters of each LFFA in the mLFFA are set according to Yang's recommendations [6, 9], i.e. the number of fireflies n = 30, α0 = 0.25, β0 = 1, λ = 1.50 and γ = 1. These parameter values are sufficient for most optimization problems because the LFFA algorithm is very robust (not very sensitive to parameter adjustment) [6, 9]. In this work, the termination criteria (TC) use either a given tolerance or a fixed number of generations. It was found that a fixed number of generations is not only easy to implement, but also suitable for comparing the closeness of the Pareto fronts of the test functions. Therefore, for all test functions, Max_Gen = 2,000 is set as the TC. For comparison, the results obtained by the proposed mLFFA over all four standard multiobjective test functions are compared with those obtained by well-known algorithms from the literature, i.e. VEGA [3], NSGA-II [4], DEMO [5] and the multiobjective multipath adaptive tabu search (mMATS) [13]. The performance of all algorithms is measured via the error Ef stated in (18) and, for all algorithms, a fixed number of generations/iterations of 2,000 (Max_Gen) is set as the TC. The results obtained from all test functions are summarized in Tables 1 and 2, and the estimated Pareto fronts and the true fronts of ZDT1–ZDT4 are depicted in Figs. 3, 4, 5 and 6, respectively. It can be seen from Figs. 3, 4, 5 and 6 that the mLFFA satisfactorily provides a Pareto front containing the Pareto optimal solutions of each test function very close to the corresponding true front. Referring to Tables 1 and 2, the mLFFA shows results superior to the other algorithms, with lower Ef and less search time consumed.
Fig. 3. Pareto front of ZDT1 (mLFFA vs. true Pareto front).

Fig. 4. Pareto front of ZDT2 (mLFFA vs. true Pareto front).

Table 1. Error Ef between PFe and PFt.

Algorithms   ZDT1       ZDT2       ZDT3       ZDT4
VEGA         2.79e-02   2.37e-03   3.29e-01   4.87e-01
NSGA-II      3.33e-02   7.24e-02   1.14e-01   3.38e-01
DEMO         2.08e-03   7.55e-04   2.18e-03   2.96e-01
mMATS        1.24e-03   2.52e-04   1.07e-03   1.02e-01
mLFFA        1.20e-03   2.48e-04   1.01e-03   1.01e-01
Fig. 5. Pareto front of ZDT3 (mLFFA vs. true Pareto front).

Fig. 6. Pareto front of ZDT4 (mLFFA vs. true Pareto front).

Table 2. Search time consumed (sec.).

Algorithms   ZDT1     ZDT2     ZDT3     ZDT4
VEGA         125.45   132.18   121.40   122.24
NSGA-II      126.82   145.63   158.27   165.51
DEMO         89.31    98.44    102.32   120.86
mMATS        65.54    72.33    82.47    78.52
mLFFA        52.42    65.18    71.53    64.78
5 Conclusions

The multiobjective Lévy-flight firefly algorithm (mLFFA) has been presented in this paper for global optimization. Based on the original FA and LFFA, the mLFFA has been tested against four standard multiobjective test functions to demonstrate its effectiveness for multiobjective optimization. Details of the performance evaluation of the mLFFA algorithm have been reported. The simulation results show that the mLFFA performs more efficiently than well-known algorithms including VEGA, NSGA-II, DEMO and mMATS. This indicates that the mLFFA is one of the most effective metaheuristic optimization search techniques for solving global optimization problems. In future research, the proposed mLFFA algorithm will be applied to various real-world engineering optimization problems.
References
1. Zakian, V.: Control Systems Design: A New Framework. Springer, London (2005)
2. Talbi, E.G.: Metaheuristics: From Design to Implementation. Wiley, Hoboken (2009)
3. Schaffer, J.D.: Multiple objective optimization with vector evaluated genetic algorithms. In: The 1st International Conference on Genetic Algorithms, pp. 93–100 (1985)
4. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197 (2002)
5. Robič, T., Filipič, B.: DEMO: differential evolution for multiobjective optimization. Lecture Notes in Computer Science, vol. 3410, pp. 520–533 (2005)
6. Yang, X.S.: Firefly algorithms for multimodal optimization. In: Stochastic Algorithms: Foundations and Applications. Lecture Notes in Computer Science, vol. 5792, pp. 169–178 (2009)
7. Fister, I., Fister Jr., I., Yang, X.S., Brest, J.: A comprehensive review of firefly algorithms. Swarm Evol. Comput. 13, 34–46 (2013)
8. Fister, I., Yang, X.S., Fister, D., Fister Jr., I.: Firefly algorithm: a brief review of the expanding literature. In: Yang, X.S. (ed.) Cuckoo Search and Firefly Algorithm, pp. 347–360 (2014)
9. Yang, X.S.: Firefly algorithm, Lévy flights and global optimization. In: Research and Development in Intelligent Systems XXVI, pp. 209–218. Springer, Heidelberg (2010)
10. Sumpunsri, S., Puangdownreong, D.: Multiobjective Lévy-flight firefly algorithm for optimal PIDA controller design. Int. J. Innovative Comput. Inf. Control 16(1), 173–187 (2020)
11. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3, 257–271 (1999)
12. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8, 173–195 (2000)
13. Puangdownreong, D.: Multiobjective multipath adaptive tabu search for optimal PID controller design. Int. J. Intell. Syst. Appl. 7(8), 51–58 (2015)
Cooperative FPA-ATS Algorithm for Global Optimization

Thitipong Niyomsat, Sarot Hlangnamthip, and Deacha Puangdownreong

Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem, Bangkok 10160, Thailand
[email protected], {sarotl,deachap}@sau.ac.th
Abstract. This paper presents a novel cooperative metaheuristic algorithm combining the flower pollination algorithm (FPA) and the adaptive tabu search (ATS), named the cooperative FPA-ATS. The proposed cooperative FPA-ATS possesses two stages. First, it searches for feasible solutions over the entire search space by using the FPA's explorative property. Second, it searches for the global solution by using the ATS's exploitative property. The cooperative FPA-ATS is tested against ten multimodal benchmark functions for global minimum finding in order to evaluate its search performance. By comparison with the FPA and ATS, it was found that the cooperative FPA-ATS is significantly more efficient than either the FPA or the ATS alone.

Keywords: Cooperative FPA-ATS algorithm · Flower pollination algorithm · Adaptive tabu search · Global optimization
1 Introduction

Metaheuristic algorithms have been continuously developed to solve combinatorial and numeric optimization problems for over five decades [1]. A metaheuristic optimization search technique can be characterized by two main properties, i.e. exploration (or diversification) and exploitation (or intensification) [1, 2]. The explorative property is the ability to generate diverse solutions to explore the overall search space on the global scale. The exploitative property is the ability to generate intensive solutions on the local scale of the search space. With these two properties, metaheuristics are divided into two types, i.e. population-based and single-solution-based metaheuristic algorithms [2]. Population-based metaheuristic algorithms have a strong explorative property, whereas single-solution-based metaheuristic algorithms have a strong exploitative property [1, 2]. From the literature, one of the most efficient population-based metaheuristic algorithms is the flower pollination algorithm (FPA) proposed by Yang in 2012 [3]. The FPA has been successfully applied to solve many real-world problems such as process control, image processing, antenna design, wireless networks, robotics and automatic control [3, 4]. Moreover, the FPA algorithm has proven global convergence properties [5]. On the other hand, one of the most powerful single-solution-based metaheuristic algorithms is the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 154–163, 2021. https://doi.org/10.1007/978-3-030-68154-8_16
adaptive tabu search (ATS) proposed by Sujitjorn, Kulworawanichpong, Puangdownreong and Areerak in 2006 [6]. The ATS was developed from the original tabu search (TS) proposed by Glover in 1989 [7, 8]. The ATS has been widely used to solve several real-world optimization problems such as model identification, control system design, power system design and signal processing [6]. In addition, the ATS algorithm has proven global convergence [6, 9, 10]. To combine the two main properties, a novel cooperative metaheuristic algorithm called the cooperative FPA-ATS is proposed in this paper. The proposed cooperative FPA-ATS combines the FPA and the ATS in a cooperative search for global optimization. This paper consists of five sections. After the introduction in Sect. 1, the original FPA and ATS algorithms are briefly described and the proposed cooperative FPA-ATS algorithm is illustrated in Sect. 2. The ten standard multimodal benchmark functions used in this paper are detailed in Sect. 3. Results and discussions of the performance tests of the FPA, ATS and cooperative FPA-ATS algorithms against the ten selected multimodal benchmark functions are given in Sect. 4. Finally, conclusions are drawn in Sect. 5.
2 Cooperative FPA-ATS Algorithm

In this section, the original FPA and ATS algorithms are briefly described. Then, the proposed cooperative FPA-ATS algorithm is elaborately illustrated as follows.

2.1 FPA Algorithm
The FPA algorithm mimics the pollination of flowering plants in nature, which can be divided into cross (or global) pollination and self (or local) pollination [3]. For cross-pollination, pollen is transferred by a biotic pollinator. In this case, the new position (or solution) x_i^{t+1} is calculated by (1), where x_i^t is the current position, g* is the current best solution, L is a random number drawn from the Lévy flight distribution stated in (2) and Γ(λ) is the standard gamma function. For self-pollination, pollen is transferred by an abiotic pollinator. In this case, x_i^{t+1} is calculated by (3), where ε is a random number drawn from the uniform distribution expressed in (4). The proximity probability p is used for switching between cross-pollination and self-pollination. The FPA algorithm can be described by the flow diagram shown in Fig. 1, where TC stands for the termination criteria.

x_i^{t+1} = x_i^t + L (x_i^t - g^*)   (1)

L \sim \frac{\lambda \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \frac{1}{s^{1+\lambda}}, \quad (s \gg s_0 > 0)   (2)

x_i^{t+1} = x_i^t + \varepsilon (x_j^t - x_k^t)   (3)

\varepsilon(q) = \begin{cases} 1/(b-a), & a \le q \le b \\ 0, & q < a \text{ or } q > b \end{cases}   (4)
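A single FPA position update following (1) and (3) can be sketched in Python. This is an illustrative sketch only: the Lévy step L is drawn here with a simple symmetric power-law sampler rather than the exact distribution in (2), and the 0.01 scale is a placeholder:

```python
import random

def fpa_step(x, g_best, pop, p=0.8, levy_shape=1.5):
    """One FPA position update: global (Eq. (1)) or local (Eq. (3)) pollination."""
    if random.random() < p:
        # Global pollination, Eq. (1). A scaled symmetric power-law sample
        # stands in for the Lévy-distributed step L of Eq. (2).
        L = [0.01 * random.paretovariate(levy_shape) * random.choice((-1, 1))
             for _ in x]
        return [xi + Li * (xi - gi) for xi, Li, gi in zip(x, L, g_best)]
    # Local pollination, Eq. (3): epsilon ~ U(0, 1), two random flowers j and k
    eps = random.random()
    xj, xk = random.sample(pop, 2)
    return [xi + eps * (xj_i - xk_i) for xi, xj_i, xk_i in zip(x, xj, xk)]
```

Note that when a flower already sits at the current best (x = g*), the global move in (1) leaves it in place, which is visible directly from the sketch.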
2.2 ATS Algorithm
The ATS method is based on an iterative neighborhood search approach [6]. It consists of the tabu list (TL), the adaptive-radius (AR) mechanism and the back-tracking (BT) mechanism. The TL is used to record the list of visited solutions. The AR is used to speed up the search process by reducing the search radius. Once local entrapment occurs, the BT is activated, using some of the solutions collected in the TL to escape from the entrapment and move to a new region of the search space. The ATS algorithm is represented by the flow diagram in Fig. 2.
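The interplay of the TL, AR and BT mechanisms can be sketched in Python. This is our simplified reading of the ATS, not the reference implementation of [6]; in particular, the parameter values are placeholders and the radius is shrunk on non-improving iterations:

```python
import random

def ats_minimize(f, x0, radius=1.0, alpha=0.5, n_neighbors=20,
                 max_iter=100, bt_threshold=10, seed=0):
    """Simplified ATS: tabu list (TL), adaptive radius (AR), back-tracking (BT)."""
    rng = random.Random(seed)
    cur = tuple(x0)
    best_x, best_f = cur, f(cur)
    tabu = [cur]                                  # TL: visited solutions
    stuck = 0
    for _ in range(max_iter):
        # Neighbourhood search around the current solution
        cands = [tuple(xi + rng.uniform(-radius, radius) for xi in cur)
                 for _ in range(n_neighbors)]
        cands = [c for c in cands if c not in tabu] or cands
        cur = min(cands, key=f)                   # best non-tabu neighbour
        tabu.append(cur)
        if f(cur) < best_f:
            best_x, best_f = cur, f(cur)
            stuck = 0
        else:
            radius *= alpha                       # AR: shrink the search radius
            stuck += 1
            if stuck >= bt_threshold:
                cur = rng.choice(tabu)            # BT: jump back to escape entrapment
                stuck = 0
    return list(best_x), best_f
```

On a simple sphere function the sketch descends quickly while the radius is large and then refines the solution as the AR mechanism contracts the neighbourhood.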
Fig. 1. Flow diagram of the FPA algorithm (initialize a population of n flowers, find the best solution g* via f(x), define the proximity probability p ∈ [0, 1], and branch on rand > p between pollination modes until the TC is met).

Fig. 2. Flow diagram of the ATS algorithm (initialize a search radius R and weighting factor α, 0 < α < 1, then iterate the neighborhood search with the TL, AR and BT mechanisms until the TC is met).

Static and Dynamic Switch Probability. The static switch probability (p) is 0.8 because this works best in most applications [17]. The dynamic switch probability (p_{n+1}) in iteration n + 1 is adapted from [14]:

p_{n+1} = p_n - \lambda \frac{N_{iter} - n}{N_{iter}},   (3)

where p_n is the value of the switch probability in the previous iteration, N_{iter} is the maximum number of iterations, n is the current iteration, and the scaling factor λ = 1.5. The value of the dynamic switch probability begins at 0.9 and decreases using Eq. (3). Algorithm 1 shows the flow of the FPA.
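The update in Eq. (3) can be written directly; the clamp to [0, 1] below is our addition (the raw update can leave the unit interval for a large scaling factor), so treat this as an illustrative sketch:

```python
def next_switch_probability(p_n, n, n_iter, lam=1.5):
    """One update of the dynamic switch probability, Eq. (3).
    Clamping to [0, 1] is our addition, not part of the printed formula."""
    p = p_n - lam * (n_iter - n) / n_iter
    return max(0.0, min(1.0, p))
```

For example, near the end of a 50-iteration run (n = 49) the probability drops by λ/50 = 0.03 per step, while early iterations see much larger raw decrements.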
Meta-Heuristic Algorithms in PID Tuning
Algorithm 1 FPA pseudo code
Initialize popsize N
Initialize iteration number maxIT
Define objective function f(x)
Find global best solution g*
Define dynamic switch probability function
while t < maxIT do
  Evaluate dynamic switch probability function
  Obtain switch probability p
  for i = 1 : N do
    if rand < p then
      Draw a d-dimensional step vector L
      Global pollination: x_i^{t+1} = x_i^t + L(x_i^t − g*)
    else
      Draw ε from a uniform distribution in [0, 1]
      Randomly choose j and k among solutions
      Local pollination: x_i^{t+1} = x_i^t + ε(x_j^t − x_k^t)
    Evaluate new solutions
    Compare with previous solution
    Update new solutions if they are better
  Find the current best solution g*

3.2 Teaching-Learning-Based Optimization Algorithm
The Teaching-Learning-Based Optimization (TLBO) algorithm comprises two phases: the teacher phase, where the learners gain knowledge from the teacher, and the learner phase, where learners interact and gain knowledge from each other. The advantage of the TLBO is that it only requires non-algorithm-specific parameters (i.e. common parameters), such as the population size and number of generations [19]. In this particular study, each learner of the population has four subjects, being the three PID gains and the N filter coefficient. Algorithm 2 shows how the TLBO was implemented in this study. Note that in Algorithm 2, r_i, T_f and M_i represent a random number between 0 and 1, a teaching factor which can be either 1 or 2, and the mean result of the learners in that particular iteration, respectively.
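The two phases can be sketched in Python. This is our sketch of the general TLBO scheme for minimisation with greedy acceptance, not necessarily identical to the implementation in this study; the teaching factor follows the conventional T_f ∈ {1, 2}:

```python
import random

def tlbo_generation(pop, f, rng=random):
    """One TLBO generation: teacher phase then learner phase (minimisation)."""
    n, d = len(pop), len(pop[0])
    # Teacher phase: pull every learner using the best learner and the class mean
    teacher = min(pop, key=f)
    mean = [sum(x[j] for x in pop) / n for j in range(d)]
    new_pop = []
    for x in pop:
        tf = rng.choice((1, 2))                           # teaching factor T_f
        cand = [x[j] + rng.random() * (teacher[j] - tf * mean[j])
                for j in range(d)]
        new_pop.append(cand if f(cand) < f(x) else x)     # greedy acceptance
    # Learner phase: each learner interacts with one random partner
    out = []
    for i, x in enumerate(new_pop):
        j = rng.choice([k for k in range(n) if k != i])
        other = new_pop[j]
        if f(x) < f(other):                               # x better: move away
            cand = [x[k] + rng.random() * (x[k] - other[k]) for k in range(d)]
        else:                                             # x worse: move towards
            cand = [x[k] + rng.random() * (other[k] - x[k]) for k in range(d)]
        out.append(cand if f(cand) < f(x) else x)
    return out
```

Because each learner is replaced only when its fitness improves, the best fitness in the population is non-increasing across generations.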
4 Computational Experimental Design

This study used a basic design of experiments (DOE) [20] methodology, the 2^k factorial design technique, to compare the effect of two values for each parameter. Table 1 shows these two parameters, along with the two values of each parameter.
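The 2^k enumeration itself is straightforward to generate; a Python sketch reproducing the four algorithm designs of Table 2 (the factor names are ours):

```python
from itertools import product

# Factors and their two levels, as in Table 1
factors = {
    "search_space_upper_bound": (100, 500),
    "population_size": (20, 60),
}

# All 2^k combinations; with k = 2 this yields the four designs of Table 2
designs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

With the bound listed first, the enumeration order matches AD 1–4 of Table 2 (bound varying slowest, population size fastest).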
S. Baartman and L. Cheng
Algorithm 2 TLBO pseudo code
Initialize population N
Choose iteration number maxIT
Define objective function f(X)
Evaluate population and find best learner k*
while t < maxIT do
  for i = 1 : N do
    Teacher Phase:
      Find difference mean: Δ̄_{k,i} = r_i (X_{k*,i} − T_f M_i)
      New solution: X_{k,i} = X_{k,i} + Δ̄_{k,i}
    Learner Phase:
      Randomly choose learners P and Q s.t. X_{P,i} ≠ X_{Q,i}
      if X_{P,i} < X_{Q,i} then
        Update P: X_{P,i} = X_{P,i} + r_i (X_{P,i} − X_{Q,i})
      if X_{Q,i} < X_{P,i} then
        Update Q: X_{Q,i} = X_{Q,i} + r_i (X_{Q,i} − X_{P,i})
  Evaluate new solutions
  Update new solutions if they are better
  Find the current best learner k*
These parameter values were chosen considering various factors. A low population size may result in the algorithm not having enough population members to find the optimal solution, and a high population size may result in a waste of computational space. A low search space bound may exclude the optimal solution from the search space, and a high search space bound may reduce the probability of the algorithm obtaining the optimal solution. The detail of each algorithm design is shown in Table 2.

Table 1. 2^k factorial design bounds

Factor                     Level 1   Level 2
Search space upper bound   100       500
Population size            20        60
The algorithm design and operation are evaluated for two problem instances: a step input, which represents a constant target velocity of 1 rad/s, and a ramp input, which represents a constant acceleration of 1 rad/s². The study ran each algorithm ten times (with random seeds) in order to observe their stochastic nature, as also done in [11]. This research used the best PIDN solutions for performance comparison because, in a real-life scenario, the best solution from the number of runs would be chosen [21]. The maximum iteration counts for the FPA and TLBO algorithm were kept constant throughout the experiments at 50 and 25 respectively, as the TLBO performs two fitness function evaluations at each iteration.
Table 2. Factorial design

AD   Population size   Search space
1    20                100
2    60                100
3    20                500
4    60                500

5 Results and Discussions

5.1 Parameter Optimization
Table 3 shows that the TLBO algorithm, implemented with algorithm design 4 (AD = 4), obtained the lowest fitness value. Furthermore, the lowest mean fitness value was obtained by AD = 4 for all the algorithms, showing that AD = 4 performs best for this problem instance when observing the fitness value.

Table 3. Fitness value statistics

              AD   Best (Min)   Mean      Max       Std Dev
Dynamic FPA   1    34.7076      35.0040   35.6922   0.343004
              2    34.7114      34.7304   34.7682   0.0139906
              3    14.8154      25.4771   34.7096   9.19568
              4    14.8167      21.2140   34.4665   6.71812
Static FPA    1    34.7128      35.1717   36.8325   0.6085149
              2    34.7207      34.8006   34.9912   0.0871953
              3    15.4595      29.3366   35.2087   8.25672
              4    16.0833      24.4707   34.5326   6.35065
TLBO          1    34.6111      34.6611   34.6753   0.017996
              2    34.5891      34.6242   34.6699   0.0311257
              3    14.2111      23.1669   34.4206   9.247853
              4    13.9728      21.0753   34.4200   8.781703
The results in Table 3 also show that the dynamic FPA generally obtained lower fitness values than the static FPA, i.e., the problem benefits from fine-tuning and exploitation in later iterations rather than a constant, higher probability of global pollination throughout all iterations. This could also be the reason why the TLBO algorithm obtained the lowest fitness value, as this algorithm works by fine-tuning the solutions at each iteration. Both the FPA and TLBO resulted in a PI controller solution, as the FPA resulted in a low derivative gain, and the TLBO resulted in a low filter coefficient
Table 4. Optimal PID parameters and results

                  AD    KP        KI        KD          N           tR (s)   tS (s)   %OS
Dynamic FPA       1     94.7108   45.0245   0           82.7842     0.0260   0.0389   0.0295
                  2     99.5786   46.3494   0.0239934   72.0288     0.0258   0.0389   0.0368
                  3     25.0692   219.082   0           368.749     0.0445   0.3159   13.3413
                  4     24.9139   278.149   0           500         0.0415   0.2687   16.2584
Static FPA        1     98.8912   47.1230   0           100         0.0258   0.0380   0.0382
                  2     99.6802   46.182    0           84.5793     0.0257   0.0378   0.0339
                  3     23.0263   175.111   0           500         0.0497   0.3559   12.4789
                  4     21.7208   143.351   0           409.259     0.0544   0.3959   11.4223
TLBO              1     95.7066   48.0759   97.7518     0.0138523   0.0259   0.0384   0.0425
                  2     99.2521   53.2571   91.8365     0.287085    0.0251   0.0341   0.0303
                  3     25.1757   25.7884   0           462.389     0.0668   0.1197   0.0135
                  4     26.5995   23.8218   0           455.577     0.0634   0.1163   0.0030
Journal results   N/A   4.5       3.6       0.05        100         0.4422   0.8526   0

Fig. 1. Step response comparing the algorithm design performance for the TLBO: (a) steady-state response; (b) zoomed-in view of (a).
as seen in Table 4. The PI controller solution means that the system is sufficiently damped, as the purpose of the derivative controller is to reduce overshoot and oscillations, i.e. to increase damping. The best performing algorithm design is AD = 2, as seen in Fig. 1 for the TLBO algorithm and Fig. 2 for the FPA, with the static FPA performing the best. In order to find the overall best performing algorithm and algorithm design, the best performing FPA algorithm (implemented with the best performing algorithm design) was compared with the TLBO algorithm, to observe whether this problem instance prefers algorithms with algorithm-specific parameters. The algorithm design chosen as best performing for all the algorithms is AD = 2, as it showed a good compromise with the most favourable response, yet with a reasonable fitness value. Since the static FPA (implemented using AD = 2) resulted in the lowest rise and settling time, and followed the commanded input the closest, this algorithm was chosen to be compared with the TLBO algorithm. Figure 3 shows that the TLBO algorithm reaches the commanded input faster.
Fig. 2. Step response comparing the algorithm design performance for the FPA: (a) steady-state response; (b) zoomed-in view of (a).

Fig. 3. The comparison of the step response of the algorithm performance: (a) steady-state response; (b) zoomed-in view of (a).
Thus, the TLBO algorithm, using AD = 2, was chosen as the best performing for this problem instance.

5.2 Robustness Analysis
Table 5 shows that the increase in population size from AD = 1 to AD = 2 resulted in a decrease in the fitness value for all the algorithms. However, increasing the search space bound, as done from AD = 2 to AD = 3, caused a significant decrease in the fitness value. This is because of the increase in the integral controller gain solutions (KI) obtained, as seen in Table 6. An increase in this term shows its importance when the input is changed from step to ramp. The lowest fitness values were obtained using AD = 4 for all the algorithms, with the lowest obtained from the TLBO algorithm. Figure 4 shows that the increase in the integral gain in the algorithm designs reduced this steady-state error, which resulted in a decreased fitness value.
Table 5. Fitness value statistics

               AD   Best     Mean      Max      Std Dev
Adaptive FPA   1    5.9056   6.0110    6.1584   0.079769
               2    5.8152   5.9067    5.9618   0.04944
               3    1.8489   2.4535    4.4354   0.7344
               4    1.7788   1.8828    2.077    0.07754
Static FPA     1    5.9162   6.04216   6.1049   0.05763
               2    5.86     5.9424    6.0185   0.05194
               3    1.7962   2.6258    4.3594   0.88795
               4    1.8326   1.91394   2.1113   0.08125
TLBO           1    5.8078   6.0329    6.2369   0.1189
               2    5.7446   5.8962    6.1151   0.09516
               3    1.7793   1.7851    1.791    0.003700
               4    1.759    1.7761    1.7883   0.007449
Table 6. Algorithm solutions

               AD   KP        KI        Kd        N
Adaptive FPA   1    68.4011   34.2160   13.2017   100
               2    97.7425   0         13.5612   94.1247
               3    22.6570   500       0         36.0330
               4    25.2946   500       0         165.944
Static FPA     1    59.0788   55.6679   14.3738   89.4286
               2    37.3178   79.6967   13.6054   99.8358
               3    25.2989   500       0         84.1660
               4    24.0282   500       0         415.586
TLBO           1    42.9984   66.3752   13.491    99.8793
               2    43.2900   83.3955   13.4509   100
               3    24.806    500       0         500
               4    25.0514   499.932   0         358.493
This problem favoured the dynamic FPA over the static FPA, as the dynamic FPA gave the lower fitness value of the two. This means this problem instance is not multi-modal and requires fine-tuning of a specific region in the search space. This experiment resulted in AD = 4 performing best, with the lowest mean fitness value and the best response behaviour for all the algorithms. Because the dynamic FPA gave the lowest fitness function value when compared to the static FPA, the dynamic FPA was chosen for comparison, along with the TLBO, using the algorithms' ramp responses shown in Fig. 6. This figure illustrates that, though the TLBO algorithm resulted in the lowest fitness value, the best performing algorithm is the dynamic FPA, as it follows the ramp input the closest. The low fitness value from the TLBO algorithm, when compared to the dynamic FPA, may be a result of the dynamic FPA taking slightly longer to settle. Thus, there is no single best performing algorithm in this instance, as the
Meta-Heuristic Algorithms in PID Tuning

Fig. 4. Ramp response comparing the algorithm design performance for the TLBO: (a) steady-state response; (b) zoomed-in view of (a). [Plots of response vs. time (s) for AD1-AD4 against the input.]
Fig. 5. Ramp response comparing the algorithm design performance for the FPA: (a) steady-state response; (b) zoomed-in view of (a). [Plots of response vs. time (s) for the dynamic (D-AD1 to D-AD4) and static (S-AD1 to S-AD4) designs against the input.]
Fig. 6. The comparison of the ramp response of the algorithm performance: (a) steady-state response; (b) zoomed-in view of (a). [Plots of response vs. time (s) for the TLBO and the dynamic FPA against the input.]
TLBO algorithm gave the lowest fitness value, but the dynamic FPA follows the input the fastest, though it results in a longer time to settle.

5.3 Suggestions for Future Practitioners
– If the target is already moving with a constant velocity (step input), the proportional gain should be high, since the gimbal starts from zero velocity and needs to reach the target velocity. If the target also starts from zero and moves with constant acceleration (ramp input), the integral gain becomes significant, since it accumulates the error over time while the gimbal moves with the target rather than trying to catch up to the target motion. The search space bound of the algorithm must be adjusted accordingly to ensure that optimal results lie within the search space.
– Ensure that the algorithm-specific parameter values chosen give a higher probability of exploitation rather than constant exploration across the iterations.
– Since the best solutions for all the algorithms had higher PI controller gains for both problem instances but lower derivative controller gains and filter coefficients, the search space upper bound of the derivative controller can be reduced to increase the number of viable solutions (Fig. 5).
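The role of the integral gain in ramp tracking can be illustrated with a toy simulation. This is not the authors' gimbal model: the first-order plant, the gains and the Euler step size below are illustrative assumptions, chosen only to show that integral action bounds the ramp-following error while proportional-only control does not.

```python
# Toy illustration: ramp tracking by a PI controller on a first-order
# plant dx/dt = -x + u, integrated with the explicit Euler method.
# Plant, gains and step size are illustrative, not the paper's model.

def ramp_tracking_error(kp, ki, horizon=10.0, dt=0.001):
    """Return the absolute tracking error at the end of the horizon."""
    x = 0.0          # plant state
    integral = 0.0   # accumulated error (integral term)
    steps = int(horizon / dt)
    for step in range(steps):
        r = step * dt                 # ramp reference r(t) = t
        e = r - x                     # tracking error
        integral += e * dt
        u = kp * e + ki * integral    # PI control law
        x += (-x + u) * dt            # Euler step of the plant
    return abs(r - x)

err_p_only = ramp_tracking_error(kp=2.0, ki=0.0)
err_pi = ramp_tracking_error(kp=2.0, ki=5.0)
# With integral action the ramp-following error settles to a small
# constant; without it the error keeps growing with the ramp.
print(err_p_only, err_pi)
```

For this plant the proportional-only error grows with the ramp, while the PI error settles near a small constant, mirroring the observation that KI matters once the input changes from step to ramp.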
6 Conclusion
The tuning of PID controllers is imperative to ensuring good performance, as classical tuning methods have been shown to produce only sub-optimal results, and the need for better tuning methods remains. Nature-inspired meta-heuristic algorithms have been shown to be a good alternative for PID controller tuning. However, these algorithms have their own (meta-)parameters which need to be tuned to ensure good algorithm performance. This study used the 2^k factorial method for tuning non-algorithm-specific parameters and compared self-tuning methods against static parameter values for the algorithm-specific parameters. The performance of the algorithms was then compared on two different simulation conditions representing two different target motions. It was found that the TLBO algorithm produced the lowest (best) fitness value and reached a near-optimal solution in the lowest number of iterations. Most gimbal stabilization systems have two, three or even more axes. Future work should consider how these algorithms perform when optimising PID controllers for multi-axis gimbal systems that include Coriolis-force cross-coupling disturbance torques between the axes.
Solving an Integer Program by Using the Nonfeasible Basis Method Combined with the Cutting Plane Method

Kasitinart Sangngern and Aua-aree Boonperm

Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Bangkok, Thailand
[email protected], [email protected]
Abstract. To solve an integer linear programming problem, a linear programming (LP) relaxation problem is solved first. If the optimal solution to the LP relaxation problem is not integer, a technique to find the optimal integer solution is applied. The LP relaxation problem is solved by the simplex method, which may require artificial variables; this means that the problem size is expanded and more computational time is wasted. In this paper, an artificial-free technique, called the nonfeasible basis method (NFB), is combined with the cutting plane method for solving an integer linear programming problem; the combination is called the nonfeasible-basis cutting plane method (NBC). It operates on the condensed tableau, which is smaller than the full simplex tableau. The computational results show that the computational time can be reduced.

Keywords: Integer linear programming · Cutting plane method · Nonfeasible basis method · Artificial-free technique
1 Introduction

In many real-world situations, there are problems which require integer solutions, such as travelling salesman problems [1], packing problems [2], knapsack problems [3], assignment problems [4], hub location problems [5], etc. These problems are formulated as integer linear programming problems, for which many methods, such as the branch and bound method [6], the cutting plane method [7, 8] and the branch and cut method [9], can be used to find the optimal solution. These methods first solve an LP relaxation problem, in which the integer condition is dropped, and then solve linear programming subproblems to find the integer solution. One of the methods most widely used to solve integer programming problems is the branch and bound method [6], which is a divide-and-conquer strategy. This method starts by considering the fractional optimal solution to the LP relaxation problem. It then branches, generating sub-problems, and bounds using the integer solutions, infeasible solutions and fractional solutions whose objective values are worse than the current
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 263–275, 2021. https://doi.org/10.1007/978-3-030-68154-8_26
best solution. However, for a large problem, the branch and bound method generates a lot of sub-problems to solve, which leads to expensive computational time. In 1958, Gomory [7] introduced a technique named the cutting plane method to solve an integer linear programming problem. This technique starts by solving the LP relaxation problem. If an optimal solution to the LP relaxation problem is not integer, then a cut is added to find the integer solution. The construction of an efficient added cut is one of the interesting issues in improving the cutting plane method. The well-known cut is Gomory's cut, which is obtained from the optimal simplex tableau after the LP relaxation problem is solved. This cut is constructed from the coefficients of the nonbasic variables in a row that has a fractional solution. Later, in 2005, Glover and Sherali [10] proposed an improvement of Gomory's cut, named the Chvatal-Gomory-tier cut. In 2011, Wesselmann et al. [11] developed the Gomory cut further to enhance the performance of the Gomory mixed-integer cut. Additionally, Gomory's cut was improved by many other researchers [8, 10]. From the above research, each method used to find the optimal integer solution requires an algorithm to solve subproblems which are linear programming problems. Therefore, efficient algorithms for solving linear programming problems have been investigated. The well-known method for solving a linear programming problem is the simplex method, the traditional iterative method introduced by Dantzig in 1947 [12]. However, the simplex method is not the best possible method, since Klee and Minty [13] demonstrated that it has poor worst-case performance.
Many techniques to improve the simplex method have been proposed, for example, removing redundant constraints [14], new methods for solving specific linear programs [15, 16], and artificial-free techniques [17–24]. Before the simplex method starts, the linear programming problem must be in standard form, so slack or surplus variables are added. Then, the coefficient matrix of the constraints is separated into two submatrices called the basic matrix (the basis) and the nonbasic matrix. The variables corresponding to the basis are the basic variables, while the remaining variables are the nonbasic variables. The simplex method starts by choosing an initial basis to construct a basic feasible solution; the basis is then updated until the optimal solution is found. However, it is hard to construct an initial feasible basis by choosing basic variables among the decision variables. In the case that surplus variables are added, the basic variables must include artificial variables as well as slack variables, which leads to an enlarged problem size. To address this issue, some researchers have developed new methods for solving linear programming problems which start from an infeasible initial basis. In 1969, Zionts [17] introduced an artificial-free technique named the criss-cross technique. This technique works on both the primal and the dual problem: it alternates between the (primal) simplex method and the dual simplex method until a feasible solution of the primal or the dual problem is found. Later, in 2000, Pan [18] proposed two novel perturbation simplex variants combined with dual pivot rules for solving a linear programming problem without using artificial variables.
In 2003, Paparrizos et al. [19] introduced a method for solving a linear program starting from an infeasible basis. The chosen basis is generated by two artificial variables associated with an additional constraint with a big-M number. Furthermore, this infeasible basis returns a dual feasible solution. In 2009, Nabli [20] described an artificial-free technique named the nonfeasible basis method. The nonfeasible basis method starts when the initial basis gives a primal infeasible solution. It operates on the condensed tableau, which is an abbreviated version of the simplex tableau: only the columns of the nonbasic variables are considered, because the columns of the basic variables do not change from iteration to iteration. By this construction, the performance of the nonfeasible basis method is better than that of the two-phase method. Later, in 2015, NFB was improved by Nabli and Chahdoura [21] using a new pivot rule for choosing the entering variable. In 2020, Sangngern and Boonperm [22] proposed a method for constructing an initial basis close to the optimal solution by considering the angles between the objective function and the constraints; if the resulting basis is infeasible, the nonfeasible basis method is performed. In 2014, Boonperm and Sinapiromsaran [23, 24] introduced artificial-free techniques that solve a relaxation problem constructed by considering the angle between the objective function and the constraints. From the above research, we found that one technique for solving an integer program and one technique for solving a linear programming problem are compatible: the cutting plane method with Gomory's cut and the nonfeasible basis method.
Since the added Gomory's cut is generated from only the coefficients of the nonbasic variables, just like the nonfeasible basis method, which operates on the condensed tableau, we use this compatibility to develop an algorithm for solving an integer linear programming problem by combining the nonfeasible basis method with the cutting plane method. It is named the nonfeasible-basis cutting plane method (NBC). This paper is organized into five sections. The first section is the introduction. Second, all tools used in the proposed method are described. Third, the proposed method is detailed. The computational results are shown in the fourth section. Finally, the last section is the conclusion.
2 Preliminaries

In this section, the cutting plane method with Gomory's cut for solving an integer linear programming problem and the artificial-free technique named the nonfeasible basis method are described. We begin with the cutting plane method.
2.1 Cutting Plane Method
Consider a (pure) integer linear programming problem:

    (ILP)    max  c^T x
             s.t. Ax ≤ b
                  x ≥ 0 and integer,
where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and x ∈ R^n. Before an integer linear programming problem is solved, the solution to the LP relaxation is required. The LP relaxation problem (LPR) of ILP is ILP with its integer condition dropped, which can be written as follows:

    (LPR)    max  c^T x
             s.t. Ax ≤ b
                  x ≥ 0.
To solve LPR, if the origin is a feasible point, then the simplex method can start; otherwise, the two-phase method is performed. Let A = [B  N], x = [x_B^T  x_N^T]^T and c^T = [c_B^T  c_N^T]. The standard form of LPR associated with the basis B, the nonbasic matrix N, the basic variables x_B and the nonbasic variables x_N is written as follows:

    max  (c_N^T − c_B^T B^{−1}N) x_N + c_B^T B^{−1}b
    s.t. I_m x_B + B^{−1}N x_N = B^{−1}b
         x_B, x_N ≥ 0
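The basis partition above can be sketched numerically. The snippet below builds a small hypothetical standard-form instance (our own numbers, with the last two columns as slack variables) and uses NumPy to compute B^{-1}N, B^{-1}b and the reduced costs:

```python
import numpy as np

# Hypothetical standard-form instance: max c^T x, s.t. Ax = b, x >= 0,
# where the last two columns of A are slack variables.
A = np.array([[2.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0, 0.0, 0.0])

basic = [2, 3]        # slack columns form the initial basis B
nonbasic = [0, 1]

B = A[:, basic]
N = A[:, nonbasic]
cB, cN = c[basic], c[nonbasic]

B_inv_N = np.linalg.solve(B, N)      # B^{-1} N
B_inv_b = np.linalg.solve(B, b)      # B^{-1} b  (basic solution)
reduced = cN - cB @ B_inv_N          # c_N^T - c_B^T B^{-1} N

print(B_inv_b)   # [4. 6.] -> the basis is feasible since B^{-1}b >= 0
print(reduced)   # [3. 2.] -> positive entries: not yet optimal (max)
```

Because the slack columns form an identity matrix here, the basis is immediately feasible; the two-phase method (or NFB, below) is only needed when no such feasible starting basis is available.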
For the cutting plane method, after LPR is solved, if its optimal solution is not an integer solution, then cuts are added to the LP relaxation problem to cut off the non-integer solution until an optimal integer solution is found. In this paper, we are interested in Gomory's cut, which is defined below. Let B be the optimal basis of LPR. The associated tableau is as follows:
         x_B    x_N                       RHS
x_B      I_m    B^{−1}N                   B^{−1}b
z        0      c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
Suppose that there is at least one row of the optimal tableau, called the ith row, for which (B^{−1}b)_i is fractional. This row corresponds to the equation

    x_{B_i} + Σ_{j∈I_N} ā_{ij} x_j = b̄_i,        (1)

where ā_{ij} = (B^{−1}N)_{ij}, b̄_i = (B^{−1}b)_i and I_N is the set of indices of the nonbasic variables.
The Gomory's cut generated by the ith row is defined by

    Σ_{j∈I_N} (⌊ā_{ij}⌋ − ā_{ij}) x_j + x_{n+1} = ⌊b̄_i⌋ − b̄_i,        (2)
where x_{n+1} is the non-negative slack variable.

Algorithm 1: Cutting plane method (with Gomory's cut)
  Input: the optimal tableau of LPR
  Let x* be the optimal solution to LPR
  While x* is not integer do
      Add the Gomory's cut to the optimal tableau
      Perform the dual simplex method
      If an optimal solution is found then
          the optimal integer solution is found
      Else
          the optimal integer solution does not exist
      End
  End
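The cut (2) is computed directly from one fractional tableau row. A minimal sketch using exact rational arithmetic (the row values below are made up for illustration):

```python
from fractions import Fraction
from math import floor

def gomory_cut(row, rhs):
    """Coefficients and right-hand side of the Gomory cut (2):
    sum_j (floor(a_ij) - a_ij) x_j + x_{n+1} = floor(b_i) - b_i."""
    coeffs = [Fraction(floor(a)) - a for a in row]
    new_rhs = Fraction(floor(rhs)) - rhs
    return coeffs, new_rhs

# A fractional row of an optimal tableau (illustrative values):
row = [Fraction(7, 4), Fraction(-1, 2)]
rhs = Fraction(9, 4)

coeffs, new_rhs = gomory_cut(row, rhs)
print(coeffs)    # [Fraction(-3, 4), Fraction(-1, 2)]
print(new_rhs)   # Fraction(-1, 4)
```

Using `fractions.Fraction` keeps the floors exact; with floating point, values that are nearly integer would otherwise produce spurious cuts.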
The steps of the cutting plane method can be described as follows. Before the cutting plane method is performed, LPR must be solved by the simplex method or the two-phase method. If the two-phase method is chosen, then artificial variables are required; consequently, the size of the problem is expanded and the computational time is large. To avoid the use of artificial variables, an artificial-free technique is introduced. Such a technique dispenses with the artificial variables and reduces the computational time.

2.2 Nonfeasible Basis Method
The nonfeasible basis method (NFB) is one of the artificial-free techniques; it is performed on the condensed tableau, which is described below. Consider the following original simplex tableau:

         x_B    x_N                       RHS
x_B      I_m    B^{−1}N                   B^{−1}b
z        0      c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
In every iteration, the column matrix of the basic variables (x_B) is the identity matrix. Thus, the condensed tableau is introduced instead of the original simplex tableau by removing the columns of the basic variables. It can be written as follows:

         x_N                       RHS
x_B      B^{−1}N                   B^{−1}b
z        c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
When the simplex method is used with the condensed tableau, the updated condensed tableau for each iteration is defined by the theorem in [9]. NFB was introduced to construct a feasible basis without using artificial variables, and it works on the condensed tableau, so the size of the matrix handled by this method is smaller than in the original simplex method. The NFB method starts when the chosen basis B is infeasible (i.e., there exists an ith row with (B^{−1}b)_i < 0). Finally, this method returns a basic feasible solution. The detail of the algorithm is shown as follows:

Algorithm 2: Nonfeasible basis method (NFB)
  Input: an infeasible basis B
  While min_i (B^{−1}b)_i < 0 do
      k ← arg min_i (B^{−1}b)_i
      If no eligible pivot column exists in row k then
          Exit  /* the feasible domain is empty */
      Else
          choose the entering variable s (arg min rule) and the leaving
          variable r (arg max rule); they become basic and nonbasic,
          respectively
      End
      Apply pivoting
  End
  The current basis is feasible. Apply the simplex method with the desired pivot rule.
3 Nonfeasible-Basis Cutting Plane Method

Consider the following integer linear programming problem:

    (ILP)    max  c^T x
             s.t. Ax ≤ b
                  x ≥ 0 and integer,        (3)
where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and x ∈ R^n. Then, the standard form of the LP relaxation problem of ILP is as follows:

    max  c^T x
    s.t. Ax + I_m s = b
         x, s ≥ 0.        (4)
If b ≥ 0, then the simplex method can be performed to solve problem (4); otherwise, the origin is not a feasible point and artificial variables are required. As discussed in the previous section, the nonfeasible basis method is preferable to the two-phase method because it works on a smaller tableau and does not involve artificial variables. Consequently, NFB is used for solving problem (4). After problem (4) is solved, the optimal integer solution to problem (3) is obtained from the following cases:

1. If the optimal solution to the LP relaxation problem is found and it is integer, then it is the optimal solution to problem (3).
2. If the optimal solution to the LP relaxation problem is not integer, then the cutting plane method is performed on the condensed tableau.
3. If the LP relaxation problem is unbounded, then problem (3) is unbounded.
4. If the LP relaxation problem is infeasible, then problem (3) is infeasible.

From problem (4), the optimal condensed tableau is written as seen below:
         x_N                       RHS
x_B      B^{−1}N                   B^{−1}b
z        c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
From the above tableau, the reduced cost satisfies c_N^T − c_B^T B^{−1}N ≤ 0; that is, the dual solution is feasible. Suppose that there exists a fractional solution. Then, the cutting plane method is preferred for finding the integer solution.
If the Gomory’s cut is used, then one of rows in the optimal tableau, say the kth row, which has a fractional solution is selected to generate an added cut. Then, the added cut is ðbak c ak ÞxN þ xm þ n þ 1 ¼ bk bk
ð5Þ
k ¼ where ak is the k th row of matrix B1 N, bak c ¼ ½ b a1k c b amk c and b ðB1 bÞk . From the added cut (5), if xm þ n þ 1 is chosen to be a basic variable, then this cut can add all coefficients of nonbasic variables to the optimal condensed tableau which is suitable for the cutting plane method. Hence, the condensed tableau with the added cut can be written as follows:
            x_N                       RHS
x_B         B^{−1}N                   B^{−1}b
x_{m+n+1}   ⌊ā_k⌋ − ā_k               ⌊b̄_k⌋ − b̄_k
z           c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
Since ⌊b̄_k⌋ − b̄_k cannot be positive, the dual simplex method is performed to solve the condensed tableau with the added cut. After the LP relaxation problem is solved, if the obtained solution is not integer, these steps are repeated, adding new cuts one by one until the optimal integer solution is found. The algorithm of this method can be summarized as follows:

Algorithm 3: Nonfeasible-basis cutting plane method (NBC)
  Input: A, b and c
  Generate the LP relaxation problem
  If the origin is feasible then
      Perform the simplex method on the condensed tableau
  Else
      Perform NFB
  End
  While the solution is not integer do
      Perform the cutting plane method on the optimal condensed tableau
      If an optimal solution is found then
          the optimal integer solution is found
      Else
          there is no integer feasible solution
      End
  End
4 Computational Results

In Sect. 3, a combination method for solving an integer linear programming problem without using artificial variables was designed. To show the performance of NBC, the computational times for solving randomly generated integer programs with the NBC method and with the traditional method (the simplex method or the two-phase method combined with the cutting plane method) are compared. Both methods are implemented in Python. Finally, the computational results are shown and discussed.

4.1 Generated Problem
Consider the following integer linear programming problem:

    max  c^T x
    s.t. Ax ≤ b
         x ≥ 0 and integer,        (6)
where A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and x ∈ R^n. The generated problems are in the form of problem (6) and satisfy the following conditions:

1. The entries a_ij of the coefficient matrix A = [a_ij] are randomly generated in the interval [−9, 9].
2. The entries c_j of the vector c and x_i of the vector x are randomly generated in the interval [0, 9].
3. After the coefficient matrix A and the vector x are generated, the right-hand-side vector b is computed by Ax = b.

The different sizes of the number of variables (n) and the number of constraints (m) tested are summarized in the following table (Table 1):

Table 1. The summarized data of the generated problems

Size    n                           m
Small   21, 41, 61, 81, 101, 201    10, 20, 30, 40, 50, 100
Large   301, 401, 501, 601          150, 200, 250, 300

All methods are implemented in Python on Google Colaboratory, and the tests were run on an Intel® Core™ i7-5500U, 2.4 GHz, with 8 GB of RAM. For the traditional method, the two-phase method is used for solving the LP relaxation problem, then the original cutting plane method is used for finding the optimal integer solution; both are performed on the full simplex tableau.
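The three generation conditions can be sketched with the standard library. The function name is ours (the paper's own code is not shown), and the interval for A is read as [−9, 9] from the garbled text:

```python
import random

def generate_instance(m, n, seed=0):
    """Random ILP instance per the three conditions: A in [-9, 9],
    c and an integer point x in [0, 9], and b = A x, so that the
    feasible region is guaranteed to contain an integer point."""
    rng = random.Random(seed)
    A = [[rng.randint(-9, 9) for _ in range(n)] for _ in range(m)]
    c = [rng.randint(0, 9) for _ in range(n)]
    x = [rng.randint(0, 9) for _ in range(n)]
    b = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    return A, b, c, x

# Smallest size from Table 1: m = 10 constraints, n = 21 variables.
A, b, c, x = generate_instance(m=10, n=21)
# x satisfies Ax = b by construction, hence Ax <= b holds as well.
assert all(sum(A[i][j] * x[j] for j in range(len(x))) == b[i]
           for i in range(len(b)))
```

Computing b from a known integer point x guarantees every generated instance is feasible, which is presumably why condition 3 is stated this way.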
4.2 Computational Results
According to the experimental design, we report the average computational time before the cutting plane method is performed, the average total computational time, and bar charts of the average total computational time for each problem size with the standard deviation.

Table 2. The average of the computational time (sec.)

Size   (row, col.)   TRAD BC    TRAD TC    NBC BC    NBC TC
Small  21,10         0.14       0.15       0.03      0.04
       41,20         0.56       0.58       0.16      0.19
       61,30         1.83       1.87       0.41      0.49
       81,40         4.15       4.22       0.94      1.19
       101,50        7.75       7.85       1.92      3.08
       201,100       70.12      70.50      32.23     32.33
Large  301,150       274.85     275.75     153.88    154.48
       401,200       687.05     688.82     234.65    258.05
       501,250       1408.01    1410.14    307.72    628.82
       601,300       2972.74    2976.51    845.10    1334.19
Total average        5427.2     5436.39    1577.04   2412.86
Table 2 exhibits the average computational time before the cutting plane method is performed and the average total computational time. In Table 2, the main columns named TRAD and NBC represent the computational time for solving the problems with the traditional method and the NBC method, respectively, and the sub-columns named BC and TC represent the average computational time before the cutting plane method is performed and the average total computational time, respectively. Note that the boldface numbers indicate the smallest average computational time. From Table 2, the average computational time before the cutting plane method and the average total computational time of NBC are less than those of TRAD. Since NBC does not involve artificial variables and is performed on the condensed tableau, while TRAD requires artificial variables and the original cutting plane method, NBC is faster than TRAD. From the computational results, we found that the average total computational time of NBC is 2412.86/5436.39 ≈ 0.4438, or approximately 44.38%, of the average total computational time of TRAD. This means that the proposed method reduces the computational time by approximately 55.62% relative to the traditional method (Fig. 1).
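The quoted percentages follow directly from the totals in Table 2:

```python
# Ratio of NBC's average total time to TRAD's (totals from Table 2).
nbc_total = 2412.86
trad_total = 5436.39

ratio = nbc_total / trad_total
saving = 1.0 - ratio
print(round(ratio, 4))            # 0.4438 -> about 44.38% of TRAD
print(round(saving * 100, 2))     # 55.62 -> about 55.62% time saved
```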
Fig. 1. The average of the total computational time
4.3 Discussion
From the computational results, we found that the average total computational time of the traditional method is greater than that of the NBC method. There are two main reasons why the NBC method is more effective than the traditional method. First, the size of the working tableau differs: the traditional method operates on the full simplex tableau, which is bigger than the condensed tableau used by the NBC method. Comparing the two tableaux, if one cut is added, then the original simplex tableau is expanded by one row and one column
            x_B    x_{m+n+1}   x_N                       RHS
x_B         I_m    0           B^{−1}N                   B^{−1}b
x_{m+n+1}   0      1           ⌊ā_s⌋ − ā_s               ⌊b̄_s⌋ − b̄_s
z           0      0           c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
while the condensed tableau will be expanded with only one row.
            x_N                       RHS
x_B         B^{−1}N                   B^{−1}b
x_{m+n+1}   ⌊ā_s⌋ − ā_s               ⌊b̄_s⌋ − b̄_s
z           c_N^T − c_B^T B^{−1}N     c_B^T B^{−1}b
Moreover, if k cuts are added, then the original simplex tableau must be expanded by k rows and k columns, while the condensed tableau is expanded by only k rows. Therefore, the tableau handled by NBC is smaller than that of the original simplex method, which is why NBC can reduce the computational time. The remaining reason concerns the use of artificial variables. The traditional method involves artificial variables, so the number of variables in its system is larger than in the NBC method, which is an artificial-free technique.
5 Conclusion

There are two types of methods for solving an integer linear programming problem: exact methods and heuristic methods. In this paper, we focus on an exact method. A traditional exact method consists of a method for solving the LP relaxation problem and a technique for finding the integer solution. For solving the LP relaxation problem, the simplex method or the two-phase method is preferred. If the two-phase method is chosen, then artificial variables are added, which implies that the problem is expanded. To avoid artificial variables, an artificial-free technique is required. The artificial-free technique that we chose is the nonfeasible basis method (NFB). NFB is not only an artificial-free technique; it also works on a small simplex tableau named the condensed tableau. The nonfeasible-basis cutting plane method (NBC) is a combination of the nonfeasible basis method and the cutting plane method performed on the condensed tableau. NBC starts with NFB for solving the LP relaxation problem; the cutting plane method is then used to find the integer solution. Since NBC does not use artificial variables and performs on the condensed tableau, the computational time of NBC must be less than that of the traditional method. From the computational results, we found that the computational time of NBC is approximately 44% of the computational time of the traditional method.
References 1. Fatthi, W., Haris, M., Kahtan H.: Application of travelling salesman problem for minimizing travel distance of a two-day trip in Kuala Lumpur via Go KL city bus. In: Intelligent Computing & Optimization, pp. 227–284 (2018) 2. Torres-Escobar, R., Marmolejo-Saucedo, J., Litvinchev, I., Vasant, P.: Monkey algorithm for packing circles with binary variables. In: Intelligent Computing & Optimization, pp. 547– 559 (2018) 3. Yaskov, G., Romanova, T., Litvinchev, I., Shekhovtsov, S.: Optimal packing problems: from knapsack problem to open dimension problem. In: Advances in Intelligent Systems and Computing, pp. 671–678 (2019) 4. Marmolejo-Saucedo, J., Rodriguez-Aguilar, R.: A timetabling application for the assignment of school classrooms. In: Advances in Intelligent Systems and Computing, pp. 1–10 (2019)
Solving an Integer Program by Using the Nonfeasible Basis Method
275
A New Technique for Solving a 2-Dimensional Linear Program by Considering the Coefficient of Constraints Panthira Jamrunroj and Aua-aree Boonperm(&) Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand [email protected], [email protected]
Abstract. Popular methods for solving a 2-dimensional linear program are the graphical method and the simplex method. However, if the problem has many constraints, these methods take more time to solve it. In this paper, a new method for solving a 2-dimensional linear program by considering only the coefficients of the constraints is proposed. If the problem satisfies the following three conditions: the coefficient vector of the objective function is nonnegative, the right-hand-side values are nonnegative, and the decision variables are nonnegative, then the optimal solution can be obtained immediately whenever there exists a variable with a positive coefficient vector. Moreover, this technique can be applied to choose the entering variable for the simplex method.

Keywords: 2-dimensional linear program · Simplex method · Double pivot simplex method
1 Introduction
A linear program is an optimization model with a linear objective function subject to linear equality or inequality constraints. The aim of this technique is to find a solution that satisfies the constraints and gives the minimum or maximum objective value. In real-world applications, linear programming is widely used to solve industrial problems that require the best outcome, such as production planning, the traveling salesman problem, assignment problems, transportation problems, etc. For solving a linear program, the graphical method is an easy method for a 2- or 3-dimensional problem; however, it is not practical for high-dimensional problems. The first practical method for solving a linear program, the simplex method, was presented in 1947 by George Dantzig [1]. However, in 1972, Klee and Minty [2] exhibited the worst-case computational time of the simplex method. Since then, many researchers have presented efficient methods to address this issue. Directions that researchers have studied include proposing new methods [3–10] and improving the simplex method with new pivot rules [11–21].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 276–286, 2021. https://doi.org/10.1007/978-3-030-68154-8_27
In 1976, Shamos and Hoey [22] proposed an algorithm that can be used to solve a 2-dimensional linear program. The algorithm computes intersections of geometric objects in the plane. It starts by considering n line segments in the plane and finding all intersecting pairs; an O(n log n) algorithm is then used to determine whether any two objects intersect, which in turn decides whether two simple plane polygons intersect. They proved that a linear program with 2 variables and n constraints can be solved in O(n log n) time, while the simplex method requires O(n^2) time. In 1983, Megiddo [23] introduced a linear-time algorithm for solving a linear program in R^2. The method searches for the smallest circle enclosing n given points in the plane. Moreover, it disproves a conjecture by Shamos and Hoey that O(n log n) time is required. In addition, a problem closely related to the 2-dimensional linear program, the separability problem in R^2, is constructed and solved in O(n) time. That is, Megiddo's algorithm improves on the O(n log n) bound of Shamos and Hoey's paper. In 1984, Dyer [24] also proposed an O(n)-time algorithm for solving a 2-dimensional linear program. The algorithm exploits convexity, a well-known idea for designing linear-time algorithms. It starts at any feasible point and considers the coefficient values of the objective function, c1 and c2. If c1 = c2 = 0, then this feasible point is optimal. Otherwise, the problem is transformed to provide a gradient direction of the objective function, and the optimal solution is obtained by performing a pairing algorithm. Every step of Dyer's algorithm requires O(n) time.
Recently, Vitor and Easton [25] proposed a new method, called the slope algorithm, for solving a 2-dimensional linear program in which the problem must satisfy the following conditions: the coefficient values of the objective function are positive, the right-hand-side values of all constraints are positive, and both variables are nonnegative. Not only can this algorithm solve the 2-dimensional problem, it can also be applied to identify the optimal basis for the simplex method. They applied the slope algorithm within the simplex framework for solving a general linear program. The simplex method is improved by exchanging two variables into the basis, identified using the slope algorithm; the resulting method is called the double pivot simplex method. At each iteration, it requires the optimal basis of a 2-dimensional linear program to indicate the leaving variables for the original problem, and the slope algorithm is used to identify these variables. After the slope algorithm is performed, the simplex tableau is updated corresponding to the new basis. These steps are repeated until the optimal solution of the original problem is obtained. The computational results show that this algorithm can reduce the number of iterations of the simplex method. Although the slope algorithm can identify the basis and reduce the number of iterations, it requires more computation per iteration. So, in this paper, we propose a new method for solving a 2-dimensional linear program. It improves the double pivot simplex method by avoiding the computation of the slope algorithm in the case where there is only one pivot. We specifically consider the 2-dimensional linear program that satisfies the following conditions: the cost coefficient values are positive, the right-hand-side values are nonnegative, and both variables are nonnegative. This approach can identify the optimal solution by
considering only the coefficient values of the constraints. Additionally, it can be applied to choose an effective pivot element for the simplex method. The details of the proposed approach are given in the next section.
2 The Proposed Approach
Consider the following special 2-dimensional linear program:

Maximize    z = c1 x1 + c2 x2
Subject to  a11 x1 + a12 x2 ≤ b1
            ⋮
            am1 x1 + am2 x2 ≤ bm
            x1, x2 ≥ 0,                                              (1)

where c1, c2 > 0 and bi ≥ 0 for all i = 1, ..., m. Since the origin is feasible, problem (1) has either an optimal solution or an unbounded solution (see Fig. 1).
Fig. 1. The possible cases of a solution to the problem (1)
The motivation of our proposed method is to identify a solution to such a problem by considering only the coefficient values of the constraints. In the case that ai2 ≤ 0 for all i = 1, ..., m (or ai1 ≤ 0 for all i = 1, ..., m), the variable x2 (or x1) can increase infinitely, which implies that the problem is unbounded (see Fig. 2). For the other cases, the following theorems identify the optimal solution when the coefficients of the constraints satisfy the stated conditions.
Fig. 2. The coefficient values ai2 ≤ 0 for all i = 1, ..., m
Theorem 1. Consider the 2-dimensional linear program (1) with ai2 > 0 for all i = 1, ..., m. Let

k = arg min { bi/ai1 | ai1 > 0, i = 1, ..., m }  and  r = arg min { bi/ai2 | ai2 > 0, i = 1, ..., m }.

i) If ak1 < ak2 (c1/c2), then x* = (x1*, 0) is the optimal solution to problem (1), where x1* = bk/ak1.
ii) If ar1 > ar2 (c1/c2), then x* = (0, x2*) is the optimal solution to problem (1), where x2* = br/ar2.

Proof. i) Suppose that ak1 < ak2 (c1/c2). Let x1* = bk/ak1. We need to show that x* = (x1*, 0) is the optimal solution by using the optimality conditions. First, we show that (x1*, 0) is a feasible solution of the primal problem. Consider

Ax* = [a11 x1* + a12 (0), ..., ak1 x1* + ak2 (0), ..., am1 x1* + am2 (0)]^T = [a11 x1*, ..., ak1 x1*, ..., am1 x1*]^T.

Since bi ≥ 0 for all i = 1, ..., m, if ai1 ≤ 0, then ai1 x1* ≤ bi, and if ai1 > 0, then ai1 x1* = ai1 (bk/ak1) ≤ ai1 (bi/ai1) = bi. Therefore, (x1*, 0) is a feasible solution. Next, we will show that there exists a dual feasible solution. Consider the dual problem:

Minimize    b1 w1 + b2 w2 + ... + bm wm = b^T w
Subject to  a11 w1 + a21 w2 + ... + ak1 wk + ... + am1 wm ≥ c1
            a12 w1 + a22 w2 + ... + ak2 wk + ... + am2 wm ≥ c2
            w1, ..., wm ≥ 0.

Choose w* = [0 ... c1/ak1 ... 0]^T, whose kth component is c1/ak1. Since ak1 > 0 and c1 > 0, c1/ak1 > 0.
Consider the following system:

a11 w1* + a21 w2* + ... + ak1 wk* + ... + am1 wm* = c1
a12 w1* + a22 w2* + ... + ak2 wk* + ... + am2 wm* = ak2 (c1/ak1).

Since ak1 < ak2 (c1/c2), we have c2 < ak2 (c1/ak1). Hence, w* = [0 ... c1/ak1 ... 0]^T is a dual feasible solution. Finally, consider the objective values of the primal and the dual problems. We get c^T x* = c1 (bk/ak1) and b^T w* = bk (c1/ak1). Since x* and w* are a primal feasible solution and a dual feasible solution, respectively, and c^T x* = b^T w*, x* is the optimal solution to problem (1).

ii) Suppose that ar1 > ar2 (c1/c2). Let x2* = br/ar2. We need to show that x* = (0, x2*) is the optimal solution by using the optimality conditions. First, we will show that (0, x2*) is a primal feasible solution. Consider

Ax* = [a11 (0) + a12 x2*, ..., ar1 (0) + ar2 x2*, ..., am1 (0) + am2 x2*]^T = [a12 x2*, ..., ar2 x2*, ..., am2 x2*]^T.

Since bi ≥ 0 for all i = 1, ..., m, if ai2 ≤ 0, then ai2 x2* ≤ bi, and if ai2 > 0, then ai2 x2* = ai2 (br/ar2) ≤ ai2 (bi/ai2) = bi. Therefore, the primal problem is feasible. Next, we will show that there exists a dual feasible solution. Consider the dual problem:

Minimize    b1 w1 + b2 w2 + ... + bm wm = b^T w
Subject to  a11 w1 + a21 w2 + ... + ar1 wr + ... + am1 wm ≥ c1
            a12 w1 + a22 w2 + ... + ar2 wr + ... + am2 wm ≥ c2
            w1, ..., wm ≥ 0.

Choose w* = [0 ... c2/ar2 ... 0]^T, whose rth component is c2/ar2. Since ar2 > 0 and c2 > 0, c2/ar2 > 0. Consider the following system:

a11 w1* + a21 w2* + ... + ar1 wr* + ... + am1 wm* = ar1 (c2/ar2)
a12 w1* + a22 w2* + ... + ar2 wr* + ... + am2 wm* = c2.

Since ar1 > ar2 (c1/c2), we have c1 < ar1 (c2/ar2). Hence, w* = [0 ... c2/ar2 ... 0]^T is a dual feasible solution. Finally, we will prove that the objective values of the primal and dual problems are equal. Consider c^T x* = c2 (br/ar2) and b^T w* = br (c2/ar2).
Since x* and w* are a primal feasible solution and a dual feasible solution, respectively, and c^T x* = b^T w*, x* is the optimal solution to problem (1).

Theorem 2. Consider the 2-dimensional linear program (1) with ai1 > 0 for all i = 1, ..., m. Let

k = arg min { bi/ai1 | ai1 > 0, i = 1, ..., m }  and  r = arg min { bi/ai2 | ai2 > 0, i = 1, ..., m }.

i) If ar2 < ar1 (c2/c1), then x* = (0, x2*) is the optimal solution to problem (1), where x2* = br/ar2.
ii) If ak2 > ak1 (c2/c1), then x* = (x1*, 0) is the optimal solution to problem (1), where x1* = bk/ak1.

Proof. The proof is similar to that of Theorem 1.

From Theorem 1 and Theorem 2, the solution to problem (1) is obtained by considering only the coefficient values of the constraints. Therefore, a 2-dimensional linear program can be solved easily when it satisfies the conditions of these theorems, and this technique reduces the computation needed to solve a 2-dimensional problem. Examples of the proposed approach are shown in the next section.
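To make the coefficient test concrete, the decision rule of Theorems 1 and 2, together with the unbounded cases discussed above, can be sketched in a few lines of code. This is our own illustrative sketch, not part of the paper; the function name solve_2d_lp and its return convention are assumptions.

```python
def solve_2d_lp(c, A, b):
    """Coefficient test of Theorems 1-2 for problem (1):
    maximize c1*x1 + c2*x2 s.t. A x <= b, x >= 0, with c > 0 and b >= 0.
    Returns ('unbounded', None), ('optimal', (x1, x2)), or (None, None)
    when the theorems' conditions do not apply."""
    c1, c2 = c
    m = len(A)
    # If one column of A is entirely nonpositive, that variable can grow forever.
    if all(A[i][0] <= 0 for i in range(m)) or all(A[i][1] <= 0 for i in range(m)):
        return "unbounded", None
    # Minimum-ratio rows k and r, as defined in the theorems.
    k = min((i for i in range(m) if A[i][0] > 0), key=lambda i: b[i] / A[i][0])
    r = min((i for i in range(m) if A[i][1] > 0), key=lambda i: b[i] / A[i][1])
    if all(A[i][1] > 0 for i in range(m)):          # Theorem 1
        if A[k][0] < A[k][1] * c1 / c2:
            return "optimal", (b[k] / A[k][0], 0.0)
        if A[r][0] > A[r][1] * c1 / c2:
            return "optimal", (0.0, b[r] / A[r][1])
    if all(A[i][0] > 0 for i in range(m)):          # Theorem 2
        if A[r][1] < A[r][0] * c2 / c1:
            return "optimal", (0.0, b[r] / A[r][1])
        if A[k][1] > A[k][0] * c2 / c1:
            return "optimal", (b[k] / A[k][0], 0.0)
    return None, None
```

Note that only comparisons and two ratio computations are needed; no pivoting is performed.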
3 The Illustrative Examples
In this section, we give two examples of solving 2-dimensional linear programs and one example of applying the technique within the simplex method. The first and second examples show that the proposed approach gives the optimal solution of a 2-dimensional linear program directly. In the third example, we apply the proposed approach to a general linear program for selecting the pivot variables.

Example 1. Consider the following 2-dimensional linear program:

Maximize    z = 3x1 + 4x2
Subject to  −3x1 + 5x2 ≤ 15
            −2x1 + x2 ≤ 22
            −x1 + 4x2 ≤ 16
            x1 + 7x2 ≤ 30
            x1 + 2x2 ≤ 11
            x1, x2 ≥ 0.                                              (2)

Since c1 = 3, c2 = 4 > 0 and bi ≥ 0 for i = 1, ..., m, we compute

k = arg min { bi/ai1 | ai1 > 0, i = 1, ..., m } = arg min { b4/a41 = 30, b5/a51 = 11 } = 5.

Since ai2 > 0 for i = 1, ..., m and ak1 = 1 < ak2 (c1/c2) = 2(3/4) = 3/2, the optimal solution is (x1*, x2*) = (bk/ak1, 0) = (11, 0) with z* = 33.
Solving this problem by the simplex method requires 4 iterations, while our method identifies the optimal solution by considering only the coefficients.

Example 2. Consider the following 2-dimensional linear program:

Maximize    z = 3x1 + 2x2
Subject to  5x1 + x2 ≤ 5
            −x1 + x2 ≤ 5
            2x1 + x2 ≤ 4
            5x1 + 4x2 ≤ 25
            x1, x2 ≥ 0.                                              (3)

Since c1 = 3, c2 = 2 > 0 and bi ≥ 0 for i = 1, ..., m, we compute

r = arg min { bi/ai2 | ai2 > 0, i = 1, ..., m } = arg min { b1/a12 = 5, b2/a22 = 5, b3/a32 = 4, b4/a42 = 25/4 } = 3.

Since ai2 > 0 for i = 1, ..., m and ar1 = 2 > ar2 (c1/c2) = 3/2, the optimal solution is (x1*, x2*) = (0, br/ar2) = (0, 4) with z* = 8.
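As an independent sanity check of Examples 1 and 2, the optimum of a small problem of form (1) can be found by enumerating candidate vertices: intersect every pair of constraint boundaries (including the axes x1 = 0 and x2 = 0), keep the feasible points, and take the best objective value. The helper below is our own verification sketch, not the paper's method.

```python
from itertools import combinations

def brute_force_2d(c, A, b):
    """Enumerate candidate vertices of {x >= 0 : A x <= b} and return the
    maximizer of c^T x among them (assumes a bounded optimum exists)."""
    rows = [list(row) for row in A] + [[1.0, 0.0], [0.0, 1.0]]  # boundaries x1 = 0, x2 = 0
    rhs = list(b) + [0.0, 0.0]
    best, best_z = None, float("-inf")
    for i, j in combinations(range(len(rows)), 2):
        (p, q), (s, t) = rows[i], rows[j]
        det = p * t - q * s
        if abs(det) < 1e-12:
            continue                       # parallel boundaries, no vertex
        x1 = (rhs[i] * t - q * rhs[j]) / det
        x2 = (p * rhs[j] - rhs[i] * s) / det
        if x1 < -1e-9 or x2 < -1e-9:
            continue                       # violates nonnegativity
        if all(A[r][0] * x1 + A[r][1] * x2 <= b[r] + 1e-9 for r in range(len(A))):
            z = c[0] * x1 + c[1] * x2
            if z > best_z:
                best_z, best = z, (x1, x2)
    return best, best_z
```

For the data of Example 1 this enumeration returns the vertex (11, 0) with z = 33, and for Example 2 the vertex (0, 4) with z = 8, agreeing with the coefficient test.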
Since ai2 [ 0 for i ¼ 1; . . .; m and ar1 ¼ 2 [ ar2 ðc1 =c2 Þ ¼ 3=2, the optimal solution is x1 ; x2 ¼ ð0; br =ar2 Þ ¼ ð0; 4Þ with z ¼ 8. The next example, we will apply our algorithm to solve a general linear program for identifying an entering variable. Example 3. Consider the following linear program: Maximize
z = 5 x1 + 3x2 + x3 + 2 x4
Subject to
2 x1 + x2 + 2 x3 + x4 + 2 x4 7 x1 + 4 x2 6 x1 + 3x2 + 3x3 3x1 + x2 + x3 + 3x4 9 x1 + 2 x2 + 4 x3 + x4 + 2 x4 3x1 + x2 x1 , x2 , x3 , x4
≤ ≤ ≤ ≤ ≤ ≤ ≥
10 12 15 8 9 5 0.
ð4Þ
First, construct the initial tableau by adding the slack variables and they are set to form a basis. The initial tableau can be obtained below.
      x1    x2    x3    x4    x5   x6   x7   x8   x9   x10   RHS
z     −5    −3    −1    −2    0    0    0    0    0    0     0
x5     2     1     2     1    1    0    0    0    0    0     10
x6     7     4     0     2    0    1    0    0    0    0     12
x7     6     3     3     0    0    0    1    0    0    0     15
x8     3     1     1     3    0    0    0    1    0    0     8
x9     9     2     4     1    0    0    0    0    1    0     9
x10    3     1     0     2    0    0    0    0    0    1     5
Next, consider the coefficients of x1 and x2. The 2-dimensional linear program associated with x1 and x2 can be constructed as follows:

Maximize    z = 5x1 + 3x2
Subject to  2x1 + x2 ≤ 10
            7x1 + 4x2 ≤ 12
            6x1 + 3x2 ≤ 15
            3x1 + x2 ≤ 8
            9x1 + 2x2 ≤ 9
            3x1 + x2 ≤ 5
            x1, x2 ≥ 0.                                              (5)

Using the proposed method, we get x* = (0, 3) as the optimal solution to (5). So, x2 is selected as the entering variable. The variable x6 is chosen as the leaving variable because its corresponding constraint gives the minimum value of bi/ai2 over all i = 1, ..., m. The simplex tableau is updated as the following tableau.
      x1     x2    x3    x4     x5   x6      x7   x8   x9   x10   RHS
z     0.25   0     −1    −0.5   0    0.75    0    0    0    0     9
x5    0.25   0      2     0.5   1   −0.25    0    0    0    0     7
x2    1.75   1      0     0.5   0    0.25    0    0    0    0     3
x7    0.75   0      3    −1.5   0   −0.75    1    0    0    0     6
x8    2.25   0      1     2.5   0   −0.25    0    1    0    0     5
x9    5.5    0      4     0     0   −0.5     0    0    1    0     3
x10   1.25   0      0     1.5   0   −0.25    0    0    0    1     2
From the above tableau, x3 and x4 are considered next. Since neither variable has a positive coefficient vector, the proposed method cannot determine the entering and leaving variables. We therefore fall back on the slope algorithm, which selects x3 and x4 as entering variables and x9 and x10 as leaving variables. The updated simplex tableau is as follows:
      x1       x2   x3   x4   x5   x6      x7   x8   x9      x10      RHS
z     2.042    0    0    0    0    0.542   0    0    0.25     0.333   10.417
x5   −2.917    0    0    0    1    0.083   0    0   −0.5     −0.333    4.833
x2    1.333    1    0    0    0    0.333   0    0    0       −0.333    2.333
x7   −2.125    0    0    0    0   −0.625   1    0   −0.75     1        5.75
x8   −1.208    0    0    0    0    0.292   0    1   −0.25    −1.667    0.917
x3    1.375    0    1    0    0   −0.125   0    0    0.25     0        0.75
x4    0.833    0    0    1    0   −0.167   0    0    0        0.667    1.333
Since all reduced costs are nonnegative, the optimal solution is obtained at this tableau: (x1*, x2*, x3*, x4*) = (0, 2.333, 0.75, 1.333) with z* = 10.417.
In Example 3, at the first iteration, the proposed method selects the second most negative reduced cost for the entering variable, while the standard simplex method chooses the most negative reduced cost. At the second iteration, the proposed method performed a double pivot, while the simplex method performs only one pivot at a time. In summary, we can reduce the number of iterations in this case: the proposed method performed only 2 iterations to solve the problem in Example 3, while the simplex method requires 4 iterations. In addition, at the first iteration, we reduce computation because the entering variable was identified without running the slope algorithm.
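The tableau updates in Example 3 are ordinary Gauss-Jordan pivots. The sketch below (our own helper; the tableau is stored as a dense list of rows with the z-row first and the RHS last) reproduces the first update of Example 3, where x2 enters and x6 leaves:

```python
def pivot(tableau, row, col):
    """Gauss-Jordan pivot: scale `row` so the pivot entry is 1,
    then eliminate `col` from every other row. Returns a new tableau."""
    T = [r[:] for r in tableau]
    p = T[row][col]
    T[row] = [v / p for v in T[row]]
    for i in range(len(T)):
        if i != row:
            f = T[i][col]
            T[i] = [a - f * b for a, b in zip(T[i], T[row])]
    return T

# Initial tableau of Example 3 (z-row first, RHS last).
T = [
    [-5, -3, -1, -2, 0, 0, 0, 0, 0, 0, 0],
    [ 2,  1,  2,  1, 1, 0, 0, 0, 0, 0, 10],   # x5
    [ 7,  4,  0,  2, 0, 1, 0, 0, 0, 0, 12],   # x6
    [ 6,  3,  3,  0, 0, 0, 1, 0, 0, 0,  8 - 0 + 7],  # x7 (= 15)
    [ 3,  1,  1,  3, 0, 0, 0, 1, 0, 0,  8],   # x8
    [ 9,  2,  4,  1, 0, 0, 0, 0, 1, 0,  9],   # x9
    [ 3,  1,  0,  2, 0, 0, 0, 0, 0, 1,  5],   # x10
]
T2 = pivot(T, 2, 1)   # x2 enters, x6 (row index 2) leaves
```

After this pivot the z-row entry of x1 becomes 0.25 and the RHS of the z-row becomes 9, matching the second tableau shown above.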
4 The Computational Results
In this section, we report the computational results by comparing the number of iterations of the simplex method with the proposed technique against the original simplex method, on the following canonical linear program:

Maximize    z = c^T x
Subject to  Ax ≤ b                                                   (6)
            x ≥ 0.

The tested problems have six different sizes of the coefficient matrix: m × n = 6 × 4, 6 × 8, 6 × 10, 8 × 6, 8 × 10 and 8 × 12. In addition, most of the tested problems are generated so that, at the first iteration, the second most negative reduced cost must be chosen for the entering variable. The computational results are shown in Table 1. The average number of iterations of the simplex method with the proposed technique is less than that of the original simplex method, so the number of iterations is reduced when a problem is solved by the simplex method with the proposed technique.
Table 1. The computational results of the tested problems.

Size of problem   Average iterations,        Average iterations, simplex
(m × n)           original simplex method    method with proposed technique
6 × 4             3.4                        1.8
6 × 8             5.4                        2.2
6 × 10            3.4                        1.9
8 × 6             3.0                        1.6
8 × 10            3.2                        1.5
8 × 12            3.6                        1.7
Average           3.667                      1.783
5 Conclusions
In this paper, a new method for solving a 2-dimensional linear program by considering only the coefficients of the constraints is proposed. If there exists a variable with a positive coefficient vector, then the optimal solution can be identified immediately. Moreover, this technique handles one case of the double pivot simplex method, namely the case of having only one entering variable. In this case, the double pivot simplex method runs the slope algorithm on the relaxed 2-dimensional linear program to identify an entering variable, which requires more computation, while the proposed method considers only the constraint coefficients to specify the entering variable. For this reason, we reduce the computation in this case. However, if the relaxed 2-dimensional linear program does not satisfy the conditions, it must still be solved by the slope algorithm. At present, we handle only the case of one entering variable for the double pivot simplex method. In future work, we aim to find a method that also handles the case of two entering variables.
References
1. Dantzig, G.B.: Linear Programming and Extensions. RAND Corporation, Santa Monica (1963)
2. Klee, V., Minty, G.: How good is the simplex algorithm? In: Inequalities III. Academic Press, New York (1972)
3. Terlaky, T.: A finite criss-cross method for oriented matroids. J. Comb. Theory B 42(3), 319–327 (1987)
4. Pan, P.: A new perturbation simplex algorithm for linear programming. J. Comput. Math. 17(3), 233–242 (1999)
5. Pan, P.: Primal perturbation simplex algorithms for linear programming. J. Comput. Math. 18(6), 587–596 (2000)
6. Elhallaoui, I., Villeneuve, D., Soumis, F., Desaulniers, G.: Dynamic aggregation of set-partitioning constraints in column generation. Oper. Res. 53(4), 632–645 (2005)
7. Elhallaoui, I., Desaulniers, G., Metrane, A., Soumis, F.: Bi-dynamic constraint aggregation and subproblem reduction. Comput. Oper. Res. 35(5), 1713–1724 (2008)
8. Elhallaoui, I., Metrane, A., Soumis, F., Desaulniers, G.: Multi-phase dynamic constraint aggregation for set partitioning type problems. Math. Program. 123(2), 345–370 (2010)
9. Elhallaoui, I., Metrane, A., Desaulniers, G., Soumis, F.: An improved primal simplex algorithm for degenerate linear programs. INFORMS J. Comput. 23(4), 569–577 (2010)
10. Raymond, V., Soumis, F., Orban, D.: A new version of the improved primal simplex for degenerate linear programs. Comput. Oper. Res. 37(1), 91–98 (2010)
11. Jeroslow, R.G.: The simplex algorithm with the pivot rule of maximizing criterion improvement. Discr. Math. 4, 367–377 (1973)
12. Bland, R.G.: New finite pivoting rules for the simplex method. Math. Oper. Res. 2(2), 103–107 (1977)
13. Pan, P.: Practical finite pivoting rules for the simplex method. OR Spectrum 12, 219–225 (1990)
14. Bixby, R.E.: Solving real-world linear programs: a decade and more of progress. Oper. Res. 50(1), 3–15 (2002)
15. Pan, P.: A largest-distance pivot rule for the simplex algorithm. Eur. J. Oper. Res. 187(2), 393–402 (2008)
16. Pan, P.: A fast simplex algorithm for linear programming. J. Comput. Math. 28(6), 837–847 (2010)
17. Csizmadia, Z., Illés, T., Nagy, A.: The s-monotone index selection rules for pivot algorithms of linear programming. Eur. J. Oper. Res. 221(3), 491–500 (2012)
18. Liao, Y.: The improvement on R. G. Bland's method. In: Qi, E., Shen, J., Dou, R. (eds.) The 19th International Conference on Industrial Engineering and Engineering Management, Changsha, pp. 799–803 (2013)
19. Pan, P.: Linear Programming Computation, 1st edn. Springer, Heidelberg (2014)
20. Etoa, J.: New optimal pivot rule for the simplex algorithm. Adv. Pure Math. 6, 647–658 (2016)
21. Ploskas, N., Samaras, N.: Linear Programming Using MATLAB®, Springer Optimization and Its Applications, vol. 127, 1st edn. Springer, Cham (2017)
22. Shamos, M.I., Hoey, D.: Geometric intersection problems. In: 17th Annual Symposium on Foundations of Computer Science (SFCS 1976), pp. 208–215 (1976)
23. Megiddo, N.: Linear-time algorithms for linear programming in R^3 and related problems. SIAM J. Comput. 12(4), 759–776 (1983)
24. Dyer, M.: Linear time algorithms for two- and three-variable linear programs. SIAM J. Comput. 13(1), 31–45 (1984)
25. Vitor, F., Easton, T.: The double pivot simplex method. Math. Meth. Oper. Res. 87, 109–137 (2018)
A New Integer Programming Model for Solving a School Bus Routing Problem with the Student Assignment Anthika Lekburapa, Aua-aree Boonperm(&), and Wutiphol Sintunavarat Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand [email protected], {aua-aree,wutiphol}@mathstat.sci.tu.ac.th
Abstract. The aim of this research is to present an integer linear programming model for solving the school bus routing problem that accounts for the travel distance of all buses and the assignment of students to bus stops, from two points of view: in the first model, students' satisfaction is given higher priority than the total travel distance of the buses, and vice versa in the second model.

Keywords: Capacity constraint · Integer linear programming · School bus routing problem
1 Introduction
One of the four elements of the marketing mix is distribution, the process of selling and delivering products and services from a manufacturer to customers. Distribution is an essential part of a manufacturer's operations, since poor distribution leads to a loss of trust from customers, retailers and suppliers. It is therefore important to improve product distribution to ensure that customers are satisfied. There are many ways to improve the efficiency of distribution. One of them is finding a route for distributing products that maximizes the satisfaction of both the manufacturer and the customers. For instance, in 2019, Mpeta et al. [1] introduced a binary programming model for solving a municipal solid waste collection problem, whose solution is an optimal route that reduces waste collection time and costs. The vehicle routing problem (VRP) is the problem of finding routes to distribute products from a central depot to customers in different locations and return to the depot. The VRP is commonly investigated by companies for product distribution. Many factors affect the routes in the VRP, such as the capacity of the vehicles, the number of vehicles, the driver cost, the travel distance, etc. The VRP was first studied in 1959 by Dantzig et al. [2]. Their purpose was to find the shortest route for delivering gasoline between a bulk terminal and many stations, corresponding to the demand at each station.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 287–296, 2021. https://doi.org/10.1007/978-3-030-68154-8_28
The VRP has been extended to a wider problem named the open vehicle routing problem (OVRP), which relaxes the VRP condition that the vehicles must return to the central depot. The OVRP is often a more accurate model for real businesses than the VRP with closed routes returning to the depot. There are several algorithms for solving the OVRP. For instance, Tarantilis et al. [3] presented a threshold-accepting algorithm called list-based threshold accepting (LBTA) to solve the OVRP on artificial cases with 50–199 depots. The school bus routing problem (SBRP) is one type of OVRP, designed for planning routes to pick up students from bus stops and take them to a school, or from a school back to the bus stops. The SBRP was first considered by Newton and Thomas [4], who solved this problem by generating routes using a computer; their case study was in Indiana with 1500 students. Afterward, many researchers improved models for solving the SBRP. For instance, in 1997, Braca et al. [5] attempted to minimize the number of buses under many constraints involving the capacity of buses, riding time, the school time window, the walking distance of students from home to a bus stop, and the number of students. They studied a case in New York with 838 bus stops and 73 schools and, more importantly, solved the problems of every school in one state. In 2012, Jalel and Rafaa [6] developed a hybrid evolutionary computation based on an artificial ant colony with a variable neighborhood local search algorithm for the Tunisian school bus routing problem. In 2013, Kim et al. [7] formulated a mixed-integer programming problem for the SBRP and developed a heuristic algorithm based on harmony search to find its solution. The solution obtained by the developed heuristic algorithm was compared with the exact solution from CPLEX.
Both methods give the same solution, but the developed heuristic method is faster. In 2016, Hashi et al. [8] used Clarke and Wright's algorithm for solving the school bus routing and scheduling problem with time windows, and tested the algorithm with the Scholastica School in Dhaka, Bangladesh. Other methods for solving the SBRP can be found in the results of Park and Kim [9], Ellegood et al. [10], Bodin and Berman [11], Schittekat et al. [12], Eldrandaly and Abdallah [13] and the references therein. We now discuss the research on the SBRP by Bektas et al. [14], which is the main motivation for writing this paper. In their research, they presented a model that minimizes the number of buses under capacity and distance constraints; both constraints also prevent sub-tours. They used this model to solve the SBRP for a real case in Turkey, under the situation that the number of students at each bus stop is known. However, the satisfaction of the students with their assigned bus stops is not considered in their results. This leads to the inspiration for investigating and improving the process for solving the SBRP from this point of view. In this paper, we solve the SBRP with capacity and distance constraints in view of the above inspiration. The distance and capacity constraints in our model are revised from the model in [14] to handle the satisfaction of all students together with the total distance of all buses. In addition, we compare the results of our models under two situations based on two distinct objectives: maximizing the total satisfaction of the students and minimizing the total travel distance of all buses.
2 Problem Description and the Proposed Model
In this paper, the SBRP with student assignment depending on the satisfaction score of students at each bus stop is investigated. In this problem, all students go to bus stops under two situations: maximizing the total score of all students, or minimizing the total travel distance of all buses. Each bus starts from some bus stop and picks up students on the way to the school before school starts, and it returns along the same route to drop the students off. The number of students on each bus must not exceed the capacity of the bus. Each bus travels a single path, and the total distance is limited by a specified bound. An example of a school, bus stops and students' homes is illustrated in Fig. 1.
Fig. 1. An example of the school node, bus stops and students' homes
For the proposed model, sets, parameters and decision variables are as follows:

Sets:
I is the set of all bus stops.
V is the set of all bus stops and the school, that is, V = I ∪ {0}, where 0 is the school node.
V′ = V ∪ {d}, where d is a dummy node which is the first node of each path.
L is the set of all students.

Parameters:
dij is the distance between nodes i and j for all i, j ∈ I;
dij = 0 if i = d and j ∈ I, and dij = M if i = d and j = 0, where M is a large number.
sil is the satisfaction score of student l waiting for a school bus at bus stop i.
A. Lekburapa et al.
k is the number of vehicles.
Q is the capacity of a school bus.
T is the limit on the total length of each path.

Decision variables:
x_ij = 1 if arc (i, j) is traveled by a bus, and 0 otherwise.
y_il = 1 if student l is picked up at bus stop i, and 0 otherwise.
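The dummy-node convention for d_ij above can be made concrete with a small helper. A minimal sketch (the function name and the toy distances are ours, not the paper's):

```python
# Sketch (assumption: a stop-to-stop/school distance dict is given).
# Builds the extended distance data over V' = I ∪ {0, d} following the
# convention above: travel from the dummy node d to any stop costs 0,
# while the direct arc d -> 0 (school) is blocked with a large number M.
M = 10**6  # "big-M" used to forbid an empty route

def extended_distances(dist, stops):
    """dist: {(i, j): km} over stops and school node 0; stops: stop ids."""
    ext = dict(dist)
    for i in stops:
        ext[('d', i)] = 0      # each route may start at any stop at no cost
    ext[('d', 0)] = M          # a bus cannot go straight from d to the school
    return ext

d = extended_distances({(1, 2): 4.0, (2, 1): 4.0, (1, 0): 3.0, (2, 0): 5.0},
                       stops=[1, 2])
```

With this convention every route begins "for free" at some stop, and the big-M entry keeps the solver from sending a bus directly from the dummy node to the school.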
Before developing the capacity constraints of Bektas and Elmastas [14], we recall their formulation:

u_i − u_j + Q x_ij + (Q − q_i − q_j) x_ji ≤ Q − q_j   ∀ i, j ∈ I with i ≠ j
u_i ≥ q_i   ∀ i ∈ I
u_i − q_i x_0i + Q x_0i ≤ Q   ∀ i ∈ I,

where u_i is the total number of students on board after pick-up at bus stop i, and q_i is the number of students waiting at bus stop i. In our problem, the number of students at each bus stop is unknown, because it results from the assignment chosen by the model. We therefore replace the parameter q_i by Σ_{l∈L} y_il. We also change the index of the first node from 0 to the dummy node d to make the formulation easier to understand. The constraints become:

u_i − u_j + Q x_ij + (Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl) x_ji ≤ Q − Σ_{l∈L} y_jl   ∀ i, j ∈ I with i ≠ j
u_i ≥ Σ_{l∈L} y_il   ∀ i ∈ I
u_i − (Σ_{l∈L} y_il) x_di + Q x_di ≤ Q   ∀ i ∈ I.

We can see that the first and third constraints are nonlinear. We linearize them to make the model easier to solve by introducing the new decision variables z_ji = (Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl) x_ji and p_i = (Σ_{l∈L} y_il) x_di. The capacity constraints can then be rewritten as follows:
u_i − u_j + Q x_ij + z_ji ≤ Q − Σ_{l∈L} y_jl   ∀ i, j ∈ I with i ≠ j
z_ji ≤ Q x_ji   ∀ i, j ∈ I with i ≠ j
z_ji ≤ Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl   ∀ i, j ∈ I with i ≠ j
Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl − (1 − x_ji) Q ≤ z_ji   ∀ i, j ∈ I with i ≠ j
u_i ≥ Σ_{l∈L} y_il   ∀ i ∈ I
u_i − p_i + Q x_di ≤ Q   ∀ i ∈ I
p_i ≤ Q x_di   ∀ i ∈ I
p_i ≤ Σ_{l∈L} y_il   ∀ i ∈ I
Σ_{l∈L} y_il − (1 − x_di) Q ≤ p_i   ∀ i ∈ I.
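The linearization above can be checked numerically: for binary x_ji, the four linear constraints leave exactly one feasible value, namely the value of the bilinear term they replace. A small sanity check (illustrative, not the paper's code):

```python
# Sanity check of the linearization: with x binary and
# c = Q - sum_l y_il - sum_l y_jl, the constraints
#   z <= Q*x,  z <= c,  c - (1 - x)*Q <= z,  z >= 0
# leave exactly one feasible value, z = c * x.
def feasible_z(Q, c, x):
    """Enumerate a nonnegative 0.5-step grid of z satisfying all constraints."""
    grid = [k / 2 for k in range(0, 2 * Q + 1)]   # 0, 0.5, ..., Q (so z >= 0)
    return [z for z in grid
            if z <= Q * x and z <= c and c - (1 - x) * Q <= z]

Q = 13                          # bus capacity, as in the experiment below
for c in (0, 5, 13):            # possible values of Q - q_i - q_j
    for x in (0, 1):
        assert feasible_z(Q, c, x) == [c * x]
```

When x_ji = 0, the first constraint pins z_ji to 0; when x_ji = 1, the second and third constraints squeeze it to exactly c, so the product is recovered without any bilinear term.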
Next, we present a model for maximizing the total student satisfaction with the improved constraints:

Maximize  Σ_{i∈I} Σ_{l∈L} s_il y_il   (1)

s.t.
Σ_{i∈I} x_i0 ≤ k   (2)
Σ_{i∈I} x_di ≤ k   (3)
Σ_{i∈I} y_il = 1   ∀ l ∈ L   (4)
Σ_{j∈I∪{0}} x_ij = 1   ∀ i ∈ I   (5)
Σ_{i∈I∪{d}} x_ij = 1   ∀ j ∈ I   (6)
u_i − u_j + Q x_ij + z_ji ≤ Q − Σ_{l∈L} y_jl   ∀ i, j ∈ I with i ≠ j   (7)
z_ji ≤ Q x_ji   ∀ i, j ∈ I with i ≠ j   (8)
z_ji ≤ Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl   ∀ i, j ∈ I with i ≠ j   (9)
Q − Σ_{l∈L} y_il − Σ_{l∈L} y_jl − (1 − x_ji) Q ≤ z_ji   ∀ i, j ∈ I with i ≠ j   (10)
u_i ≥ Σ_{l∈L} y_il   ∀ i ∈ I   (11)
u_i − p_i + Q x_di ≤ Q   ∀ i ∈ I   (12)
p_i ≤ Q x_di   ∀ i ∈ I   (13)
p_i ≤ Σ_{l∈L} y_il   ∀ i ∈ I   (14)
Σ_{l∈L} y_il − (1 − x_di) Q ≤ p_i   ∀ i ∈ I   (15)
v_i − v_j + (T − d_di − d_j0 + d_ij) x_ij + (T − d_di − d_j0 − d_ji) x_ji ≤ T − d_di − d_j0   ∀ i, j ∈ I with i ≠ j   (16)
v_i − d_i0 x_i0 ≥ 0   ∀ i ∈ I   (17)
v_i − d_i0 x_i0 + T x_i0 ≤ T   ∀ i ∈ I   (18)
x_ij ∈ {0, 1}   ∀ i, j ∈ V′   (19)
y_il ∈ {0, 1}   ∀ i ∈ I, l ∈ L   (20)
z_ji ≥ 0   ∀ i, j ∈ I   (21)
p_i ≥ 0   ∀ i ∈ I   (22)
The objective function (1) maximizes the total student satisfaction. Constraints (2) and (3) ensure that the number of vehicles used does not exceed the number available. Constraint (4) ensures that each student boards at exactly one bus stop. Constraints (5) and (6) ensure that a vehicle visiting bus stop i must also leave bus stop i. Constraints (7)–(15) prevent sub-tours on each path through the number of students and the bus capacity. Constraints (16)–(18) ensure that the cumulative distance at a bus stop i, denoted by v_i, does not exceed the limit T. Constraints (19)–(20) guarantee that the decision variables x_ij and y_il are binary, and constraints (21)–(22) that z_ji and p_i are nonnegative. In addition, we consider a second model by changing the objective function from (1) to a different view of the problem, namely minimizing the total travel distance of all buses:

Minimize  Σ_{i∈V′} Σ_{j∈V′} d_ij x_ij   (23)
with the same constraints (2)–(22).
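The tension between objectives (1) and (23) can be seen even on a toy instance. The following brute-force sketch (invented data; the real instance was solved with CPLEX) enumerates the assignment variables y_il for one bus and two candidate stops and scores both objectives:

```python
# Toy illustration of the two objectives on invented data: enumerate the
# stop assigned to each student and evaluate satisfaction and route length.
from itertools import product

s = {(1, 'a'): 9, (2, 'a'): 4,      # s_il: satisfaction of student l at stop i
     (1, 'b'): 3, (2, 'b'): 8,
     (1, 'c'): 7, (2, 'c'): 6}
detour = {1: 2, 2: 5}               # extra km the route pays to serve each stop
students = ['a', 'b', 'c']

def evaluate(assign):
    """Total satisfaction and route length for one stop-per-student choice."""
    sat = sum(s[(stop, stu)] for stu, stop in zip(students, assign))
    dist = sum(detour[stop] for stop in set(assign))  # each used stop visited once
    return sat, dist

options = list(product([1, 2], repeat=len(students)))
max_sat = max(options, key=lambda a: evaluate(a)[0])   # objective (1)
min_dist = min(options, key=lambda a: evaluate(a)[1])  # objective (23)
```

Here maximizing satisfaction spreads the students over both stops, while minimizing distance pushes everyone to the cheaper stop at the price of lower satisfaction, mirroring the trade-off reported in Tables 1 and 2.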
3 Experimental Results

In this section, an experiment is given to show the use of our model. The experiment solves the SBRP for a kindergarten with 70 students using the school bus service. Each school bus has 13 seats, and the kindergarten has 8 school buses.
There are 12 bus stops in our experiment. We solve the problem in both situations using CPLEX. Results for the first and second objective functions are presented in Table 1 and Table 2, respectively. In these tables, the first column is the bus number and the second is the path of each bus for picking up students. The number of students picked up by each bus is shown in the third column, and the satisfaction of the students (%) waiting for the bus at each stop in the fourth. The final column gives the total travel distance of each bus for picking up students along its path to the school.

Table 1. Results from the proposed model with the objective function (1).
Bus  Bus route   Number of students  Total satisfaction score (%)  Total distance (km)
1    d-3-5-0     11                  100                           20.5
2    d-4-2-0     12                  100                           37.1
3    d-6-9-0     11                  95.46                         14
4    d-7-8-0     11                  97.23                         14.3
5    d-11-1-0    12                  99.17                         34.3
6    d-12-10-0   13                  100                           13.7
Total                                98.71                         133.9
Table 2. Results from the proposed model with the objective function (23).
Bus  Bus route  Number of students  Total satisfaction score (%)  Total distance (km)
1    1-2-0      12                  73.33                         22.3
2    3-8-0      12                  84.17                         15
3    5-10-0     10                  90                            13.35
4    7-4-0      12                  76.67                         8.9
5    9-6-0      11                  83.64                         11
6    12-11-0    13                  78.46                         12.5
Total                               80.71                         83.05
In Table 1, the total distance of the buses is 133.9 km and the optimal satisfaction score is 691, which corresponds to 98.71%. This implies that the students are extremely satisfied with the school bus service. Table 2 shows the solution for the second objective function: the total distance of the buses is 83.05 km and the total satisfaction score is 565. A comparison of the satisfaction scores and total distances under the two objective functions is given in Table 3. The two objective functions give different solutions; which one to use depends on the purpose of the user of this algorithm.
Table 3. Comparison of the first and the second models.
Model                      Total satisfaction score (%)  Total distance (km)
The first proposed model   98.71                         133.9
The second proposed model  80.71                         83.05
Figure 2 shows an example route from the first model. It can be seen that each student travels to a designated bus stop; the distance each student walks varies with the satisfaction score that student provides. The path of the school bus is quite complicated because we do not seek the shortest route; we only require that the distance of each route does not exceed the specified limit.

Fig. 2. An example route from the first model.

Figure 3 shows an example route from the second model. The satisfaction of the students with their bus stops is 80.71%. Finding the shortest routes reduces travel time and expenses, so the school saves money and the bus fare for students can be reduced. As traveling time decreases, students also gain more time for homework or family activities.

Fig. 3. An example route from the second model.
4 Conclusion

In this paper, we presented two integer linear programming models for solving the SBRP with capacity and distance constraints. In both models, students are assigned to bus stops according to the chosen objective function: the first model maximizes the total satisfaction score of the students, while the second minimizes the total travel distance of the buses. The selection of the model depends on the needs of the user. If the user has the students' preferences in mind, the first objective function should be chosen; if the user is concerned with the distance, time or cost of travel, the second should be chosen.

Acknowledgement. This work was supported by the Thammasat University Research Unit in Fixed Points and Optimization.
References
1. Mpeta, K.N., Munapo, E., Ngwato, M.: Route optimization for residential solid waste collection: Mmabatho case study. In: Intelligent Computing and Optimization 2019. Advances in Intelligent Systems and Computing, vol. 1072, pp. 506–520 (2020)
2. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manage. Sci. 6(1), 80–91 (1959)
3. Tarantilis, C.D., Ioannou, G., Kiranoudis, C.T., Prastacos, G.P.: Solving the open vehicle routing problem via a single parameter metaheuristic algorithm. J. Oper. Res. Soc. 56(5), 588–596 (2005)
4. Newton, R.M., Thomas, W.H.: Design of school bus routes by computer. Socio-Econ. Plan. Sci. 3(1), 75–85 (1969)
5. Braca, J., Bramel, J., Posner, B., Simchi-Levi, D.: A computerized approach to the New York City school bus routing problem. IIE Trans. 29, 693–702 (1997)
6. Jalel, E., Rafaa, M.: The urban bus routing problem in the Tunisian case by the hybrid artificial ant colony algorithm. Swarm Evol. Comput. 2, 15–24 (2012)
7. Kim, T., Park, B.J.: Model and algorithm for solving school bus problem. J. Emerg. Trends Comput. Inf. Sci. 4(8), 596–600 (2013)
8. Hashi, E.K., Hasan, M.R., Zaman, M.S.: GIS based heuristic solution of the vehicle routing problem to optimize the school bus routing and scheduling. In: 19th International Conference on Computer and Information Technology (ICCIT), pp. 56–60. IEEE (2016)
9. Park, J., Kim, B.I.: The school bus routing problem: a review. Eur. J. Oper. Res. 202, 311–319 (2010)
10. Ellegood, W.A., Solomon, S., North, J., Campbell, J.F.: School bus routing problem: contemporary trends and research directions. Omega 95, 102056 (2020)
11. Bodin, L.D., Berman, L.: Routing and scheduling of school buses by computer. Transp. Sci. 13(2), 113–129 (1979)
12. Schittekat, P., Sevaux, M., Sorensen, K.: A mathematical formulation for a school bus routing problem. In: Proceedings of the IEEE 2006 International Conference on Service Systems and Service Management, Troyes, France (2006)
13. Eldrandaly, K.A., Abdallah, A.F.: A novel GIS-based decision-making framework for the school bus routing problem. Geo-spat. Inf. Sci. 15(1), 51–59 (2012)
14. Bektas, T., Elmastas, S.: Solving school bus routing problems through integer programming. J. Oper. Res. Soc. 58(12), 1599–1604 (2007)
Distributed Optimisation of Perfect Preventive Maintenance and Component Replacement Schedules Using SPEA2

Anthony O. Ikechukwu¹, Shawulu H. Nggada², and José G. Quenum¹

¹ Namibia University of Science and Technology, Windhoek, Namibia
[email protected], [email protected]
² Higher Colleges of Technologies, Ras Al Khaimah Women's Campus, Ras Al Khaimah, United Arab Emirates
[email protected]
Abstract. The upsurge of technological opportunities has brought about the speedy growth of industrial processes and machinery. The increased size and complexity of these systems, together with a high dependence on them, have made it necessary to intensify maintenance processes. More effective maintenance scheduling approaches are required to minimise the number of failure occurrences, which can be realised through well-articulated perfect preventive maintenance with component replacement (PPMR) schedules, from the infancy stage through to completion. Using the Strength Pareto Evolutionary Algorithm 2 (SPEA2), we devise a multi-objective optimisation approach that uses dependability and cost as objective functions. Due to the large scale of the problem, running SPEA2 on a single node is computationally challenging: one instance of SPEA2 PPMR scheduling optimisation takes up to 1 h 20 min, and the search time, solution quality and accuracy all suffer. We address this limitation by proposing a distributed architecture based on MapReduce, which we superimpose on SPEA2. The evaluation of our approach in a case study presents the following results: (1) our approach offers an effective optimisation modelling mechanism for PPMR; (2) the distributed implementation tremendously improves performance, reducing the computational time of the optimisation by 79% across 4 nodes (1 master node and 3 worker nodes), and improves the quality and accuracy of the solution without introducing much overhead.

Keywords: Maintenance scheduling · Component replacement · Hierarchical · Population balancing · Parallel · Strength Pareto Evolutionary Algorithm 2 (SPEA2) · MapReduce · Optimisation
1 Introduction

Maintenance and appropriate preventive maintenance schedules have been shown to elongate component useful life [1, 19] and to improve reliability and availability. However, at a certain point a component becomes unmaintainable, or its

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 297–310, 2021. https://doi.org/10.1007/978-3-030-68154-8_29
A. O. Ikechukwu et al.
reliability falls below a predefined threshold. Consequently, the work in [2] considered component replacement as an improvement factor, in the form of an optimisation parameter, to evaluate and determine when a component's reliability has dropped below a defined acceptable level, which informs whether the component will be maintained or replaced. At this stage, the cost of maintenance is computed either to include the cost of component replacement or, when only a maintenance activity is carried out, to exclude it. We established mathematical models for reliability, unavailability and cost under perfect preventive maintenance using the Weibull distribution and an age reduction model. We then developed a variant of SPEA2 and defined an optimisation approach for optimising PPMR schedules, which we applied to the PPMR schedules of a Fuel Oil Service System (FOSS) model case study with respect to unavailability and cost.

SPEA2 is an extension of SPEA: an enhanced elitist, multi-objective, multidirectional optimisation algorithm that mitigates some of SPEA's drawbacks. SPEA2 employs a density estimation method to differentiate among individuals with similar raw fitness values, keeps the archive size constant throughout the archive update process, and uses a truncation method that avoids removing boundary solutions. SPEA2 outperforms most other evolutionary algorithms (EAs) on almost all test problems [20], but at the expense of computational time when processed sequentially, because of its huge iterative workload. Evolutionary algorithms like SPEA2 are generally slow, especially when the evaluation function is computationally intensive. It takes long for an evolutionary algorithm to find near-optimal solutions when processed sequentially; therefore, techniques are needed to speed up this process.
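The fitness assignment just summarised can be sketched compactly. The following is an illustrative implementation of the standard SPEA2 scheme of Zitzler et al. (not the authors' code): strength counts the individuals one dominates, raw fitness sums the strengths of one's dominators, and a k-th nearest-neighbour density term breaks ties.

```python
# Illustrative sketch of SPEA2 fitness assignment for minimisation problems
# (e.g. unavailability and cost). Lower fitness is better; nondominated
# individuals always score below 1.
import math

def dominates(a, b):
    """Pareto dominance, minimising every objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea2_fitness(pop):
    n = len(pop)
    # S(i): number of individuals that i dominates
    strength = [sum(dominates(p, q) for q in pop) for p in pop]
    # R(i): sum of strengths of all individuals dominating i
    raw = [sum(strength[j] for j in range(n) if dominates(pop[j], pop[i]))
           for i in range(n)]
    k = max(1, int(math.sqrt(n)))            # k-th neighbour for density
    fitness = []
    for i in range(n):
        dists = sorted(math.dist(pop[i], pop[j]) for j in range(n) if j != i)
        fitness.append(raw[i] + 1.0 / (dists[k - 1] + 2))   # D(i) = 1/(sigma_k + 2)
    return fitness

# Points 1 and 2 are nondominated (fitness < 1); point 3 is dominated by both.
front = spea2_fitness([(0.1, 4.0), (0.2, 3.0), (0.3, 5.0)])
```

This is the per-generation kernel that the distributed architecture below parallelises, since evaluating dominance and density is quadratic in the population size.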
It is our view that distributing the optimisation process will help achieve the needed speed-up. With data-parallel programming models, data mapping and task assignment to processors can be achieved; the data-parallel model adopted in this work is MapReduce. The MapReduce programming model [18] is efficient in solving large-scale data problems in a parallel and distributed environment, and introducing parallelism in a distributed architecture for solving complex problems can increase performance [3]. This paper also adopts methods such as hierarchical decomposition and population balancing on top of distributed processing to further reduce computational time.

The remainder of the paper is organised as follows. Section 2 discusses related work. Section 3 presents the distributed solution, which includes the MoHPBPSPEA2 candidate solution algorithm, architecture and flowcharts, while Sect. 4 presents the MoHPBPSPEA2 implementation. Section 5 presents and discusses the results. Section 6 draws some conclusions and sheds light on future work, followed by the references.
1.1 Challenges with Sequential Evolutionary Algorithms
An EA executes its steps sequentially, with crossover and mutation carried out on pairs of individuals, so computational time grows for large populations as the number of steps increases. Execution time is further increased by the number of objective functions to be met and by the complexity of the system model under consideration. Another challenge of a sequential implementation is memory consumption: the entire population must be available in memory for the EA's operations. Moreover, although an EA still converges towards optimal solutions, a sequential EA may sometimes complete only a partial run over the search space, outputting local optima instead of the global optima it would have found on a full run. Consequently, real-world problems that require fast solutions are often not considered for EA computation, since EAs do not find solutions from a mere mathematical description of the problem but are stochastic, non-linear, discrete and multidimensional. The usefulness of evolutionary algorithms can be improved by reducing their computational time. This paper proposes a technique to reduce the computational time of evolutionary algorithms.
2 Related Work

At the first International Conference on Genetic Algorithms (ICGA), there were no papers about parallel GAs at all. This changed at the second ICGA in 1987, where six papers were published; from then on there has been a steady flow of papers in the conferences and journals of GAs and parallel computation. To reduce the computational load of genetic search, many methods for searching in a parallel and distributed manner have been proposed [4–6]. Abramson and Abela [7] discussed the application of an EA to the school timetabling problem and showed that the execution time can be reduced by using a commercial shared-memory multiprocessor. The program was written in Pascal and run on an Encore Multimax shared-memory multiprocessor. Times were taken for a fixed number of generations (100) so that the effects of relative solution quality could be ignored, and a speed-up was demonstrated for the parallel implementation of the program. Although much work has been reported on PEA models (and implementations on different parallel architectures) involving single-objective optimization problems (SOPs), PEA models can also be applied to multiobjective optimization problems (MOPs) [8]. MOPs normally have several (usually conflicting) objectives that must be satisfied at the same time. The first multi-objective GA, called the Vector Evaluated Genetic Algorithm (VEGA), was proposed by Schaffer [9]. Afterward, several major multi-objective evolutionary algorithms were developed, such as the Multi-objective Genetic Algorithm (MOGA) [10], the Niched Pareto Genetic Algorithm [11], the Random Weighted Genetic Algorithm (RWGA) [12], the Nondominated Sorting Genetic Algorithm (NSGA) [13] and the Strength Pareto Evolutionary Algorithm (SPEA) [14].
Muhammad Ali Ismail [15] presented work using a master–slave paradigm on a Beowulf Linux cluster with the MPI programming library. He wrote pseudocode that first initializes the base population and the number of nodes present in the cluster, then assigns the fitness function to the slave nodes. The slaves compute the fitness objective and perform mutation, after which they send their results back to the master, which assembles the final result. The paper presented a view of the implementation and realization of such algorithms on a parallel architecture. Lim, Ong, Jin, Sendhoff and Lee proposed a Hierarchical Parallel Genetic Algorithm framework (GE-HPGA) [16] based on standard grid technologies. The framework offers a comprehensive solution for efficient parallel evolutionary design of problems with computationally expensive fitness functions, providing novel features that conceal the complexity of a grid environment through an extended GridRPC API and a metascheduler for automatic resource discovery (Fig. 1).
Multiobjective Hierarchical Population-Balanced and Parallel SPEA2 using MapReduce (MoHPBPSPEA2)
1. Accept user input objectives (tasks).
2. Generate a population, P, at random (the solution space).
3. Decompose the problem based on population and objectives (by the server).
   3.1 Divide P into subpopulations subp1, subp2, …, subpN (N is the number of nodes used to solve the problem).
4. Initialise parallel execution on the nodes.
5. For subpi, i = 1, …, N, execute the next steps in parallel on the nodes (N).
6. On each node:
   6.1 Receive the subpopulation subpi and the objectives (tasks).
   6.2 Perform parallel SPEA2 on the worker:
       6.2.2 For subpw, w = 1, …, W, execute the next steps in parallel on the available working threads (W).
           6.2.2.1 Apply the selection mechanism and the genetic operators using MapReduce.
           6.2.2.2 If the termination criteria are not satisfied, return to 6.2.2.
   6.3 Combine results from the nodes.
   6.4 Send the results to the server.
7. If the termination criteria are not met, return to 5.
8. Execute a local EA on the server for best results.
9. Display the best results (by the server).
Fig. 1. The proposed MoHPBPSPEA2 Algorithm
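The server-side decomposition and merge steps of the algorithm (steps 3, 5 and 6.3–6.4) can be mocked up sequentially. A minimal sketch (illustration only; the real system runs the nodes in parallel and applies the full SPEA2 variant rather than this dummy fitness):

```python
# Sequential mock-up of the hierarchical split/merge in the algorithm above.
import random

def split(pop, n_nodes):
    """Step 3.1: divide P into N near-equal subpopulations."""
    return [pop[i::n_nodes] for i in range(n_nodes)]

def node_step(subpop):
    """Stand-in for step 6: each node keeps its locally fitter half."""
    ranked = sorted(subpop)              # here, lower value = fitter
    return ranked[: max(1, len(ranked) // 2)]

random.seed(1)
population = [random.random() for _ in range(12)]
parts = split(population, 3)                                # server-side decomposition
combined = sorted(x for p in parts for x in node_step(p))   # steps 6.3-6.4
```

Each node sees only P/N individuals, so the quadratic per-generation work of SPEA2 shrinks by roughly N² per node; the server then merges the partial results before its final local refinement (step 8).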
3 The Distributed Solution

It is evident from the literature that a range of EA variants, particularly hierarchical and parallel EAs, perform better than sequential EAs [16, 17]. Consequently, we consider this kind of EA and propose the application of a hierarchical and parallel EA (MoHPBPSPEA2) to the multiobjective optimisation problem of perfect preventive maintenance with component replacement schedules, using a MapReduce model for the parallel design. MapReduce [18] is a software framework used to provide distributed computing on large-scale problems over a cluster of nodes. The following section presents an architecture depicting how a multiobjective real-world problem can be solved using this framework.

3.1 The Proposed Architecture
The proposed system architecture is shown in Fig. 2. The setup consists of the server and the nodes, connected together as a cluster, such that information flows from the server to the nodes, between the nodes, and from the nodes back to the server. Each node contains processor(s) with threads, and each thread computes a process; a process consists of a task (objective) and a subpopulation. We propose to apply the objective (task) and population (data) decomposition strategies together to decompose SPEA2. The decomposition is hierarchical, and all nodes in a cluster are offered equal shares of the population to work on with the same task. The global decomposition level (the server) decomposes the several SPEA2 tasks (objectives) and assigns them to the local decomposition level (the nodes). Each node then takes responsibility for decomposing its given population and task, and runs the tasks in parallel on the cluster. The MapReduce model offers a parallel design arrangement that streamlines MapReduce use in a distributed architecture: it splits a large task into smaller tasks and parallelises their execution. The proposed system architecture shows the global and local levels of MoHPBPSPEA2 decomposition.
Fig. 2. Proposed system architecture
3.2 The Proposed Distributed Algorithm and Flowcharts
We propose the MoHPBPSPEA2 candidate solution shown in Fig. 1 above, which we superimpose on the SPEA2 variant developed in [2] for the above-mentioned system architecture. The MoHPBPSPEA2 approach of distributing jobs (tasks and population) across nodes and threads has two major benefits. First, it gives MoHPBPSPEA2 the ability to run on different computer specifications, that is, on a node with many threads or a node with only one thread, and hence on various hardware architectures. Second, it creates the opportunity to study variables that may affect the performance of MoHPBPSPEA2, such as communication between nodes of the same or different computer types and communication between threads of the same or different nodes, so as to better understand MoHPBPSPEA2 and enable its improvement.
4 MoHPBPSPEA2 Implementation

In the proposed approach, the problem is first decomposed based on the number of objectives (tasks) and the population P across the individual nodes. The second stage deals with population balancing by parallel decomposition of the population assigned to each node: each node is offered P/N individuals, where P is the population size and N is the number of nodes used to solve the problem. The subpopulation, which forms part of the jobs, is assigned to each node, and each node further distributes its workload among its worker threads so that the jobs are performed in parallel across the cluster. This dual-phase distribution approach contributes to the efficiency and effectiveness of the developed MoHPBPSPEA2.

The dataset is first processed and stored in the datastore, and almost all the time-consuming computations and evaluations (population generation, fitness assignment, selection and the objective functions) are moved to the mapper for processing, while the mapper output is sent to the reducer as input for crossover and mutation; finally, the reducer outputs the results. These strategies save dataset-processing time (input–output processing) that would otherwise be spent at different stages of the computation, thereby saving both computation time and communication time among the nodes. The results from all participating nodes in the cluster are then combined and sent to the server, which executes a local EA for the best results and displays them.
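The mapper/reducer split described above can be sketched in one round. In the following minimal sketch, plain functions stand in for the MapReduce framework, and the bit-string genome and operators are illustrative, not the authors' implementation:

```python
# One-round mock-up: fitness scoring map-side, variation reduce-side.
import random

random.seed(0)

def mapper(individual):
    """Map side: fitness evaluation and objective scoring."""
    unavailability = sum(individual) / len(individual)   # dummy objective
    return (unavailability, individual)

def reducer(scored):
    """Reduce side: select the fitter half, then crossover + mutation."""
    parents = [ind for _, ind in sorted(scored)[: len(scored) // 2]]
    children = []
    for a, b in zip(parents, parents[1:] + parents[:1]):
        cut = len(a) // 2
        child = a[:cut] + b[cut:]                 # one-point crossover
        j = random.randrange(len(child))
        child[j] = 1 - child[j]                   # bit-flip mutation
        children.append(child)
    return children

pop = [[random.randint(0, 1) for _ in range(6)] for _ in range(8)]
next_gen = reducer([mapper(ind) for ind in pop])
```

Because each mapper call touches only one individual, the expensive evaluations can be spread across all worker threads, while the reducer sees the already-scored population in a single pass.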
Julia is adopted to implement the developed distributed algorithm. The reliability threshold tr used is 0.8, the shortest PM interval T is 180, and the population and archive sizes are 500; the MoHPBPSPEA2 algorithm is run for 500 generations on the cluster of nodes. The proposed approach is generic, as it offers the flexibility to expand to more than one local cluster; in that case, the server is connected to the n local clusters and all the nodes of each cluster run in parallel. The algorithms and flowcharts presented illustrate the processes executed to implement MoHPBPSPEA2. A simple model of the fuel oil service system of a ship (FOSS), the case study described in [2], is used to demonstrate the developed approach. The constituent components of the FOSS are marked with failure behaviour data and their short names as described in [1] (Figs. 3 and 4).
Fig. 3. Flowchart showing process flow on the server
Fig. 4. Flowchart showing process flow on the nodes
5 Results

A set of optimal PPMR schedules for the fuel oil system was obtained for (i) serial processing and (ii) parallel and distributed processing. The resulting trade-off PM schedules are shown in the Pareto frontiers of Table 1 and Fig. 6, and Table 2 and Fig. 5. Each solution, or PM schedule, represents an optimal trade-off between unavailability and cost.

5.1 Discussion

A total of 53 optimal solutions were obtained for FOSS PPMR scheduling optimisation via the serial process and 64 optimal solutions via the parallel and distributed process. The first and last five optimal PPMR schedules obtained via the parallel and distributed process and via the serial process are presented in Table 1 and Table 2, respectively, with their graphic representations shown in Figs. 5 and 6. A statistical comparison of the two is shown in Table 3.
Table 1. Optimal PPMR schedules – parallel and distributed optimisation via NUST Cluster of Servers
Table 2. Optimal PPMR schedules – serial optimisation on a NUST Server
Fig. 5. Pareto frontier of PPMR schedules via serial optimisation on a NUST Server
Fig. 6. Pareto frontier of PPMR schedules via parallel and distributed optimisation on NUST Cluster of Servers
Table 3. Statistical tool comparison for serial on server versus parallel and distributed optimisation of FOSS PPMR scheduling
From Table 3 above, the minimum unavailability and minimum cost values for the serial and the parallel and distributed processes are the same. However, the parallel and distributed process has a maximum unavailability value 0.012276 greater than the serial process, and its unavailability range is likewise 0.012276 greater. In the same vein, the parallel and distributed process has a maximum cost value 166000 greater than the serial process, with a cost range 166000 greater. This implies that the parallel and distributed process explored more of the feasible region than the serial process and obtained more optimal PPMR schedules. The unavailability standard deviation for the parallel and distributed process is 0.00151 smaller than for serial, but the reverse holds for the cost standard deviation, which is 12026 greater for the parallel and distributed process. Similarly, the unavailability mean for the parallel and distributed process is 0.014682 smaller than for serial, while its cost mean is 16347 greater. However, to get a better and clearer picture of what happened in the optimisation of the FOSS PPMR schedules, the distances in objective function values under both processes must be examined further. In Table 2 (serial), the least optimal schedule, found in generation 500, has unavailability 0.094808367 and cost 4161191, while the highest optimal schedule, also found in generation 500, has unavailability 0.37103334 and cost 2741967. In Table 1 (parallel and distributed), the least optimal schedule, found at generation 497, has unavailability 0.094808367 and cost 4373191, and the highest optimal schedule, obtained at generation 499, has unavailability 0.383308865 and cost 2704967.
The unavailability and cost variances for serial are 0.276225 and 1419224 respectively, and for the parallel and distributed process 0.2885 and 1668224 respectively. Comparing the variance values of the two objective functions, the spread between the optimal solutions obtained by the parallel and distributed process is tighter than that for serial. This means that the optimal PPMR schedules
308
A. O. Ikechukwu et al.
obtained via the parallel and distributed process are closer to one another than those obtained via serial. This satisfies one of the goals of multiobjective optimisation: the smaller the distances in objective function values between one non-dominated solution and the others, the better the diversity of the non-dominated solutions. Interestingly, from Fig. 5 (serial), the first and last optimal solutions of the Pareto frontier have objective function values of approximately 0.38 unavailability with 2.8 × 10^6 cost and 0.08 unavailability with 4.3 × 10^6 cost, respectively, giving objective-space differences of 0.30 for unavailability and 1.5 × 10^6 for cost. In the same vein, from Fig. 6 (parallel and distributed), the first and last optimal solutions have approximately 0.39 unavailability with 2.8 × 10^6 cost and 0.07 unavailability with 4.4 × 10^6 cost, respectively, giving differences of 0.32 and 1.6 × 10^6. From this analysis, the objective function differences for the parallel and distributed process strike a better balance than those for serial; that is, the optimal schedules obtained via the parallel and distributed process trade off the two objective functions (unavailability and cost) better. It can be seen from Fig. 5, Fig. 6, Table 1 and Table 2 that the PPMR schedules obtained via the parallel and distributed process are superior to those obtained via serial. As shown in Table 3, the computational time of the optimisation via the parallel and distributed process is 3792 s lower than the serial time; in other words, the parallel and distributed process improved the optimisation run time by 79%.
6 Conclusion

The developed scalable, hierarchical, and distributed SPEA2 algorithm (MoHPbPSPEA2) for optimising perfect preventive maintenance with component replacement schedules (PPMR) provides a wide contribution to knowledge in the field of AI application. The results obtained from subjecting the FOSS constituent components to the developed optimisation approach met expectations: optimising PPMR schedules in distributed processes can reduce computational time and enhance cost effectiveness, inject more diversity into the solution space, and produce better design solutions. There are many possibilities for future work. The work in this paper can be extended to: (1) investigate and demonstrate the effect on design solutions of using the island model with SPEA2 for perfect preventive maintenance with replacement in a distributed environment; the island model could further enhance design solution quality. (2) Investigate optimising imperfect preventive maintenance schedules under a replacement policy using SPEA2 in a distributed process. (3) Investigate a network model to track the fitness of system components between PM times and offer data that might be useful in recommending preventive measures should the system require attention before the next PM time.
Distributed Optimisation of Perfect Preventive Maintenance
(4) Investigate and demonstrate the effect of executing PPMR schedules on more than 4 nodes or more than one cluster. This would further reduce the computational time significantly.
A Framework for Traffic Sign Detection Based on Fuzzy Image Processing and Hu Features

Zainal Abedin and Kaushik Deb

Chittagong University of Engineering and Technology (CUET), Chattogram, Bangladesh
[email protected], [email protected]
Abstract. The Traffic Sign Recognition (TSR) system is an essential part of traffic management, serving as a support system for drivers and intelligent vehicles. Traffic Sign Detection (TSD) is the prerequisite step of an automatic TSR system. This paper proposes a TSD framework by exploring fuzzy image processing and invariant geometric moments. In this framework, a fuzzy inference system is applied to convert the HSV image into gray tone. Then, a statistical threshold is used for segmentation. After shape verification of every connected component using Hu moments and a Quadratic Discriminant Analysis (QDA) model, the candidate signs are detected by referencing bounding box parameters. The framework is simulated in different complex scenarios in both day and night mode. Experimental results show that its performance is satisfactory and comparable with state-of-the-art research. The proposed framework yields a 94.86% F-measure on Bangladesh road signs and 93.01% on German road signs.

Keywords: Traffic sign detection · Fuzzy inference system · Hu moments · Quadratic discriminant analysis

1 Introduction
Traffic signs are indispensable components that assist traffic police and provide a safety mechanism by controlling the flow of traffic and guiding pedestrians and drivers in every aspect of the road environment. Automatic traffic sign recognition using computer vision attracts special interest in real-time applications such as intelligent vehicles and traffic maintenance. In an automatic TSR system, an on-board mounted module is used to detect and return the information of traffic signs for proper navigation of the vehicle. Regarding transportation maintenance, correct positioning and accurate information of traffic signs definitely enhance road security and safety. In a word, TSR is an important part of Intelligent Transportation Systems (ITS) [1,2,4]. Traffic signs are designed using distinct colours and shapes, which are salient, visible and easily discriminable by humans from the environment. Most of
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 311–325, 2021. https://doi.org/10.1007/978-3-030-68154-8_30
the signs are made using red, blue and yellow colors, where the shapes are triangular, circular, square and rectangular. A sign is made distinctive by three things: border color, pictogram and background. The signs are installed at locations alongside the road so that drivers can easily detect them [11]. Also, the shape and colors determine the categories of signs. The properties of Bangladesh road signs are outlined in Table 1 and some examples of traffic signs are illustrated in Fig. 1.

Table 1. Properties of Bangladesh traffic signs [5].

Color                              Shape                Type
Red border with white background   Circular             Regulatory
Red border with white background   Triangular           Warning
Blue background                    Circular             Regulatory
Blue background                    Square or Rectangle  Information
Fig. 1. Examples of Bangladesh traffic signs [5].
Uncertainty is a prevailing matter in image processing: visual patterns feature inherent ambiguity, digital images are noised while being captured, object presentations are not crisp in practical scenarios, knowledge of object detection is a vague term, and the color properties of a pixel are a linguistic matter (such as dark, bright and high luminance). Fuzzy logic, in the field of soft computing, is a tool to handle these sorts of uncertainty. Hence, the application of fuzzy logic can be a fruitful field to explore in developing vision-based applications [3]. As in [9,10], a large amount of research work has been done on the traffic signs of countries such as Germany, Belgium, Sweden, France, the US, the UK and China. Another important fact from the literature review of [10] is that traffic signs do not look alike across the globe. In spite of the clear definition of laws by the Vienna Treaty, dissimilarity of sign structures still prevails among the countries signatory to the agreements, and in some scenarios noteworthy variation in traffic sign appearance can remain within a country. These variations can be managed easily by humans, while in automatic detection and recognition systems they are challenging issues that need to be addressed. A graphic is presented in Fig. 2 showing this variation in the case of the stop sign of Bangladesh with some
countries. Another important research gap is that most of the research did not consider the scenario of night mode sign detection [10].
Fig. 2. Variation in sign appearances
In the light of the above situation, the aim of this research is to propose a framework for the detection of Bangladesh traffic signs where, along with the common challenges of detection algorithms, night mode scenarios are emphasized. The key contributions of the framework are intensity transformation using fuzzy inference, which can handle uncertainties in image processing and provides a computing tool for incorporating human knowledge; segmentation by a statistical threshold; and shape verification using geometric invariant Hu moments and a QDA model. The rest of the paper is structured as follows: Sect. 2 presents related research, Sect. 3 presents the proposed framework, while results and analysis are demonstrated in Sect. 4. Finally, the conclusion is drawn in Sect. 5.
2 Related Works
To detect and localize the RoIs in an image is called Traffic Sign Detection (TSD), which is the first step of TSR. According to the comprehensive reviews of TSD accomplished by the contributors of [9,10], the major categories of TSD algorithms are color based, shape based and machine learning based. In color based methods, the images are segmented using a threshold chosen on the basis of color information, while in shape based methods the shapes of the signs are detected using popular edge detection algorithms such as the Hough transform. Recently, machine learning based sign detection has become very popular due to better accuracy. In this approach, pixels are classified from the background by using different features and classification models. In [11], a color based detection algorithm was proposed for Spanish traffic signs, where the threshold was determined by observing the hue and saturation histograms of HSI images of manually cropped traffic signs. This algorithm also used Distance to Borders (DtBs) vectors and an SVM model to verify the shapes of the signs. A similar segmentation method was applied by the authors of [4], where shape verification was implemented by Hu features: the cross-correlation values of the detected shapes and the ground truth shapes were calculated, and shapes within an empirical threshold were considered as
positive shapes. Similarly, in [8], the authors proposed a detection algorithm for Bangladeshi road signs using the YCbCr color space, with statistical threshold values for segmentation and DtBs vectors for shape verification. In [12], a shape and color based detection algorithm was proposed with an F-measure of 0.93. Potential regions of interest were searched in the scene by using Maximally Stable Extremal Regions (MSERs), and an HSV threshold was then used to detect the sign regions based on text. The detected regions were further filtered by using temporal information through consecutive images. Yang et al. proposed a computationally efficient detection algorithm which can detect signs at a rate of six frames per second on images of size 1360 × 800 [13]. They developed a probability model for the red and blue channels to increase the visibility of traffic colors using a Gaussian distribution and the Ohta space; then MSERs were applied to locate potential RoIs. Next, the RoIs were classified as signs or not using HOG features and an SVM classifier. The authors of [2] presented a machine learning based approach to segment the image, where positive colors are converted to foreground and others are mapped to black. A mean shift clustering algorithm based on color information was used to cluster the pixels into regions, and the centroid of each cluster was then used to classify whether the cluster is a sign or not by a random forest. In addition, a shape verification step was included in the framework to discard false alarms; this verification was based on the log-polar transformation of the detected blob and its correlation values with the ground truth. The authors of [15] proposed an efficient detection algorithm using AdaBoost and Support Vector Regression (SVR) to discriminate sign areas from the background, where a saliency model was designed using specific sign features (shape, color and spatial information).
By employing the salient information, a feature pyramid was developed to train an AdaBoost model for the detection of sign candidates in the image, and an SVR model was trained using Bag of Words (BoW) histograms to classify the true traffic signs from the candidates. In [1], the chromatic pixels were clustered using the k-NN algorithm, taking the values of a and b in the L*a*b space, and these clusters were then classified using a Support Vector Machine. Alongside, achromatic segmentation was implemented by a threshold in HSI space, and shape classification was done using Fourier descriptors of the regions and an SVM model. Most of the machine learning methods are based on handcrafted features to identify signs. Recently, deep learning has become very popular in computer vision research for detecting objects due to its capability of feature representation from the raw pixels of the image. The detection algorithm proposed in [19] used dark-area-sensitive tone mapping to enhance the illumination of dark regions of the image and an optimized version of YOLOv3 to detect signs. The authors of [20] detected signs using a Convolutional Neural Network (CNN), applying a modified version of YOLOv2. An image segmentation by fuzzy rule based methods in the YCbCr color space was presented in [18], where a triangular membership function and the weighted average defuzzification approach were selected.
3 Detection Framework
In general, TSR comprises two steps: sign detection from an image or video frame, and classification of the detected sign. This research concentrates on the detection of signs from an image as a mandatory step before sign recognition. The proposed framework for traffic sign detection is illustrated in Fig. 3. According to the framework, the image is captured by a camera and then some pre-processing, such as resizing the image to 400 × 300 pixels and color space mapping from RGB to HSV, is applied. After pre-processing, the image is transformed into gray level by using fuzzy inference, where the inputs are the crisp hue and saturation values of a pixel. Then, the gray image is segmented by applying a statistical threshold. The segmented image is subjected to further processing, such as morphological closing and filtering using area and aspect ratio, to discard unwanted regions. Next, the shapes of the remaining regions are verified by extracting Hu features and using a trained Quadratic Discriminant Analysis (QDA) model. Regions which are not classified as circle, triangle or rectangle are discarded at this stage. After shape verification, the connected components remaining in the binary image are considered as Regions of Interest (RoIs). Finally, the RoIs are detected as the potential traffic signs in the input image by bounding box parameters.
Fig. 3. Steps of Traffic sign detection framework.
3.1 Preprocessing

Nonlinear color space mapping from RGB to Hue Saturation Value (HSV) is applied to the input image to reduce the illumination sensitivity of the RGB model. The hue and saturation channels of the HSV image are isolated as the crisp inputs to the fuzzy inference system.

3.2 Transformation Using Fuzzy Inference
In this work, we use Mamdani Fuzzy Inference System (FIS) to map the hue and saturation values of a pixel to gray value. This inference consists of mainly three steps: fuzzification, inference using fuzzy knowledge base and defuzzification [3,7]
with two inputs and one output. The inputs are the hue and saturation values of a pixel and the output is the gray value of the pixel.

Fuzzification: it converts crisp sets into fuzzy sets with the assignment of degrees of membership. There are many techniques for the fuzzification process; here, fuzzification is based on the intuition of an expert. A fuzzy set is defined by a linguistic variable, where the degree of membership is assigned in the range (0, 1) by a characteristic function. Intuition is used to construct the membership functions, and the knowledge of a domain expert is utilized to develop this intuition. For the universe of 'hue', we consider five linguistic labels: reddish-1, reddish-2, bluish, noise-1 and noise-2; for 'saturation', they are reddish and bluish; and for the output they are reddish, bluish and black. Triangular and trapezoidal functions are used to assign the membership values of each linguistic variable. Table 2 presents the parameter values of each membership function.

Table 2. Membership parameters of the fuzzy sets.

Linguistic variable    Membership function  Parameters
Reddish-1 (hue)        Triangular           a = 0, m = 0, b = 22
Reddish-2 (hue)        Triangular           a = 240, m = 255, b = 255
Bluish (hue)           Trapezoidal          a = 115, m = 140, n = 175, b = 190
Noise-1 (hue)          Trapezoidal          a = 150, m = 210, n = 255, b = 255
Noise-2 (hue)          Triangular           a = 0, m = 0, b = 22
Reddish (saturation)   Trapezoidal          a = 96, m = 102, n = 255, b = 255
Bluish (saturation)    Trapezoidal          a = 150, m = 210, n = 255, b = 255
Reddish (output)       Triangular           a = 220, m = 235, b = 252
Bluish (output)        Triangular           a = 110, m = 150, b = 255
Black (output)         Triangular           a = 0, m = 15, b = 35
Inference: this is the main step of the FIS, where the fuzzy knowledge base is used to apply a set of rules to the input fuzzy sets. In fuzzy logic systems, the knowledge base is developed from rules and linguistic variables based on fuzzy set theory, and is constructed using a set of If-Then rules. The structure of the rules in the standard Mamdani FIS is as follows: the kth rule is R_k: IF x_1 is F_1^k AND ... AND x_p is F_p^k, THEN y is G_k, where p is the number of inputs. An example of a rule is R1 = 'If hue is reddish-1 AND saturation is reddish, THEN output is reddish'. Our knowledge base incorporates five rules. The inference is done in four steps. Step 1: the inputs are fuzzified; the crisp hue and saturation values of a pixel are converted to fuzzy variables. Step 2: all the antecedents of a rule are combined to yield a single value using the fuzzy logic min or max operation, depending on whether the parts are connected by ANDs or by ORs. Step 3: the output of each rule is inferred by an implication method, for which we use the AND operator performing the min operation. Step 4: the outputs of all rules are aggregated to yield a single fuzzy output; here the fuzzy max operator is used.
Defuzzification: in this step, we defuzzify the aggregated output (fuzzy values) to a scalar value by applying the centroid defuzzification method. Figure 4 shows three examples of gray images after the fuzzy intensity transformation.
Fig. 4. Gray image after fuzzy inference.
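The fuzzification-inference-defuzzification chain can be sketched per pixel as follows. This is a minimal illustrative reconstruction using the Table 2 parameters: the paper's knowledge base has five rules but lists only R1, so the rule set encoded here (two reddish rules and one bluish rule; the noise-to-black rules are omitted) is an assumption, not the authors' exact rule base.

```python
import numpy as np

def tri(x, a, m, b):
    """Triangular membership with peak m; handles degenerate shoulders (a == m or m == b)."""
    if x < a or x > b:
        return 0.0
    if x <= m:
        return 1.0 if m == a else (x - a) / (m - a)
    return 1.0 if b == m else (b - x) / (b - m)

def trap(x, a, m, n, b):
    """Trapezoidal membership, flat between m and n."""
    if x < a or x > b:
        return 0.0
    if x < m:
        return (x - a) / (m - a)
    if x > n:
        return (b - x) / (b - n)
    return 1.0

def fuzzy_gray(hue, sat):
    """Map one pixel's (hue, saturation), both on a 0-255 scale, to a gray value."""
    # Fuzzification and rule firing strengths: min for AND within a rule,
    # max to merge rules sharing a consequent (Table 2 input parameters).
    sat_red = trap(sat, 96, 102, 255, 255)
    fire_red = max(min(tri(hue, 0, 0, 22), sat_red),        # R1 from the paper
                   min(tri(hue, 240, 255, 255), sat_red))   # reconstructed rule
    fire_blue = min(trap(hue, 115, 140, 175, 190),
                    trap(sat, 150, 210, 255, 255))          # reconstructed rule
    # Implication (min-clip), aggregation (max) and centroid defuzzification
    # over the discretized output universe 0..255.
    ys = np.arange(256, dtype=float)
    agg = np.zeros(256)
    for strength, (a, m, b) in ((fire_red, (220, 235, 252)),
                                (fire_blue, (110, 150, 255))):
        mu = np.array([min(strength, tri(y, a, m, b)) for y in ys])
        agg = np.maximum(agg, mu)
    total = agg.sum()
    return 0.0 if total == 0 else float((ys * agg).sum() / total)
```

A strongly red pixel such as (hue = 10, sat = 200) defuzzifies near the reddish output peak (around 235), while a pixel firing no rule maps to 0.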
3.3 Segmentation

The gray image obtained by fuzzy inference is converted to a binary image using the statistical threshold given by Eq. 1, which is derived from the statistical parameters of the image:

Binary(x, y) = 1 if I(x, y) > μ + σ, and 0 otherwise    (1)

where μ is the mean and σ is the standard deviation of the gray image I(x, y).
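Eq. 1 amounts to a single vectorized comparison; a minimal NumPy sketch:

```python
import numpy as np

def segment(gray):
    """Binarize a gray image with the statistical threshold mu + sigma (Eq. 1)."""
    mu, sigma = gray.mean(), gray.std()
    return (gray > mu + sigma).astype(np.uint8)
```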
3.4 Morphological Closing and Filtering
The goal of this step is threefold: to merge regions that belong to the same sign but were fragmented during segmentation, to discard regions that are very small or very large, and to eliminate regions that do not comply with the aspect ratios of the signs. To get the desired output, morphological closing with a circular structuring element, followed by area and aspect ratio filtering, is applied to every connected component of the segmented image. The filtering criteria, which are derived empirically, are listed in Table 3.

Table 3. Filtering criteria.

Parameter      Candidate region
Aspect ratio   0.66 to 0.71
Area           300 to 850
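Table 3 maps directly to a predicate over each connected component's bounding box. A minimal sketch follows; taking the aspect ratio as height/width is an assumption, since the paper does not state the orientation of the ratio.

```python
def is_candidate(width, height, area):
    """Keep a connected component only if it satisfies the Table 3 criteria."""
    aspect_ratio = height / width  # assumption: ratio taken as height/width
    return 0.66 <= aspect_ratio <= 0.71 and 300 <= area <= 850
```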
3.5 Shape Verification
Fig. 5. Steps in shape verification.
In this step, the regions remaining after post-processing are verified to filter out false alarms. To verify each region, the seven geometric invariant Hu moments are extracted and a Quadratic Discriminant Analysis (QDA) classifier is used to verify the shape. The steps of shape classification are depicted in Fig. 5. After shape verification, regions that are not triangular, circular or rectangular are discarded. According to [3,17], the 2D moment of order (p + q) of a digital image f(x, y) of size M × N is defined as

m_pq = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} x^p y^q f(x, y)    (2)

where p = 0, 1, 2, ... and q = 0, 1, 2, ... are integers. The corresponding central moment of order (p + q) is defined as

μ_pq = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (x − x̄)^p (y − ȳ)^q f(x, y)    (3)

for p = 0, 1, 2, ... and q = 0, 1, 2, ..., where x̄ = m_10/m_00 and ȳ = m_01/m_00. The normalized central moments, denoted by η_pq, are defined as

η_pq = μ_pq / μ_00^γ    (4)

where γ = (p + q)/2 + 1 for p + q = 2, 3, 4, ... A set of seven invariant moments is calculated, where h1 to h6 are orthogonal (independent of size, position and orientation) moments and h7 is an invariant skew moment (good for processing mirror images) [17].

h1 = η20 + η02    (5)
h2 = (η20 − η02)² + 4η11²    (6)
h3 = (η30 − 3η12)² + (3η21 − η03)²    (7)
h4 = (η30 + η12)² + (η21 + η03)²    (8)
h5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]    (9)
h6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)    (10)
h7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η12 − η30)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]    (11)
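Eqs. (2)-(11) and the QDA discriminant can be sketched in NumPy as follows. The authors' implementation is in MATLAB; this is an illustrative reconstruction, and `qda_score` assumes the per-class mean, covariance and prior have already been estimated from training data.

```python
import numpy as np

def hu_moments(f):
    """Compute the seven Hu invariant moments of a 2D image array f(x, y)."""
    f = np.asarray(f, dtype=float)
    x, y = np.indices(f.shape)

    def m(p, q):                        # raw moment, Eq. (2)
        return (x**p * y**q * f).sum()

    xb, yb = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)

    def eta(p, q):                      # normalized central moment, Eqs. (3)-(4)
        mu = ((x - xb)**p * (y - yb)**q * f).sum()
        return mu / m(0, 0) ** ((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4 * n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    h4 = (n30 + n12)**2 + (n21 + n03)**2
    h5 = ((n30 - 3*n12) * (n30 + n12) * ((n30 + n12)**2 - 3*(n21 + n03)**2)
          + (3*n21 - n03) * (n21 + n03) * (3*(n30 + n12)**2 - (n21 + n03)**2))
    h6 = ((n20 - n02) * ((n30 + n12)**2 - (n21 + n03)**2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3*n21 - n03) * (n30 + n12) * ((n30 + n12)**2 - 3*(n21 + n03)**2)
          + (3*n12 - n30) * (n21 + n03) * (3*(n30 + n12)**2 - (n21 + n03)**2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def qda_score(xvec, mu, Sigma, prior):
    """Quadratic discriminant of Eq. 13 for one class; classify by arg max over classes."""
    d = xvec - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * d @ np.linalg.solve(Sigma, d) + np.log(prior)
```

Because Eqs. (3)-(4) subtract the centroid and normalize by μ00, the seven values are unchanged when the same blob is translated inside the image, which is what makes them usable as shape features here.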
The classification model, discriminant analysis, is defined by Eq. 12:

G(x) = arg max_k δ_k(x)    (12)

where δ_k(x) is the discriminant function given by Eq. 13:

δ_k(x) = −(1/2) log |Σ_k| − (1/2)(x − μ_k)^T Σ_k^{−1}(x − μ_k) + log π_k    (13)

where Σ_k is the covariance matrix, μ_k the mean vector and π_k the prior probability of class k.

3.6 Detection
After shape verification, the potential sign candidate regions are finalized. The bounding box parameters of the connected components remaining in the binary image after shape filtering are determined. Then, using these parameters, the traffic sign area is detected in the input frame and can easily be cropped for recognition of the sign.
4 Result and Analysis

4.1 Experiment Setup

To simulate the proposed detection framework, MATLAB R2018a is used for the experiment. The machine is configured with a Pentium quad-core processor and 8 GB RAM, without any GPU.

4.2 Detection Results
The presented traffic sign detection framework is evaluated on two data sets: the publicly available German Traffic Sign Detection (GTSD) benchmark and a Bangladesh Traffic Sign Detection (BTSD) data set created by the authors. For BTSD, the images were captured in many scenarios (rural, city and highway) during daytime, dusk and night, focusing on conditions such as illumination variation, shadows, different orientations and complex backgrounds. The images were downsampled to a dimension of 400 × 300 pixels. Figures 6, 7 and 8 demonstrate every step of the detection framework. Figure 9 shows some samples of the detection results in different scenarios and Fig. 10 shows the detection results in night mode. The incorporation of night mode scenarios is unique, as previous research did not test detection frameworks at night time. Table 4 tabulates the average computational cost of the detection framework per input image, without any kind of code optimization or dynamic algorithms.
Fig. 6. Steps of detection framework (image from Bangladeshi road scene): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.
Fig. 7. Steps of detection framework (image from GTSD): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.
Fig. 8. Steps of detection framework (night mode): a) input image b) HSV image c) fuzzified image d) binary image e) after morphological processing f) after filtering g) after shape verification h) detection of sign.
Fig. 9. Some examples of road sign detection.
Fig. 10. Some examples of detection results in night mode scenarios.
4.3 Performance Evaluation
The Pascal measures are used to evaluate detection performance [6,16]. A detection is regarded as a true positive if the intersection of the bounding box of the prediction and that of the ground truth is more than fifty percent. The Intersection over Union (IoU) is defined by Eq. 14, where BB_dt is the bounding box of the detection and BB_gt is that of the ground truth:

IoU = area(BB_dt ∩ BB_gt) / area(BB_dt ∪ BB_gt)    (14)

By computing the IoU score for each detection, the detection is classified as a True Positive (TP), False Negative (FN) or False Positive (FP), and from these values the precision, recall and F-measure of the detector are calculated. An illustration is presented in Fig. 11. The precision and recall of this detection framework are calculated for both the German data set and Bangladeshi road scenes. The performance in both cases is quite remarkable and comparable. For Bangladesh, the detection algorithm shows 95.58% recall and 94.15% precision, while on the GTSDB recall is 92.8% and precision is 93.23%. To assess the efficiency of the framework, a comparative analysis against five existing detection algorithms is summarized in Table 5. It is evident from Table 5 that the proposed framework achieves an F-measure of 93.01% for the GTSDB and 94.86% for BTSD.
Fig. 11. Illustration of IoU (red BB is ground truth and blue BB is detected, with TP = 1, FN = 0 and FP = 0).
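Eq. 14 and the derived F-measure reduce to a few lines of code. The sketch below assumes boxes given as (x1, y1, x2, y2) corner coordinates, a convention the paper does not specify; `f_measure` reproduces the Table 5 values from the stated recall and precision.

```python
def iou(bb_dt, bb_gt):
    """Intersection over Union (Eq. 14) for boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(bb_dt[0], bb_gt[0]), max(bb_dt[1], bb_gt[1])
    ix2, iy2 = min(bb_dt[2], bb_gt[2]), min(bb_dt[3], bb_gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_dt = (bb_dt[2] - bb_dt[0]) * (bb_dt[3] - bb_dt[1])
    area_gt = (bb_gt[2] - bb_gt[0]) * (bb_gt[3] - bb_gt[1])
    return inter / (area_dt + area_gt - inter)

def f_measure(recall, precision):
    """Harmonic mean of recall and precision (both in percent)."""
    return 2 * precision * recall / (precision + recall)
```

For example, `f_measure(95.58, 94.15)` gives the 94.86% reported for the Bangladesh data set.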
4.4 Comparisons of Classifiers in Shape Verification
A data set of 900 shapes covering circles, triangles and rectangles is formed, where the shapes are taken from the segmented road images. Next, the seven Hu features are extracted for every shape and a feature vector of size 900 × 8 is created. Then, the models are trained with 10-fold cross validation. Figure 12 illustrates the accuracy of the different models. From Fig. 12, it is evident that the accuracy of the QDA model, at 96%, is better than the others. Hence, this model is deployed in our detection framework for shape verification.
Table 4. Average computational cost of the steps of the detection framework.

Process                                          Average simulated computational cost (s)
Image acquisition                                0.016
Preprocessing                                    0.02
Intensity transformation using fuzzy inference   0.22
Segmentation                                     0.002
Morphological processing                         0.035
Filtering                                        0.08
Shape verification                               0.1
Detection using bounding box                     0.015
Total                                            0.49
Table 5. Comparative analysis for the detection framework.

Methods             Recall (%)  Precision (%)  F-measure (%)  Data set
[4]                 91.07       90.13          90.60          GTSDB
[2]                 92.98       94.03          93.50          GTSDB
[20]                90.37       95.31          92.77          GTSDB
[21]                87.84       89.65          88.74          GTSDB
[14]                87.84       89.6           88.71          GTSDB
Proposed framework  92.8        93.23          93.01          GTSDB
Proposed framework  95.58       94.15          94.86          Bangladesh data set
Fig. 12. Comparison of accuracy in shape verification.
5 Conclusion
In this paper, a detection framework is presented utilizing fuzzy image processing, where crisp sets of hue and saturation are converted into fuzzy sets and the Mamdani fuzzy inference scheme is applied to transform them into gray values. A statistical threshold is applied to segment the fuzzified image. After some filtering to reduce false alarms, shape classification is done by a QDA model trained on Hu features. In addition, an analysis comparing the accuracy of some popular models is drawn, in which QDA outperforms the others. The framework is evaluated by calculating IoU scores, where the average recall and precision are 94.19% and 93.69% respectively. Future work aims at recognizing the traffic signs based on deep learning models.
References

1. Lillo, C., Mora, J., Figuera, P., Rojo, Á.: Traffic sign segmentation and classification using statistical learning methods. Neurocomputing 15(3), 286–299 (2015)
2. Ellahyani, A., Ansari, M.: Mean shift and log-polar transform for road sign detection. Multimed. Tools Appl. 76, 24495–24513 (2017)
3. Rafael, C.G., Richard, E.W.: Digital Image Processing, 3rd edn. Pearson Education, Chennai (2009)
4. Ayoub, E., Mohamed, E.A., Ilyas, E.J.: Traffic sign detection and recognition based on random forests. Appl. Soft Comput. 46, 805–815 (2015)
5. Road Sign Manual Volume-1.pdf. www.rhd.gov.bd/documents/ConvDocs/. Accessed 4 Nov 2020
6. Hung, P.D., Kien, N.N.: SSD-Mobilenet implementation for classifying fish species. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019, Advances in Intelligent Systems and Computing, vol. 1072, pp. 399–408. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33585-4_40
7. Borse, K., Agnihotri, P.G.: Prediction of crop yields based on fuzzy rule-based system (FRBS) using the Takagi Sugeno-Kang approach. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2018, Advances in Intelligent Systems and Computing, vol. 866, pp. 438–447. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00979-3_46
8. Soumen, C., Kaushik, D.: Bangladeshi road sign detection based on YCbCr color model and DtBs vector. In: 2015 International Conference on Computer and Information Engineering (ICCIE), Rajshahi, Bangladesh, pp. 158–161. IEEE (2015)
9. Safat, B.W., Majid, A.A., Mahammad, A.H., Aini, H., Salina, A.S., Pin, J.K., Muhamad, B.M.: Vision based traffic sign detection and recognition systems: current trends and challenges. Sensors 19(9), 2093 (2019)
10. Chunsheng, L., Shuang, L., Faliang, C., Yinhai, W.: Machine vision based traffic sign detection methods: review, analyses and perspectives. IEEE Access 7, 86578–86596 (2019)
11. Maldonado, S., Lafuente, S., Gil, P., Gomez, H., Lopez, F.: Road sign detection and recognition based on support vector machines. IEEE Trans. Intell. Transp. Syst. 8(2), 264–278 (2007)
12. Greenhalgh, J., Mirmehdi, M.: Recognizing text-based traffic signs. IEEE Trans. Intell. Transp. Syst. 16(3), 1360–1369 (2015)
13. Yang, Y., Luo, H., Xu, H., Wu, F.: Towards real time traffic sign detection and classification. IEEE Trans. Intell. Transp. Syst. 17(7), 2022–2031 (2016)
14. Cao, J., Song, C., Peng, S., Xiao, F., Song, S.: Improved traffic sign detection and recognition algorithm for intelligent vehicles. Sensors 19(18), 4021 (2019)
15. Tao, C., Shijian, L.: Accurate and efficient traffic sign detection using discriminative AdaBoost and support vector regression. IEEE Trans. Veh. Technol. 65(6), 4006–4015 (2016)
16. Everingham, M., Van Gool, L., Williams, C.K.I.: The pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88, 303–338 (2010)
17. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)
18. Alvaro, A.R., Jose, A.M., Felipe, G.C., Sergio, G.G.: Image segmentation using fuzzy inference system on YCbCr color model. Adv. Sci. Technol. Eng. Syst. J. 2(3), 460–468 (2017)
19. Jameel, A.K., Donghoon, Y., Hyunchul, S.: New dark area sensitive tone mapping for deep learning based traffic sign recognition. Sensors 18(11), 3776 (2018)
20. Zhang, J., Huang, M., Jin, X., Li, X.: A real time Chinese traffic sign detection algorithm based on modified YOLOv2. Algorithms 10(4), 127 (2017)
21. Xue, Y., Jiaqi, G., Xiaoli, H., Houjin, C.: Traffic sign detection via graph-based ranking and segmentation algorithms. IEEE Trans. Syst. Man Cybern. Syst. 45(12), 1509–1521 (2015)
Developing a Framework for Vehicle Detection, Tracking and Classification in Traffic Video Surveillance

Rumi Saha, Tanusree Debi, and Mohammad Shamsul Arefin
Department of CSE, CUET, Chittagong, Bangladesh
[email protected], [email protected], [email protected]
Abstract. Intelligent Transportation Systems (ITS) and driver assistance systems are significant research topics in the field of transport and traffic management, and moving-vehicle detection, tracking and classification are among the most challenging problems in the ITS and smart-vehicle sectors. This paper provides a vision-based method for tracking and classifying different classes of vehicles from a video surveillance system. Several verification techniques based on template matching and image classification were investigated. The paper focuses on improving the performance of a single-camera vehicle detection, tracking and classification system. Object features are extracted with the Histogram of Oriented Gradients (HOG), one of the most discriminative feature descriptors, and a linear Support Vector Machine (SVM) classifier is trained on them. Vehicles are also categorized by shape- or dimension-based features with a cascade-based AdaBoost classifier, whose high predictive accuracy and low storage cost make it suitable for real-time vehicle classification. In the final stage, to minimize the number of missed vehicles, a Kalman filter is used to track the moving vehicles across video frames. The proposed system was tested on different videos and produced good output with reasonable processing time. The experimental results show the efficiency of the algorithm.

Keywords: Vehicle detection · Histograms of Oriented Gradients (HOG) · Support Vector Machine (SVM) · Occlusion · Kalman filter · Vehicle tracking · Vehicle classification
1 Introduction

Vehicle detection and classification systems have attracted many researchers from both academia and industry in the field of Intelligent Transportation Systems (ITS). In this paper we propose a new method to track and classify vehicles effectively and robustly, and we compare our improved system with various existing detection and tracking algorithms. The suggested approach is evaluated on self-collected video sequences under motorway driving conditions. Experimental results show how effective and feasible our approach is.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 326–341, 2021. https://doi.org/10.1007/978-3-030-68154-8_31
In recent years the growth in the number of vehicles has caused significant transport problems. Due to the rise in automobile traffic, traffic jams in cities have become a major problem and people's safety is at risk. Road traffic video monitoring is therefore becoming crucial, and work on Intelligent Transportation Systems (ITS) is attracting considerable attention. The area of automated video surveillance is of immense interest because of its safety implications. The objective of this work is to detect, track and classify vehicles from high-quality video sequences. The paper pursues the following goals: detection and tracking of multiple moving vehicles in traffic video sequences, and classification of vehicles into types based on object dimensions. Identifying objects frame by frame in video is an important and difficult job. It is a vital part of a smart surveillance system, because without tracking the system could not obtain coherent temporal information about objects or provide measures for higher-level behavioral analysis. The first step in video processing is identification of moving objects, which is used in several areas such as video monitoring and traffic control. The remainder of this paper is structured as follows. Related work is reviewed in Sect. 2. In Sect. 3, the system architecture and design are presented. Sections 4 and 5 present experimental results and conclusions.
2 Related Work

Several methods for vehicle detection, tracking and classification have been suggested in the literature. Detection and tracking of vehicles are major computer vision problems that serve as the basis for further study of target motion. Vehicle detection takes advantage of technologies such as digital image processing to distinguish the vehicle from its context [15], multimedia automation [16], statistics, etc. In [11] an efficient algorithm to track multiple people is presented. Matthews et al. [12] used PCA for feature extraction and neural networks for identification. A method for measuring traffic parameters in real time is defined in [13]: it uses a feature-based approach, along with occlusion logic, to track vehicles in congested traffic scenes. Instead of tracking whole vehicles, sub-features of the vehicle are tracked in order to handle occlusions; however, this approach is computationally very expensive. For vehicle identification and spatio-temporal surveillance, Chengcu et al. [1] suggested an adaptive background-learning system that uses unsupervised vehicle detection and spatio-temporal tracking, analyzing the image/video segmentation process with SPCPE. The difficulty of this work is that it lacks user interaction and does not know a priori which class a pixel belongs to. Gupte et al. [2] suggested a method in which processing is carried out at three levels: raw images, region level and vehicle level, with classification into two object types. The limitation of this work is that it cannot classify a large number of object categories, and errors occur under occlusion. The experiments show a highway scene, and the method can fail on narrow-road video sequences.
Peng et al. [3] implemented a vehicle-type classification system using data mining techniques, in which the front of the vehicle was precisely located using the position of the license plate and a background-subtraction technique. Vehicle-type probabilities were extracted from eigenvectors rather than deciding the vehicle type explicitly, and an SVM was used for classification. The limitation of this method is that not all vehicle license plates can be found, so detection errors occur, and handling is complex. Sun et al. [4] proposed a structure for a real-time pre-crash vehicle detection method utilizing two detection stages, multiscale-driven hypothesis generation and appearance-based hypothesis verification, with an SVM algorithm used for classification. The shortcoming of this work is that the parameters fail under changing environments, which also causes errors in detecting moving objects. Jayasudha et al. [5] set out a summary of data mining in road traffic and accident analysis. They provide an overview of data mining techniques and applications for accident analysis working with different road-safety databases, which also need a data discriminator that compares the target class with one or more reference sets. Chouhan et al. [6] suggested a technique for image retrieval using data mining and image processing that combines an image's size, texture and dominant-color factors; if the shape and texture are the same, they use a weighted Euclidean distance. Chen et al. [7] developed a PCA-based vehicle classification framework in which individual vehicles are segmented with the SPCPE algorithm for feature extraction, and two algorithms, Eigenvehicle and PCA-SVM, are used for classification. The weakness of this method is that vehicle detection has to rely on the unsupervised SPCPE algorithm. Zakaria et al. [9] presented an on-road vehicle detection technique that trains linear and nonlinear SVM classifiers on modified HOG-based features, using the KITTI dataset for training. Vehicle detection, tracking and classification are complicated for any traffic surveillance system due to interference from illumination, blurriness and congestion on narrow rural roads, yet the system is exposed to such real-world problems; this demand makes the task very challenging as accuracy requirements increase. In this paper, we develop a robust vehicle detection, tracking and classification method by extracting HOG features and Haar-like features, with a Support Vector Machine (SVM) and a cascade-based classifier for classifying vehicle types; a Kalman filter technique is then used for tracking. The system increases tracking and classification accuracy by using a proper algorithm with low computational complexity, and reduces processing time.
3 System Architecture and Design

In the previous section we discussed related work on vehicle detection, tracking and classification procedures, and noted that all of them have limitations that prevent them from being very efficient frameworks for Intelligent Transportation Systems. To address these limitations we propose an efficient framework for this field, shown in Fig. 6, which reduces the limitations of the background work. The system architecture of the vehicle detection,
tracking and classification system comprises several basic modules: dataset initialization, feature extraction, Haar-like features, classifier, tracking and length calculation. Dataset initialization sets up the database with positive and negative samples. The feature extraction module generates important features from the positive samples. The classifier module trains on the samples to separate vehicles from non-vehicles. The tracking module detects all moving vehicles and also reduces the number of missed objects. Finally, the most important part of the system is the classification module, which classifies vehicle types, based on object dimensions, for better recognition. Algorithm 3 shows the whole evaluation procedure for vehicle detection and tracking.
3.1 Dataset
Samples were collected in various scenes (city highways, wide roads, narrow roads, etc.) to evaluate the presented algorithm. In the first stage, 7325 samples were collected for training and testing, including 3425 vehicle samples (positive samples) and 3900 non-vehicle samples (negative samples). The vehicle samples include various vehicle types, such as cars, trucks and buses, seen from various directions (front, rear, left and right), and include vehicles both near to and far from the camera-mounted vehicle; the negative samples include background objects such as traffic signs. Figures 1 and 2 show examples of vehicle and non-vehicle training images. The training samples were resized to 64 by 64 RGB images.
Fig. 1. Samples for vehicle image
Fig. 2. Samples for non-vehicle images
3.2 Sample Processing
The collected samples are pre-processed prior to feature extraction. Each sample is resized to 64 × 64 pixels (height 64 pixels, width 64 pixels). The height is selected based on the sample's original size, and the width is then determined so the sample retains the average aspect ratio. The next step is to calculate the different variants of the HOG feature for each sample; the cell size changes with sample size so that all samples yield the same vector length.
3.3 Histogram of Oriented Gradients (HOG) Feature Extraction
HOG feature extraction is the main component of the vehicle detection process. The original HOG computation is performed in five steps. First the image goes through color normalization and gamma correction. The picture is then divided into a grid of cells. The cells are grouped into larger overlapping blocks, which allows a cell to belong to more than one block. For example, an image may be divided into 16 × 16-pixel cells, where each cell has 256 pixels, with 2 × 2-cell blocks indicating that each block contains 2 cells in each direction. The blocks in the figure have a 50% overlap ratio, where half of a block's cells are shared with the neighboring block. Cell size and block size are parameters that the user must choose according to the image size and the amount of detail to be captured (Fig. 3).
Algorithm 1: HOG calculation
1: input: A, a gradient orientation image
2: initialize H ← 0
3: for all points (p, q) inside the image do
4:   i ← A(p, q)
5:   k ← the small region containing (p, q)
6:   for all offsets (r, s) representing the neighbors do
7:     if (p + r, q + s) is inside the image then
8:       j ← A(p + r, q + s)
9:       H(k, i, j, r, s) ← H(k, i, j, r, s) + 1
10:    end if
11:   end for
12: end for
Fig. 3. Algorithm for HOG implementation
The key elements of HOG feature extraction for object detection are as follows. The input color images are converted to grayscale. A gamma correction procedure standardizes (normalizes) the color space of the input image; the aim is to adjust image contrast, reduce local shadows and the effect of illumination changes, and suppress noise interference.
Next the gradient is measured, primarily to collect contour details and reduce lighting interference, and each pixel's gradient is projected onto the orientation bins of its cell. All cells within a block are normalized; normalization compresses light, shadow and edges, and after normalization the block descriptor is called the HOG descriptor. Finally, HOG features are collected from all blocks in the detection window; this step collects and merges the HOG features of the overlapping blocks into the final classification feature vector. Figures 4 and 5 show HOG features extracted from a vehicle and a non-vehicle image.
Fig. 4. HOG feature for vehicle image
Fig. 5. HOG feature for non-vehicle image
The variants of the HOG feature use the same extraction technique but vary in the following parameters: 1. the number of orientation angles; 2. the number of histogram bins; 3. the method by which the pixel gradient is computed. We may also optionally apply a color transformation and append binned color features, as well as color histograms, to the HOG feature vector.
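The cell-and-block computation described above can be sketched in Python. This is a minimal illustration, not the paper's code: the function name is ours, and in practice a library routine such as skimage.feature.hog would be used.

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Minimal HOG sketch: per-cell orientation histograms with L2
    block normalization over 2x2 cells, as described in the text."""
    img = image.astype(np.float64)
    # 1-D centred gradients, as in the original HOG formulation
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation

    n_cy, n_cx = img.shape[0] // cell_size, img.shape[1] // cell_size
    hist = np.zeros((n_cy, n_cx, n_bins))
    bin_w = 180.0 / n_bins
    for cy in range(n_cy):
        for cx in range(n_cx):
            sl = (slice(cy * cell_size, (cy + 1) * cell_size),
                  slice(cx * cell_size, (cx + 1) * cell_size))
            b = (ang[sl] // bin_w).astype(int) % n_bins
            for k in range(n_bins):
                # magnitude-weighted vote into orientation bin k
                hist[cy, cx, k] = mag[sl][b == k].sum()

    # L2 normalization over 2x2-cell blocks with 50% overlap
    blocks = []
    for by in range(n_cy - 1):
        for bx in range(n_cx - 1):
            v = hist[by:by + 2, bx:bx + 2].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)
```

For a 64 × 64 sample with 8-pixel cells this yields 7 × 7 overlapping blocks of 36 values each, i.e. a 1764-dimensional descriptor.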
3.4 Support Vector Machine (SVM) Classifier
A Support Vector Machine (SVM) is a learning algorithm based on structural risk theory. An SVM can be used for binary classification when the data contain exactly two classes. It classifies data by seeking the best hyperplane separating all data points of one class from those of the other; for an SVM the best hyperplane is the one with the greatest margin between the two classes. Both linear and nonlinear SVM classifiers were designed for object detection; a Gaussian kernel was used for the nonlinear SVM. The classifier performs a dense multi-scale scan at each location of the test image, producing preliminary object decisions. HOG features combined with an SVM classifier have been widely used in image recognition and are especially effective for object detection. The goal is to differentiate the two groups, vehicles and non-vehicles, by features induced from the available examples. Comparing accuracy with other classifiers shows that HOG-SVM accuracy is higher.
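The training step can be sketched with scikit-learn. The synthetic feature arrays below stand in for real HOG descriptors of vehicle and non-vehicle samples, and a linear SVM (LinearSVC) is used for simplicity; the paper's nonlinear variant would use a Gaussian kernel instead.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Illustrative stand-ins for HOG descriptors of the two classes
rng = np.random.default_rng(1)
X_pos = rng.normal(loc=+1.0, scale=0.3, size=(200, 36))   # "vehicle"
X_neg = rng.normal(loc=-1.0, scale=0.3, size=(200, 36))   # "non-vehicle"
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 200 + [0] * 200)

clf = LinearSVC(C=1.0)
clf.fit(X, y)

# A sliding-window detector thresholds the signed margin from
# decision_function (or uses predict) to declare preliminary windows.
window = rng.normal(loc=+1.0, scale=0.3, size=(1, 36))
print(clf.predict(window))   # a vehicle-like window is labeled 1
```

In the real system each scanned window's HOG vector plays the role of `window`, and windows predicted as class 1 are passed on to the suppression step.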
3.5 Non-linear Suppression
A non-linear (non-maximum) suppression step is performed to reduce overlapping detected windows to a single window. The suppression is driven by the classifier confidence of each window: the window with the highest confidence level is kept, while windows that overlap it by more than a certain threshold are suppressed.
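A minimal sketch of this suppression step, implemented as IoU-based non-maximum suppression; the function name and the default threshold are illustrative, not taken from the paper.

```python
import numpy as np

def non_max_suppression(boxes, scores, overlap_thresh=0.5):
    """Keep the most confident window; suppress windows whose overlap
    (IoU) with it exceeds the threshold. boxes: (N, 4) [x1, y1, x2, y2]."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]          # most confident first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= overlap_thresh]
    return keep
```

Two heavily overlapping detections of the same vehicle collapse to the higher-scoring one, while distant detections survive.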
3.6 Kalman Filter for Tracking
After detecting vehicles, the next step is to track the moving objects from frame to frame. Because of the detector's relatively high cost, we are interested in an algorithm with lower tracking complexity; we therefore use the Kalman filter for tracking objects in our system. It helps the system improve automatic tracking accuracy and reduce frame losses. A Kalman filter typically has two stages: prediction and update. Prediction: the Kalman filter predicts the position in the subsequent frame. The vehicle speed is determined from the distance between blobs, and the vehicle position in the current frame is calculated from the vehicle velocity, the location in the previous frame and the time elapsed since that frame. This stage is state dependent and is used to obtain predictions of the state vector and the error covariance.
Fig. 6. System Architecture of the proposed framework.
Calculating Vehicle Positions: We use a heuristic approach in which each vehicle patch is shifted around its current position so as to cover as many of the vehicle-related blobs as possible; this is taken as the real vehicle location. Estimation: A measurement is a position in the image coordinate system, as determined in the previous section. The update step adjusts the prediction parameters to reduce the error between the vehicle's predicted and measured locations. After predicting the vehicle's location in the next frame, the next step is to correct the prediction-stage errors. For the correction step we use feature-based matching, adding two constraints, on color and on size, for the corresponding point. The first condition compares blobs in two consecutive color frames: the color of a blob in frame n must match the color of the corresponding blob in frame n + 1. The second condition concerns blob size across the two frames. Since the distance between the vehicle and the camera varies, the apparent size of the vehicle always changes, but the size-variance ratio from one frame to the next is minimal, so the size difference between the two blobs in consecutive frames must be small. Thus, if two blobs match in color, the size difference between them is below the threshold, and the predicted choice satisfies these two conditions, we consider the prediction true and set the predicted position to the blob's center in the new frame. If two vehicles are similar to each other, the criteria are checked for each vehicle to select the acceptable choice (Fig. 7).
Algorithm 2: Kalman filter tracking
1: if (time == 0) {
2:   Create a new track for every detected vehicle
3: }
4: else {
5:   for (all current tracks) {
6:     Predict new position and track size with the Kalman filter
7:     Overlap the new position with all detected blobs
8:     if (Overlap(track(a), blob(b)) != 0)
9:       Mark match_matrix[a][b];
10:    Analyze(match_matrix); }
Fig. 7. Algorithm for Kalman filter tracking
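The predict/update cycle for one track can be sketched as a constant-velocity Kalman filter in NumPy. The class name and the noise covariances are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter sketch for one tracked vehicle.
    State x = [px, py, vx, vy]; only the blob centre (px, py) is measured."""

    def __init__(self, px, py, dt=1.0):
        self.x = np.array([px, py, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, dt, 0],              # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],               # measurement model
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01                      # process noise
        self.R = np.eye(2) * 1.0                       # measurement noise

    def predict(self):
        """Project the state ahead one frame; returns predicted position."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the prediction with the measured blob centre."""
        z = np.array([zx, zy])
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In the tracking loop, `predict()` supplies the position that is overlapped with the detected blobs, and the matched blob centre is fed back through `update()`.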
3.7 Haar-Like Feature Extraction
Many machine learning approaches have the benefit of being efficiently computed for object detection. Haar classification is a decision-tree-based technique in which a statistically boosted rejection cascade is created during training. First, a classifier (a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred positive samples scaled to the same size, and negative samples, which may be random images, also scaled to the same size as the positive images. After resizing, these images are listed in a text file and a .vec file is created (using OpenCV's sample-creation utility). Training of the samples then starts with the cascade classifier, and the built classifier is saved (as an .xml file). By following these steps, we can extract Haar-like features through training the boosted cascade classifier.
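The rectangle sums behind Haar-like features are typically computed with an integral image, which makes each feature an O(1) lookup. The sketch below is illustrative (the feature layout and function names are ours, not OpenCV's implementation).

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum can then be read in O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Pixel sum of the rectangle with top-left (x, y) and size w x h."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half,
    responding to vertical edges such as a vehicle's side boundary."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

During cascade training, thousands of such features at different positions and scales are evaluated, and boosting selects the most discriminative ones.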
3.8 Load Classifier
Training a proper classifier is the most important task of the work done in the previous section. The trained cascade classifier (.xml file) is now loaded, having been trained on the positive images according to their vehicle types.
3.9 Length Calculation
For shape-based feature extraction of an object, its dimensions (height, width, length, etc.) are essential. The height and width of each detected vehicle are calculated using the formula below (Fig. 8):

D = √((X₂ − X₁)² + (Y₂ − Y₁)²)    (1)
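Eq. (1) is a direct Euclidean distance between two bounding-box corner points; in code (the function name is ours):

```python
import math

def length(x1, y1, x2, y2):
    """Eq. (1): Euclidean distance between two points, used as the
    height or width of a detected vehicle's bounding box."""
    return math.hypot(x2 - x1, y2 - y1)
```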
Algorithm 3: Vehicle detection and tracking
1: Input: the video sequence being monitored for the specified target
2: Output: the tracking result
3: for t ← 1 to N do
4:   if 5 seconds have passed since the last detection then
5:     Use HOG and SVM to detect the vehicle;
6:     if objects of interest (ROI) are found then
7:       Track the object in the bounding box using the Kalman filter tracking algorithm
8:       if 10 seconds have passed since the last detection then
9:         Use SVM and HOG again to judge whether the object in the bounding box is still a vehicle
10:      else
11:        Go back to Kalman filter tracking
12:      end
13:    else
14:      Go back to the previous step for detection
15:    end
16:  else
17:    Go back to the previous step for detection
18:  end
Fig. 8. Vehicle detection and tracking algorithm
3.10 Classification
The classification of vehicle types has recently gained a great deal of research interest. It follows two main directions, based either on the vehicle's shape or on its appearance. Classification here is achieved by categorizing vehicles by size into three groups: large, medium and small. Since the length of the vector is easy to find, length was taken as the parameter for classifying vehicles by size. When a vehicle object enters our region of interest, the length of its vector is computed using the length-calculation equation, and classification is carried out by checking which length requirement the vehicle satisfies (Fig. 9).
Vehicle type   Vehicle names
Large          Bus, Truck
Medium         Car, Van
Small          Motorbike, Bicycle

Fig. 9. Vehicle type classification
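The size-based decision reduces to a simple threshold rule. The pixel thresholds below are illustrative placeholders, since the paper does not state its actual length cut-offs.

```python
def classify_by_length(length_px, small_max=60.0, medium_max=120.0):
    """Size-based classification sketch following Fig. 9.
    Thresholds are hypothetical, for illustration only."""
    if length_px <= small_max:
        return "small"        # motorbike, bicycle
    elif length_px <= medium_max:
        return "medium"       # car, van
    return "large"            # bus, truck
```

Each detected vehicle's length from Eq. (1) is passed through this rule to assign one of the three classes.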
4 Implementation and Experiments

This section presents the implementation procedure and performance evaluation of the developed framework.
4.1 Experimental Setup
The developed vehicle detection, tracking and classification system was implemented on a machine running Windows 10 with a 2.50 GHz Core i3-4100 processor and 4 GB RAM. The system was developed in Python 3.6.7.
4.2 Implementation
Figure 10(a) shows detection results alone, where some false positives occur. The false-positive rate is reduced after applying the tracking algorithm, as shown in Fig. 10(b), (c) and (d), which also tracks multiple vehicles. Detection boxes are drawn in red and tracking boxes in blue.
Fig. 10. Vehicle detection and tracking example in different scenes such as (b) urban road with high light, (c) rural congested road, (d) low light.
Vehicle tracking in town areas is much more difficult because objects are close to each other, while trees or other background may cast shadows on both the road and the vehicles. Figure 10(b)-(d) shows tracking results under different scenarios in which our proposed system works very satisfactorily. Traffic sequences from different times and scenes were used to check the Kalman-filter-based tracking system; the system tracks well in both videos and images, and it can also track fast-moving objects. When a vehicle enters the scene, a new tracking object is detected, a new number is assigned and a tracking window is configured for the new vehicle. We tested the system on highway image sequences, where most vehicles can be successfully tracked and classified. Different videos and images were chosen to test three types of vehicles: buses, cars and motorbikes. By plugging in Haar-cascade-based classifier algorithms based on extracted vehicle features and dimensions, we loop through the detections and draw a rectangle around each in every frame of the videos. After this processing, the integrated framework can automatically detect and classify vehicles in different traffic video scenes.
Fig. 11. Vehicle classification examples for different types: (e)-(f) motorbike, (g) car, (h) bus
4.3 Performance Evaluation
To evaluate the efficiency of the proposed solution, we split the data into training and test sets, retaining 20% of the vehicle images and 20% of the vehicle sub-images. A fixed set of 231 sub-images of vehicles and non-vehicles, collected for testing, was used as the test data set.
A correct correspondence is a point that the proposed method detects and tracks correctly, and the number of correspondences is the total number of match points:

Accuracy = (Number of correct correspondences / Number of correspondences) × 100%
(1) Performance of Kalman Filter Tracking. Table 1 gives information about the traffic sequence videos we tested, such as frame height and width, number of frames, and total processing time. These tracking results illustrate the accuracy and effectiveness of the proposed system.

Table 1. Execution time analysis for vehicle detection and tracking

Parameter         Video 1   Video 2
Frame height      320       720
Frame width       320       1280
Video length      14 s      30 s
No. of frames     256       918
Processing time   2.5 min   6 min
As shown in Table 2, we used traffic video sequences to check the Kalman-filter-based tracking system, with the tracking results given there. The system achieves good tracking results and can also track fast-moving objects such as vehicles. When a new vehicle enters the scene, a new tracking object is created, a new number is assigned and a tracking window for that vehicle is initialized. For fast-moving targets such as vehicles, this gives a strong detection and tracking effect, and it can detect and track vehicles that join the monitored scene at random. Since we assume that good tracking performance depends on good observation, we put more computational effort into detection; this approach is also a mainstream of the current research trend called 'tracking-by-detection'.

Table 2. Performance analysis for videos

Test video         Tracked vehicles   Actual vehicles   Accuracy
Video 1            62                 68                91%
Video 2            35                 36                97%
Video 3            5                  5                 100%
Video 4            17                 17                100%
Video 5            22                 25                88%
Video 6            12                 12                100%
Video 7            2                  2                 100%
Video 8            13                 15                86%
Average accuracy                                        95.25%
Table 3. Comparison of some existing methods with our proposed method

Method                    Accuracy   False rate
Gabor + SVM               94.5%      5.4%
PCA + NN                  85%        7.5%
Wavelet + Gabor + NN      89%        6.2%
LDA                       88.8%      5.5%
HOG + SVM + Mean-shift    94%        3.7%
HOG + PCA + SVM + EKF     95%        3.5%
Our proposed method       95.25%     3.7%
(2) Comparison with Other Existing Methods. Table 3 compares our proposed method with other existing vehicle detection and tracking methods. We see that wavelet feature extraction [17] combined with a dimensionality-reduction method achieved higher accuracy than some other methods, but its false rate also increased. The existing HOG [16] with SVM and mean-shift tracker gained less accuracy compared with other tracking methods, and it failed to track objects in bad weather and congested traffic scenes; applying a Kalman-filter tracker together with a mean-shift tracker increases accuracy while decreasing the false rate. From the accuracy results, the average detection and tracking accuracy of our proposed method is 95.25%, and its false rate is also low at 3.7%. As video size increases, tracking becomes more difficult to handle.
(3) Performance of Classification. We achieved a 90% correct classification rate at a frame rate of 15 fps. Classification accuracy, an important component for evaluating a classifier in machine learning, is defined as the number of vehicles correctly categorized divided by the total number of vehicles. Since the project's success depends on a proper camera viewpoint, the camera had to be placed on an overhead bridge directly above the traffic flow to reduce vehicle occlusion. Some of our system's outcomes are shown in Fig. 11(e)-(h). Classification errors were mainly due to the slight gaps between vehicle classes; since we used only scale as our metric, not all vehicles can be properly categorized, and more features will need to be explored to further improve our classification rate. Our data were collected at different times of day; we intend to further test the system in more complex scenes and under a wider range of lighting and weather conditions.
We measured the actual number of vehicles and the correct classification rate for each class; the final results for all vehicle types are shown in Table 4.
Table 4. Performance analysis of vehicle classification results

Vehicle type   Number of vehicles   Number classified correctly   Success rate
Bus            6                    5                             83.3%
Car            7                    7                             100%
Motorbike      8                    7                             87.5%
Total          18                   15                            90.26%
5 Conclusion and Future Work

Robust and accurate detection, tracking and classification of vehicles in frames collected from a moving vehicle is a significant issue for autonomous self-guided vehicle applications. This paper provides a study of the proposed techniques as applied to traffic videos and images, with very high tracking and classification accuracy. We have given a comprehensive analysis of the state-of-the-art literature dealing with computer vision techniques used in video-based traffic surveillance and monitoring systems. An integrated vehicle classification architecture based on HOG and Haar-like features is proposed that brings together several main components of an intelligent transportation surveillance system. The idea of using histograms of gradients to extract features at different scales and orientations is key to our approach. The Kalman filter is used to track multiple objects in video. Classification is done by extracting Haar-like features with a cascade-based classifier, which categorizes vehicles based on threshold values. The presented approach also aims to reduce computation time. Experimental findings show that the system is accurate and efficient and that a high performance rate for vehicles can be achieved.
References
1. Zhang, C., Chen, S., Shy, M., Peeta, S.: Adaptive background learning for vehicle detection and spatio-temporal tracking. In: Information, Communication and Signal Processing 2003 and the Fourth Pacific Rim Conference on Multimedia, pp. 797–801. IEEE (2003). https://doi.org/10.1109/icics.2003.1292566
2. Gupte, S., Masud, O., Martin, R.F.K.: Detection and classification of vehicles. Trans. Intell. Transp. Syst. 3(1), 37–47 (2002)
3. Peng, Y., Jin, J.S., Luo, S., Xu, M., Au, S., Zhang, Z., Cui, Y.: Vehicle type classification using data mining techniques. In: Jin, J.S., et al. (eds.) The Era of Interactive Media, pp. 325–335. Springer, Cham (2013). https://doi.org/10.1007/978-1-4614-3501-3_27
4. Sun, Z., Miller, R., Bebis, G., DiMeo, D.: A real-time precrash vehicle detection system. In: Proceedings of the Sixth Workshop on Applications of Computer Vision (WACV 2002), pp. 253–259. IEEE (2002)
5. Jayasudha, K., Chandrasekar, C.: An overview of data mining in road traffic and accident analysis. J. Comput. Appl. 4(4), 32–37 (2009)
6. Chouhan, P., Tiwari, M.: Image retrieval using data mining and image processing techniques, pp. 53–58 (2015). https://doi.org/10.17148/IJIREEICE.31212
Developing a Framework for Vehicle Detection
7. Zhang, C., Chen, X., Chen, W.: A PCA-based vehicle classification framework. In: Proceedings of the 22nd International Conference on Data Engineering Workshops (ICDEW 2006), pp. 451–458. IEEE (2006)
8. Sun, Z., Bebis, G., Miller, R.: On-road vehicle detection using Gabor filters and support vector machines, pp. 1019–1022. IEEE (2002)
9. Zakaria, Y., Abd El Munim, H.E., Ghoneima, M., Hammad, S.: Modified HOG based on-road vehicle detection method. Int. J. Pure Appl. Math. 118(18), 3277–3285 (2018)
10. Tian, Q., Zhang, L., Wei, Y., Zhao, W., Fei, W.: Vehicle detection and tracking at night in video surveillance. iJOE 9(6) "AIAIP2012", 60–64 (2013)
11. Tao, H., Sawhney, H., Kumar, R.: A sampling algorithm for tracking multiple objects. In: Proceedings of the International Workshop on Vision Algorithms (1999). https://doi.org/10.1007/3-540-44480-7_4
12. Kul, S., Eken, S., Sayar, A.: A concise review on vehicle detection and classification. In: 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey. IEEE (2017). https://doi.org/10.1109/IcengTechnol.2017.8308199
13. Cho, H., Rybski, P.E., Zhang, W.: Vision-based bicycle detection and tracking using a deformable part model and an EKF algorithm. In: 13th International Conference on Intelligent Transportation Systems, Funchal, Portugal. IEEE (2010). https://doi.org/10.1109/ITSC.2010.5624993
14. Kong, J., Qiu, M., Zhang, K.: Authoring multimedia documents through grammatical specification. In: IEEE ICME 2003, Baltimore, USA, pp. 629–632 (2003)
15. Qiu, M.K., Song, G.L., Kong, J., Zhang, K.: Spatial graph grammars for web information transformation. In: IEEE Symposium on Visual/Multimedia Languages (VL), Auckland, New Zealand, pp. 84–91 (2003)
16. Xu, W., Qiu, M., Chen, Z., Su, H.: Intelligent vehicle detection and tracking for highway driving. In: IEEE International Conference on Multimedia and Expo Workshops, pp. 67–72 (2012). https://doi.org/10.1109/ICMEW.2012.19
17. Chitaliya, N.G., Trivedi, A.I.: Feature-extraction using Wavelet-PCA and neural network for application of object classification and face recognition. In: Second International Conference on Computer Engineering and Applications, pp. 510–514. IEEE (2010). https://doi.org/10.1109/ICCEA.2010.104
Advances in Algorithms, Modeling and Simulation for Intelligent Systems
Modeling and Simulation of Rectangular Sheet Membrane Using Computational Fluid Dynamics (CFD) Anirban Banik1(&), Sushant Kumar Biswal1, Tarun Kanti Bandyopadhyay2(&), Vladimir Panchenko3, and J. Joshua Thomas4 1
Department of Civil Engineering, National Institute of Technology, Jirania, Agartala 799046, Tripura, India [email protected], [email protected] 2 Department of Chemical Engineering, National Institute of Technology, Jirania, Agartala 799046, Tripura, India [email protected] 3 Russian University of Transport, Obraztsova St., 127994 Moscow, Russia [email protected] 4 Department of Computing, School of Engineering, Computing and Built Environment, UOW Malaysia, KDU Penang University College, George Town, Malaysia [email protected]
Abstract. This study demonstrates the modeling and simulation of the flow phenomena inside a rectangular sheet-shaped membrane module using a computational fluid dynamics (CFD) based solver. The module was implemented to enhance the quality of effluent generated by the rubber industry. Commercially available CFD software (ANSYS) was implemented to mimic the flow inside the porous membrane, and the meshing of the developed model was done using the Gambit software. The grid independency study showed that a grid size of 375,000 was the best for the simulation procedure. To mimic the flow pattern inside the membrane, a second-order laminar model was considered. The accuracy of the simulation process is evaluated using error analysis, for which percent bias (PBIAS), Nash-Sutcliffe efficiency (NSE), and the RMSE-observation standard deviation ratio (RSR) are selected. The estimated values of PBIAS, NSE, and RSR are close to the ideal values, justifying the adequacy of the simulation. Model validation demonstrates that the CFD-predicted values follow the experimental values with high precision.

Keywords: CFD · Modeling · Membrane separation and filtration process · Simulation · Wastewater treatment
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 345–356, 2021. https://doi.org/10.1007/978-3-030-68154-8_32
A. Banik et al.
1 Introduction

Over the last decades, membrane separation processes have played a crucial role in industrial separation, and numerous studies focus on the optimal way of using membrane separation and filtration processes. Computational fluid dynamics (CFD) techniques provide a great deal of information for the development of membrane processes, and numerous advancements in membrane technology have made the selection of a suitable membrane for different procedures quick and easy. Membrane filtration has been used in a broad range of applications such as wastewater treatment [1, 2], protein separation [3], and the food processing industry [4]. In membrane separation and filtration techniques, the hydrodynamics of the fluid flow inside the membrane bed is crucial. A membrane separation and filtration process is a combination of free flow inside the membrane module and flow through the porous membrane. The free flow inside the membrane module is easy to simulate using the Navier-Stokes equations, but the flow of the fluid through the porous membrane bed is the most complex part to reproduce; it can be modeled by coupling Darcy's law with the Navier-Stokes equations, and the validity of Darcy's law for simulating incompressible flow through a porous zone with small porosity has been found acceptable. The important part is to ensure that the continuity of the flow-field variables through the interface of the laminar flow region and the porous flow region is adequately maintained [5]. Early simulations of flow inside membranes assumed laminar conditions with porous walls [6], and many authors have since used CFD to optimize membrane processes [7]. Rahimi et al. [8] used a hydrophilic polyvinylidene fluoride (PVDF) membrane for preliminary work, studied at cross-flow velocities of 0.5, 1, and 1.3 m/s.
Fouling of the membrane was analyzed with the help of photographs and scanning electron microscope micrographs. 3D CFD modeling of the developed porous membrane bed was carried out using Fluent 6.2, which was used to describe the shear stress distribution on the membrane and explain the fouling process; the particle deposition pattern on the membrane surface was also described using a discrete phase model. Ghadiri et al. [9] developed a model based on the mass and momentum transfer equations for the solute in all phases, including feed, solvent and membrane; the flow behavior was simulated with the Navier-Stokes equations by the finite element method under steady-state conditions, and the CFD-predicted results were compared with experimental data to evaluate the accuracy of the model. A 2D mathematical model was developed to illustrate seawater purification using a direct contact membrane desalination (DCMD) system [10]; the model was developed by coupling the conservation equations for the water molecule in the three domains of the module, and the governing equations were solved by the finite element method. Rezakazemi [10] found that salt rejection can be enhanced by increasing the channel length, the temperature of the concentration stream and the e/(T.d) ratio, or by reducing the inlet velocity of the concentrated solution. Such a detailed CFD-based study considering a rectangular sheet-shaped membrane to enhance the quality of rubber industrial effluent has not been reported earlier.
In the present paper, the numerical finite volume method (FVM) is used to solve the three-dimensional convective diffusion equations for solute transport under laminar flow over a porous zone in a stationary rectangular sheet membrane. CFD simulation is used to predict the pressure drop distribution, velocity profile, and wall shear stress over the static rectangular sheet membrane. The CFD-simulated data are validated against experimental data.
2 Experiment Description

A cellulose acetate rectangular sheet membrane was used in the laboratory with the sole objective of improving the effluent quality. Figure 1 is a schematic illustration of the experimental setup, which consists of a neutralizing tank, a feed tank, a centrifugal pump, and a permeate tank. The effluent was first passed through the neutralizing tank to maintain the pH of the feed sample, as any deviation from the operating pH of the membrane may affect the lifespan of the membrane module; an optimum dose of soda ash was used to maintain the pH, since raw rubber industrial effluents are acidic in nature. From the neutralizing tank, the feed flowed into the feed tank, from which it was pumped through the module by a centrifugal pump. The centrifugal pump maintained the trans-membrane pressure and facilitated the movement of the feed across the membrane. The permeate tank collected the permeate flux, and the rejects of the membrane module were recirculated to the feed tank. Table 1 shows the characterization of the raw feed stream collected from the rubber industry of Tripura.

Fig. 1. Schematic diagram of the experimental setup.

Table 1. Feed stream characterization.
Sl. No.  Parameter                    Unit   Value
1        Total suspended solids       mg/L   398
2        Total dissolved solids       mg/L   3030
3        pH                           -      5.2
4        Biochemical oxygen demand    mg/L   12.3
5        Oil and grease               mg/L   1080
6        Sulfide                      mg/L   23.5
3 Computational Fluid Dynamics (CFD)

With the advancement of computers and growing computational power, computational fluid dynamics (CFD) has become a widely used tool for predicting solutions to fluid-flow problems [11-14]. The mathematical model was developed as a 3D hexahedral grid geometry using the Gambit software, and the continuum and boundary conditions of the model were defined for the simulation. The mesh was examined to evaluate its skewness, which was found to be less than 0.5 and considered acceptable. The developed geometry was exported to the pressure-based solver Fluent to predict the pressure distribution, wall shear stress, and concentration profile over the membrane while effluent from the rubber industry flows through it. SIMPLE pressure-velocity coupling with a second-order upwind scheme was implemented with under-relaxation. The flux through the membrane bed is considered laminar, axisymmetric, and isothermal; the flow was taken to be laminar since the Reynolds number was less than 2300.

3.1 Assumptions
The following assumptions are considered in building the CFD model describing effluent flow through the membrane bed [11]:
I. The effluent generated from the rubber industry is considered a Newtonian fluid because of its dilution in water.
II. The effluent from the rubber industry is considered isothermal and incompressible.
III. The mathematical model is assumed to be a laminar, single-phase, pressure-based model with SIMPLE pressure-velocity coupling.
IV. The membrane is considered an isotropic and homogeneous porous zone throughout the geometry.
V. According to the needs of the simulation work, the under-relaxation factor is reduced to 0.7-0.3 or lower.
VI. Hexahedral grids are considered for meshing the model for computational simplicity.
VII. Large gradients of pressure and swirl velocity are resolved using refined grids.
VIII. The mathematical model developed for the simulation work is confined to the flow model only.

3.2 Governing Equations
The flow through the membrane is governed by the continuity, Darcy, momentum, and solute-transfer equations [12], defined below for the porous membrane bed.

Continuity equation. The equation of continuity for effluent flow through the membrane bed is given by Eq. 1:

\nabla \cdot \vec{V} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0 \qquad (1)

Darcy's law. Darcy's law for the flow of effluent through the membrane bed is given by Eq. 2:

\nabla P = -\frac{\mu}{\alpha}\,\vec{V} \qquad (2)

Momentum equation. The axial momentum equation for effluent flow through the membrane bed is given by Eq. 3:

\nabla \cdot \left(\rho u \vec{V}\right) = -\frac{\partial P}{\partial x} + \mu \nabla^{2} u \qquad (3)

The radial momentum equation for the rubber industrial effluent flow through the membrane is given by Eq. 4:

\nabla \cdot \left(\rho v \vec{V}\right) = -\frac{\partial P}{\partial r} + \mu \nabla^{2} v \qquad (4)

Mass transfer or solute transfer equation. The equation of solute transfer through the membrane is given by Eq. 5:

\nabla \cdot \left(\rho \vec{V} C\right) = \rho D \nabla^{2} C \qquad (5)
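As a quick numerical illustration of Darcy's law (Eq. 2), the sketch below computes the pressure gradient and the resulting pressure drop across the porous bed. All numerical values (viscosity, permeability, superficial velocity, bed thickness) are illustrative assumptions, not the experimental values of this study:

```python
# Pressure gradient across a porous membrane bed from Darcy's law (Eq. 2):
#   dP/dx = -(mu / alpha) * V
# All values below are illustrative assumptions, not measured data.
mu = 1.0e-3        # dynamic viscosity of the effluent, Pa*s (water-like, assumed)
alpha = 1.0e-12    # membrane permeability, m^2 (assumed)
velocity = 2.1e-4  # superficial velocity through the bed, m/s (assumed)

pressure_gradient = -(mu / alpha) * velocity   # Pa/m
thickness = 1.0e-4                             # membrane bed thickness, m (assumed)
delta_p = -pressure_gradient * thickness       # pressure drop across the bed, Pa

print(f"dP/dx = {pressure_gradient:.3e} Pa/m, pressure drop = {delta_p:.1f} Pa")
```

The linear dependence of the pressure drop on velocity is what the validation plot (Fig. 5) probes experimentally.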
3.3 Boundary Conditions
The following boundary conditions were considered for the modeling and simulation of the rectangular sheet membrane [11]:
1. The inlet is assumed to be a mass-flow inlet.
2. The outlet is considered a pressure outlet with gauge pressure equal to zero.
3. The membrane is assumed to be a porous zone.
4. A no-slip condition is considered at the wall, where the fluid velocity tends to zero.

3.4 Convergence and Grid Independency Test
The default convergence criterion for all equations was set to 10^-5, except for the transport equation, for which 10^-3 was selected. A computational domain was used to calibrate the fully developed flow results obtained for the rectangular sheet membrane. From the results of the study, it was observed that the final predicted results depend on the mesh/grid geometry. The mesh/grid resolution was gradually increased and decreased by 50% to evaluate whether the employed resolution was adequate to obtain results with a minimum error percentage. With a 50% decrease in mesh/grid resolution, the pressure profile deviated by 8-15% from that of the currently employed mesh/grid; with a 50% increase, the deviation was 1-5%. From these results, it was concluded that the current mesh/grid resolution was sufficient to obtain a grid-independent solution for the proposed model of the rectangular sheet membrane.
4 Results and Discussion

4.1 Selection of Optimum Grid Size
The grid independency test is implemented to select the optimum grid for the simulation process. Table 2 demonstrates the grid selection procedure, where parameters such as computational time and deviation are used to select the optimum grid. The procedure is conducted assuming a membrane pore size of 0.2 µm and an inlet velocity of 2.1 m/s, with the aim of obtaining a grid-independent solution. Three grid sizes (coarse: 81,000; fine: 375,000; finer: 648,000 cells) are compared. Since no significant change in the results is found between the fine and finer grids, the fine grid of 375,000 cells is selected as the optimum for its lower computational time, cost, and deviation; further increasing the grid size does not show any significant change in the pressure profile inside the rectangular sheet membrane.
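The grid-independence logic described above — accept the coarsest mesh whose prediction changes negligibly on further refinement — can be sketched as follows. The cell counts and CFD-predicted pressures are those reported in Table 2; the tolerance value is an assumption for illustration:

```python
# Grid independence check: pick the coarsest mesh whose predicted pressure
# changes negligibly when the mesh is refined further.
# (cells, CFD-predicted pressure in Pa), from Table 2.
grids = [(81_000, 742.0), (375_000, 756.0), (648_000, 756.0)]

def select_grid(grids, tol_pa=1.0):
    """Return the first mesh whose result is within tol_pa of the next refinement."""
    for (cells, p), (_, p_finer) in zip(grids, grids[1:]):
        if abs(p - p_finer) <= tol_pa:
            return cells
    return grids[-1][0]  # fall back to the finest mesh

print(select_grid(grids))  # selects the fine mesh: 375000
```

With these values the coarse mesh (742 Pa vs. 756 Pa on refinement) is rejected, while the fine mesh is accepted because the finer mesh reproduces its result.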
Table 2. Optimum grid selection.
Sl. No.  Mesh nature  Mesh size  No. of nodes  No. of faces  Time (min)  Exp. (Pa)  CFD (Pa)  Deviation
1        Coarse       81000      249300        87451         43          761.7      742       19.8
2        Fine         375000     1142500       392751        167         761.7      756       5.8
3        Finer        648000     1969200       673501        240         761.7      756       5.8

4.2 CFD Analysis
Figure 2 shows the concentration profile at the inlet and outlet sections of the membrane, demonstrating the quality of the permeate flux produced during the separation process. Red and blue in the plot show the maximum and minimum effluent concentration, respectively. The figure demonstrates that the mass concentration is high at the inlet compared with the outlet of the membrane, because of the high resistance offered by the membrane bed, which selectively permits only particles with diameters smaller than the membrane pore size. Figure 3 is a graphical illustration of the wall shear stress (Pa) of the rectangular sheet-shaped membrane. It is observed that the deformation of the fluid is high in the vicinity of the wall and low at the center of the membrane bed; the deformation is high near the wall due to the dominating adhesive force there, compared to the weak cohesive force in the center. The plot of mass imbalance (kg/s) with respect to time (min) for the rectangular sheet membrane is illustrated in Fig. 4. The permeate flux of the membrane gradually decreases as filtration progresses, because of cake layer formation over the surface of the membrane, which partially or totally blocks the membrane pores and obstructs the flow, thus reducing the permeate flux. The membrane shows a filtration time of 160 min when implemented to improve the effluent quality.
Fig. 2. Contour of concentration profile.
Fig. 3. Graphical illustration of wall shear stress (Pa).
Fig. 4. Plot of mass imbalance (kg/s) with respect to time (min).
4.3 Error Analysis
The error analysis is implemented to assess the accuracy of the CFD model. Methods such as percent bias (PBIAS), Nash-Sutcliffe efficiency (NSE), and the RMSE-observation standard deviation ratio (RSR) are used to conduct the error analysis [15-21]. The PBIAS, NSE, and RSR values are estimated using Eqs. 6-8:
\mathrm{PBIAS} = \frac{\sum_{i=1}^{n}\left(Y_i - Y_i^{*}\right) \times 100}{\sum_{i=1}^{n} Y_i} \qquad (6)

\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}\left(Y_i - Y_i^{*}\right)^{2}}{\sum_{i=1}^{n}\left(Y_i - Y_{\mathrm{mean}}\right)^{2}} \qquad (7)

\mathrm{RSR} = \frac{\sqrt{\sum_{i=1}^{n}\left(Y_i - Y_i^{*}\right)^{2}}}{\sqrt{\sum_{i=1}^{n}\left(Y_i - Y_{\mathrm{mean}}\right)^{2}}} \qquad (8)
In Eqs. 6-8, Y_i, Y_i^*, and Y_mean represent the actual dataset, the predicted dataset, and the mean of the actual dataset, respectively, and n is the total number of observations in the actual dataset. The PBIAS, NSE, and RSR values are considered best when they are close to 0, 1, and 0, respectively. Table 3 shows the error analysis of the developed CFD model; the results for PBIAS (0.317), NSE (0.99), and RSR (0.022) are close to the ideal values. The error analysis demonstrates the accuracy of the simulation process and justifies the use of CFD to model the flow through the membrane.

Table 3. Error analysis of the developed CFD model.
Sl. No.  Method  Estimated value  Best value
1        PBIAS   0.317            0
2        NSE     0.99             1
3        RSR     0.022            0
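The three error metrics of Eqs. 6-8 are straightforward to compute from paired observed and predicted values. A minimal sketch, using illustrative data rather than this study's experimental dataset:

```python
import math

def pbias(obs, pred):
    # Eq. 6: percent bias; 0 is ideal.
    return sum(o - p for o, p in zip(obs, pred)) * 100 / sum(obs)

def nse(obs, pred):
    # Eq. 7: Nash-Sutcliffe efficiency; 1 is ideal.
    mean = sum(obs) / len(obs)
    return 1 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / \
               sum((o - mean) ** 2 for o in obs)

def rsr(obs, pred):
    # Eq. 8: RMSE-observation standard deviation ratio; 0 is ideal.
    mean = sum(obs) / len(obs)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred))) / \
           math.sqrt(sum((o - mean) ** 2 for o in obs))

# Illustrative observed/predicted pressure drops (Pa) - not the paper's data.
obs = [120.0, 250.0, 410.0, 610.0, 761.7]
pred = [118.0, 255.0, 405.0, 615.0, 756.0]
print(pbias(obs, pred), nse(obs, pred), rsr(obs, pred))
```

A perfect prediction yields PBIAS = 0, NSE = 1, and RSR = 0, which is why the values reported in Table 3 indicate an adequate simulation.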
4.4 Model Validation
Figure 5 demonstrates the graphical plot of pressure drop (Pa) against feed velocity (m/s) for the rectangular sheet membrane, used for validation. From Fig. 5, it is observed that an increase in inlet velocity impacts the momentum of the fluid. Since the momentum of the liquid is a function of velocity, an increase in speed increases the momentum of the fluid, which then collides with the membrane wall and pore walls at a higher rate, causing a kinetic energy loss in the fluid. This kinetic energy loss due to the increased velocity is converted into a pressure head, which manifests as an increased pressure drop. The CFD-predicted results were validated against the experimental values; the validation plot shows that the results predicted by CFD are in good agreement with the experimental results.
Fig. 5. Validation plot between pressure drop and inlet velocity.
5 Conclusions

The model of the rectangular membrane was developed using Gambit, and the effluent flow pattern inside the membrane was simulated using Fluent. To obtain results with the least error percentage, a hexahedral mesh requiring the least computer memory was used for meshing. From the grid independence test, a mesh size of 375,000 was selected for the simulation work, as further refinement of the grid does not show any change in the pressure profile of the rectangular sheet membrane. The mass imbalance (kg/s) study shows that the membrane bed possesses high separation efficiency. The CFD-predicted results are validated against experimental data; the CFD results follow the experimental results with high precision, and the error percentage typically varies in the range of 1-5%. The CFD simulation provides insight into hydrodynamic properties such as wall shear stress and the concentration profile over the membrane. Error analysis was implemented to evaluate the accuracy of the simulation process, using the PBIAS, NSE, and RSR methods. The determined values of NSE, PBIAS, and RSR were 0.99, 0.317, and 0.022, respectively, which are close to the ideal values; these results justify the accuracy of the simulation process. The results obtained from the study can be used to develop a cost-effective membrane separation technique to treat rubber industrial effluent.
References
1. Chen, Z., Luo, J., Hang, X., Wan, Y.: Physicochemical characterization of tight nanofiltration membranes for dairy wastewater treatment. J. Memb. Sci. 547, 51–63 (2018). https://doi.org/10.1016/j.memsci.2017.10.037
2. Noor, S.F.M., Ahmad, N., Khattak, M.A., et al.: Application of Sayong ball clay membrane filtration for Ni(II) removal from industrial wastewater. J. Taibah Univ. Sci. 11, 949–954 (2017). https://doi.org/10.1016/j.jtusci.2016.11.005
3. Emin, C., Kurnia, E., Katalia, I., Ulbricht, M.: Polyarylsulfone-based blend ultrafiltration membranes with combined size and charge selectivity for protein separation. Sep. Purif. Technol. 193, 127–138 (2018). https://doi.org/10.1016/j.seppur.2017.11.008
4. Nath, K., Dave, H.K., Patel, T.M.: Revisiting the recent applications of nanofiltration in food processing industries: progress and prognosis. Trends Food Sci. Technol. 73, 12–24 (2018). https://doi.org/10.1016/j.tifs.2018.01.001
5. Pak, A., Mohammadi, T., Hosseinalipour, S.M., Allahdini, V.: CFD modeling of porous membranes. Desalination 222, 482–488 (2008). https://doi.org/10.1016/j.desal.2007.01.152
6. Berman, A.S.: Laminar flow in channels with porous walls. J. Appl. Phys. 24, 1232–1235 (1953). https://doi.org/10.1063/1.1721476
7. Karode, S.K.: Laminar flow in channels with porous walls, revisited. J. Memb. Sci. 191, 237–241 (2001). https://doi.org/10.1016/S0376-7388(01)00546-4
8. Rahimi, M., Madaeni, S.S., Abolhasani, M., Alsairafi, A.A.: CFD and experimental studies of fouling of a microfiltration membrane. Chem. Eng. Process. Process Intensif. 48, 1405–1413 (2009). https://doi.org/10.1016/j.cep.2009.07.008
9. Ghadiri, M., Asadollahzadeh, M., Hemmati, A.: CFD simulation for separation of ion from wastewater in a membrane contactor. J. Water Process Eng. 6, 144–150 (2015). https://doi.org/10.1016/j.jwpe.2015.04.002
10. Rezakazemi, M.: CFD simulation of seawater purification using direct contact membrane desalination (DCMD) system. Desalination 443, 323–332 (2018). https://doi.org/10.1016/j.desal.2017.12.048
11. Banik, A., Bandyopadhyay, T.K., Biswal, S.K.: Computational fluid dynamics simulation of disc membrane used for improving the quality of effluent produced by the rubber industry. Int. J. Fluid Mech. Res. 44, 499–512 (2017). https://doi.org/10.1615/InterJFluidMechRes.2017018630
12. Banik, A., Biswal, S.K., Bandyopadhyay, T.K.: Predicting the optimum operating parameters and hydrodynamic behavior of rectangular sheet membrane using response surface methodology coupled with computational fluid dynamics. Chem. Papers 74(9), 2977–2990 (2020). https://doi.org/10.1007/s11696-020-01136-y
13. Myagkov, L., Chirskiy, S., Panchenko, V., et al.: Application of the topological optimization method of a connecting rod forming by the BESO technique in ANSYS APDL. In: Vasant, P., Zelinka, I., Weber, G. (eds.) Advances in Intelligent Systems and Computing. Springer, Cham (2020)
14. Vasant, P., Zelinka, I., Weber, G.-W.: Intelligent computing and optimization. In: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019), 1st edn. Springer, Cham (2020)
15. Banik, A., Biswal, S.K., Majumder, M., Bandyopadhyay, T.K.: Development of an adaptive non-parametric model for estimating maximum efficiency of disc membrane. Int. J. Converg. Comput. 3, 3–19 (2018)
16. Panchenko, V.I., Kharchenko, A., Valeriy Lobachevskiy, Y.: Photovoltaic solar modules of different types and designs for energy supply. Int. J. Energy Optim. Eng. 9, 74–94 (2020). https://doi.org/10.4018/IJEOE.2020040106
17. Panchenko, V.A.: Solar roof panels for electric and thermal generation. Appl. Sol. Energy (English Transl. Geliotekhnika) 54, 350–353 (2018). https://doi.org/10.3103/S0003701X18050146
18. Banik, A., Dutta, S., Bandyopadhyay, T.K., Biswal, S.K.: Prediction of maximum permeate flux (%) of disc membrane using response surface methodology (RSM). Can. J. Civ. Eng. 46, 299–307 (2019). https://doi.org/10.1139/cjce-2018-0007
19. Kalaycı, B., Özmen, A., Weber, G.-W.: Mutual relevance of investor sentiment and finance by modeling coupled stochastic systems with MARS. Ann. Oper. Res. 295(1), 183–206 (2020). https://doi.org/10.1007/s10479-020-03757-8
20. Kuter, S., Akyurek, Z., Weber, G.W.: Retrieval of fractional snow covered area from MODIS data by multivariate adaptive regression splines. Remote Sens. Environ. 205, 236–252 (2018). https://doi.org/10.1016/j.rse.2017.11.021
21. Vasant, P., Zelinka, I., Weber, G.-W.: Intelligent computing & optimization. In: Conference Proceedings ICO 2018. Springer, Cham (2019)
End-to-End Supply Chain Costs Optimization Based on Material Touches Reduction

César Pedrero-Izquierdo1(&), Víctor Manuel López-Sánchez1, and José Antonio Marmolejo-Saucedo2

1 Universidad Anáhuac, Av. Universidad Anáhuac 46, Lomas Anáhuac, 50130 Huixquilucan, Edo. México, Mexico
[email protected]
2 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, 03920 Ciudad de México, Mexico
Abstract. The global manufacturing industry requires high standards of productivity, quality, delivery and flexibility. This is especially true for the trucking industry, which has gained high efficiency by adopting lean manufacturing tools. Nevertheless, it is crucial to look into the supply chain to reach higher efficiency and sustainable competitive advantages. Much research on supply chain cost optimization treats the supply chain as a collection of interrelated and indivisible levels whose internal operations can be neglected, which prevents us from spotting non-value and wasteful activities inside these levels. To fix this drawback, this research proposes a different cost optimization strategy that takes advantage of those internal operations. The procedure breaks the supply chain levels down into basic elements to generate new sets of operations that collectively satisfy the original supply chain. The solution to this combinatorial problem, which is inspired by the IP "crew-pairing problem", provides a new, optimized supply chain that minimizes costs.

Keywords: Supply chain · Set covering · End to end · Material touches · Trucks
1 Introduction

1.1 Motivation and Objective
The global manufacturing industry requires the highest standards in productivity, quality, delivery and flexibility. This is especially true for the truck manufacturing industry, which is a relevant example due to the technical complexity of its assembly process, as well as its tendency toward mass personalization, creating a very complex supply chain (Fogliatto 2003). There has been vast research on supply chain optimization; nevertheless, most of it relates to lean manufacturing concepts (Monden 2011; Krajewski 2010) or to the minimum cost flow problem, or MCFP (Hammamia and Frein 2013). When related to lean concepts, optimization is normally applied to limited procedures or steps in the supply chain (SC); when related to the MCFP, the SC is conceptualized as a net of indivisible levels whose internal operations are neglected. This prevents us from realizing that inside these levels there are redundant activities that do not add any value for further processes (Closs 2010). In this research, those internal activities are referred to as "material touches", or just "touches", meaning every action related to handling, moving or interacting in general with the product flowing along the SC. Based on this background, the general objective of this research is to develop a different cost optimization methodology for a SC of i − 1 stages and i levels serving a truck assembly company, from suppliers to the final assembly process. This methodology must be capable of generating cost improvements within the internal processes of the i levels of the SC by fragmenting them into j basic touches and disregarding the traditional division between them.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 357–368, 2021. https://doi.org/10.1007/978-3-030-68154-8_33

1.2 Methodology
This research introduces a four-stage methodology. The first stage analyzes and segments the original i levels of the SC into individual j touches, which are then cataloged according to specific attributes. The result of this stage, plus the selection of touches proposed for external outsourcing, gives rise to the next step, which consists in generating new sets of touches according to individual characteristics. These subsets will not satisfy the original SC touches individually; however, a combination of them will satisfy them collectively. The third stage calculates the costs of the brand-new touch subsets. The last stage formulates and solves the mathematical model as a "set covering problem". The case studies used to validate this methodology come from a collaborating truck assembly company, which is the largest in the world and the one with the highest offer for customization in Mexico and Latin America.

1.3 Contribution
The main objective and contribution of this research to the state of the art was to develop a new cost optimization strategy dedicated to the SC of the truck assembly industry, one that can visualize improvement opportunities within the internal steps of the SC. Out of the vast research dedicated to this topic, where lean concepts and the MCFP stand out, this research produced the following outcomes that had not been considered within those two branches:
• The concept of the material touch is introduced, as well as its mathematical description, as the basis of a new methodology dedicated to SC cost optimization.
• An iterative process was developed to segment the entire supply chain into touches based on individual attributes.
• A practical application of the "end to end" concept related to the SC was presented as part of the cost optimization methodology.
• A mathematically strict methodology was provided to merge both the strengths and weaknesses of different logistics operators to optimize total costs.
• A four-stage methodology inspired by the aircrew-assignment problem "crew pairing" was developed as an IP in a "set covering" format.
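To make the set-covering formulation concrete, the following toy instance sketches the fourth stage: choose a minimum-cost combination of touch subsets that collectively covers every original touch. All touch names, subsets and costs here are invented for illustration; the paper's real instances come from the collaborating company and would require an IP solver rather than exhaustive search:

```python
from itertools import combinations

# Toy set-covering instance: cover every touch of the original SC with
# a minimum-cost selection of candidate touch subsets (illustrative data).
touches = {"unload", "inspect", "repack", "line_feed"}
subsets = {                      # candidate bundles an operator could take over
    "A": ({"unload", "inspect"}, 4.0),
    "B": ({"repack", "line_feed"}, 5.0),
    "C": ({"inspect", "repack"}, 3.0),
    "D": ({"unload"}, 2.0),
    "E": ({"line_feed"}, 2.5),
}

def min_cost_cover(touches, subsets):
    """Exhaustive search: fine for toy instances; real cases need an IP solver."""
    best = (float("inf"), None)
    names = list(subsets)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(subsets[n][0] for n in combo))
            cost = sum(subsets[n][1] for n in combo)
            if covered >= touches and cost < best[0]:
                best = (cost, combo)
    return best

cost, combo = min_cost_cover(touches, subsets)
print(cost, combo)  # cheapest collective cover of all four touches
```

In this toy case the pair {A, B} covers everything at cost 9.0, but the combination {C, D, E} covers the same touches at cost 7.5, illustrating why breaking levels into touches can beat the traditional level-by-level assignment.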
End-to-End Supply Chain Costs Optimization
2 Literature Review

Kumar and Nambirajan (2014) suggested crossing borders between client and provider to expand capabilities. Arif-Uz-Zaman and Ahsan (2014) suggested that the SC should be treated as companies linked to each other. Stavrulaki and Davis (2010) suggested that a SC improves its overall effectiveness when every stage is aligned. Krajewski et al. (2010) integrated the design of the SC under the "lean manufacturing system". Bezuidenhout (2015) added the agile concept to the existing lean concepts to describe what he called a more realistic "lean-agile" process. As an additional tool to optimize a SC, Walters and Lancaster (2000) proposed the third-party logistics (or 3PL) figure as part of this process. Arias-Aranda et al. (2011) verified the correlation of 3PL services and flexibility levels within a company. Wu et al. (2012) suggested flexibility and specialization improvements through 3PL services. Dolgui and Proth (2013) considered that 3PL services can reduce non-essential activities and reduce costs. Agrawal (2014) mentioned that using 3PL services enhances flexibility and innovation.

Lee (2010) used the concept "end to end" to suggest the elimination of redundant SC processes. Closs et al. (2010) also suggested migrating to a collaborative culture to eliminate waste by removing silos along the SC. Trang (2016), Kram et al. (2015) and Ageron (2013) proposed the elimination of redundant activities through shared SC design. González-Benito (2013) adopted the term "co-makership" as shared responsibility for SC design. Ashley (2014) affirmed that SC co-responsibility must go from the birth of the product to the end of its cycle.

As for optimization methodologies, Castillo-Villar (2014) added quality costs as an optimization criterion. Hammami and Frein (2013) added delivery time as an optimization criterion. Sakalli (2017) included stochastic parameters under uncertainty. Cheng and Ye (2011) adapted the classic MCFP model to handle parallel suppliers.
Fahimnia et al. (2013) introduced production planning and SC parameters together. Ding (2009) proposed an optimization method using simultaneous production and distribution parameters. Paksoy and Ozceylan (2011) included operation-balancing parameters in the MCFP. Paksoy and Ozceylan (2012) particularized their previous work by including U-shaped production balancing. Hamta (2015) proposed an SC optimization problem by adding production with uncertain demand parameters. Minoux (2006) solved the crew assignment problem by using heuristic methods. Souai and Teghem (2009) proposed reassigning routes by using a genetic algorithm. Deng and Lin (2010) proposed an ant colony algorithm to solve the crew scheduling problem. The same problem, but also considering legal requirements, was solved by Deveci and Demirel (2015) using a genetic algorithm. Tekiner et al. (2008) proposed a column generation method incorporating disruptive events. Later on, Muter et al. (2010) also proposed a two-step column generation methodology, solving first the crew pairing and then using these data to solve the crew rostering, while incorporating possible disruptions from the planning stage, in what they called "robust planning". Özener et al. (2016) solved the aircraft allocation and flight sequence assignment by using exact and metaheuristic methods.
C. Pedrero-Izquierdo et al.
3 Touches-Based Cost Optimization Methodology

3.1 Analysis of SC Levels Using a Touches Approach
This methodology starts by segmenting the i levels of a SC into j internal and independent basic steps for every k component. The segmentation is carried out through an empirical process that follows the knowledge dynamics described by Bratianu (2018). These individual actions, called "material touches", are generically denoted as T_ijk, which describes touch j belonging to level i acting on component k; this concept is represented in Fig. 1. Every touch T_ijk will be particularized depending on the type of activity and cost it generates. Nine types of touches are proposed to describe every activity in the SC. These are listed below:
• Internal manual touch (TMN_ijk): parts manipulation by physical labor force.
• Internal machinery touch (TQN_ijk): parts movements by devices and mechanisms.
• Internal manual touch implying packaging (TMEN_ijk): packaging by labor force.
• Internal machinery touch implying packaging (TQEN_ijk): packaging using devices.
• External manual touch (TMX_ijk), external machinery touch (TQX_ijk), external manual touch that implies packaging (TMEX_ijk) and external machinery touch that implies packaging (TQEX_ijk): similar concepts, but provided by a third party.
• Transportation touch (TTR_ijk): parts transportation by motor vehicles of any kind.
Fig. 1. The concept of material touches is represented in this diagram, where level i = 2 is fragmented into j = 8 material touches for component k.
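The nine-type taxonomy can be represented as a small data structure. A minimal sketch follows (the enum codes mirror the paper's abbreviations; the sample touch instance is hypothetical, not from the case study):

```python
from dataclasses import dataclass
from enum import Enum

class TouchType(Enum):
    """The nine proposed touch types, keyed by the paper's abbreviations."""
    TMN = "internal manual"
    TQN = "internal machinery"
    TMEN = "internal manual with packaging"
    TQEN = "internal machinery with packaging"
    TMX = "external manual"
    TQX = "external machinery"
    TMEX = "external manual with packaging"
    TQEX = "external machinery with packaging"
    TTR = "transportation"

@dataclass(frozen=True)
class Touch:
    """Material touch T_ijk: touch j of level i acting on component k."""
    i: int  # SC level
    j: int  # touch position within the level
    k: int  # component
    type: TouchType

# Hypothetical example: the second touch of level 2 on component 1 is a transport touch.
t = Touch(i=2, j=2, k=1, type=TouchType.TTR)
print(t.type.value)  # transportation
```

Tagging every touch with its type is what later allows the per-type daily counts Te and Tt to be accumulated.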
3.2 Incidence Matrix Generation and Numerical Example
The incidence matrix is the record of every touch subset that will collectively satisfy the activities of the original SC. Let A be a matrix of m × n, where m represents every touch of the original SC and n the number of internal touch subsets plus the subsets proposed to be outsourced. The subsets are formed by consecutive and inseparable operations determined by the process interrelation they keep with their immediately previous activity. The three proposed interrelations are: initial (IN), used only to tag the first touch; consecutive (C), intended for touches that are technically not separable from the previous process; and non-consecutive (NC), used for touches that can be performed independently from the previous process. The original m touches are recorded in the first columns of A strictly in the order and position of the original SC, as described next. Let M = {touches of the original SC} and let P = {touches proposed to be outsourced}, where P ⊆ M. Column n = 1 corresponds to P^c, assigning a value of 1 to the positions where a touch exists and 0 to those positions where it does not. Thereafter, the elements of subset P are recorded in the subsequent columns. P is divided into subsets depending on the interrelation of its touches as follows: subset P will be registered in column (n + 1), starting with the initial touch IN followed by the subsequent touches classified as consecutive C. Once a non-consecutive NC touch is reached, it will be assigned to the next column, followed by the subsequent C touches, until another NC touch appears again. This iterative process is performed for the remaining elements of P. Let r be the quantity of external offers to outsource selected subsets of P, and let P_r be the subset outsourced by offer r. Each P_r is recorded in an individual column for every r. As a numerical example, consider a SC of i = 4 levels with i − 1 = 3 stages, where each level contains j = 2 touches for k = 1 component.
Let us propose six touches within the SC: T111, T121, T211, T221, T311 and T321. Then, set M would be M = {T111, T121, T211, T221, T311, T321}. Let us suppose that touches T111, T211, T221 can be outsourced and that there are three external proposals, therefore r = 3. Set P is then formed by the touches P = {T111, T211, T221}. Let us propose the three external offers as P1 = {T211, T221}, P2 = {T111, T211, T221} and P3 = {T211, T221}, where every Pr ⊆ P; therefore the complement set is P^c = {T121, T311, T321}. Each element belonging to P^c is recorded in column n = 1 in the rigorous order and position of the original SC. Subsequently, the elements of the set P are recorded starting at column n = 2, depending on the interrelation of their touches (see Table 1), as described above. Finally, the r = 3 proposals P1, P2, P3 to be outsourced are recorded in independent columns. The resulting incidence matrix is shown in Fig. 2.
Table 1. Interrelation of set P from numerical example presented in Sect. 3.2.
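The iterative column-splitting rule can be sketched in a few lines; the IN/C/NC tags used below are assumptions chosen to reproduce two internal columns, not the actual contents of Table 1:

```python
def split_into_subsets(touches, tags):
    """Split the ordered touches of P into incidence-matrix columns.

    A new subset (column) opens at the initial touch (IN) and at every
    non-consecutive touch (NC); consecutive touches (C) stay attached to
    the subset opened by the touch before them.
    """
    subsets = []
    for touch, tag in zip(touches, tags):
        if tag in ("IN", "NC") or not subsets:
            subsets.append([touch])   # open a new column
        else:                         # tag == "C": inseparable from the previous touch
            subsets[-1].append(touch)
    return subsets

# P = {T111, T211, T221} with assumed tags: T111 initial, T211 non-consecutive,
# T221 consecutive with T211 -> two internal columns.
P = ["T111", "T211", "T221"]
tags = ["IN", "NC", "C"]
print(split_into_subsets(P, tags))  # [['T111'], ['T211', 'T221']]
```

Running the same routine over the remaining elements of P, and appending one column per external offer P_r, yields the incidence matrix A.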
Fig. 2. Incidence matrix: internal touches registered in n = 1–3 and external in n = 4–6.
3.3 Touches Subsets Cost Calculation
The next step is to obtain the costs of both internal and external touch subsets. The calculation parameters include Te, which represents the total amount of touches per day, and Tt, the amount of touches corresponding to each component k. For both Te and Tt, the dimensional units are [touches/day]. Both parameters must be particularized to individually represent the corresponding types of touches. The daily demand is represented by d [units/day]. The quantity of parts that component k needs to form one unit of product is U_ijk [pieces/unit], and Q_ijk [pieces] is the quantity of k components in a standard package. Let us develop the formulation of Te and Tt for the first type of touch, TMN_ijk; the equations for the other eight types of touches are calculated the same way. For all of them, the dimensional units are ((units/day × pieces/unit) / pieces) × touches = touches/day. The equations for Te_TMN_i and Tt_TMN_ijk are:

Te_TMN_i = Σ_{j=1}^{J} Σ_{k=1}^{K} (d · U_ijk / Q_ijk) · TMN_ijk,  ∀ i = 1, 2, …, I   (1)

Tt_TMN_ijk = (d · U_ijk / Q_ijk) · TMN_ijk,  ∀ i = 1, 2, …, I; ∀ j = 1, 2, …, J; ∀ k = 1, 2, …, K   (2)

where:

T_ijk = 1 if the specific touch type exists in position j of level i for component k; T_ijk = 0 in any other case.   (3)
Note that in (3), T_ijk represents each of the nine types of touches: TMN_ijk, TQN_ijk, TMEN_ijk, TQEN_ijk, TMX_ijk, TQX_ijk, TMEX_ijk, TQEX_ijk and TTR_ijk. Once Te and Tt are calculated, the cost for each type of touch can be obtained. These costs are listed below, where the dimensional units for all of them are [monetary unit/day]:
• CTMN_i: internal manual touch costs.
• CTQN_i: internal machinery touch costs.
• CTMEN_i: internal manual touch that implies packaging costs.
• CTQEN_i: internal machinery touch that implies packaging costs.
• CTMX_ijk: external manual touch costs.
• CTQX_ijk: external machinery touch costs.
• CTMEX_ijk: external manual touch that implies packaging costs.
• CTQEX_ijk: external machinery touch that implies packaging costs.
• CTTR_i: transportation costs.
The calculation of costs for internal and external touch subsets follows completely different forms of logic. First, let us develop the internal cost calculation for the first type of touch, TMN_ijk; the formulation for the other eight types is done the same way. Let CTMN_i be the total cost related to TMN_ijk activities in level i. Let CuTMN_i be the unitary cost of touch TMN_ijk of level i, which is the equitable partition, by touch, of the resources dedicated to the execution of this type of touch. The dimensional units for CuTMN_i are [monetary unit/touch]. The mathematical expression (4) is stated as follows:

CuTMN_i = CTMN_i / Te_TMN_i,  ∀ i = 1, 2, …, I   (4)
Once the unitary costs per touch type are obtained for level i, the total cost for each individual touch is calculated by multiplying the unitary cost by the total touches Tt. This calculation is done for each touch belonging to each subset in column n; its sum represents the total cost of every touch existing in that subset. The internal subset cost CInt_n [monetary unit/day] is the sum over every element belonging to column n:

CInt_n = Σ_ijk (CuTMN_i · Tt_TMN_ijk + CuTQN_i · Tt_TQN_ijk + CuTMEN_i · Tt_TMEN_ijk + CuTQEN_i · Tt_TQEN_ijk + CuTMX_i · Tt_TMX_ijk + CuTQX_i · Tt_TQX_ijk + CuTMEX_i · Tt_TMEX_ijk + CuTQEX_i · Tt_TQEX_ijk + CuTTR_i · Tt_TTR_ijk)   (5)

The calculation of costs for the subsets to be outsourced follows a different logic from the previous one. These costs depend on two factors. The first one, CextT_r, refers to the commercial proposal from third parties. The second one, defined as the approaching cost CTextT_rijk, is the relocation cost of the components to the point where they will continue to be processed. This cost makes external proposals cost-comparable with the internal ones, since it compensates for the fact that the outsourced processes will take place in another facility and, later on, the components will return to the starting point to continue the original flow. The total cost CExt_r of subset P_r is the sum of both costs, CextT_r and CTextT_rijk [monetary unit/day]; this cost is formulated in Eq. (6):

CExt_r = CextT_r + Σ_ijk CTextT_rijk,  ∀ r   (6)
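Equations (1)–(6) reduce to simple arithmetic once the touch counts are known. The sketch below walks through them for a single internal touch of type TMN and one external offer; every numeric input (demand, piece counts, costs) is hypothetical:

```python
# Hypothetical inputs for one level i and one touch type (TMN).
d = 40            # daily demand [units/day]
U = 4             # parts of component k per unit of product [pieces/unit]
Q = 10            # pieces of component k per standard package [pieces]

# Eqs. (1)-(2): touches per day generated by this touch (indicator TMN_ijk = 1).
Tt = d * U / Q * 1          # touches/day for this specific touch
Te = Tt                      # with a single TMN touch in level i, Te equals Tt

# Eq. (4): unitary cost = daily resources for this touch type / touches per day.
C_TMN = 800.0                # daily cost of TMN resources in level i [monetary unit/day]
Cu_TMN = C_TMN / Te          # [monetary unit/touch]

# Eq. (5): internal subset cost = sum of (unit cost x touches) over the column.
CInt = Cu_TMN * Tt           # only one touch in this toy column

# Eq. (6): external subset cost = commercial proposal + approaching cost.
CextT = 600.0                # third-party offer [monetary unit/day]
CTextT = 150.0               # relocation ("approaching") cost [monetary unit/day]
CExt = CextT + CTextT

print(Tt, Cu_TMN, CInt, CExt)  # 16.0 50.0 800.0 750.0
```

Note that the approaching cost is what keeps the internal and external columns comparable in the cost matrix.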
The resulting cost matrix is shown in Table 2, where the touches have already been illustratively particularized.
Table 2. Cost matrix of internal and external touch subsets.
3.4 Generation of the Mathematical Model in Its "Set Covering" Form
The mathematical model is formulated as a combinatorial optimization problem in the form of a "set covering" problem. The solution to the particular application presented can be obtained by using exact methods, due to the limited size of the problems found in the industrial practice under study. Each subset is represented by a decision variable PA_1, PA_2, PA_3, …, PA_n. The coefficients of the objective function are CInt_n and CExt_{n−n_i}, in which the quantity of external proposals is r = n − n_i. The decision variables are binary:

PA_n = 1 if the subset is selected; 0 otherwise.

Internal and external touch subsets are recorded in the incidence matrix A, where:

a_mn = 1 if the element exists in the subset registered in column n; 0 otherwise.
The objective function minimizes the total cost by selecting a combination of subsets PA_1, PA_2, …, PA_n. The selection will collectively satisfy the original SC touches. The resulting mathematical model is:

Minimize Z = Σ_{n=1}^{n_i} CInt_n · PA_n + Σ_{n=n_i+1}^{N} CExt_{n−n_i} · PA_n   (7)

Subject to:

Σ_{n=1}^{N} a_mn · PA_n ≥ 1,  ∀ m = 1, 2, …, M   (8)

PA_n ∈ {0, 1},  n = 1, 2, …, N
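For instances of the size reported, model (7)–(8) can even be solved by exhaustive enumeration. The sketch below solves the numerical example of Sect. 3.2 with hypothetical subset costs; it is a minimal illustration of the set covering model, not the authors' software:

```python
from itertools import product

# Rows: T111, T121, T211, T221, T311, T321 (order of the original SC).
# Columns n = 1..6: internal subsets Pc, {T111}, {T211, T221} and the
# external offers P1, P2, P3 from the Sect. 3.2 example.
A = [
    [0, 1, 0, 0, 1, 0],  # T111
    [1, 0, 0, 0, 0, 0],  # T121
    [0, 0, 1, 1, 1, 1],  # T211
    [0, 0, 1, 1, 1, 1],  # T221
    [1, 0, 0, 0, 0, 0],  # T311
    [1, 0, 0, 0, 0, 0],  # T321
]
costs = [30, 10, 25, 20, 33, 18]  # hypothetical CInt_n / CExt_r [monetary unit/day]

best_cost, best_sel = None, None
for sel in product((0, 1), repeat=len(costs)):          # all binary vectors PA
    # Constraint (8): every original touch m must be covered at least once.
    if all(sum(a * x for a, x in zip(row, sel)) >= 1 for row in A):
        z = sum(c * x for c, x in zip(costs, sel))      # objective (7)
        if best_cost is None or z < best_cost:
            best_cost, best_sel = z, sel

print(best_cost, best_sel)  # 58 (1, 1, 0, 0, 0, 1)
```

With these costs, the model keeps the internal subsets Pc and {T111} and outsources {T211, T221} to offer P3, the cheapest feasible cover. For larger instances an IP solver would replace the enumeration.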
4 Results Analysis

The methodology was evaluated in two different ways. The first validation consisted of applying the proposed methodology to four designed instances under extreme conditions; the purpose was to verify the correct functioning of the methodology even under unusual circumstances. The second validation consisted of empirical tests, where three real documented cases, previously solved for the collaborating company, were solved again with the new methodology. Table 3 shows the seven instances.

Table 3. Description of the instances used to validate the proposed methodology.
Validation IC1 verified the correct segmentation of the SC based on the interrelation of touches. The validations of instances IC2 and IC3 consisted of proposing atypical parameters in combination with a random interrelation assignment. For IC2, very high values of the cost coefficients were selected for the external subsets; for IC3, equal parameters were proposed for all decision variables. As expected, for IC2 the result was the rejection of the external subsets, and for IC3 the model yielded multiple solutions. The IC4 validation consisted of using the longest and most complex internal supply chain found in the collaborating company. The empirical validation consisted of solving three real cases found in the collaborating industry and then comparing the results with the real solutions observed in industrial practice. For the empirical instances IR1 and IR2, the results obtained were consistent with the real observed solutions. However, IR3 presented a discrepancy when compared to the real solution observed. The new methodology recommended rejecting the outsourcing offers (in the results table, improvements show as zero), whereas in practice it was observed that external services were engaged. When the data of the business case were compared in detail, it was confirmed that IR3 would be costlier if performed externally; however, in the business case, IR3 was part of a package of services that, on the whole, was financially convenient. The methodology results are shown in Table 4.
Table 4. Improvements achieved by using the new touches-based methodology.
The proposed methodology offers improvements over the common cost reduction methodologies used in industrial practice, since it conveniently allows merging, substituting or eliminating sections of the SC in the most cost-effective combination. This new methodology is capable of finding cost reduction opportunities in the internal processes of the SC levels that some other methodologies cannot reach. In addition, it allows combining touches belonging to one or more levels, since the entire SC is separated into its basic elements and follows the "E2E" lean principle. This methodology allows companies to search for cost opportunities that are hidden from other methodologies.
5 Conclusions and Future Research

This methodology is suitable as a mathematically based tool to support the decision-making process of the trucking industry, as well as of other industries whose SC can be divided into individual touches. The study of the truck manufacturing industry's SC in terms of touches and an end-to-end perspective allows visualizing redundant activities that do not add value or would turn out to be a waste for future processes. The presented SC segmentation strategy does not only apply to the trucking industry; it is readily applicable to any human activity that can be broken down into individual activities that can be performed independently from one another. The growing trend for companies to set aside the commercial division between levels of the SC is a fundamental aspect of the proposed methodology. Companies that share this progressive vision for both customer and supplier will have competitive advantages over companies with isolationist policies. Future research that could expand and strengthen the present investigation may focus not just on costs, but on other equally important objectives in optimizing a supply chain. For instance, the aforementioned methodology could be applied to optimize quality levels, service satisfaction, labor in manufacturing processes, etc. Additionally, this methodology could be expanded to study two or more parallel supply chains, carrying out the optimization by combining touches from the involved supply chains to take advantage of the strengths of some and to amend the weaknesses of others, and therefore obtain a benefit for the complete cluster. A third line of future research is the design of a green-field project using the proposed methodology from its early stages; the intention is to select the best operation combination from the design of the SC and processes, instead of modifying them once in operation.
References

Dolgui, A., Proth, J.M.: Outsourcing: definitions and analysis. Int. J. Prod. Res. 51(23–24), 6769–6777 (2013)
Agrawal, A., De Meyer, A., Van Wassenhove, L.N.: Managing value in supply chains: case studies on the sourcing hub concept. Calif. Manag. Rev. 56, 22–54 (2014)
David, A.: Differentiating through supply chain innovation. Bus. Econ. Market. Purchas. 1, 1–4 (2014)
Fahimnia, B., Farahani, R.Z., Sarkis, J.: Integrated aggregate supply chain planning using memetic algorithm – a performance analysis case study. Int. J. Prod. Res. 51(18), 5354–5373 (2013). https://doi.org/10.1080/00207543.2013.774492
Bratianu, C., Vătămănescu, E.-M., Anagnoste, S.: The influence of knowledge dynamics on the managerial decision-making process. In: Proceedings of the European Conference on Knowledge Management, vol. 1, pp. 104–111 (2018)
Kumar, C.G., Nambirajan, T.: Direct and indirect effects: SCM components. SCMS J. Ind. Manage. 1, 51–65 (2014)
Bezuidenhout, C.N.: Quantifying the degree of leanness and agility at any point within a supply chain. 118(1), 60–69 (2015)
Arias-Aranda, D., Bustinza, O.F., Barrales-Molina, V.: Operations flexibility and outsourcing benefits: an empirical study in service firms. 31(11), 1849–1870 (2011)
Closs, D.J., Speier, C., Meacham, N.: Sustainability to support end-to-end value chains: the role of supply chain management. Acad. Market. Sci. 39, 101–116 (2010)
Walters, D., Lancaster, G.: Implementing value strategy through the value chain. Manag. Decis. 38(3), 160–178 (2000)
Zeghal, F.M., Minoux, M.: Modeling and solving a crew assignment problem. Eur. J. Oper. Res. 175, 187–209 (2006)
Cheng, F., Ye, F.: A two objective optimisation model for order splitting among parallel suppliers. Int. J. Prod. Res. 49, 2759–2769 (2011)
Fogliatto, F.S., Da Silveira, G.J.C., Royer, R.: Int. J. Prod. Res. 41(8), 1811–1829 (2003)
Deng, G.F., Lin, W.T.: Ant colony optimization-based algorithm for airline crew scheduling problem. Expert Syst. Appl. 38, 5787–5793 (2011)
Tekiner, H., Birbil, S.I., Bulbul, K.: Robust crew pairing for managing extra flights. Manuf. Syst. Ind. Eng., Sabanci Univ. 1, 1–30 (2008)
Lee, H.L.: Don't tweak your supply chain – rethink it end to end. Harvard Bus. Rev., 62–69 (2010)
Ding, H., Benyoucef, L., Xie, X.: Stochastic multi-objective production-distribution network design using simulation-based optimization. Int. J. Prod. Res. 47(2), 479–505 (2009)
Muter, I., Birbil, S.I., Bulbul, K., Sahin, G., Yenigun, H.: Solving a robust airline crew pairing problem with column generation. 40, 1–26 (2013)
González-Benito, J., Lannelongue, G., Alfaro-Tanco, J.A.: Study of supply-chain management in the automotive industry: a bibliometric analysis. Int. J. Prod. Res. 51(13), 3849–3863 (2013)
Wu, J.-Z., Chien, C.-F., Gen, M.: Coordinating strategic outsourcing decisions for semiconductor assembly using a bi-objective genetic algorithm. Int. J. Prod. Res. 50(1), 235–260 (2012)
Arif-Uz-Zaman, K., Ahsan, A.M.M.N.: Lean supply chain performance measurement. Int. J. Product. Perform. Manage. 63(5), 588–612 (2014)
Krafcik, J.F.: Triumph of the lean production system. Sloan Manag. Rev. 30(1), 41–52 (1988)
Krajewski, L.J., Ritzman, L.P., Malhotra, M.K.: Operations Management – Processes and Supply Chains, 9th edn. Pearson Education, Upper Saddle River (2010)
Castillo-Villar, K.K., Smith, N.R., Herbert-Acero, J.F.: Design and optimization of capacitated supply chain networks including quality measures. Math. Probl. Eng. 2014, 1–17 (2014). Article ID 218913
Kram, M., Tošanović, N., Hegedić, M.: Kaizen approach to supply chain management: first step for transforming supply chain into lean supply chain. Ann. Faculty Eng. Hunedoara – Int. J. Eng. 13(1), 161–164 (2015)
Deveci, M., Demirel, N.C.: Airline crew pairing problem: a literature review. In: 11th International Scientific Conference on Economic and Social Development, vol. 1 (2015)
Souai, N., Teghem, J.: Genetic algorithm based approach for the integrated airline crew-pairing and rostering problem. Eur. J. Oper. Res. 199, 674–683 (2009)
Trang, N.T.X.: Design an ideal supply chain strategy. Adv. Manage. 9(4), 20–27 (2016)
Hamta, N., Shirazi, M.A., Fatemi Ghomi, S.M.T., Behdad, S.: Supply chain network optimization considering assembly line balancing and demand uncertainty. Int. J. Prod. Res. 53(10), 2970–2994 (2015)
Özener, O.Ö., Matoğlu, M.Ö., Erdoğan, G.: Solving a large-scale integrated fleet assignment and crew pairing problem. Ann. Oper. Res. 253, 477–500 (2016)
Paksoy, T., Ozceylan, E., Gokcen, H.: Supply chain optimization with assembly line balancing. Int. J. Prod. Res. 50, 3115 (2011). https://doi.org/10.1080/00207543.2011.593052
Hammami, R., Frein, Y.: An optimisation model for the design of global multi-echelon supply chains under lead time constraints. Int. J. Prod. Res. 51(9), 2760–2775 (2013)
Rubin, J.: A technique for the solution of massive set covering problems, with application to airline crew scheduling. Transp. Sci. 7(1), 15–34 (1973)
Rushton, A., Croucher, P., Baker, P.: Logistics and Distribution Management, 4th edn. Kogan Page, London (2010). ISBN 978-0-7494-5714-3
Stavrulaki, E., Davis, M.: Aligning products with supply chain processes and strategy. Int. J. Logist. Manage. 21(1), 127–151 (2010)
Paksoy, T., Özceylan, E.: Supply chain optimisation with U-type assembly line balancing. Int. J. Prod. Res. 50(18), 5085–5105 (2012)
Sakalli, U.S.: Optimization of production-distribution problem in supply chain management under stochastic and fuzzy uncertainties. Math. Probl. Eng. 2017, 1–29 (2017). Article ID 4389064
AMIA: Agenda Automotriz – Diálogo con la Industria Automotriz 2018–2024, Versión 2018 (2018). http://www.amia.com.mx/boletin/dlg20182024.pdf. Accessed 20 Jan 2019
Monden, Y.: Toyota Production System: An Integrated Approach to Just-In-Time, 4th edn. CRC Press, New York (2011)
Ageron, B., Lavastre, O., Spalanzani, A.: Innovative supply chain practices: the state of French companies. Supply Chain Manage. Int. J. 18(3), 256–276 (2013)
Computer Modeling Selection of Optimal Width of Rod Grip Header to the Combine Harvester

Mikhail Chaplygin, Sergey Senkevich, and Aleksandr Prilukov

Federal Scientific Agroengineering Center VIM, 1 St Institute Pas. 5, Moscow 109428, Russia
[email protected], [email protected], [email protected]
Abstract. An economic-mathematical model and a computer program that allow simulating the choice of the optimal header size for a combine harvester are presented in this article. Direct harvesting expenses, taking into account the quality of combine harvester work under farm conditions at different speeds, are accepted as the optimization criterion.

Keywords: Combine harvester · Header · Computer optimization model · Economic efficiency · PC software
1 Introduction

The results of the conducted research [1–5] on the rationale for combine harvester parameters show that the main factors determining the effectiveness of choosing a combine harvester fleet, with a rationale for the optimal header width, are the qualitative and quantitative composition of the combine harvester fleet, the standard header size and the optimal harvesting period duration. The optimal ratio of these parameters can be established only by studying the combine harvester fleet operation with mathematical modeling methods, because natural experiments require a lot of time and material resources. The operating experience and the results of comparative combine harvester tests in different areas of the country show that the main factor determining the underuse of their capacity for direct combine harvesting is insufficient header width. Optimizing the harvester header width will allow using it effectively at different crop yields. Research and testing of domestic and foreign combine harvesters in recent years have allowed developing a refined economic-mathematical model for optimizing the header size for a combine harvester. The crop yield in the Russian regions varies widely, from 10 to 80 c/Ha (centners per hectare), so farms face the question of purchasing a header that allows them to realize the nominal combine harvester productivity and ensure minimum cash expenditures for bread harvesting.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 369–378, 2021. https://doi.org/10.1007/978-3-030-68154-8_34
M. Chaplygin et al.
Increasing the header width leads to a number of positive aspects:
– the working strokes coefficient increases, so the shift performance increases too;
– the harvester operates at low speeds, which reduces the grain loss behind the header and thresher; it should also be noted that many scientists in their works suggest installing control systems and other technological and technical solutions on the header and combine harvester that help reduce grain losses [6–12];
– the cost of power and fuel for moving heavy harvesting units decreases significantly at low speeds, while the nominal productivity is still realized;
– the machine operator's work environment improves.
All this leads to an improvement in economic efficiency indicators. On the other hand, increasing the header width leads to a number of disadvantages:
– the price of the header and of the combine harvester as a whole becomes higher, which makes harvesting more expensive;
– the header weight increases, which requires additional power for self-movement of the harvesting unit;
– soil compaction by heavy harvesting units increases.
2 Materials and Methods

Theoretical methods (system analysis, methods for optimizing the combine harvester fleet composition, optimal header width calculation), logical methods (comparison, generalization and analysis of the scientific literature on the research problem) and statistical methods (processing of experiment data) were used. Choosing the header width relied on mathematical modeling, with the development of a deterministic economic-mathematical model and the calculation of the optimal header width for the combine harvester. The research objective is ensuring effective combine harvester operational and technological indicators through a scientifically proven header width. Research tasks:
– to develop an economic-mathematical model of choosing the header width for the combine harvester, and PC software for the calculation, in relation to two levels of winter wheat productivity;
– to conduct field research of a combine harvester with 12 kg/s throughput that can work with headers of different capture widths, with the definition of conditions and of operational and technological study indicators;
– to perform PC calculations for choosing the optimal width of the header for a combine harvester with 12 kg/s throughput.
3 Discussion Results

The optimal header width is selected according to the thresher throughput and the combine harvester engine power in specific zonal conditions. In this regard, a refined economic-mathematical optimization model for the header size is proposed.
The structural scheme of the refined economic-mathematical model for optimizing the header size for the combine harvester is shown in Fig. 1.
Fig. 1. Structural diagram of the algorithm for calculating and selecting the header type size for the combine harvester.
The model consists of interconnected blocks that determine the sequence of calculations for choosing the optimal combine harvester header size according to formulas (1–16) given in the text below. The general variant of winter wheat harvesting organization, using the direct combining technology with grinding of the non-grain crop part, was adopted in order to make the calculations as close as possible to a real economic task.
The investigated variable parameter is the working header width (B_h) as a function:

B_h = f(V, G_ch^op, K2, N_mov, W_sh)   (1)
The following parameters change depending on the use of different header widths in the combine unit:
– working movement speed V, km/h;
– operating combine harvester weight G_ch^op, kg;
– combine harvester working moves coefficient K2;
– power for self-movement N_mov, h.p.;
– shift combine unit productivity W_sh, ton/h.
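The text below develops formulas (2)–(5) for these parameters. As a quick numeric sketch of those dependencies (all input values here are hypothetical, not measured data from the paper):

```python
# Hypothetical inputs for a 12 kg/s class combine with a wide header.
W_nom = 18.0    # nominal productivity W0n [ton/h]
B_h = 9.0       # header width [m]
U_g = 4.0       # crop yield [ton/Ha]  (40 c/Ha)
G_ch = 13500.0  # combine weight without header [kg]
G_h = 2100.0    # header weight [kg]
G_g = 7875.0    # grain in tank, e.g. 10.5 m3 x 750 kg/m3 [kg]
T2 = 0.02       # turn time [h]
L_r = 2000.0    # rut length [m]
f_st = 0.13     # rolling-over-stubble coefficient

V = W_nom / (0.1 * B_h * U_g)                  # working speed [km/h], formula (2)
G_op = G_ch + G_h + G_g                        # operating weight [kg], formula (3)
K2 = 1.0 / (1.0 + 1e3 * T2 * W_nom / (6 * L_r * B_h * U_g))  # moves coeff., formula (4)
N_mov = G_op * f_st * V / (3.6 * 75)           # self-movement power [h.p.], formula (5)

print(round(V, 2), round(K2, 4), round(N_mov, 1))  # 5.0 0.9992 56.5
```

The sketch shows the basic trade-off: a wider header lets the combine realize its nominal productivity at a lower speed, but the heavier unit consumes more power for self-movement.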
Combine harvester working movement speed with different header width, km/h, it will be determined by the formula V¼
W0n 0; 1 Bh Ug
ð2Þ
where W_n^0 is the nominal combine harvester productivity, ton/h; B_h is the header width, m; U_g is the grain crop productivity, ton/ha.
The combine harvester operating weight, kg, is determined by

$G_{ch}^{op} = G_{ch} + G_h + G_g$   (3)
where G_ch is the combine harvester weight without header, kg; G_h is the header weight, kg; G_g is the grain weight (in the grain tank), kg.
The combine harvester working moves coefficient is determined by the formula

$K_2 = \left(1 + \frac{10^3\, T_2 W_n^0}{6\, L_r B_h U_g}\right)^{-1}$   (4)
where T_2 is the turn time, h; W_n^0 is the nominal combine harvester productivity, ton/h; L_r is the rut length, m; B_h is the header width, m; U_g is the crop productivity, ton/ha.
The power for self-movement, h.p., is calculated by the formula

$N_{mov} = \frac{G_{ch}^{op}\, f_{st}\, V}{3.6 \cdot 75}$   (5)

where G_ch^op is the combine harvester operating weight, kg; f_st is the rolling-over-stubble coefficient, 0.13; V is the operating speed, km/h;
3.6 is the km/h to m/s conversion coefficient; 75 is the kW to h.p. conversion coefficient.
The combine harvester shift productivity, ton/h, is determined by the formula

$W_{sh} = W_w^0 \left(\frac{1}{K_{sh}} + \frac{1}{K_2} - 1\right)^{-1}$   (6)

where W_w^0 is the productivity (per main-time hour), ton/h; N_en is the engine power, h.p.; K_r is the power reserve coefficient; N_mov is the power for self-movement, h.p.; N_thr^sp is the specific power for threshing, h.p.·h/ton;
K_sh is the normative shift-time utilization coefficient.
The grain weight in the combine harvester grain tank, kg, is determined by the formula

$G_g = V_{gt}\, \gamma_g$   (7)
where V_gt is the grain tank volume, m³; γ_g is the grain density, kg/m³.
The change in grain losses behind the header depending on the working speed, %, was obtained from research [3] in the form

$H_h = a\, V$   (8)
where V is the working speed, km/h; a is the coefficient of the linear dependence of grain losses behind the header on working speed, established experimentally.
The power for threshing, h.p., is determined by the formula

$N_{thr} = N_{en} K_r - N_{mov}$   (9)
where N_en is the combine harvester engine power, h.p.; K_r is the power reserve coefficient; N_mov is the power for self-movement, h.p.
The total harvesting costs, RUB/ton, are taken as the optimization criterion, taking into account the quality of header work at different speeds:

$C_h = S + F + R + D + L_h$   (10)

where S is the machine operators' salary, RUB/ton; F is the fuel cost, RUB/ton; R is the repair cost, RUB/ton; D is the depreciation cost, RUB/ton; L_h is the wastage due to grain losses behind the header, RUB/ton.
The machine operators' salary is determined by the formula
$S = \frac{T\, s}{W_{sh}}$   (11)
where s is the hourly machine operators' wage, RUB/h; T is the number of machine operators; W_sh is the combine harvester shift productivity, ton/h.
The specific fuel consumption for movement, kg/ton, is

$q_{mov} = \frac{N_{mov}}{K_2\, W_w^0}$   (12)
where N_mov is the power for movement, h.p.; W_w^0 is the productivity per hour, ton/h; K_2 is the combine harvester working strokes coefficient.
The fuel cost, RUB/ton, is determined as

$F = (q_{thr} + q_{mov})\, P_f$   (13)
where q_thr is the specific fuel consumption for threshing the bread mass, kg/ton; q_mov is the specific fuel consumption for self-movement, kg/ton;

$q_{thr} = \frac{q_{mov}\, N_{mov}}{W_{sh}}$   (14)
P_f is the diesel fuel price, RUB/kg.
The repair cost, RUB/ton, is determined by the formula

$R = \frac{(P_{ch} + P_h)\, s_r}{W_{sh}\, T_{zon}}$   (15)
where P_ch is the cost of the combine harvester without header, thous. RUB; P_h is the header cost, thous. RUB; T_zon is the zonal combine harvester workload at harvesting, h; s_r is the repair cost rate, %.
The depreciation cost, RUB/ton, is determined as

$D = \frac{(P_{ch} + P_h)\, d}{W_{sh}\, T_{zon}}$   (16)
where P_ch is the combine harvester cost, RUB; P_h is the header cost, RUB; d is the depreciation rate, %; W_sh is the shift productivity, ton/h; T_zon is the zonal combine harvester workload at harvesting, h.
The wastage due to grain losses behind the header, RUB/ton, is determined by the formula
$L_h = \frac{U_g H_h P_g}{100}$   (17)
where H_h is the grain losses behind the header, %; P_g is the commercial grain selling price, RUB/ton.
Let us assume that a farm has a combine harvester that operates in a specified work mode with given normative indicators. The manufacturer produces headers of different widths for this combine harvester model. We accept the following condition: winter wheat productivity is considered at two levels, the achieved 5.0 ton/ha and the predicted 7.0 ton/ha. The indicators are calculated taking into account certain harvesting characteristics and the combine harvester productivity; after that, the shift productivity is determined and the harvesting cost is calculated for the different header width variants. This constitutes the solution of the first task. Answering the question of how many combine harvesters are required for a farm with a certain area and winter wheat productivity, however, requires the calculation sequence of the last block. An economic-mathematical model and PC software named «ECO-Adapter» were developed to solve this task [13, 14]. The software calculates the economic efficiency of combine harvester work with different header widths at grain harvesting and supports comparative analysis of different header widths; up to four variants can be included in the comparative analysis. The software includes two reference guides, a header guide and a combine guide, containing information about the technical tools, costs and other characteristics of combine harvesters and headers. The command «Task – New» in the main window menu creates a new task, after which the window «Add new task» appears. The following additional source data must be entered there:
– fuel cost, RUB/kg;
– crop productivity, ton/ha;
– grain density, kg/m³;
– typical rut length, m;
– turn time, min;
– rolling-over-stubble coefficient;
– shift time utilization coefficient;
– hourly machine operators' wage, RUB/h;
– repair cost, %;
– depreciation cost, %;
– commercial grain selling price, RUB/ton.
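As a hedged illustration of how the calculation blocks chain together, the sketch below recomputes the per-ton harvesting cost from formulas (2)-(5), (8), (10), (11) and (15)-(17). Every numerical default is a hypothetical placeholder rather than the paper's source data, and the fuel cost F and shift productivity W_sh are supplied directly instead of being derived via formulas (6) and (12)-(14).

```python
# Hedged sketch of the cost chain (formulas 2-5, 8, 10, 11, 15-17).
# All default values are hypothetical placeholders, not data from the paper.

def harvesting_cost(Bh, Ug, Wn0=18.0, Gch=16000.0, Gh=2000.0, Gg=7800.0,
                    T2=0.02, Lr=1000.0, fst=0.13, a=0.05,
                    T=1, s=500.0, Wsh=12.0, F=150.0,
                    Pch=12000.0, Ph=1500.0, sr=6.0, d=10.0, Tzon=200.0,
                    Pg=12000.0):
    """Total harvesting cost C_h, RUB/ton, for header width Bh (m) and yield Ug (ton/ha)."""
    V = Wn0 / (0.1 * Bh * Ug)                                   # (2) working speed, km/h
    Gop = Gch + Gh + Gg                                         # (3) operating weight, kg
    K2 = 1.0 / (1.0 + (1e3 * T2 * Wn0) / (6.0 * Lr * Bh * Ug))  # (4) working moves coefficient
    Nmov = Gop * fst * V / (3.6 * 75)                           # (5) self-movement power, h.p.
    # Nmov feeds the fuel formulas (12)-(14) in the full model; F is supplied directly here.
    Hh = a * V                                                  # (8) header losses, %
    S = T * s / Wsh                                             # (11) wages, RUB/ton
    R = (Pch + Ph) * sr / (Wsh * Tzon)                          # (15) repair cost, RUB/ton
    D = (Pch + Ph) * d / (Wsh * Tzon)                           # (16) depreciation, RUB/ton
    Lh = Ug * Hh * Pg / 100.0                                   # (17) loss wastage, RUB/ton
    return S + F + R + D + Lh                                   # (10) total cost, RUB/ton
```

With these placeholder inputs, a wider header lowers the working speed (2) and hence the loss term (17), which is the qualitative trend behind the optimization results in Table 1.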
A task report is generated and displayed in Excel; it can be formed and printed with the menu command «Report – Task» in the main form. Tasks with equal crop productivity values and different headers (of different widths) aggregated with the same combine harvester are selected for comparative analysis. The selected tasks are named «VARIANT 1», «VARIANT 2», «VARIANT 3» or «VARIANT 4» by right-clicking on the required task line and choosing the variant number from the context menu. The selected variants are displayed in the lower part of the main window in a yellow rectangle. The comparative analysis report is
formed with the menu command «Report – Comparative analysis», after which it is displayed in Excel and is available for editing, saving and printing.
Header standard size optimization was made for the TORUM 740 combine harvester, manufactured by Rostselmash Combine Plant, LLC, for header widths of 6, 7 and 9 m and winter wheat productivity of 5.0 and 7.0 ton/ha. The optimization results are given in Table 1.

Table 1. Results of optimization.

No   Header width, m   Direct costs, RUB/ton
                       at 5.0 ton/ha    at 7.0 ton/ha
1    6                 877.3            923.7
2    7                 829.5            870.1
3    9                 740.3            783.8
The calculation results can be presented as a graph taking into account the entered optimal working speed range limit of 3 to 7 km/h (Fig. 2). The choice of the optimal working speed range is based on the research results reviewed in [15].
Fig. 2. Selection of the optimal width of the header, taking into account operating speed limits.
The graph shows that the optimal combine thresher workload, with minimal grain loss and crushing values, is provided by choosing the optimal header width in the crop productivity range 2.5−9.5 ton/ha, subject to the entered working speed range limit of 3−7 km/h. Under the working speed limit, the thresher workload will be optimal for a header width of 6 m at a crop
productivity of 4.6−9.8 ton/ha, for 7 m at 4.2−9.0 ton/ha and for 9 m at 3.0−7.6 ton/ha. This indicates that the 7 m and 9 m headers cover a larger crop productivity range. A forward-looking calculation for a 12 m header shows that its use with a combine harvester of 12 kg/s throughput will be optimal, at the speed-limited workload, at a crop productivity of 2.5−5.6 ton/ha; under these conditions the combine harvester fully realizes its throughput potential. To work over a larger productivity range, the engine power of the unit must be increased.
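The productivity ranges quoted above follow from inverting formula (2) at the working speed limits. The sketch below shows this inversion; the nominal productivity Wn0 = 18 ton/h is an illustrative assumption, not a figure from the paper, so the resulting numbers only approximate the reported ranges.

```python
# Invert formula (2), V = Wn0 / (0.1 * Bh * Ug), to find the yield range
# in which the working speed stays inside [Vmin, Vmax].
# Wn0 = 18 ton/h is an illustrative assumption, not a figure from the paper.

def productivity_range(Bh, Wn0=18.0, Vmin=3.0, Vmax=7.0):
    """Return (Ug_low, Ug_high), ton/ha, keeping V within the speed limits for header width Bh, m."""
    return Wn0 / (0.1 * Bh * Vmax), Wn0 / (0.1 * Bh * Vmin)
```

Wider headers shift the whole admissible yield range toward lower productivities, matching the ordering of the 6, 7, 9 and 12 m ranges reported above.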
4 Conclusions

Analysis of the obtained results allows the following conclusions. The high-efficiency combine harvester TORUM 740 should be equipped with a 9 m header for bread harvesting by direct combining at 5.0 ton/ha and 7.0 ton/ha crop productivity. This will decrease harvesting costs, in comparison with the 6.0 m and 7.0 m headers, by 11−16% at 5.0 ton/ha crop productivity and by 15% at 7.0 ton/ha.
The economic-mathematical model for choosing the optimal header width differs from previously developed models as follows:
– operational and economic indicators are taken into account when choosing the optimal header width;
– the power for self-movement, which has a significant effect on bread mass threshing, is taken into account in the calculations;
– grain losses behind the header are taken into account.
Multi-year accounting data on harvesting operations for a number of Southern Federal District farms show that costs are in the range from 748.4 to 890.5 RUB/ton, so the average calculation error does not exceed 15%. This confirms the correctness of the optimization model and its suitability for modeling the optimal header width at winter wheat harvesting for two crop productivity levels: 5 and 7 ton/ha.
Combine harvesting with the optimal header width will decrease grain losses behind the header, the grain injury rate, and the power and fuel costs of heavy harvesting units used for realizing the nominal productivity; it will also improve the machine operator's working environment. The use of the proposed economic-mathematical model for choosing the optimal header width will help to rationally choose a fleet of harvesting machines from the manufacturer's range of headers of different widths, taking into account the features and needs of the farm.
Further studies will be aimed at studying the effect of the header width on grain losses and of the spread width of crushed straw on the pass width, as well as at the development of new computer programs and test equipment.
References

1. Zhalnin, E.V.: Matematicheskoye modelirovaniye protsessov zemledel'cheskoy mekhaniki [Mathematical modeling of agricultural mechanics processes]. Tractors Agric. Mach. 1, 20–23 (2000). (in Russian)
2. Lipkovich, E.I.: Osnovy matematicheskogo modelirovaniya sistemy mashin [Fundamentals of mathematical modeling of a machine system]. Povysheniye effektivnosti uborochnykh rabot. VNIPTIMESKH, Zernograd, pp. 13–22 (1984). (in Russian)
3. Tabashnikov, A.T.: Optimizatsiya uborki zernovykh i kormovykh kul'tur [Optimization of harvesting of grain and forage crops]. Agropromizdat, Moscow, p. 159 (1985). (in Russian)
4. Kavka, M., Mimra, M., Kumhala, F.: Sensitivity analysis of key operating parameters of combine harvesters. Res. Agric. Eng. 62(3), 113–121 (2016). https://doi.org/10.17221/48/2015-rae
5. Badretdinov, I., Mudarisov, S., Lukmanov, R., Permyakov, V.: Mathematical modeling and research of the work of the grain combine harvester cleaning system. Comput. Electron. Agric. 165, 104966 (2019). https://doi.org/10.1016/j.compag.2019.104966
6. Šotnar, M., Pospíšil, J., Mareček, J., Dokukilová, T., Novotný, V.: Influence of the combine harvester parameter settings on harvest losses. Acta Technol. Agric. 21(3), 105–108 (2018). https://doi.org/10.2478/ata-2018-0019
7. Chen, J., Wang, S., Lian, Y.: Design and test of header parameter keys electric control adjusting device for rice and wheat combined harvester. Trans. Chin. Soc. Agric. Eng. 34(16), 19–26 (2018). https://doi.org/10.11975/j.issn.1002-6819.2018.16.003
8. Liu, H., Reibman, A.R., Ault, A.C., Krogmeier, J.V.: Video-based prediction for header-height control of a combine harvester. In: IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, pp. 310–315 (2019). https://doi.org/10.1109/mipr.2019.00062
9. Zhang, K., Shen, H., Wang, H., et al.: Automatic monitoring system for threshing and cleaning of combine harvester. In: IOP Conference Series: Materials Science and Engineering 452(4), 042124 (2018). https://doi.org/10.1088/1757-899X/452/4/042124
10. Shepelev, S.D., Shepelev, V.D., Almetova, Z.V., et al.: Modeling the technological process for harvesting of agricultural produce. In: IOP Conference Series: Earth and Environmental Science 115(1), 012053 (2018). https://doi.org/10.1088/1755-1315/115/1/012053
11. Almosawi, A.A.: Combine harvester header losses as affected by reel and cutting indices. Plant Archives 19, 203–207 (2019). http://www.plantarchives.org/PDF%20SUPPLEMENT%202019/33.pdf
12. Zhang, Y., Chen, D., Yin, Y., Wang, X., Wang, S.: Experimental study of feed rate related factors of combine harvester based on grey correlation. IFAC-PapersOnLine 51(17), 402–407 (2018). https://doi.org/10.1016/j.ifacol.2018.08.188
13. Reshettseva, I.A., Tabashnikov, A.T., Chaplygin, M.E.: Certificate of state registration of the computer program "ECO-Adapter" No. 2015613469. Registered in the Program Registry 17.03.2015. (in Russian)
14. Chaplygin, M.Y.: Ekonomiko-matematicheskaya model' optimizatsii tiporazmera khedera k zernouborochnomu kombaynu [Economic-mathematical model for optimizing the size of a header to a combine harvester]. Machinery and Equipment for Rural Area 2, 23–24 (2012). (in Russian)
15. Chaplygin, M.Y.: Povishenie effektivnosti ispolzovaniya zernouborochnogo kombaina putem obosnovaniya optimalnoi shirini zahvata jatki dlya uslovii yuga Rossii [Improving the efficiency of the combine harvester by justifying the optimal header width for the conditions of southern Russia]. Dissertation, Candidate of Technical Sciences. Volgograd State Agrarian University, Moscow (2015). (in Russian)
An Integrated CNN-LSTM Model for Micro Hand Gesture Recognition

Nanziba Basnin1, Lutfun Nahar1 and Mohammad Shahadat Hossain2

1 International Islamic University Chittagong, Chittagong, Bangladesh
2 University of Chittagong, Chittagong, Bangladesh
[email protected]
Abstract. Vision-based micro gesture recognition systems enable the development of HCI (Human-Computer Interaction) interfaces that mirror real-world experiences. It is unlikely that a single gesture recognition method will be suitable for every application, as each gesture recognition system depends on the user's cultural background and the application domain. This research is an attempt to develop a micro gesture recognition system suitable for the Asian cultural context. However, hands vary in shape and size, while gestures vary in orientation and motion; for accurate feature extraction, deep learning approaches are therefore considered. Here, an integrated CNN-LSTM (Convolutional Neural Network - Long Short-Term Memory) model is proposed for building a micro gesture recognition system. To demonstrate the applicability of the system, two micro hand gesture datasets, a standard and a local one, each consisting of ten significant classes, are used, and the model is tested against both augmented and unaugmented versions of these datasets. The accuracy achieved with the CNN-LSTM model is 99.0% for the standard data with augmentation and 96.1% for the local data with augmentation. On both datasets, the proposed CNN-LSTM model performs better than pre-trained CNN models, including ResNet, MobileNet, VGG16 and VGG9, as well as CNN without LSTM.

Keywords: CNN-LSTM model · Augmented dataset · Unaugmented dataset · Micro hand gesture · Standard dataset · Local dataset
1 Introduction

A gesture symbolizes a posturized instance of the body through which some information is conveyed [3]. Gestures are usually categorized as macro and micro: a macro gesture demonstrates the perpetuating motion of the hand in coordination with the body, while a micro gesture pictures the relative positions of the fingers of the hand. This research makes use of micro gestures with static images. Hand gesture recognition systems are generally used to narrow the gap in human-computer interaction (HCI). Human interaction through hand gestures sometimes involves one hand or both hands. An interactive machine which successfully mimics the natural way of human hand interaction can be developed [16]. The research presented in this paper is an attempt to
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 379–392, 2021. https://doi.org/10.1007/978-3-030-68154-8_35
N. Basnin et al.
develop a classification tool enabling the accurate recognition of both single- and double-hand gestures. Thus, this paper focuses on building an effective tool for hand gesture recognition by introducing an integrated [17] CNN-LSTM (Convolutional Neural Network - Long Short-Term Memory) model. The CNN model is considered [13] because it can handle a large amount of raw data with comparatively little pre-processing effort [2], while the LSTM can better optimize backpropagation. Therefore, the integration of CNN with LSTM [4] plays an important role in achieving better model accuracy [1]. This research is further enhanced by investigating the impact of data augmentation on the proposed CNN-LSTM model. To this end, two different datasets, standard and local, with and without augmentation, have been used to train the CNN-LSTM model. It is important to note that the local dataset contains classes of gestures made with both hands, such as 'dua', 'namaste', 'prayer' and 'handshake', which are unique in the context of the Indian subcontinent. The accuracy of the proposed CNN-LSTM model has been compared with the CNN model excluding LSTM as well as with state-of-the-art pre-trained CNN models, including ResNet, MobileNet, VGG9 and VGG16, taking account of both augmented and unaugmented datasets. Real-time validation against the local dataset has also been carried out, which is discussed in the results section. To check the overfitting and underfitting aspects of the CNN-LSTM model, four-fold cross-validation has also been carried out.
The remainder of the paper is structured as follows. Section 2 analyses works related to hand gesture recognition. Section 3 outlines the methodology undertaken to develop the model. Section 4 presents the results, while Sect. 5 concludes the paper with a discussion on future research.
2 Related Work

There exists a plethora of research in the area of hand gesture recognition systems. In [14] different classes of hand gestures were classified using CNN on augmented and unaugmented datasets; the augmented dataset produced an accuracy of 97.18%, while the unaugmented dataset produced an accuracy of 92.91%. However, the recognition of gestures made with both hands was not considered in this research. In [19] Artificial Neural Networks (ANN) were used to classify ten different categories of hand gestures with an average accuracy of 98%. Due to the low computational cost of identifying the different categories of hand gestures, this method became a good candidate for real-time execution. However, the use of ANN requires pre-processing of raw data and hence becomes computationally expensive. Another effective method [20] was proposed to recognize American Sign Language hand gestures for 26 different classes. Principal Component Analysis (PCA) was used to extract features from the dataset, and the extracted features were later fed into an ANN for training. This method produced an accuracy of 94.3%. In spite of this accuracy, the feature extraction technique applied in that paper was unable to extract features based on depth and hand direction. In [21] twenty-six classes of an American Sign Language hand gesture dataset were trained using three different CNN models, the hidden layers of each model increasing over the previous one. It was observed that increasing the number of hidden layers decreases the
recognition rate of the CNN model while increasing its run time. Consequently, testing accuracy appears to fall steeply from 91% to 83%. In another approach [24], a self-organizing and self-growing neural gas network, along with a YCbCr color-space method for hand region detection, was proposed. The data used in that paper consist of six different classes of gestures, and classification produces an accuracy of 90.45%. Although it is fast to compute, the hand region detection method sometimes produces wrong feature detections. In [27] two different datasets were utilized, namely a self-constructed dataset and the Cambridge Hand Gesture Dataset. Each dataset consists of 1500 images, categorized into five different classes. The self-constructed dataset was acquired from images of both left and right hands separately, and was preprocessed using the Canny edge method, which removed the effect of illumination. In this study, a CNN architecture of five convolution layers was used to train each dataset. Although the method produced an accuracy of 94%, it was found unsuitable for recognizing complex gestures.
It can be observed that none of the studies discussed above considered the classification of both-hand gestures or context-based gestures such as 'dua', 'namaste', 'handshake' and 'prayer', resulting in inefficient nonverbal human-machine interaction. In addition, CNN models produce better accuracy than other machine learning models such as ANN [19]. However, layering of CNNs introduces problems with saturated neurons and vanishing gradients, which can correspond to poor testing accuracy [21]. Such drawbacks of CNN models can be overcome by integrating the model with a model such as LSTM, because it reduces the saturation of neurons and overcomes the vanishing gradient problem by optimizing backpropagation.
Therefore, in this research an integrated CNN-LSTM model has been proposed, which is described in the following section.
3 Methodology

Figure 1 illustrates the main components of the methodology used in this research to identify the different classes of hand gestures mentioned previously. A brief discussion of each component is presented below.
Fig. 1. System Framework
3.1 Data Collection
As mentioned in Sect. 1, this research utilizes two datasets, namely standard and local. The standard dataset is collected from a publicly available domain [10], as shown in Fig. 2. It is used as a benchmark dataset and requires no preprocessing. Two datasets are introduced in order to highlight a comparison between the standard dataset and the locally collected dataset. Moreover, this not only helps to deduce how accuracy varies between double-hand and single-hand gestures but also shows how effective data pre-processing on the local dataset is in terms of accuracy. The size of the standard dataset is therefore kept similar to the size of the local dataset: it consists of 10,000 images divided into ten different classes of hand gestures, namely fist, thumb, palm, palm moved, okay, index, fist moved, down, C and L. Each class consists of 1000 images. The model uses 7000 images for training and 3000 images for testing. The local dataset was collected using a webcam. Ten people, with ethical approval, served as subjects to build the dataset. Likewise, the local dataset consists of 10,000 images and comprises ten gesture classes. Out of these ten classes, five are associated with double-hand gestures, as illustrated in Fig. 3. These five classes, namely dua, handshake, namaste, pray and heart, are gestures used in the Indian subcontinent. The remaining five classes are single-hand gestures: palm, fist, one, thumb and okay, as can be seen from Fig. 3. In this way, the local dataset demonstrates a heterogeneous combination of hand gesture classes which cannot be found in the standard dataset. Each class consists of 1000 images. As with the standard dataset, the model uses 7000 images from the local dataset for training and 3000 for testing.
Fig. 2. Sample of Standard Dataset
Fig. 3. Sample of Local Dataset
3.2 Data Pre-processing
Since the standard dataset is already pre-processed, it can be passed directly into the CNN-LSTM model. To pre-process the local dataset, the steps shown in Fig. 4 are followed. First, the background of the image is subtracted in order to extract the foreground. Second, grayscale conversion is applied; the single-channel property of a grayscale image helps the CNN model achieve a faster learning rate [9]. Third, morphological erosion is applied [12]. Fourth, median filtering lowers the noise in the image [28]. Finally, for the convenience of the CNN-LSTM model, the images are resized to 60 × 60 pixels.
Fig. 4. Steps of Data Pre-Processing
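The five steps above can be sketched with plain array operations. The paper's implementation uses OpenCV; the NumPy stand-ins below (a fixed background frame, 3 × 3 neighbourhoods, nearest-neighbour resizing) are assumptions chosen only to keep the sketch self-contained.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def preprocess(frame, background, size=60):
    """Apply the five pre-processing steps of Fig. 4 to an RGB frame (H x W x 3)."""
    fg = np.abs(frame.astype(float) - background.astype(float))   # 1. background subtraction
    gray = fg.mean(axis=-1)                                       # 2. grayscale conversion
    win = sliding_window_view(np.pad(gray, 1, mode="edge"), (3, 3))
    eroded = win.min(axis=(-1, -2))                               # 3. morphological erosion (3x3)
    win = sliding_window_view(np.pad(eroded, 1, mode="edge"), (3, 3))
    filtered = np.median(win, axis=(-1, -2))                      # 4. median filtering (3x3)
    rows = np.linspace(0, filtered.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, filtered.shape[1] - 1, size).round().astype(int)
    return filtered[np.ix_(rows, cols)]                           # 5. resize to size x size
```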
3.3 CNN-LSTM Integrated Model
The proposed CNN-LSTM model comprises four convolution layers, two max-pooling layers, a time-distributed flattening layer, one hidden LSTM layer and an output layer. After every two convolution layers, a dropout layer is included; another dropout layer follows the LSTM layer. The dropout layers address over-fitting [23] by deactivating nodes; for example, a dropout value of 0.25 deactivates 25% of the nodes. The first input layer consists of 32 filters with a kernel size of 5 × 5. Zero padding is added to this input layer in order to attach a border of zeros to the input volume. Each convolution layer generates a feature map which is fed into the next layer, and in each convolution layer the Rectified Linear Unit (ReLU) activation function is used. ReLU is chosen because it not only outperforms other activation functions, including sigmoid and tanh, but also avoids the vanishing gradient problem by introducing non-linearity into the linear process of convolution. A max-pooling layer is added after every two convolution layers. This layer extracts the maximum features from the feature map, fitting each feature into a reduced window of 2 × 2. This not only narrows down the parameters required to train the model but also retains the most important features in the feature map. Next, a flattening layer is introduced after the convolution layers; it flattens the image into a linear array so that it can easily be fed into the neural network. The CNN layers are integrated with an LSTM layer of 200 nodes. While the CNN layers extract features from
an image, the LSTM layer interprets those features over distributed time stamps. These interpretations give the model exposure to complex temporal dynamics of imaging [6]; as a result, the perceptual representation of images in the convolutions becomes well defined. All layers are connected to the output layer, which consists of ten nodes indicating the ten different classes of hand gestures used to train the model. A SoftMax activation function is included in this last layer to obtain a probabilistic value for each class; SoftMax provides a nonlinear variant of multinomial logistic regression. Stochastic Gradient Descent (SGD) with a learning rate of 0.01 is used as the optimizer to compile the model. SGD is computationally efficient on large datasets [7] and hence reduces model loss. Table 1 summarizes the various components of the CNN-LSTM model.
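A Keras sketch of this stack is given below. The layer order and hyperparameters follow the description above; the single-frame time dimension and the categorical cross-entropy loss are assumptions, since the paper does not state them explicitly.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(timesteps=1, num_classes=10):
    """Sketch of the CNN-LSTM stack described in Sect. 3.3 (timesteps=1 is an assumption)."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, 60, 60, 1)),
        layers.TimeDistributed(layers.Conv2D(32, 5, padding="same", activation="relu")),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D(2)),
        layers.Dropout(0.25),
        layers.TimeDistributed(layers.Conv2D(64, 3, padding="same", activation="relu")),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D(2)),
        layers.Dropout(0.25),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(200),            # temporal interpretation of the CNN features
        layers.Dropout(0.25),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```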
Table 1. CNN-LSTM integrated configuration

Model content              Details
1st convolution layer 2D   Input size 60 × 60, 32 filters of kernel size 5 × 5, zero padding, ReLU
2nd convolution layer 2D   64 filters of kernel 3 × 3, ReLU
1st max pooling layer      Pool size 2 × 2
Dropout layer              Randomly deactivates 25% of neurons
3rd convolution layer 2D   64 filters of kernel 3 × 3, same padding, ReLU
4th convolution layer 2D   64 filters of kernel 3 × 3, ReLU
2nd max pooling layer      Pool size 2 × 2
Dropout layer              Randomly deactivates 25% of neurons
Flattening layer           Time distributed
LSTM layer                 200 nodes
Dropout layer              Randomly deactivates 25% of neurons
Output layer               10 nodes, SoftMax
Optimization function      Stochastic Gradient Descent (SGD)
Learning rate              0.01
Metrics                    Loss, accuracy

3.4 System Implementation
The data collection and real-time validation of the model are carried out in the Jupyter Notebook IDE. The training module is developed in Google Colab. This platform facilitates the execution of programs at run time and supports deep learning libraries by providing access to a powerful GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit); moreover, the TPU offers higher throughput. Python is supported in both the Jupyter and Google Colab environments. The libraries required for implementation include TensorFlow, Keras, OpenCV, scikit-learn, Matplotlib, Tkinter,
PIL, Pandas and NumPy. Here, TensorFlow acts as the backend of the system, while Keras is used to build the CNN-LSTM classifier, since it provides built-in functions for layering, optimization and activation; Keras also contains advanced tools for feature extraction. OpenCV is required for image processing [18]. Scikit-learn gives access to many supervised and unsupervised algorithms. Matplotlib is used to generate the confusion matrix. Tkinter provides easy configuration tools for the user interface required for real-time data collection and validation. PIL is an integration tool for image processing, while NumPy is used to carry out operations on arrays [8]. Callbacks are used while training the model: they not only prevent the overfitting that results from too many epochs but also help avoid underfit models [11]. The callbacks implemented in this model are checkpoints, early stopping and reducing the learning rate on plateau. Checkpoints save the best models by monitoring the loss in validation. Once the model stops improving on the validation dataset, early stopping ends the training epochs. Reducing the learning rate on plateau is used when the validation loss abstains from any further improvement. Data augmentation is applied directly to the CNN-LSTM model through the Keras built-in data augmentation API, which generates data while the model is being trained [5, 29]. Parameters such as shifting, rotation, normalization and zoom are applied to induce robustness of the dataset. The augmented dataset expands the number of data points, reducing the distance between the training and testing datasets and thus decreasing overfitting on the training dataset.
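The three callbacks and the augmentation generator might be wired up as follows. The monitored quantity, patience values, filenames and augmentation ranges are illustrative assumptions, as the paper does not report them.

```python
from tensorflow.keras.callbacks import (ModelCheckpoint, EarlyStopping,
                                        ReduceLROnPlateau)
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Patience values, filename and augmentation ranges below are illustrative only.
callbacks = [
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    EarlyStopping(monitor="val_loss", patience=10),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),
]

augmenter = ImageDataGenerator(
    rotation_range=15,       # rotation
    width_shift_range=0.1,   # shifting
    height_shift_range=0.1,
    zoom_range=0.1,          # zoom
    rescale=1.0 / 255,       # normalization
)
# Typical use: model.fit(augmenter.flow(x_train, y_train), callbacks=callbacks, ...)
```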
3.5 Cross Validation
Cross-validation is carried out to assess the quality of the CNN-LSTM model when its accuracy is greater than or equal to 99.0%. Four-fold (k = 4) cross-validation is performed: the dataset is divided into four equal folds (i.e. k-1, k-2, k-3, k-4), each comprising 2500 images for testing, with the remaining 7500 images used for training. For instance, in 'experiment 1' of Fig. 5, the first fold of 2500 images is used for testing while the remaining three folds, 7500 images in total, are used for training.
Fig. 5. Demonstration of Cross Validation
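The fold arithmetic described above can be reproduced with scikit-learn's KFold (a sketch; the paper does not state which tool performed the split):

```python
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(10000)  # one index per image in the dataset
for train_idx, test_idx in KFold(n_splits=4, shuffle=False).split(indices):
    # Each experiment: 7500 images for training, 2500 for testing.
    assert len(train_idx) == 7500 and len(test_idx) == 2500
```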
4 Result and Discussion

This section investigates the application of the developed CNN-LSTM model, taking account of both the standard and local datasets introduced in Sect. 3.

4.1 CNN-LSTM Model on Standard Dataset
Fig. 6. Confusion Matrix
Fig. 7. Model Accuracy
Fig. 8. Model Loss
Figure 6 illustrates the confusion matrix generated by applying the CNN-LSTM model to the augmented standard dataset. From the results presented in this figure, the accuracy of the hand gesture class 'Palm Moved' is calculated as 99.2% by applying the equation mentioned in [26]. Figures 7 and 8 illustrate the model accuracy and model loss of the augmented standard dataset under the CNN-LSTM model. Both curves show that an early stop at 68 epochs has been achieved. The training and testing accuracies achieved are 99.1% and 99.2% respectively. Hence, it can be concluded that the model is well fit.

4.2 CNN-LSTM Model on Local Dataset
Fig. 9. Confusion Matrix
Fig. 10. Model Accuracy
Fig. 11. Model Loss
Figure 9 illustrates the confusion matrix generated by applying the CNN-LSTM model to the augmented local dataset. Taking account of the results presented in this figure, the accuracies of the hand gesture classes named 'dua', 'namaste', 'heart' and 'handshake' are calculated as 90.0%, 99.3%, 93.8% and 99.8% respectively, by applying the equation mentioned in [26]. Figures 10 and 11 illustrate the model accuracy and model loss obtained by applying the CNN-LSTM model to the augmented local dataset. It can be seen from both curves that an early stop at 54 epochs has been achieved. The training and testing accuracies achieved are 95.0% and 96.1% respectively. Hence, it can be concluded that the model is well fit.

4.3
Comparison of Results

Table 2. Classification results of the CNN-LSTM model on augmented datasets

Performance measure  Standard dataset  Local dataset
Precision            0.9909            0.8444
Recall               0.9907            0.8110
F1-score             0.9908            0.8224
Table 3. Classification results of the CNN-LSTM model on unaugmented datasets

Performance measure  Standard dataset  Local dataset
Precision            0.7298            0.6894
Recall               0.6559            0.6133
F1-score             0.6909            0.6492
To evaluate each model used in this research, performance metrics such as precision, recall, F1-score and accuracy are used. These metrics calculate performance using parameters such as TP (true positive), TN (true negative), FP (false positive) and FN (false negative). Precision tries to reduce the false positive values, in order to induce a precise model. Recall mainly focuses on reducing the false negative values. F1-score balances the false positive and false negative values. Accuracy focuses solely on the true positive and true negative values, in order to correctly identify the classes. Table 2 provides the classification results of the CNN-LSTM model for the augmented data of both the standard and local datasets. Table 3 provides the classification results of the CNN-LSTM model for the unaugmented data of both datasets. From both tables it can be observed that the values of the evaluation metrics for augmented data are much higher than those for unaugmented data. Table 4 illustrates a comparison between the proposed CNN-LSTM model and four pre-trained CNN models, namely ResNet, VGG9, VGG16 and MobileNet, on the augmented local dataset. In addition, it also illustrates the comparison between the CNN-LSTM model and a CNN model excluding LSTM on the same dataset. It can be observed from the values of the evaluation metrics that ResNet performs poorly. It can also be observed that none of the pre-trained CNN models is well fit: the testing accuracies of VGG9 and VGG16 are much higher than their training accuracies, meaning they are underfitted, while the testing accuracy of MobileNet is much lower than its training accuracy, meaning it is overfitted. The CNN model excluding LSTM also appears overfitted, since its training accuracy is much higher than its testing accuracy. Thus, it can be concluded that none of these models is well fit. On the other hand, the proposed CNN-LSTM model is well fit, since its testing and training accuracies are much closer. Table 5 illustrates the same comparison between the proposed CNN-LSTM model, the four pre-trained CNN models and the CNN model excluding LSTM on the augmented standard dataset. ResNet demonstrates poor performance in comparison to the other CNN models because of its poor training accuracy. Both MobileNet and CNN suffer from overfitting because their training accuracies are much higher than their testing accuracies. In contrast, VGG9 and VGG16 are well fit models; however, the training accuracy of VGG9 is only 64%, and although VGG16 is a well fit model, its accuracy is lower than that of the proposed CNN-LSTM model. Therefore, it can be concluded that the proposed CNN-LSTM model is well fit and its accuracy is better than that of the other models.
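The standard definitions behind these metrics can be written down directly; the sketch below uses illustrative counts for one gesture class (not values taken from the paper):

```python
def precision(tp, fp):
    # Fraction of positive predictions that are correct
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that are recovered
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that are correct
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts for one class out of 1,000 samples:
tp, tn, fp, fn = 90, 880, 10, 20
print(precision(tp, fp))          # 0.9
print(accuracy(tp, tn, fp, fn))   # 0.97
```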
Table 4. Comparison of the CNN-LSTM model with other CNN models on the augmented local dataset

CNN model  Precision  Recall  F1-score  Train Acc.  Test Acc.
ResNet     0.03       0.12    0.04      0.10        0.12
VGG9       0.11       0.10    0.10      0.27        0.72
MobileNet  0.44       0.28    0.24      0.40        0.28
CNN        0.80       0.80    0.81      0.89        0.75
VGG16      0.87       0.86    0.86      0.87        0.97
CNN-LSTM   0.83       0.81    0.82      0.96        0.95
Table 5. Comparison of the CNN-LSTM model with other CNN models on the augmented standard dataset

CNN model  Precision  Recall  F1-score  Train Acc.  Test Acc.
ResNet     0.02       0.11    0.03      0.10        0.11
VGG9       0.09       0.09    0.08      0.64        0.64
MobileNet  0.57       0.36    0.32      0.43        0.36
CNN        0.98       0.98    0.98      0.98        0.88
VGG16      0.98       0.97    0.98      0.98        0.96
CNN-LSTM   0.99       0.99    0.99      0.99        0.99

4.4 Cross Validation
Cross-validation is carried out on the augmented standard dataset in order to justify the 99.0% accuracy. The average training accuracy is 96.0%, while the average testing accuracy is around 98.0%; that is, the testing accuracy is higher than the training accuracy, which indicates that our proposed CNN-LSTM model is well fit. The average testing loss is 19.0% while the average training loss is around 26.0%, so the testing loss is lower than the training loss (Table 6).

Table 6. Cross-validation results of testing versus training accuracy and loss

Dataset  Test accuracy  Train accuracy  Test loss  Train loss
k-1      0.96           0.97            0.30       0.15
k-2      0.99           0.98            0.10       0.20
k-3      0.98           0.96            0.20       0.40
k-4      0.97           0.92            0.15       0.30
Average  0.98           0.96            0.19       0.26

4.5 Real Time Validation
The CNN-LSTM model has been validated against real-world data [25]. In doing so, real-time images of unknown hand gestures were fed into the model. From Fig. 12 it can be observed that the model can accurately identify one of the hand gesture classes, named 'dua'.
Fig. 12. Sample Clip of Real Time Validation Program
5 Conclusion and Future Work

In this research, two datasets were used, namely a standard and a local dataset, for vision-based micro hand gesture recognition. The local dataset is a heterogeneous dataset which includes five double-hand gesture classes out of its ten classes. These five classes include gestures used in the Indian sub-continent; thus, a socio-physical approach is taken into account. The CNN-LSTM model proposed in this study can accurately classify the micro gestures in both datasets. Besides, the CNN-LSTM model has been compared with four pre-trained models, namely ResNet, MobileNet, VGG9 and VGG16, and with a CNN model excluding LSTM. It has been demonstrated in the results section that the CNN-LSTM model outperforms the other pre-trained models in terms of accuracy for both the standard and local datasets. Therefore, the system recognizes not only one-hand gestures but also two-hand gestures. In future, this research aims to build a dynamic dataset involving a greater number of micro gesture classes in the context of Asia, as well as to collect images from a greater number of subjects. A few environmental factors, such as the intensity of light, a variety of image backgrounds and skin tone, can be considered while building the dataset. The added complexity of training on such a robust dataset can be overcome if the ensemble learning methodology proposed in [22] is considered, which will not only result in better accuracy but also help in selecting the optimal learning algorithm.
References 1. Ahmed, T.U., Hossain, M.S., Alam, M.J., Andersson, K.: An integrated cnn-rnn framework to assess road crack. In: 22nd International Conference on Computer and Information Technology (ICCIT), pp. 1–6. IEEE (2019) 2. Ahmed, T.U., Hossain, S., Hossain, M.S., ul Islam, R., Andersson, K.: Facial expression recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 336–341. IEEE (2019) 3. Akoum, A., Al Mawla, N., et al.: Hand gesture recognition approach for asl language using hand extraction algorithm. J. Softw. Eng. Appl. 8(08), 419 (2015) 4. Basnin, N., Hossain, M.S., Nahar, L.: An integrated cnn-lstm model for bangla lexical sign language recognition. In: Proceedings of 2nd International Conference on Trends in Computational and Cognitive Engineering (TCCE-2020) Springer Joint 8th International Conference on Informatics (2020) 5. Chowdhury, R.R., Hossain, M.S., ul Islam, R., Andersson, K., Hossain, S.: Bangla handwritten character recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 318–323. IEEE (2019) 6. Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T.: Long-term recurrent convolutional networks for visual recognition and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634 (2015)
7. Goyal, P., Dollar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677 (2017) 8. Greenfield, P., Miller, J.T., Hsu, J., White, R.L.: Numarray: a new scientific array package for python. PyCon DC (2003) 9. Grundland, M., Dodgson, N.A.: Decolorize: fast, contrast enhancing, color to grayscale conversion. Pattern Recogn. 40(11), 2891–2896 (2007) 10. Gti: Hand gesture recognition database (2018), https://www.kaggle.com/gtiupm/ leapgestrecog 11. Gulli, A., Pal, S.: Deep learning with Keras. Packt Publishing Ltd (2017) 12. Haralick, R.M., Sternberg, S.R., Zhuang, X.: Image analysis using mathematical morphology. IEEE Trans. Pattern Anal. Mach. Intell. 4, 532–550 (1987) 13. Hossain, M.S., Amin, S.U., Alsulaiman, M., Muhammad, G.: Applying deep learning for epilepsy seizure detection and brain mapping visualization. ACM Trans. Multimedia Comput. Commun. Appl. (TOMM) 15(1s), 1–17 (2019) 14. Islam, M.Z., Hossain, M.S., ul Islam, R., Andersson, K.: Static hand gesture recognition using convolutional neural network with data augmentation. In: Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 324–329. IEEE (2019) 15. Islam, R.U., Hossain, M.S., Andersson, K.: A deep learning inspired belief rulebased expert system. IEEE Access 8, 190637–190651 (2020) 16. Jalab, H.A.: Static hand gesture recognition for human computer interaction. Inf. Technol. J. 11(9), 1265 (2012) 17. Kabir, S., Islam, R.U., Hossain, M.S., Andersson, K.: An integrated approach of belief rule base and deep learning to predict air pollution. Sensors 20(7), 1956 (2020) 18. Nandagopalan, S., Kumar, P.K.: Deep convolutional network based saliency prediction for retrieval of natural images. In: International Conference on Intelligent Computing & Optimization. pp. 
487–496. Springer (2018) 19. Nguyen, T.N., Huynh, H.H., Meunier, J.: Static hand gesture recognition using artificial neural network. J. Image Graph. 1(1), 34–38 (2013) 20. Nguyen, T.N., Huynh, H.H., Meunier, J.: Static hand gesture recognition using principal component analysis combined with artificial neural network. J. Autom. Control Eng. 3(1), 40–45 (2015) 21. Oyedotun, O.K., Khashman, A.: Deep learning in vision-based static hand gesture recognition. Neural Comput. Appl. 28(12), 3941–3951 (2017) 22. Oz¨o˘g¨ur-Aky¨uz, S., Otar, B.C., Atas, P.K.: Ensemble cluster pruning via convex- concave programming. Comput. Intell. 36(1), 297–319 (2020) 23. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929– 1958 (2014) 24. Stergiopoulou, E., Papamarkos, N.: Hand gesture recognition using a neural network shape fitting technique. Eng. Appl. Artif. Intell. 22(8), 1141–1158 (2009) 25. Uddin Ahmed, T., Jamil, M.N., Hossain, M.S., Andersson, K., Hossain, M.S.: An integrated real-time deep learning and belief rule base intelligent system to assess facial expression under uncertainty. In: 9th International Conference on Informatics, Electronics & Vision (ICIEV). IEEE Computer Society (2020) 26. Wang, W., Yang, J., Xiao, J., Li, S., Zhou, D.: Face recognition based on deep learning. In: International Conference on Human Centered Computing. pp. 812– 820. Springer (2014)
27. Yingxin, X., Jinghua, L., Lichun, W., Dehui, K.: A robust hand gesture recognition method via convolutional neural network. In: 6th International Conference on Digital Home (ICDH), pp. 64–67. IEEE (2016) 28. Zhu, Y., Huang, C.: An improved median filtering algorithm for image noise reduction. Phys. Procedia 25, 609–616 (2012) 29. Zisad, S.N., Hossain, M.S., Andersson, K.: Speech emotion recognition in neurological disorders using convolutional neural network. In: International Conference on Brain Informatics. pp. 287–296. Springer (2020) 30. Zivkovic, Z., Van Der Heijden, F.: Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recogn. Lett. 27(7), 773–780 (2006)
Analysis of the Cost of Varying Levels of User Perceived Quality for Internet Access

Ali Adib Arnab1(B), Sheikh Md. Razibul Hasan Raj1, John Schormans2, Sultana Jahan Mukta3, and Nafi Ahmad2

1 University of Global Village, Barisal, Bangladesh
[email protected]
2 Queen Mary University of London, London, UK
3 Islamic University, Kushtia, Bangladesh
Abstract. Quality of Service (QoS) metrics deal with network quantities, e.g. latency and loss, whereas Quality of Experience (QoE) provides a proxy metric for end-user experience. Many papers in the literature have proposed mappings between various QoS metrics and QoE. This paper goes further in providing analysis for QoE versus bandwidth cost. We measure QoE using the widely accepted Mean Opinion Score (MOS) rating. Our results naturally show that increasing bandwidth increases MOS. However, we extend this understanding by providing analysis for internet access scenarios, using TCP, and varying the number of TCP sources multiplexed together. For these target scenarios our analysis indicates what MOS increase we get by further expenditure on bandwidth. We anticipate that this will be of considerable value to commercial organizations responsible for bandwidth purchase and allocation. Keywords: Mean Opinion Score (MOS) · Quality of Experience (QoE) · Bandwidth · Bandwidth cost · Quality of Service (QoS)
1 Introduction
Quality of Experience (QoE) has a significant but complex relationship with Quality of Service (QoS) and its underlying factors [1]. The Mean Opinion Score (MOS) ranges from 1 to 5 (Bad to Excellent) and represents QoE [2]. Although considerable work has been carried out in papers like [3–5], in this paper we consider QoE within a budget, using numerous metrics such as PLP, bandwidth, round trip time, TCP sending rate factor, packet buffer length and packet bottleneck capacity. From curve fitting, we obtained an analytical expression relating bandwidth and bandwidth cost. The goodness of fit is assessed from the SSE, R-square, adjusted R-square and RMSE. Consequently, we found one equation relating MOS and bandwidth and another relating bandwidth and bandwidth cost. The analysis has been performed multiple times, varying the number of TCP sources. The major objective of this research is to identify the mathematical relationship between MOS and bandwidth cost.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 393–405, 2021. https://doi.org/10.1007/978-3-030-68154-8_36
2 Related Work
Reliable bandwidth delivered over the Internet Protocol is a pivotal requirement for advanced services, which also rely on the reliability amplification arranged by TCP [6]. That paper identifies QoS (Quality of Service) as a widely considered issue worldwide, aiming to offer more reliable bandwidth; since network traffic is largely based on TCP, which keeps increasing its sending rate, packet loss can result, and this is analyzed during QoS model improvement [7]. The author implies it is also possible for bottleneck links to have high utilization without any packet loss. The paper by Roshan et al. [8] evaluates the relationship between the QoS metric packet loss probability (PLP) and user-perceived Quality of Experience (QoE) for video on demand (VoD). QoS has major implications from a policy standpoint [9]. The authors describe a present situation where users acquire internet service at a fixed price: a simple monthly subscription fee, or a price according to connection time, is common [10]. Dial-up modem, Ethernet, cable Ethernet or DSL are used to provide bandwidth. It is clear from the authors' analysis that either no bandwidth guarantee or only statistical bandwidth guarantees can be offered. Conventional Internet applications mainly use TCP as their transport layer protocol [11]. We can understand from that paper that packet loss is used as a sign of congestion: as soon as the sender experiences network congestion, TCP responds quickly by reducing its transmission rate multiplicatively [12]. The UK is ranked third in terms of the services and metrics included in this analysis, followed by France and Germany [13]. In that report, the UK occupies third place in a table of the weighted average and lowest available basket prices across the services covered in analyzing bandwidth cost at varying levels.
The USA is the most expensive regarding both average and lowest available prices across all the services, while France is the least expensive. We can also learn that the UK's highest ranking is for mobile phone service price and triple-play bundle price, in which category the UK is ranked second, followed by France. There are numerous analyses of QoE, QoS and their relationship with bandwidth, whereas our paper provides the relationship between QoE and bandwidth cost. This can cover the business aspects of many organizations. It also gives a proper idea of how much QoE can be achieved within an allocated budget, which has not yet been discussed in the papers mentioned.

2.1 Abbreviations and Acronyms
Mean Opinion Score and Its Indicator. Mean Opinion Score is broadly known as MOS and has become a widespread perceived media quality indicator [12]. MOS (also redundantly known as the MOS score) is one way of classifying the quality of a phone call. The score is given on a five-point scale, shown in Table 1 below:

Table 1. MOS score vs performance

MOS score  Performance
5          Excellent
4          Good
3          Fair
2          Poor
1          Bad

A MOS of 4.0 or higher is toll quality. Once within the building, enterprise voice quality patrons generally expect constant quality while employing their telephone [14].
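The five-point scale in Table 1 maps naturally to a small lookup; a sketch (the helper names are illustrative, not from the paper):

```python
def mos_label(score):
    """Map a MOS score (rounded to the nearest integer) to its
    Table 1 performance label."""
    labels = {5: "Excellent", 4: "Good", 3: "Fair", 2: "Poor", 1: "Bad"}
    return labels[round(score)]

def is_toll_quality(mos):
    # A MOS of 4.0 or higher is considered toll quality.
    return mos >= 4.0

print(mos_label(3.2))         # Fair
print(is_toll_quality(4.2))   # True
```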
3 Method
Our first requirement is to determine PLP and MOS for different numbers of TCP sources, round trip times, bottleneck capacities (in packets) and buffer lengths. We take buffer length values from 10 to 1000 [8]. We then obtained a MOS vs bandwidth graph for different settings of buffer length. Different countries pay different amounts of money for internet access; it largely depends on the internet providers, facilities, availability of internet access, networking and telecommunication products, infrastructure, etc. For example, someone in North America may not be paying the same as someone in Africa. We considered values for the UK, which give us a range of bandwidths and the estimated average cost for each bandwidth. From our analysis we can obtain a graph of bandwidth against bandwidth cost. Our goal is to relate the MOS score and bandwidth cost analytically. The next step was to use the existing formulas for PLP and MOS again. Initially we used bottleneck capacity as one of the parameters to determine PLP. We obtained a formula relating MOS and bandwidth cost, plotted various bandwidth cost values and evaluated MOS scores against those values, obtaining a curve similar to the one previously obtained for MOS vs bandwidth. If we want to differentiate between big organizations and small family houses, we need to modify our formula for MOS by changing the number of TCP sources: in big organizations the number of TCP sources will be large, while for a small household it can be much smaller. We therefore modify the number of TCP sources in our initial formula for MOS and obtain one graph of MOS vs bandwidth cost for a larger number of TCP sources and one for a smaller number of TCP sources.

3.1 Determining Parameter Values for Experimental Analysis
An existing formula for the PLP of TCP has been used to determine the Mean Opinion Score (MOS) [15]. MOS was plotted against bandwidth, and bandwidth has been plotted against cost. The relationship between MOS and cost has then been determined from the MOS vs bandwidth and bandwidth vs cost plots.
To determine the values of MOS we require PLP values. According to the analytical expression and performance evaluation of TCP packet loss [15], the packet loss probability is given by:

P = 32N² / (3b(m + 1)²(C·RTT + Q)²)    (1)

where
N = number of TCP sources = 50
C = bottleneck capacity in packets per second = 12500
b = number of packets acknowledged by an ACK packet = 1
m = factor by which the TCP sending rate is reduced = 1/2
RTT = round trip time = 0.1 s
Q = packet buffer length

According to [10] we can obtain MOS from the PLP, as shown in Eq. 2 below:

MOS = 1.46·exp(−44·PLP) + 4.14·exp(−2.9·PLP)    (2)
Using different buffer lengths we get the values for PLP and MOS shown in Table 2; the Q value is taken from 10 to 1000. Keeping the packet buffer length fixed and increasing the bandwidth by 15 Mbps each time, from 15 to 120 Mbps, we evaluate PLP and MOS in Table 3. Bandwidth and cost pricing differ worldwide. We can get an estimate of bandwidth vs cost for the United Kingdom, which is used as sample data here as the unit of analysis [13]:

bandwidth = [10 30 50 100 200 400 600 800 1000] Mbps
cost = [20 37 40 42 43 45 46 46 46] GBP per month
Table 2. Q, PLP and MOS values for different Q

Q     PLP           MOS
10    7.465263e−01  4.750997e−01
100   6.503074e−01  6.280123e−01
200   5.637028e−01  8.073142e−01
400   4.353297e−01  1.171447e+00
600   3.462922e−01  1.516566e+00
800   2.820191e−01  1.827308e+00
1000  2.341107e−01  2.099708e+00
3.2 Determining Formula for Bandwidth and Bandwidth Cost by Curve Fitting
To obtain the bandwidth vs cost curve, we need a specific formula relating bandwidth and cost, which is obtained by curve fitting:

f(x) = a·x^b    (3)
Table 3. Q, bandwidth, PLP and MOS values with the same Q

Q   Bandwidth (Mbps)  PLP           MOS
10  15                6.503074e−01  6.280123e−01
10  30                1.753233e−01  2.490591e+00
10  45                7.995852e−02  3.326486e+00
10  60                4.556652e−02  3.824152e+00
10  75                2.939265e−02  4.202314e+00
10  90                2.051913e−02  4.492741e+00
10  105               1.513212e−02  4.712481e+00
10  120               1.161832e−02  4.878501e+00
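Equations (1) and (2) can be checked numerically against the first row of Table 3 (Q = 10, bandwidth 15 Mbps, hence C = 15·10⁶/12000 packets per second); a sketch:

```python
import math

def plp(N=50, C=12500.0, b=1, m=0.5, rtt=0.1, Q=10.0):
    """Packet loss probability, Eq. (1)."""
    return 32 * N**2 / (3 * b * (m + 1)**2 * (C * rtt + Q)**2)

def mos(p):
    """Mean Opinion Score from PLP, Eq. (2)."""
    return 1.46 * math.exp(-44 * p) + 4.14 * math.exp(-2.9 * p)

# First row of Table 3: Q = 10 and a 15 Mbps link
p = plp(C=15e6 / 12000, Q=10)
print(p, mos(p))   # approximately 0.6503 and 0.6280
```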
Coefficients (with 95% confidence bounds): a = 27.13 (22.65, 31.61), b = 0.0986 (0.06944, 0.1279). The values of a and b were provided by the curve fitting as the best fit for the graph we obtained for bandwidth vs cost. Goodness of fit: SSE 38.48, R-square 0.9589, adjusted R-square 0.953, RMSE 2.345. See Table 4 below, where the goodness-of-fit parameters are shown. From Tables 5 and 6 we see that the fitting method is Nonlinear Least Squares, which is selected automatically according to the characteristics and shape of the curve and the number of points present. Among Interpolant, Linear Fitting, Polynomial, Rational, Sum of Sines, Smoothing Spline, Weibull, Exponential, Gaussian and Fourier fits, we selected Power1 since it gives better visualization and better goodness-of-fit prediction. The prediction includes calculation of the SSE, R-square, adjusted R-square and RMSE, and of the coefficients with confidence bounds. The robust option is enabled, and Least Absolute Residuals (LAR) shows more accurate results than Bisquare: LAR mainly focuses on the residuals themselves rather than their squares, while Bisquare reduces the weight of the sum of squares, which is necessary in our curve's case.

Table 4. Goodness-of-fit parameters for curve fitting

Fit name  Data               Fit type  SSE      R-square  DFE  Adj R-sq  RMSE    #Coeff
Fit 1     Cost vs bandwidth  Power1    38.4806  0.9589    7    0.9530    2.3446  2
Table 5. Method and algorithm for curve fitting

Method         NonlinearLeastSquares
Robust         LAR
Algorithm      Trust Region
DiffMinChange  1.0e-8
DiffMaxChange  0.1
MaxFunEvals    600
MaxIter        400
TolFun         1.0e-6
TolX           1.0e-6
Table 6. Coefficient start points and bounds for curve fitting

Coefficient  StartPoint  Lower  Upper
a            9.2708      −Inf   Inf
b            0.3019      −Inf   Inf
We take bandwidth as 'x', which lies on the X axis, and bandwidth cost as 'f(x)', which lies on the Y axis. We can acquire the values for 'a' and 'b' from the curve fitting tool, as visible in Fig. 1. By substituting these into Eq. (3) we obtain Eq. (4) below:

Cost = 27.13 · bandwidth^0.0986    (4)
Fig. 1. Setup for applying Curve fitting formula from Cost vs bandwidth to obtain a formula for MOS vs bandwidth cost
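Equation (4) and its inverse (bandwidth as a function of cost, which is used in Sect. 3.3) can be evaluated directly; a sketch with the fitted coefficients a = 27.13 and b = 0.0986:

```python
A, B = 27.13, 0.0986   # fitted coefficients from Eq. (4)

def cost_of_bandwidth(bw_mbps):
    """Monthly cost (GBP) for a given bandwidth, Eq. (4)."""
    return A * bw_mbps ** B

def bandwidth_of_cost(cost):
    """Inverse of Eq. (4): bandwidth (Mbps) obtainable for a given cost."""
    return (cost / A) ** (1 / B)

print(cost_of_bandwidth(100))   # approximately 42.7 GBP per month
```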
3.3 Implementing MOS and Cost Relationship by Eliminating Bandwidth from Bandwidth vs Cost Formula
By taking different values of cost we can get different values of bandwidth, and by substituting bandwidth in Eq. (6) we can get corresponding values of cost and MOS. The MATLAB code for MOS vs bandwidth cost is shown in Fig. 2. Bottleneck capacity is given by Eq. (5) below:
Fig. 2. MOS vs bandwidth cost relationship
C = (bandwidth · 1000000) / 12000    (5)
MOS and Bandwidth. From Eqs. (1), (2) and (5), with Q = 10 and bandwidth BW expressed in Mbps, we obtained the relationship between MOS and bandwidth, which is Eq. (6):

MOS = 1.46·exp(−44·P(BW)) + 4.14·exp(−2.9·P(BW)),
P(BW) = 11851.851 / (0.01·(BW·1000/12)² + 2·(BW·1000/12) + 100)    (6)
MOS and Bandwidth Cost. If we put bandwidth cost in place of bandwidth with the help of Eq. (4), i.e. BW = (Cost/27.13)^(1/0.0986), we obtain the following relationship, which is Eq. (7):

MOS = 1.46·exp(−44·P(Cost)) + 4.14·exp(−2.9·P(Cost)),
P(Cost) = 11851.851 / (0.01·(83.33·(Cost/27.13)^(1/0.0986))² + 2·(83.33·(Cost/27.13)^(1/0.0986)) + 100)    (7)

3.4
MOS and Bandwidth Cost
To evaluate MOS with different numbers of TCP sources we changed the value of N. We took two sample values of N, 80 and 500. 80 TCP sources mainly represents a small building or bandwidth used for family purposes, while 500 TCP sources represents bigger companies and organizations. To get the MOS and cost relationship, we took N = 80 and N = 500 instead of 50, which gives a different PLP and a different MOS. The bandwidth and cost relationship remains the same as before, because it does not depend on the number of TCP sources. We were thus able to obtain different MOS vs bandwidth and MOS vs cost formulas and outputs for the different numbers of TCP sources. For N = 80, the numerator constant 32N²/(3b(m + 1)²) in Eq. (1) becomes 30340.740, and the MOS and cost relationship is obtained as Eq. (8):

MOS = 1.46·exp(−44·P(Cost)) + 4.14·exp(−2.9·P(Cost)),
P(Cost) = 30340.740 / (0.01·(83.33·(Cost/27.13)^(1/0.0986))² + 2·(83.33·(Cost/27.13)^(1/0.0986)) + 100)    (8)
Similarly, for N = 500 the numerator constant becomes 1185185.2, and we can obtain Eq. (9):

MOS = 1.46·exp(−44·P(Cost)) + 4.14·exp(−2.9·P(Cost)),
P(Cost) = 1185185.2 / (0.01·(83.33·(Cost/27.13)^(1/0.0986))² + 2·(83.33·(Cost/27.13)^(1/0.0986)) + 100)    (9)
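Equations (7)-(9) differ only in the numerator constant 32N²/(3b(m + 1)²); a sketch evaluating MOS against monthly cost for the three values of N considered (the function name is illustrative):

```python
import math

def mos_from_cost(cost, N):
    """MOS as a function of monthly bandwidth cost, per Eqs. (7)-(9),
    with Q = 10, RTT = 0.1 s, b = 1, m = 1/2 as in Sect. 3.1."""
    bw = (cost / 27.13) ** (1 / 0.0986)   # invert Eq. (4), Mbps
    C = bw * 1e6 / 12000                  # Eq. (5), packets per second
    plp = (32 * N**2 / (3 * 1 * 1.5**2)) / (0.1 * C + 10) ** 2
    return 1.46 * math.exp(-44 * plp) + 4.14 * math.exp(-2.9 * plp)

for n in (50, 80, 500):
    print(n, round(mos_from_cost(40, n), 2))
```

For a fixed N the MOS rises monotonically with cost, and at a given cost fewer multiplexed sources yield a higher MOS, which matches the discussion in Sect. 4.5.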
4 Results and Testing

4.1 Plotting MOS vs Bandwidth (Bps) with Different Packet Buffer Lengths
Our initial formula for PLP provides a relationship between packet loss probability and various parameters, including the number of TCP sources, bottleneck capacity, round trip time, number of packets acknowledged by an ACK packet, factor by which the TCP sending rate is reduced, and packet buffer length. We can determine the PLP by taking sample data for these parameters. The MATLAB code and formula for PLP are discussed in Sect. 3.1. We then calculated MOS from PLP. Figure 3 shows MOS vs bandwidth (Mbps) with different packet buffer lengths.
Fig. 3. MOS vs bandwidth (Bps) with different packet buffer lengths
4.2 Plotting of MOS vs Bandwidth (Mbps) with Constant Packet Buffer Length Output

If we keep changing the buffer length, it is difficult to evaluate how MOS changes with bandwidth alone. So, keeping the buffer length fixed at Q = 10, we obtain a MOS vs bandwidth curve. It is quite evident from Fig. 4 that as we increase bandwidth, MOS also increases. As a sample, when the bandwidth is 50 Mbps, MOS is approximately 3.5; when the bandwidth is 100 Mbps, MOS is approximately 4.5; and when the bandwidth is 150 Mbps, MOS is close to 5.

4.3
Plotting of Bandwidth vs Bandwidth Cost Relationship
We took several different data points for bandwidth cost in the UK. Looking at Fig. 5, we can see that initially the cost increases quickly within a limited range of bandwidth: even for 20 Mbps, the price is close to £35 per month. The rate increases until 100 Mbps; from the graph, a customer has to pay nearly £43 per month for 100 Mbps. Beyond 100 Mbps the cost increment is very slow: from 100 Mbps to 1000 Mbps the cost only increases from £43 to £46 per month, which is remarkably little. So we can draw conclusions about how much it is worth spending on bandwidth before we run into the law of diminishing returns.
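The diminishing-returns behaviour can be read straight off the sampled UK price data from Sect. 3.1; a sketch:

```python
bandwidth = [10, 30, 50, 100, 200, 400, 600, 800, 1000]   # Mbps
cost      = [20, 37, 40,  42,  43,  45,  46,  46,   46]   # GBP per month

# Cost increase over the first decade of bandwidth (10 -> 100 Mbps)
low_rise = cost[bandwidth.index(100)] - cost[bandwidth.index(10)]
# Cost increase over the next decade (100 -> 1000 Mbps)
high_rise = cost[bandwidth.index(1000)] - cost[bandwidth.index(100)]
print(low_rise, high_rise)   # 22 4
```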
Fig. 4. MOS vs bandwidth (Mbps) with same packet buffer length
Fig. 5. Bandwidth vs bandwidth cost relationship
Fig. 6. MOS vs bandwidth cost relationship
4.4 Plotting of MOS vs Bandwidth Cost Relationship Output

Initially MOS is very low at low cost; this indicates that there is a certain amount of money a customer needs to pay initially to get internet access. According to Fig. 6, a customer paying £38 obtains a quality of experience of about MOS 2, whereas a customer paying £45 receives a far better quality of experience, with MOS close to 5. The experiment makes sense if we compare broadband prices in the UK. For example, the broadband provider 'Now Broadband' offers 11 Mbps for £18 per month in its 'Brilliant Broadband' plan, while the same provider offers 36 Mbps for £24 per month in its 'Fab Fibre' plan. So the bandwidth increases more than threefold while the cost only increases by £6 [16].

4.5
Plotting of MOS and Bandwidth Cost Output with Different TCP Sources
Initially we took the number of TCP sources to be 50. If we change the number of TCP sources we obtain different outputs and results. There are two output graphs in Fig. 7: the first is calculated by taking 80 TCP sources and the second by taking 500 TCP sources. If we take more TCP sources, the quality increases more rapidly with the price increase compared to taking fewer TCP sources. Big organizations usually have many TCP sources, and so they have the luxury of getting a better quality of experience at the same cost, but only after crossing the initial price barrier. In a small household, fewer TCP sources are most likely used; these also experience a rise in MOS, providing good quality within a price range, but less than that seen in big organizations.

Fig. 7. MOS vs bandwidth cost output with different TCP sources.
5 Discussion and Further Work
Prior work in the field has shown that QoE has a complex relationship with QoS factors such as packet loss probability (PLP), delay and delay jitter. Furthermore, contemporary analyses have indicated that the relationship between these QoS factors and QoE can vary significantly from application to application. In this paper we take prior analyses of the relationship between the key QoS metric of packet loss probability and QoE and target an internet access scenario. We use this relationship to show how QoE (measured as MOS) varies as more money is spent on the bandwidth of the internet access link. Our results target two different scenarios: a small number of multiplexed TCP sources and a relatively large number of multiplexed TCP sources. We show that the increase in MOS is not at all linear in the spend on bandwidth, and by considering these two different scenarios we are able to resolve the highly non-linear fashion in which MOS increases with spend on bandwidth.
References
1. Quality of Experience Paradigm in Multimedia Services. ScienceDirect (2017)
2. Streijl, R.C., Winkler, S., Hands, D.S.: Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives. SpringerLink (2014)
3. Ahmad, N., Schormans, J., Wahab, A.: Variation in QoE of passive gaming video streaming for different packet loss ratios. In: QoMEX 2020 - The 12th International Conference on Quality of Multimedia Experience, Athlone (2020)
4. Ahmad, N., Wahab, A., Schormans, J.: Importance of cross-correlation of QoS metrics in network emulators to evaluate QoE of video streaming applications. Bordeaux (2020)
5. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent computing & optimization. SpringerLink (2018)
6. Camp, L.J., Gideon, C.: Limits to Certainty in QoS Pricing and Bandwidth. 22 July 2002. https://dspace.mit.edu/handle/1721.1/1512. Accessed 2 May 2020
7. Xiao, X.: Technical, commercial and regulatory challenges of QoS. Elsevier/Morgan Kaufmann, Amsterdam, p. 30; Zarki, M.: QoS and QoE (2008)
8. Roshan, M., Schormans, J., Ogilvie, R.: Video-on-demand QoE evaluation across different age-groups and its significance for network capacity. EAI Endorsed Transactions on Mobile Communications and Applications (2018)
9. Gideon, C.: Limits to Certainty in QoS Pricing and Bandwidth (2002). http://hdl.handle.net/1721.1/1512. Accessed 11 Aug 2019
10. Internet Cost Structures and Interconnection Agreements. The Journal of Electronic Publishing (1995)
11. Joutsensalo, J., Hämäläinen, T.D., Siltanen, J., Luostarinen, K.: Delay guarantee and bandwidth allocation for network services. In: Next Generation Internet Networks (2005)
12. Aragon, J.C.: Analysis of the correlation between packet loss and network delay and their impact in the performance of surgical training applications. Semantic Scholar (2006)
13. Ofcom: International Communications Market Report 2017. London (2017). https://www.ofcom.org.uk/research-and-data/multi-sector-research/cmr/cmr2017/international. Accessed 7 Aug 2019
14. Expert System: What is Machine Learning? A definition - Expert System (2017). https://www.expertsystem.com/machine-learning-definition/. Accessed 6 Aug 2019
15. Bisio, I., Marchese, M.: Analytical expression and performance evaluation of TCP packet loss probability over geostationary satellite. IEEE Commun. Lett. 8(4), 232–234 (2004)
16. Cable.co.uk: Best Broadband Deals August 2019 – Compare Broadband Offers (2019). https://www.cable.co.uk/broadband/. Accessed 16 Aug 2019
Application of Customized Term Frequency-Inverse Document Frequency for Vietnamese Document Classification in Place of Lemmatization

Do Viet Quan and Phan Duy Hung
FPT University, Hanoi, Vietnam
[email protected], [email protected]
Abstract. Natural language processing (NLP) is a problem that attracts much attention from researchers. This study analyzes and compares a different method to classify Vietnamese text sentences or paragraphs into different categories. The work applies a sequence of data pre-processing techniques and customized learning models and methods before using Term Frequency-Inverse Document Frequency (TF-IDF) for model training. This classification model could contribute positively to many Vietnamese text-analysis based businesses, such as social networks, e-commerce, or data mining in general. The problem’s challenge lies in two main aspects: the Vietnamese language itself and the current state of NLP research for the Vietnamese language. The paper utilizes the strengths of many different classification methods to provide better accuracy in text classification.

Keywords: NLP · Lemmatization · Text-classification · Vietnamese · TF-IDF · POS-tag
1 Introduction

Vietnamese is a special language in both how it was formed and developed and how it is expressed and used. The language itself is ambiguous, and there are various reasons why Vietnamese is not a simple language to be processed by a typical NLP model such as a deep learning model [1]; thus we need customization. As a historical result, Vietnamese vocabulary is largely structured from words derived from Chinese, notably in the scientific and political domains, namely Han-Viet words (Vietnamese words taken from Chinese since the Han-Dynasty era), which make up roughly 70% of the vocabulary [2]. Moreover, because of French colonization, Vietnamese took in numerous loanwords from French in multiple forms. Recently, many words from English have joined the Vietnamese vocabulary family, either translated word-by-word or used directly as part of Vietnamese [2]. NLP research in Vietnam has been greatly delayed compared to what it could have reached, as it has been conducted without a common academic direction; there are some notable projects, but they still lack a systematic approach to some important issues (text summarization, context-based intelligent search, etc.).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 406–417, 2021. https://doi.org/10.1007/978-3-030-68154-8_37
The lack of
inheritable research also hinders those who follow from continuing it. Although some methods have been implemented for Vietnamese, NLP in general and text classification in particular still face several issues and mostly work well only on formal documents. Following from this vocabulary issue, we state the reasons why Vietnamese cannot efficiently utilize typical NLP pre-processing methods [3]; they are listed in Table 1.

Table 1. Reasons why Vietnamese is relatively easy for humans to learn, but causes difficulties for Machine Learning.

Short words | For human: Easy to learn | For NLP efficiency: Repetitive structure, hence many synonyms and homophones
No gender | Easy to learn | Inefficient lemmatization
No plural, no cases | Easy to listen/speak, hard to … | No lemmatization needed, thus no lemmatization
No articles | Easy to listen/speak, hard to … | No lemmatization needed
No verb conjugation | Easy to learn | Difficult to differentiate between tenses or nouns’ genres
Simple verb tenses | Easy to speak, hard to comprehend | Shifts the complicating task(s) onto the listening side instead of the speaking side, thus increasing NLP difficulties
Tenses are optional | Easy to speak, hard to comprehend | Shifts the complicating task(s) onto the listening side instead of the speaking side, thus increasing NLP difficulties
No agreement | Faster to learn, less knowledge to remember | No explicit relationship in terms of data between words in a sentence
Single syllable words | Easy to understand compound words if one understands each element | Increases the difficulty of distinguishing whether a word is a compound word or multiple separate words
Homophones (due to the single-syllable characteristic, homophones in Vietnamese are also words that are written identically) | Hard to distinguish | Hard to distinguish
These characteristics greatly reduce the usefulness of the typical stemming and lemmatization methods, and they also affect the accuracy of the POS-tagging task. Many
Vietnamese NLP experts have been working to address these steps with different approaches. This project does not have the ambition to overcome all these challenges by itself; we implement some works of other experts before adding our contribution. Our work concerns the differentiating potential of synonyms and classifiers in Vietnamese. Ho Le et al. conducted a systematic study using vectorized TF-IDF. The authors prepared their dataset for training and achieved an impressive average result of 86.7%, with a peak result of 92.8%. We follow their team’s footsteps [4] in conducting this paper. Using several different libraries, our average result provides comparable performance with faster processing speed and a minor improvement in accuracy. The following sections are ordered as follows: Sect. 2 describes our methodology; Sect. 3 explains and describes the pre-processing step; we provide some case studies in Sect. 4; and finally, Sect. 5 is reserved for conclusions and future work.
2 Methodology

As mentioned in Sect. 1, lemmatization for Vietnamese is not as effective as the same method applied to Indo-European languages. The Vietnamese language consists mostly of single-syllable words, which have meaning on their own depending on context. Additionally, when Vietnamese needs to express a more complicated meaning, it uses compound words, which are nevertheless still written as two or more single-syllable words next to each other (e.g., “anh em” is compounded from “anh” – big brother – and “em” – little brother – and means “brothers”) [5]. This explains why homophones in Vietnamese are more common than in Indo-European languages: Vietnamese syllables consist of 5 components: initial sounds, rhymes (main sounds, ending sounds and accompaniments) and tones [2]. However, unlike the Indo-European languages, since Vietnamese does not inflect, there is only one type of homonym, which makes homophones written and spoken identically. This phenomenon gives Vietnamese a high number of words that have different meanings but are written identically (for single syllables) or partially identically (for units in a compound word); for example, “giáp” may mean “12 years”, “armor”, or “beside” depending on context, and “tiền tuyến” (frontline) has one unit in common with “tiền bạc” (money) [5]. With the rules for combining phonetics, Vietnamese can theoretically create over 20,000 different syllables, but in practice it uses only about 6,000. Digging deeper into the study of Ho Le et al. [4], we reckon that more than 20% of the Vietnamese vocabulary is occupied by homophones, among which 73.6% are single-syllable words and 25.9% are compound words created from 2 single-syllable words, together covering 99.56% of all cases in the Vietnamese Dictionary, edition of 2006, as stated in Table 2.
Table 2. Number of homophone cases in Vietnamese, according to the Vietnamese Dictionary (2006) [5].

Single-syllable words: coincident 1913, same origin 807 – 2720 cases
Compound words from 2 units: coincident 282, same origin 673 – 955 cases
Compound words from 3 units: coincident 04, same origin 03 – 07 cases
Compound words from 4 units: coincident 02, same origin 07 – 09 cases
Total: 3691 cases
Looking further into the characteristics of these cases in [2] Sect. 2.1.2, we gathered the statistics of these cases by Parts of Speech (POS), as presented in Table 3.
Table 3. Number and percentage of homophones distributed over POS.

Same POS (202 words, 22.69%): Noun 111, Verb 38, Adjective 53
Different POS, 2 POS (Noun–Verb, Noun–Adjective, Adjective–Verb): 279 (31.35%), 245 (27.53%), 114 (12.81%)
Different POS, 3 POS: 36 (4.04%)
Others: 14 (1.57%)
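As a quick consistency check (our own arithmetic, not a computation from the paper), the case counts in Table 2 add up to the reported totals, and each percentage in Table 3 equals its count divided by an overall total of 890 homophone words, a total implied by the percentages rather than stated explicitly:

```python
# Consistency check for Tables 2 and 3 (arithmetic only; the 890 grand
# total is implied by the listed percentages, not printed in the tables).
assert 1913 + 807 == 2720                  # single-syllable: coincident + same origin
assert 282 + 673 == 955                    # 2-unit compounds
assert sum([2720, 955, 7, 9]) == 3691      # grand total reported in Table 2

table3 = {202: 22.69, 279: 31.35, 245: 27.53, 114: 12.81, 36: 4.04, 14: 1.57}
total = sum(table3)                        # sum of the counts (dict keys)
assert total == 890
for count, pct in table3.items():
    # Each listed percentage matches count/total to within rounding.
    assert abs(count / total * 100 - pct) < 0.01
print("all Table 2/3 checks pass")
```

Note that 100% − 22.69% = 77.31%, matching the share of different-POS homophones cited in the text below.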
Moreover, most Vietnamese homophones have a small number of different meanings (01–06 meanings). Among these, 77.31% (rounded) are homophones with different POS, meaning that if a word is detected to have different POS in different contexts, it is highly likely to have a different meaning. This effectively replaces traditional lemmatization methods, specifically in text classification, since two words can be synonyms yet be used in two different genres of text (e.g., “sư phụ” as master in ancient times and “thầy giáo” as teacher in more modern times have a similar meaning, but exist and are used in two totally different text genres) [6]. Following these conclusions, we decided to combine a word with its POS to give it a unique ID, thus effectively distinguishing between homophones while preserving the difference between synonyms. The execution of this idea is presented in the following sections.
3 Data Collection and Pre-processing

3.1 Data Collection
To perform the experiment, we prepared a dataset (of modest size) using several books and multiple collected articles taken from various fields as data sources, as follows:

• Lord of the Rings story [7]
• A collection of Grimm fairy tales [8]
• Diary of a Cricket [9]
• A collection of 109 Vietnamese fables [10]
• A collection of 80 articles about world politics, written in 2019 and 2020 [11, 12]
• A collection of 80 articles about Vietnam’s showbiz, written in 2019 and 2020 [12]
• A collection of 80 articles about Vietnam’s economics, written in 2019 and 2020 [11, 13]
Each of these categories consists of 18,000 to 22,000 sentences. The selected data are versatile enough to contain different writing styles, purposes, and vocabularies. Moreover, the chosen articles and stories all follow the developing stories of one or more subjects (rather than a more specific matter such as scientific documents), so that they can show the ability of our method to distinguish between different contexts.

3.2 Data Processing
The dataset was manually prepared and trimmed using the following steps, for clean-up and for consistency’s sake. The dataset is copied and processed semi-manually, going through several pre-processing tasks. In the first step, we filter and delete unnecessary special characters and unprintable characters. Since the data come from various sources (including websites, pdf files, etc.), we needed to filter out all the unprintable characters such as pilcrows, spaces, non-breaking spaces, tabs, etc. Next, we remove all formatting and unnecessary line chunking, since there are parts where dialog is displayed as a separate line and this is not consistent. In the next steps, we fix the content where needed. Throughout the history of the Vietnamese language, there have been multiple “popular” ways of writing, each correct in its own era. For example, there are two ways of writing words with phonetics: “hòa bình” and “hoà bình” (peace). We needed to unify how words are written across all documents to ensure accuracy. Along the way, we also fix all detected typos. The expectation for this step is to bring all words under the same set of standards, which reduces the risk of misinterpreting a word’s meaning or nature. In the next step, we remove redundant spaces, dots, and commas. Afterward, we modify the dataset to standardize the keepable special characters (such as “…” for dialogs and [1] for annotations); it is important to note that this step is done automatically instead of manually. These special characters are kept to distinguish between different categories of document; for example, scientific documents often use “[…]” for references, but this does not appear as much in other categories.
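The automated part of this clean-up pass can be sketched in Python; the function name and the single spelling-unification rule below are our own illustrative choices, not the authors' actual script:

```python
import re
import unicodedata

def clean_text(raw):
    """Sketch of the clean-up pass described above (illustrative only):
    strip unprintable characters, unify spellings, collapse whitespace."""
    # Normalize Unicode so identical-looking strings compare equal.
    text = unicodedata.normalize("NFC", raw)
    # Replace control/unprintable characters (tabs, non-breaking spaces, ...).
    text = "".join(ch if ch.isprintable() else " " for ch in text)
    # Example spelling unification; a real pass would use a full lookup table.
    text = text.replace("hòa", "hoà")
    # Collapse runs of whitespace, then drop stray spaces before punctuation.
    text = re.sub(r"\s+", " ", text)
    text = re.sub(r"\s+([.,])", r"\1", text)
    return text.strip()
```

For instance, `clean_text("hello   world ,\tok")` yields `"hello world, ok"`.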
Lastly, we reordered these data into files representing the following classes:

• Western fantasy novel (Lord of the Rings)
• Western fairy tale (Grimm stories)
• Vietnamese fairy tales and stories for kids (Diary of a Cricket, Vietnamese fables)
• World politics
• Vietnam’s showbiz
• Vietnam’s economics
• Others
It would be acceptable if, in our test run, we find that a text can exist in more than one category; for example, Vietnam’s economics matters may coincide with world politics events in one way or another, and a prediction in either of these categories can be considered acceptable.
4 Data Training

To train on the collected data, we first perform sentence tokenizing, using the sentence-boundary detection based on the Naïve Bayes algorithm by Vuong Quoc Binh and Vu Anh [4]. The result of this step is one text file per document, with each sentence on a separate line; for example, for the story “Diary of a Cricket” we obtain a chunk of results as sampled in Table 4.
Table 4. Sample output of Sentence Tokenizing step performed on “Diary of a Cricket”. … tôi là em út, bé nhất nên được mẹ tôi sau khi dắt vào hang, lại bỏ theo một ít ngọn cỏ non trước cửa, để tôi nếu có bỡ ngỡ, thì đã có ít thức ăn sẵn trong vài ngày rồi mẹ tôi trở về tôi cũng không buồn trái lại, còn thấy làm khoan khoái vì được ở một mình nơi thoáng đãng, mát mẻ tôi vừa thầm cảm ơn mẹ, vừa sạo sục thăm tất cả các hang mẹ đưa đến ở khi đã xem xét cẩn thận rồi, tôi ra đứng ở ngoài cửa và ngửng mặt lên trời qua những ngọn cỏ ấu nhọn và sắc, tôi thấy màu trời trong xanh tôi dọn giọng, vỗ đôi cánh nhỏ tới nách, rồi cao hứng gáy lên mấy tiếng rõ to từ đây, tôi bắt đầu vào cuộc đời của tôi …
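The shape of this step can be shown with a naive rule-based splitter; this is our own crude stand-in, not the trained Naïve Bayes boundary detector actually used here:

```python
import re

def split_sentences(text):
    """Naive sentence splitter: break after '.', '!' or '?' followed by
    whitespace. A toy stand-in for the Naive Bayes sentence-boundary
    detector used in the paper; real Vietnamese text needs that model."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]
```

Each returned item then becomes one line of the per-document output file.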
Afterward, we perform the word tokenizing step combined with POS-tagging: we loop through each sentence and execute the POS-tagging method of Vu Anh et al. [4], which utilizes Conditional Random Fields. The output of POS-tagging is a Python array where each element contains a pair of a tokenized word (single or compound) and its respective POS-tag, as shown in Table 5.
Table 5. Sample output of Word Tokenizing step performed on “Diary of a Cricket”.
… [(‘tôi’, ‘P’), (‘là’, ‘V’), (‘em út’, ‘N’), (‘,’, ‘CH’), (‘bé’, ‘N’), (‘nhất’, ‘A’), (‘nên’, ‘C’), (‘được’, ‘V’), (‘mẹ’, ‘N’), (‘tôi’, ‘P’), (‘sau’, ‘N’), (‘khi’, ‘N’), (‘dắt’, ‘V’), (‘vào’, ‘E’), (‘hang’, ‘N’), (‘,’, ‘CH’), (‘lại’, ‘R’), (‘bỏ’, ‘V’), (‘theo’, ‘V’), (‘một ít’, ‘L’), (‘ngọn’, ‘Nc’), (‘cỏ’, ‘N’), (‘non’, ‘A’), (‘trước’, ‘E’), (‘cửa’, ‘N’), (‘,’, ‘CH’), (‘để’, ‘E’), (‘tôi’, ‘P’), (‘nếu’, ‘C’), (‘có’, ‘V’), (‘bỡ ngỡ’, ‘A’), (‘,’, ‘CH’), (‘thì’, ‘C’), (‘đã’, ‘R’), (‘có’, ‘V’), (‘ít’, ‘A’), (‘thức ăn’, ‘N’), (‘sẵn’, ‘A’), (‘trong’, ‘E’), (‘vài’, ‘L’), (‘ngày’, ‘N’), (‘.’, ‘CH’)] [(‘rồi’, ‘C’), (‘mẹ’, ‘N’), (‘tôi’, ‘P’), (‘trở về’, ‘V’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘cũng’, ‘R’), (‘không’, ‘R’), (‘buồn’, ‘V’), (‘.’, ‘CH’)] [(‘trái lại’, ‘N’), (‘,’, ‘CH’), (‘còn’, ‘C’), (‘thấy’, ‘V’), (‘làm’, ‘V’), (‘khoan khoái’, ‘N’), (‘vì’, ‘E’), (‘được’, ‘V’), (‘ở’, ‘V’), (‘một mình’, ‘X’), (‘nơi’, ‘N’), (‘thoáng đãng’, ‘V’), (‘,’, ‘CH’), (‘mát mẻ’, ‘N’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘vừa’, ‘R’), (‘thầm’, ‘A’), (‘cảm ơn’, ‘V’), (‘mẹ’, ‘N’), (‘,’, ‘CH’), (‘vừa’, ‘R’), (‘sạo sục’, ‘V’), (‘thăm’, ‘V’), (‘tất cả’, ‘P’), (‘các’, ‘L’), (‘hang’, ‘N’), (‘mẹ’, ‘N’), (‘đưa’, ‘V’), (‘đến’, ‘V’), (‘ở’, ‘V’), (‘.’, ‘CH’)] [(‘khi’, ‘N’), (‘đã’, ‘R’), (‘xem xét’, ‘V’), (‘cẩn thận’, ‘A’), (‘rồi’, ‘T’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘ra’, ‘V’), (‘đứng’, ‘V’), (‘ở’, ‘E’), (‘ngoài’, ‘E’), (‘cửa’, ‘N’), (‘và’, ‘C’), (‘ngửng mặt’, ‘V’), (‘lên’, ‘V’), (‘trời’, ‘N’), (‘.’, ‘CH’)] [(‘qua’, ‘V’), (‘những’, ‘L’), (‘ngọn’, ‘Nc’), (‘cỏ’, ‘N’), (‘ấu’, ‘N’), (‘nhọn’, ‘A’), (‘và’, ‘C’), (‘sắc’, ‘V’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘thấy’, ‘V’), (‘màu’, ‘N’), (‘trời’, ‘N’), (‘trong’, ‘E’), (‘xanh’, ‘A’), (‘.’, ‘CH’)] [(‘tôi’, ‘P’), (‘dọn giọng’, ‘V’), (‘,’, ‘CH’), (‘vỗ’, ‘V’), (‘đôi’, ‘M’), (‘cánh’, ‘N’), (‘nhỏ’, ‘A’), (‘tới’, ‘E’), (‘nách’, ‘N’), (‘,’, ‘CH’), (‘rồi’, ‘C’), (‘cao hứng’, ‘V’), (‘gáy’, ‘V’), (‘lên’, ‘V’), (‘mấy’, ‘L’), (‘tiếng’, ‘N’), (‘rõ’, ‘A’), (‘to’, ‘A’), (‘.’, ‘CH’)] [(‘từ’, ‘E’), 
(‘đây’, ‘P’), (‘,’, ‘CH’), (‘tôi’, ‘P’), (‘bắt đầu’, ‘V’), (‘vào’, ‘E’), (‘cuộc đời’, ‘N’), (‘của’, ‘E’), (‘tôi’, ‘P’), (‘.’, ‘CH’)] …
Note that we perform this step for all included words, including proper nouns for individual persons, places, or organizations. These are valuable data that help with document classification; e.g., multiple presidents’ names help conclude that a document is politics-related. Moreover, special characters are also POS-tagged: dialogs keep their respective quotation marks, and the end of a sentence can be a period or a quotation mark depending on the nature of the sentence itself. This preservation helps indicate whether a certain category includes a lot of dialogue (by counting quotation marks), or that brackets are mostly used in scientific articles and books and only sometimes in novels. Next, we transform these data into a combined state, where each item consists of a word and its POS-tag, which we call a CWP (combined word with POS-tag). Following our study above, this combined item differentiates itself from its homophones with an effectiveness of 77.31% or better. The output of this step should look like the sample presented in Table 6.
Table 6. Sample output of POS-tag and Word combining step performed on “Diary of a Cricket”. … tôi__P là__V em_út__N,__CH bé__N nhất__A nên__C được__V mẹ__N tôi__P sau__N khi__N dắt__V vào__E hang__N,__CH lại__R bỏ__V theo__V một_ít__L ngọn__Nc cỏ__N non__A trước__E cửa__N,__CH để__E tôi__P nếu__C có__V bỡ_ngỡ__A,__CH thì__C đã__R có__V ít__A thức_ăn__N sẵn__A trong__E vài__L ngày__N.__CH rồi__C mẹ__N tôi__P trở_về__V.__CH tôi__P cũng__R không__R buồn__V.__CH trái_lại__N,__CH còn__C thấy__V làm__V khoan_khoái__N vì__E được__V ở__V một_mình__X nơi__N thoáng_đãng__V,__CH mát_mẻ__N.__CH tôi__P vừa__R thầm__A cảm_ơn__V mẹ__N,__CH vừa__R sạo_sục__V thăm__V tất_cả__P các__L hang__N mẹ__N đưa__V đến__V ở__V.__CH khi__N đã__R xem_xét__V cẩn_thận__A rồi__T,__CH tôi__P ra__V đứng__V ở__E ngoài__E cửa__N và__C ngửng_mặt__V lên__V trời__N.__CH qua__V những__L ngọn__Nc cỏ__N ấu__N nhọn__A và__C sắc__V,__CH tôi__P thấy__V màu__N trời__N trong__E xanh__A.__CH tôi__P dọn_giọng__V,__CH vỗ__V đôi__M cánh__N nhỏ__A tới__E nách__N,__CH rồi__C cao_hứng__V gáy__V lên__V mấy__L tiếng__N rõ__A to__A.__CH từ__E đây__P,__CH tôi__P bắt_đầu__V vào__E cuộc_đời__N của__E tôi__P.__CH …
Since Vietnamese compound words are multiple single-syllable words being written next to each other, to present these words, we use underscore special character “_” to separate its units and its POS-tag. Thus, the noun “thức ăn” (food) will be presented as “thức_ăn__N”. The final output of this step is a list of all sentences being tokenized, with a label for each of them, stating which category it belongs to. For the sake of simplicity, we have written these data to a separate file for each book or each article. Fourth step is where we merge the output files of third step into two large datasets: one with CWP and one without. This way we can compare the difference between barebone training and CWP method. These datasets are written onto separate files, but with the same structure as following: a csv file with two columns: Encoded Label and Processed sentence. Encoded Labels are used to differentiate between classes to be classified, and to improve processing performance. We have used a sample Encoded Labels set as in Table 7. Table 7. Encoded labels. Category Label Vietnamese fairy tale and stories for kids 1 Vietnamese History books 2 Western fantasy novel 3 Vietnamese news articles 4 World news articles 5 Others 100
In step five, we apply the TF-IDF vectorizer by building up a weight matrix. This step evaluates the relevance of each CWP toward its sentence and its Encoded Label. We perform this step on the training dataset, consisting of 80% of the rows of the combined dataset from step four. The output of this step is presented in Table 8.

Table 8. Sample vocabulary output of the TF-IDF step, built from “Diary of a Cricket”.

… ‘tôi__p’: 4068, ‘sống__v’: 3289, ‘từ__e’: 4223, ‘bé__n’: 218, ‘__ch’: 48, ‘một__m’: 2327, ‘sự__n’: 3331, ‘đáng__v’: 4694, ‘suốt__a’: 3209, ‘đời__n’: 4933, ‘ấy__p’: 4973, ‘là__v’: 1935, ‘lâu_đời__a’: 1966, ‘trong__e’: 3807, ‘họ__p’: 1560, ‘nhà__n’: 2562, ‘chúng_tôi__p’: 559, ‘vả_lại__c’: 4378, ‘mẹ__n’: 2301, ‘thường__r’: 3556, ‘bảo__v’: 281, ‘rằng__c’: 3117, ‘phải__v’: 2877, ‘như__c’: 2603, ‘thế__p’: 3617, ‘để__e’: 4858, ‘các__l’: 715, ‘con__n’: 673, ‘biết__v’: 138, ‘một_mình__x’: 2331, ‘cho__e’: 467, ‘quen__v’: 2946, ‘đi__v’: 4644, ‘con_cái__n’: 678, ‘mà__c’: 2194, ‘cứ__r’: 894, ‘vào__e’: 4307, ‘bố_mẹ__n’: 358, ‘thì__c’: 3510, ‘chỉ__r’: 623, ‘sinh__v’: 3180, ‘ra__v’: 3037, ‘tính__v’: 4056, ‘xấu__a’: 4522, ‘lắm__r’: 2075, ‘rồi__c’: 3128, ‘ra_đời__v’: 3044, ‘không__r’: 1720, ‘gì__p’: 1269, ‘đâu__p’: 4711, ‘bởi__e’: 383, ‘nào__p’: 2685, ‘cũng__r’: 786, ‘vậy__p’: 4393, ‘đẻ__v’: 4841, ‘xong__v’: 4474, ‘là__c’: 1933, ‘có__v’: 746, ‘ba__m’: 88, ‘anh_em__n’: 69, ‘ở__v’: 4993, ‘với__e’: 4426, ‘hôm__n’: 1482, ‘tới__e’: 4203, ‘thứ__n’: 3687, ‘trước__n’: 3885, ‘đứa__nc’: 4942, ‘nửa__n’: 2781, ‘lo__v’: 1896, ‘vui__a’: 4294, ‘theo__v’: 3379, ‘sau__n’: 3164, ‘dẫn__v’: 1033, ‘và__c’: 4301, ‘đem__v’: 4635, ‘đặt__v’: 4836, ‘mỗi__l’: 2322, ‘cái__nc’: 720, ‘hang__n’: 1318, ‘đất__n’: 4807, ‘ở__e’: 4992, ‘bờ__n’: 382, ‘ruộng__n’: 3072, ‘phía__n’: 2844, ‘bên__n’: 223, ‘kia__p’: 1766, ‘chỗ__n’: 641, ‘trông__v’: 3864, ‘đầm__n’: 4815, ‘nước__n’: 2722, ‘đã__r’: 4718, ‘đắp__v’: 4830, ‘thành__v’: 3452, ‘bao_giờ__p’: 103, ‘nhất__a’: 2618, ‘nên__c’: 2697, ‘được__v’: 4775, ‘khi__n’:
1658, ‘dắt__v’: 1041, ‘lại__r’: 2044, ‘bỏ__v’: 352, ‘một_ít__l’: 2336, ‘ngọn__nc’: 2517, ‘cỏ__n’: 862, ‘non__a’: 2679, ‘trước__e’: 3884, ‘cửa__n’: 902, ‘nếu__c’: 2750, ‘ít__a’: 4596, ‘thức_ăn__n’: 3693, ‘sẵn__a’: 3281, ‘vài__l’: 4302, ‘ngày__n’: 2449, ‘trở_về__v’: 3955, ‘buồn__v’: 167, ‘còn__c’: 743, ‘thấy__v’: 3588, ‘làm__v’: 1936, ‘vì__e’: 4319, ‘nơi__n’: 2721, ‘vừa__r’: 4437, ‘thầm__a’: 3589, ‘cảm_ơn__v’: 821, ‘thăm__v’: 3544, ‘tất_cả__p’: 4133, ‘đưa__v’: 4764, ‘đến__v’: 4848, ‘xem_xét__v’: 4462, …
The TF-IDF vocabulary can now be accessed; its top rows are sampled in Table 9. We can then look at some cases where CWP effectively differentiates homophones, as sampled in Table 10.
Table 9. TF-IDF vocabulary.

(0, 4644) 0.168109877554555
(0, 4254) 0.4531296121077332
(0, 2872) 0.3682155978026012
(0, 2595) 0.35653421293953347
(0, 2383) 0.36033217381957333
(0, 1496) 0.21122947912465437
(0, 804) 0.34709010848646565
(0, 703) 0.43111976142881975
(0, 48) 0.15139449063208063
(1, 4718) 0.18776376846938878
(1, 3367) 0.39076469627344607
(1, 3102) 0.3703815706378775
(1, 1784) 0.33743975469057436
(1, 1749) 0.3143520631846064
(1, 1720) 0.16931995401726685
(1, 1235) 0.4196143541973651
(1, 745) 0.30462437923547336
(1, 185) 0.40110986375766133
…
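The weighting behind Tables 8 and 9 can be sketched in plain Python; this is a toy tf·idf computation of our own, not the library vectorizer the pipeline presumably uses:

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Tiny TF-IDF sketch of step five: docs is a list of token lists
    (e.g., CWP tokens per sentence). Returns {(doc_index, token): weight}
    using term frequency times a smoothed inverse document frequency."""
    n = len(docs)
    # Document frequency: in how many documents each token appears.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = {}
    for i, doc in enumerate(docs):
        tf = Counter(doc)
        for tok, count in tf.items():
            # Smoothed idf keeps weights positive even for common tokens.
            idf = math.log((1 + n) / (1 + df[tok])) + 1
            weights[(i, tok)] = (count / len(doc)) * idf
    return weights
```

Tokens concentrated in few sentences receive higher weights than tokens spread across the whole corpus, which is what lets the classifier tie a CWP to its Encoded Label.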
Table 10. Sampling processed data – homophone differentiation.

#     Homophone  Word      Notes
218   bé         bé__n     A noun (a baby)
217              bé__a     An adjective (small)
1183  giáp       giáp__v   A verb (being adjacent to)
1182             giáp__n   A noun (armor)
1181             giáp__m   An article
Afterward, we train and test with a Support Vector Machine (SVM). The reason for this choice is that it is a very universal learner and is independent of the dimensionality of the feature space, according to Thorsten Joachims [14]. It is proven effective and generalizes well even with many features involved, which is ideal in our case study. In practice, with some tuning, we achieve the results in Table 11.
Table 11. SVM accuracy rate.

Random seed  Max feature  With CWP  Without CWP, with combined words
500          5            63.295    63.173
100          5            63.341    63.296
100          50           67.940    69.892
500          50           68.189    69.781
1000         100          72.029    72.631
5000         100          72.282    72.788
5000         500          82.917    81.510
500          5000         90.734    90.144
200          5000         91.075    90.140
100          5000         90.573    90.179
50           5000         90.799    90.106
The highest accuracy rate we obtained is 91.075% with CWP, and 90.179% without CWP. Based on these data, we conclude that CWP does not always give better results, but with tuning it can give a significant improvement. For comparison purposes, we also trained a Naïve-Bayes classifier model (arguably one of the fastest out of the box), with results as stated in Table 12.

Table 12. Naïve-Bayes classifier accuracy rate.

Random seed  Max feature  With CWP  Without CWP, with combined words
500          5            63.295    63.173
100          5            63.341    63.296
100          50           66.916    67.272
500          50           66.532    67.107
1000         100          70.805    70.643
5000         100          70.847    70.417
5000         500          80.041    78.990
500          5000         87.980    87.248
200          5000         88.329    87.336
100          5000         88.179    87.551
50           5000         88.064    86.999
The best we were able to achieve without overfitting is 88.329% with CWP, and 87.551% without CWP. The trend held, as we again needed proper tuning for this result. It is also safe to conclude that, in general, even though SVM leads in accuracy in most cases, the Naïve-Bayes classifier still processes faster with reasonable accuracy. Comparing with results from the “underthesea” group, which has been developing and maintaining the popular NLP Python toolkit for Vietnamese language processing, published on their official website [2] and quoted in Table 13, we conclude that after skipping the lemmatizing step, CWP provides a modestly comparable result. This proves that CWP not only saves processing power but also remains competitive on the accuracy front.
Table 13. underthesea group’s work result – version 1.1.17 – 2020 [2].

TfidfVectorizer(ngram_range = (1, 2), max_df = 0.5)   92.8
CountVectorizer(ngram_range = (1, 3), max_df = 0.7)   89.3
TfidfVectorizer(max_df = 0.8)                         89.0
CountVectorizer(ngram_range = (1, 3))                 88.9
TfidfVectorizer(ngram_range = (1, 3))                 86.8
CountVectorizer(max_df = 0.7)                         85.5
5 Conclusion and Perspectives

This paper proposes combining a word with its POS to give it a unique ID, thereby effectively distinguishing between homophones while preserving the difference between synonyms. The dataset was collected and (semi-)manually processed from various fields, then a sequence of techniques for customizing and training the learning model was applied. The classification model could contribute positively to many Vietnamese text-analysis based businesses, such as social networks, e-commerce, etc. The work results prove that
CWP not only saves processing power but also competes on accuracy with the best-in-class results in the same field. For future work, we shall apply the studied method to sentiment analysis of social network posts and comments, and to e-commerce review analysis for data mining purposes. The study can also serve as a reference for other fields; in general, for any text processing field that requires preserving the differences between synonyms, such as social network analysis and sentiment analysis [15, 16].
References
1. Cai, J., Li, J., Li, W., Wang, J.: Deep learning model used in text classification. In: Proceedings 15th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, pp. 123–126 (2018)
2. Le, H., Hoang, T., Toan, D.M.: Homophone and polysemous words in Vietnamese (in comparison with modern Han language) - Đồng âm và đa nghĩa trong tiếng Việt (Đối chiếu với tiếng Hán hiện đại) (Vietnamese) (2011)
3. Halpern, J.: Is Vietnamese a hard language? http://www.kanji.org/kanji/jack/vietnamese/is_VN_hard_sum_EN_VN.pdf
4. Le, H., et al.: Vietnamese NLP Toolkit. https://underthesea.readthedocs.io
5. Institute of Linguistics of Vietnam: Vietnamese dictionary, 12th republished edn. (2006)
6. Thuat, D.T.: Vietnamese phonetics (Ngữ âm tiếng Việt (Vietnamese)), Hanoi, p. 89 (1977)
7. Tolkien, J.R.R.: Lord of the Ring - Book 1: The Fellowship of the Ring. Vietnamese Literature publisher, translated by Yen, N.T.T., Viet, D.T. (2013)
8. A collection of Grim fairy tales. Dan Tri publisher, translated by various members (2008)
9. Hoai, T.: Diary of a Cricket. Kim Dong publisher (2014)
10. 109 modern Vietnamese fables – Various artists. Hong Duc publisher (2018)
11. Various newspapers published during 2019–2020 at https://vnexpress.net
12. Various newspapers published during 2019–2020 at https://thanhnien.vn
13. Various newspapers published during 2019–2020 at https://dantri.com.vn
14. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features (2005)
15. Hung, P.D., Giang, T.M., Nam, L.H., Duong, P.M., Van Thang, H., Diep, V.T.: Smarthome control unit using Vietnamese speech command. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072. Springer, Cham (2019)
16. Tram, N.N., Hung, P.D.: Analyzing hot Facebook users’ posts’ sentiment. In: Emerging Technologies in Data Mining and Information Security, Proceedings of IEMIS (2020)
A New Topological Sorting Algorithm with Reduced Time Complexity

Tanzin Ahammad1, Mohammad Hasan1,2 and Md. Zahid Hassan1,2

1 Chittagong University of Engineering and Technology, Chattogram, Bangladesh
[email protected]
2 Department of CSE, Bangladesh Army University of Science and Technology (BAUST), Saidpur, Bangladesh
Abstract. For finding the topological ordering of a directed acyclic graph (DAG), Kahn's and Depth First Search (DFS) topological sorting algorithms are used. Both of these algorithms have time complexity O(|V| + |E|). Here a completely new topological sorting algorithm is proposed that reduces the time complexity of the previous algorithms. By separating the vertices having outgoing edges from the vertices having no outgoing edges and then removing outgoing edges step by step, we can find a topological ordering of any DAG. The time complexity of the proposed algorithm reduces to O(Σ_{i=1}^{|V|} |(NE)_i|) for both the average case and the worst case, but for the best case it reduces to O(|V|), where |V| is the number of vertices and |NE| is the number of vertices containing at least one outgoing edge. This algorithm can also detect a cycle in a graph. It can be used for resolving dependencies, scheduling systems, planning, and in many graph algorithms.

Keywords: Topological sorting · Vertex · Directed acyclic graph · Complexity · Set
1 Introduction

Topological sorting for a directed acyclic graph (DAG) means the linear ordering of the vertices of the graph. In this ordering, for every directed edge (u, v) from vertex u to vertex v, u comes before v. If, in a graph, a vertex represents a task which has to be performed and an edge indicates a constraint that one task must be performed after another, then a valid sequence for the tasks is a topological ordering of the tasks. Topological ordering is possible only for DAGs. If at least one cycle exists in a graph, then a topological ordering of this graph is not possible. Every DAG has at least one topological ordering. The first topological sorting algorithm was given by Kahn (1962). In Kahn's algorithm, topological sorting first takes from the graph the vertices which do not have any incoming edges. If the graph is acyclic, then this set must contain at least one vertex that does not have any incoming edges. Graph traversal begins from this set of vertices. Then the vertices reached through the outgoing edges have to be found, and one has to
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 418–429, 2021. https://doi.org/10.1007/978-3-030-68154-8_38
remove them. Each such vertex then has to be checked again for other incoming edges; if there are no incoming edges, the vertex is inserted into the set of sorted elements. So, Kahn's algorithm uses a breadth-first search technique. In this algorithm, all the vertices and edges have to be checked for the start vertex, and then every element has to be checked over again. So, the overall time complexity of Kahn's algorithm is O(|V| + |E|). Another popular topological sorting algorithm is the DFS topological sorting algorithm, which is based on the DFS graph traversal technique. In the DFS topological sorting algorithm, graph traversal starts from the start vertex and goes downward from it; when a vertex is reached that has further incoming vertices, a previous-level vertex is traversed by backtracking. The time complexity of this DFS topological sorting algorithm is also O(|V| + |E|).

In the proposed algorithm, traversal of the full graph is not needed. We just list the empty vertices (vertices containing no outgoing edge) in one set and the non-empty vertices (vertices containing at least one outgoing edge) in another set, then order the vertices by removing empty vertices from the non-empty vertices' edge sets. In our proposed algorithm the time complexity is reduced to O(Σ_{i=1}^{|V|} |(NE)_i|), because two sets are used and the number of elements decreases in each step. The space complexity of our proposed algorithm is O(|V| + |E|). So, the proposed algorithm is completely new and finds the topological ordering of a graph in optimum time. The proposed algorithm can also detect a cycle in a graph.

This is a completely new algorithm for topological sorting which reduces the time complexity of topological sorting. It avoids using any graph traversal algorithm such as BFS or DFS, and it is a new approach for a simple implementation of topological sorting. The algorithm returns false if any cycle is present in the graph, meaning that no topological sorting is possible; so this new topological sorting algorithm can also be used for detecting cycles in graphs. Its implementation is very simple. Topological sorting is a popular algorithm for scheduling tasks. For a long period of time, researchers have continued research on this topic to improve the algorithm and have also worked on applications of topological sorting, so there is a large space for research on this algorithm. The applications of the topological sorting algorithm are numerous: it can be used for complex database tables that have dependencies, for ordering objects in a machine, for course prerequisite systems, and many more. Topological ordering is mostly used for planning and scheduling.
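Kahn's algorithm described above can be sketched in Python. This is a generic sketch of the standard technique, not code from the paper; the function and variable names are ours:

```python
from collections import deque

def kahn_toposort(vertices, edges):
    """Kahn's algorithm: repeatedly remove vertices with no incoming edges."""
    adj = {v: [] for v in vertices}
    indegree = {v: 0 for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    # Start with every vertex that has no incoming edge.
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # If some vertices were never removed, the graph contains a cycle.
    return order if len(order) == len(vertices) else None

print(kahn_toposort([1, 2, 3, 4], [(1, 2), (1, 3), (2, 4), (3, 4)]))  # → [1, 2, 3, 4]
```

Because every vertex enters the queue once and every edge is inspected once, this sketch exhibits the O(|V| + |E|) behaviour discussed above.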
2 Related Works

A depth-first discovery algorithm (DFDA) to find a topological sorting is proposed in paper [1]. To reduce the temporary space needed for a large DAG, they used depth-first search; the DFDA is more efficient and simpler than the discovery algorithm (DA). To generate all topological sortings, three algorithms are described in paper [2]; from these three algorithms, they improved Well's topological sorting algorithm. A new sorting algorithm that reduces the time complexity of topological sorting is implemented in paper [3] using a MATLAB script. This algorithm is more efficient than Kahn's and the DFS topological sorting algorithms and requires less storage than DFS. But, in the
case of complex and large systems, this algorithm increases the complexity. In a food supply chain, for checking the adulterant nodes, a new algorithm is proposed in paper [4]. A novel optimization algorithm is proposed in paper [5] to find a solution over the many diverse topological sorts, which imply many cut-sets, of a directed acyclic graph; the proposed algorithm is very effective for enhancing grid observation. For large graphs, another topological sorting algorithm is given in paper [6]. For topological sorting, they proposed an I/O-efficient algorithm named IterTs, but this algorithm is inefficient in the worst case. A new algorithm is given in paper [7] that gives the topological sorting of sparse digraphs with better time complexity than all previous algorithms; this topological sorting algorithm is dynamically maintained. They have given an experimental comparison of topological sorting results for large randomly generated directed acyclic graphs. In paper [8], P. Woelfel solved topological sorting with O(log²|V|) OBDD operations; for a fundamental graph problem, it is the first true runtime analysis of a symbolic OBDD algorithm. In paper [9], algorithms are proposed to solve the problems of single-source shortest paths, computing a directed ear decomposition, and topological sorting of a planar directed acyclic graph in O(sort(N)) I/Os, where sort(N) is the number of I/Os needed to sort N elements. All topological orderings of a directed acyclic graph are found in paper [10]; they proposed an extended algorithm capable of finding all topological solutions of a DAG, implemented using backtracking, iteration in place of recursion, suitable data structures, and more. In paper [11], a new algorithm using a parallel computation approach is proposed; this algorithm for finding a topological sorting of a directed acyclic graph is very simple.
In this algorithm, the nodes are traversed in parallel from a node and marked as visited. After visiting all the source nodes, all other nodes in the graph are marked as visited, and for an acyclic graph the parallel traversal terminates. The implementation of this algorithm for an SIMD machine is discussed in that paper. The time complexity of this parallel algorithm is of the order of the longest distance between a source node and a sink node of the directed acyclic graph. Topological sorting of large networks is given in paper [12]. General topological algorithms are not efficient at finding a topological ordering of a large network, so A. B. Kahn proposed this procedure to find the topological ordering of large networks efficiently. Using this algorithm, a PERT (Program Evaluation Review Technique) network containing 30,000 activities could be ordered in less than one hour of machine time. This is an efficient method because the location of every item is known, so there is no need to search for an item, and only a single iteration is needed to obtain the correct sequence of the network. If all events are traversed or any cycle is found in the network, then the procedure terminates. This method is more efficient than all other topological algorithms for large networks. In paper [13], T. Ahammad et al. proposed a new dynamic programming algorithm to solve the job sequencing problem that reduces its time complexity from O(n²) to O(mn). They used the tabulation method of dynamic programming and showed a step-by-step simulation of the working procedure of their algorithm with experimental analysis. In paper [14], J.F. Beetem proposed an efficient algorithm for hierarchical topological sorting. This algorithm is also applicable in the presence of apparent loops, and its application is described in that paper.
A New Topological Sorting Algorithm with Reduced Time Complexity
421
3 Overview of the Proposed Algorithm

In this proposed algorithm, from a graph we put the empty vertices, i.e. those vertices which contain no outgoing edges, in a set E, and the non-empty vertices, i.e. those which contain at least one outgoing edge, in another set NE. In each step, we take an empty vertex, remove this vertex from all non-empty vertices' edge lists, and also remove this particular vertex from set E. If in this procedure any vertex becomes empty, we put it in set E. The topological ordering is given by the set T, the sequence in which we choose the empty vertices from the set E. When the set E becomes empty, the algorithm terminates; if T contains all vertices of the graph, then no cycle exists, the graph is a DAG, and T yields the correct topological order. But if T does not contain all vertices of the graph, then the graph contains a cycle and is not a DAG, and no topological ordering is possible (Fig. 1).
Fig. 1. Flow Chart of topological sorting algorithm.
4 Algorithm

4.1 Pseudocode

Here G is the set of vertices of the graph, which is divided into two sets: E (containing the empty vertices) and NE (containing the non-empty vertices). An empty vertex contains no outgoing edges and a non-empty vertex contains at least one outgoing edge. When the set of empty vertices E becomes empty, the ordering is completed. If the size of T is not equal to the size of G, then the graph has at least one cycle and is not a DAG, so a topological sorting of this graph is not possible. Otherwise, if the size of T is equal to the size of G, then topological ordering is possible and the elements of set T in reverse order are in topological order. From this algorithm, we get the correct topological ordering of a graph in set T after reversing T. If topological ordering is not possible or the algorithm detects a cycle, the algorithm returns false; otherwise it returns the set T, which after reversal is the correct topological ordering of graph G. By this procedure the algorithm also detects cycles in a graph.

1. G: a set of vertices of a graph
2. T: an empty set that will be filled in topological order
3. Procedure TopologicalSort(G, T)
4.   E ← set of vertices of G with no outgoing edges
5.   NE ← G \ E
6.   while E is not empty do
7.     v ← remove a vertex from E; append v to T
8.     for each u in NE do
9.       remove v from the outgoing edges of u
10.      if u has no outgoing edges then move u from NE to E
11.  if size of T ≠ size of G then return false
12.  reverse T and return T

The adjacency list of the example graph includes the following entries:

3 -> 2
4 -> 5, 6, 7
5 ->
6 -> 7
7 -> 8
8 ->
9 -> 4
10 -> 0, 1, 12
11 -> 13
12 -> 0, 11
13 -> 5

Since the graph is a directed acyclic graph (DAG), the graph has at least one topological ordering. Here the empty vertices (vertices with no outgoing edges) are (0, 5, 8). We can take those empty vertices in any order; each order is valid for a particular step. After taking those empty vertices the procedure begins. If the graph is a DAG, then in the first step at least one empty vertex will exist. The adjacency-list simulation for finding a topological ordering of this graph is shown here; the first table represents the adjacency list of the graph (Fig. 3). By removing its empty vertices, the size reduces, as shown below:
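The procedure described in Sect. 3 can be rendered as a short Python sketch. This is our own rendering of the described E/NE procedure, not the authors' code, and the edge list below contains only the entries recoverable from the example (the outgoing edges of vertices 1 and 2, if any, are not recoverable from the text):

```python
def toposort_empty_vertex(vertices, edges):
    """Split vertices into E (no outgoing edges) and NE (at least one),
    then repeatedly strip empty vertices; reversed T is the ordering."""
    out_edges = {v: set() for v in vertices}
    for u, v in edges:
        out_edges[u].add(v)
    E = {v for v in vertices if not out_edges[v]}   # empty vertices
    NE = set(vertices) - E                          # non-empty vertices
    T = []
    while E:
        v = E.pop()
        T.append(v)
        for u in list(NE):                          # remove v everywhere
            out_edges[u].discard(v)
            if not out_edges[u]:                    # u became empty
                NE.remove(u)
                E.add(u)
    if len(T) != len(vertices):
        return None                                 # cycle detected
    return T[::-1]                                  # reversed T

edges = [(3, 2), (4, 5), (4, 6), (4, 7), (6, 7), (7, 8), (9, 4),
         (10, 0), (10, 1), (10, 12), (11, 13), (12, 0), (12, 11), (13, 5)]
order = toposort_empty_vertex(list(range(14)), edges)
pos = {v: i for i, v in enumerate(order)}
print(all(pos[u] < pos[v] for u, v in edges))       # True: a valid ordering
```

Because Python sets are unordered, the particular valid ordering produced may differ between runs, which matches the paper's remark that the empty vertices may be taken in any order at each step.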
Fig. 3. Step by step visualization of proposed algorithm working procedure with an example.
Step 1: Find the empty vertices (0, 5, 8) and insert these vertices in a set T (0, 5, 8). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (2, 7, 13).

Step 2: Find the empty vertices (2, 7, 13) and insert these vertices in the set T (0, 5, 8, 2, 7, 13). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (3, 6, 11).

Step 3: Find the empty vertices (3, 6, 11) and insert these vertices in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertices are (4, 12).

Step 4: Find the empty vertices (4, 12) and insert these vertices in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12). Then remove these vertices from the adjacency list and also remove them from all other vertices. Now the empty vertex is 9 only.

Step 5: Find the empty vertex 9 and insert this vertex in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9). Then remove this vertex from the adjacency list and also remove it from all other vertices. Now the empty vertex is 1 only.

Step 6: Find the empty vertex 1 and insert this vertex in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9, 1). Then remove this vertex from the adjacency list and also remove it from all other vertices. Now the empty vertex is 10 only.
Step 7: Insert the last empty vertex 10 in the set T (0, 5, 8, 2, 7, 13, 3, 6, 11, 4, 12, 9, 1, 10). Then remove this vertex from the adjacency list. Now no empty vertex remains.

After reversing T, we get the topological ordering of the graph. The topological ordering of the graph by the proposed algorithm is:
10 -> 1 -> 9 -> 12 -> 4 -> 11 -> 6 -> 3 -> 13 -> 7 -> 2 -> 8 -> 5 -> 0
By Kahn's algorithm, the topological ordering is:
3 -> 10 -> 2 -> 1 -> 12 -> 9 -> 0 -> 11 -> 4 -> 13 -> 6 -> 5 -> 7 -> 8
By the DFS toposort algorithm, the topological ordering is:
10 -> 12 -> 11 -> 13 -> 3 -> 2 -> 1 -> 9 -> 4 -> 6 -> 7 -> 8 -> 5 -> 0
The proposed topological sorting algorithm is less complex than Kahn's and the DFS topological sorting algorithms.
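The three orderings above can be checked programmatically against the edges listed in the example's adjacency list. This is a sketch of ours: the outgoing edges of vertices 1 and 2 are not recoverable from the text, so only the listed edges are verified:

```python
# Edges recoverable from the example's adjacency list.
edges = [(3, 2), (4, 5), (4, 6), (4, 7), (6, 7), (7, 8), (9, 4),
         (10, 0), (10, 1), (10, 12), (11, 13), (12, 0), (12, 11), (13, 5)]

def is_topological(order, edges):
    """Every edge u -> v must have u positioned before v."""
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[u] < pos[v] for u, v in edges)

proposed = [10, 1, 9, 12, 4, 11, 6, 3, 13, 7, 2, 8, 5, 0]
kahns    = [3, 10, 2, 1, 12, 9, 0, 11, 4, 13, 6, 5, 7, 8]
dfs      = [10, 12, 11, 13, 3, 2, 1, 9, 4, 6, 7, 8, 5, 0]

for order in (proposed, kahns, dfs):
    print(is_topological(order, edges))   # True for all three
```

All three orderings satisfy every listed edge constraint, illustrating that a DAG generally has many valid topological orderings.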
5 Analysis

The time complexity of this proposed algorithm is O(Σ_{i=1}^{|V|} |(NE)_i|) and the space complexity of this proposed algorithm is O(|V| + |E|). The calculation of the time and space complexity is given in the next sections.

5.1 Time Complexity
In the proposed algorithm the outer loop executes |V| times and the inner loop executes |NE| times. But in this algorithm, for each vertex i = 1 to |V|, the inner-loop bound |(NE)_i| will be different:

i = 1 ... |(NE)_1|
i = 2 ... |(NE)_2|
i = 3 ... |(NE)_3|
i = 4 ... |(NE)_4|
i = 5 ... |(NE)_5|
i = 6 ... |(NE)_6|
...
i = |V| ... |(NE)_{|V|}|
The total complexity is counted as the summation of the number of non-empty vertex elements over each step. So, for i = 1 to |V|:

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + ... + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|,  where i = 1, 2, 3, 4, 5, 6, ..., |V|   (1)

So, the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|.
The time complexity for the best case, average case and worst case depends on the |(NE)_i| term.

Best Case Analysis
From Eq. (1), the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. If the graph does not have any edges, then the empty vertex set E will be full and the non-empty vertex set NE will be empty, and only the outer loop will execute, |V| times. Here, |(NE)_i| is the total number of elements in the non-empty vertex set at step i. The value of |(NE)_i| will be 1 for each step, meaning |(NE)_1| = |(NE)_2| = |(NE)_3| = |(NE)_4| = |(NE)_5| = |(NE)_6| = 1. Equation (1) becomes:

Total complexity = 1 + 1 + 1 + ... + 1 = |V|

So, the time complexity for the best case of this algorithm is Ω(|V|), where |V| is the total number of vertices of the graph.

Average Case Analysis
From Eq. (1), the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. The time complexity for the average case depends on the |(NE)_i| term. The outer loop of the algorithm executes |V| times and the inner loop executes |NE| times.
Equation (1) becomes:

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + ... + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|,  where i = 1, 2, 3, 4, 5, 6, ..., |V|

So, the average case of this algorithm is Θ(Σ_{i=1}^{|V|} |(NE)_i|), which depends on the |(NE)_i| term. |(NE)_i| is the total number of elements in the non-empty vertex set at each step i.

Worst Case Analysis
From Eq. (1), the general time complexity is Σ_{i=1}^{|V|} |(NE)_i|. Here, the complexity depends on the |(NE)_i| term. In the worst case, |NE| will be greater than in the average case. Here also the outer loop of the algorithm executes |V| times and the inner loop executes |NE| times. Equation (1) becomes:

Complexity = |(NE)_1| + |(NE)_2| + |(NE)_3| + |(NE)_4| + |(NE)_5| + |(NE)_6| + ... + |(NE)_{|V|}|
           = Σ_{i=1}^{|V|} |(NE)_i|,  where i = 1, 2, 3, 4, 5, 6, ..., |V|

So, the time complexity for the worst case of this algorithm is O(Σ_{i=1}^{|V|} |(NE)_i|), which depends on the |(NE)_i| term. |(NE)_i| is the total number of elements in the non-empty vertex set at each step i.

5.2 Space Complexity
The space complexity of this algorithm is O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges. In this algorithm, we have used an adjacency list, so the sum of |NE| in the worst case will be O(|V| + |E|). So the space complexity of this proposed algorithm is O(|V| + |E|).
6 Comparison with Related Works

Table 1. Comparison of time and space complexity of popular algorithms.

Algorithm Name/Paper No | Best case | Average case | Worst case | Space complexity
Kahn's Algorithm | Ω(|V| + |E|) | Θ(|V| + |E|) | O(|V| + |E|) | O(|V|)
DFS Algorithm | Ω(|V| + |E|) | Θ(|V| + |E|) | O(|V| + |E|) | O(|V|)
Paper [1] | Ω(|A|k log k) | Θ(|A|k log k) | O(|A|k log k) | O(|A| + ||K||)
Paper [2] | Ω(m + n) | Θ(m + n) | O(m + n) | O(n)
Proposed Algorithm | Ω(|V|) | Θ(Σ_{i=1}^{|V|} |(NE)_i|) | O(Σ_{i=1}^{|V|} |(NE)_i|) | O(|V| + |E|)

It is clear that the proposed algorithm's time complexity is lower than that of all the other topological sorting algorithms (Table 1).
7 Conclusions and Future Recommendation

In this paper, we proposed a completely new algorithm, in the simplest way, in which no graph traversal technique is needed and the complexity is reduced. The time complexity for the best case becomes Ω(|V|), where |V| is the total number of vertices of the graph. The time complexity for the average case is Θ(Σ_{i=1}^{|V|} |(NE)_i|), where |NE| is the number of non-empty vertices. The time complexity for the worst case is O(Σ_{i=1}^{|V|} |(NE)_i|). Normally, the time complexity of a general topological sorting algorithm is O(|V| + |E|). In this paper, we have shown that our algorithm is better and gives results faster than other topological sorting algorithms. In the future, we will try to further reduce the average-case and worst-case time complexity. We will also try to do further research on the applications of this proposed algorithm.
References
1. Zhou, J., Müller, M.: Depth-first discovery algorithm for incremental topological sorting of directed acyclic graphs. Inf. Process. Lett. 88, 195–200 (2003)
2. Kalvin, A.D., Varol, Y.L.: On the generation of all topological sortings. J. Algorithms 4, 150–162 (1983)
3. Liu, R.: A low complexity topological sorting algorithm for directed acyclic graph. Int. J. Mach. Learn. Comput. 4(2) (2014)
4. Barman, A., Namtirtha, A., Dutta, A., Dutta, B.: Food safety network for detecting adulteration in unsealed food products using topological ordering. Intell. Inf. Database Syst. 12034, 451–463 (2020)
5. Beiranvand, A., Cuffe, P.: A topological sorting approach to identify coherent cut-sets within power grids. IEEE Trans. Power Syst. 35(1), 721–730 (2019)
6. Ajwani, D., Lozano, A.C., Zeh, N.: A topological sorting algorithm for large graphs. ACM J. Exp. Algorithmics 17(3) (2012)
7. Pearce, D.J., Kelly, P.H.J.: A dynamic topological sort algorithm for directed acyclic graphs. ACM J. Exp. Algorithmics 11(1.7) (2006)
8. Woelfel, P.: Symbolic topological sorting with OBDDs. J. Discrete Algorithms 4(1), 51–71 (2006)
9. Arge, L., Toma, L., Zeh, N.: I/O-efficient topological sorting of planar DAGs. In: Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 85–93 (2003)
10. Knuth, D.E., Szwarcfiter, J.L.: A structured program to generate all topological sorting arrangements. Inf. Process. Lett. 2(6), 153–157 (1974)
11. Er, M.C.: A parallel computation approach to topological sorting. Comput. J. 26(4) (1983)
12. Kahn, A.B.: Topological sorting of large networks. Commun. ACM 5, 558–562 (1962)
13. Ahammad, T., Hasan, M., Hasan, M., Sabir Hossain, M., Hoque, A., Rashid, M.M.: A new approach to solve job sequencing problem using dynamic programming with reduced time complexity. In: Chaubey, N., Parikh, S., Amin, K. (eds.) Computing Science, Communication and Security. COMS2 2020. Communications in Computer and Information Science, vol. 1235. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-6648-6_25
14. Beetem, J.F.: Hierarchical topological sorting of apparent loops via partitioning. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 11(5), 607–619 (1992)
15. Intelligent Computing & Optimization. Conference Proceedings ICO 2018. Springer, Cham. ISBN 978-3-030-00978-6
16. Intelligent Computing and Optimization. Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019). Springer International Publishing. ISBN 978-3-030-33585-4
Modeling and Analysis of Framework for the Implementation of a Virtual Workplace in Nigerian Universities Using Coloured Petri Nets

James Okpor1 and Simon T. Apeh2

1 Department of Computer Science, Federal University Wukari, Wukari, Taraba State, Nigeria
[email protected]
2 Department of Computer Engineering, University of Benin, Benin City, Edo State, Nigeria
Abstract. This paper presents a Hierarchical Coloured Petri Nets model specifically designed to support the implementation process of a virtual workplace in Nigerian universities. CPN Tools 4.0 was used to capture the various phases and activities of the framework for the implementation of a virtual workplace in a Hierarchical Coloured Petri Nets (CPN) model. The developed virtual workplace CPN model has been analyzed using simulation and state space analysis methods. Being both a graphical and an executable representation of the implementation framework, the developed CPN model will assist those charged with the responsibility of implementing a virtual workplace to gain a better understanding of the implementation process.

Keywords: Coloured Petri Nets · Framework · Nigerian universities · Virtual workplace
1 Introduction

The sudden emergence of the COVID-19 pandemic has impacted the workplace. Several nations of the world have introduced lockdown measures and social distancing in a bid to curtail the spread of the novel coronavirus. This has led to the mass implementation of the virtual workplace by organizations in an attempt to ensure business continuity [1]. In Nigeria, only a few organizations had adopted the virtual workplace before the global outbreak of the COVID-19 pandemic. Unfortunately, Nigerian universities are among those that are yet to embrace the concept of the virtual workplace [2], even though recent technological advancement in information and communication technology has altered the workspace and ushered in a new form of work known as the virtual workplace [3, 4]. Although the impact of COVID-19 on Nigerian
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 430–439, 2021. https://doi.org/10.1007/978-3-030-68154-8_39
universities is enormous due to the closure of schools [2, 5], this crisis offers a great opportunity for policymakers in the educational sector to explore ways to reposition Nigerian universities by adopting the technologies needed for e-learning and also creating a virtual workplace for staff. In a virtual workplace, employees work remotely away from the central office and communicate with colleagues and managers through electronic means [6]. Research suggests that the implementation of a virtual workplace comes with many benefits, such as enabling employees to work wherever it is convenient for them to carry out their work [4], reduction in traffic congestion and vehicular emissions [7–10], a greater talent pool, fewer unnecessary meetings, reduction in commuting time, increased employee productivity [3], reduction in vehicle miles traveled [11], opportunities for people with disabilities to work from home, savings to the organization in terms of the cost of real estate [3, 12], decreased energy consumption, and ultimately a significant reduction in air pollution [10]. Despite the enormous benefits that can be derived from the implementation of a virtual workplace, there is currently little theoretical and empirical research regarding the development of a framework for the implementation of a virtual workplace in developing countries, and more specifically Nigeria. Although there are a few publications and blog posts about how best to implement a virtual workplace, many of these materials are not research-based [13]. According to [14], organizations still lack much-needed theoretical and empirical support regarding the planning and implementation of a virtual workplace. The objective of this study was to transform the phases and activities of the framework for the implementation of a virtual workplace in Nigerian universities into a CPN model and to analyze the behaviour of the model.
Interestingly, Coloured Petri Nets have been used by several authors in modeling workflow systems and business processes, manufacturing systems, computer networks, and embedded systems [15–17, 19]. This motivated us to use Coloured Petri Nets to capture the various phases and activities of the framework for the implementation of a virtual workplace in Nigerian universities in a CPN model. The advantage of using Coloured Petri Nets to model the implementation framework is that, being a graphical and also an executable representation [16, 18], the CPN model helps to provide a better understanding of the implementation framework than a textual description. Also, simulation of the CPN model provides a more accurate view of the framework for the implementation of a virtual workplace and can be used to demonstrate to management and other stakeholders the best approach to implementing a virtual workplace. The choice of Coloured Petri Nets was based on the fact that it is a graphical and executable modeling language [19] and on its analysis methods [15, 19]. The rest of this paper is organized as follows: Sect. 2 presents the proposed implementation framework for the virtual workplace. Section 3 presents the developed CPN model of the virtual workplace. Section 4 describes the state space analysis. Section 5 concludes the paper.
2 Framework for the Implementation of a Virtual Workplace in Nigerian Universities

This section gives an overview of the developed framework for the implementation of a virtual workplace in Nigerian universities. The framework was developed based on a comprehensive literature review and data obtained from selected organizations in Nigeria that have successfully implemented the virtual workplace in their respective organizations. Although the framework was developed for Nigerian universities, it can be adopted by other organizations in Nigeria that intend to implement the concept of a virtual workplace. The developed framework, when adopted, will assist universities and other organizations in the successful implementation of a virtual workplace in Nigeria. The developed virtual workplace framework is composed of four phases, namely the conceptual phase, pre-implementation phase, implementation phase, and post-implementation phase, as shown in Fig. 1.
Fig. 1. Proposed Virtual Workplace implementation framework for Nigerian Universities
In the conceptual phase, the university sets up a committee that is empowered with the responsibility of identifying the benefits, barriers, and risks associated with virtual workplace implementation. The implementation of a virtual workplace should only be considered after an adequate evaluation of the university's needs for the virtual
workplace, identification of job suitability, cost-benefit analysis, determination of impact (in terms of structure, people, task, and culture), and identification of the critical success factors have been carried out successfully, as indicated in the pre-implementation phase. Immediately after the pre-implementation phase has been concluded, the next stage (the implementation phase) involves a set of activities which include planning, adoption of the implementation strategy, and program launch. Finally, the last phase (post-implementation) involves program evaluation.
3 Top-Level Module of the Virtual Workplace CPN Model

The Hierarchical CPN model of the virtual workplace was developed based on the framework for the implementation of a virtual workplace in Fig. 1, where the various phases and activities involved in the implementation framework were used to develop a CPN model of the virtual workplace, as shown in Fig. 2. The top-level module of the virtual workplace CPN model in Fig. 2 consists of 10 places and 4 substitution transitions (drawn as rectangular boxes with double-line borders) representing the conceptual phase, pre-implementation phase, implementation phase, and post-implementation phase. Each of the substitution transitions has a substitution tag (submodule) beneath it that models in more detail the behaviour of the corresponding phase.
Fig. 2. The Top-level module of the virtual workplace CPN model
The relationship between the submodules and their functions is described as follows. The conceptual phase module in Fig. 3 represents the initial phase of the virtual workplace implementation process. The benefits, barriers, and risks are identified in this module, and thereafter a decision is made either to accept or abandon the virtual workplace implementation. Once a decision is made to accept a virtual workplace, the next phase (the pre-implementation phase) involves a series of activities such as the identification of the university's needs for the virtual workplace, job suitability, cost analysis, determination of the impact, and the identification of success factors, as indicated in the pre-implementation phase module in Fig. 4. Also, some critical decisions such as the selection of appropriate
434
J. Okpor and S. T. Apeh
implementation methods, selection of interested staff/participants, training, and the launch of virtual workplace programs are made in the implementation phase module in Fig. 5. Finally, the post-implementation phase module in Fig. 7 models two sets of activities, which include assessing program implementation and the subsequent analysis of the assessment results.
3.1 Conceptual Phase Module of the Virtual Workplace Model
Figure 3 shows the conceptual phase module, which is the submodule of the conceptual phase substitution transition in Fig. 2. The essence of the conceptual phase is to ensure that the university identifies the benefits, barriers, and risks associated with the implementation of a virtual workplace. Port place University Management is given the marking “1`Management” and signifies the first step of the conceptual phase. If a valid token, as specified by the arc expression mgt, is present in the input port place University Management, the transition setup committee is enabled. When the transition setup committee fires, a committee is set up in place VW committee. Transition Assess becomes enabled as soon as place VW committee receives its desired token(s). Place VWorkplace is connected to transition Assess and is declared with colour set VWrkplace, a record data type carrying tokens for benefits, barriers, and risk assessment. When the transition Assess occurs, it extracts tokens from place VWorkplace and places each of them on the corresponding output places VW benefits identified, VW barriers identified, and Asset Identified, as specified by the output arc expressions #Ben(V_Wrk), #Barr(V_Wrk), and #Risk_Ass(V_Wrk). When the place Asset Identified has received its required tokens, transition Identify Threat Vuln Likehd Imp becomes enabled and ready to fire. This transition has two input places. The first place, Threat Vuln and Likehd, is defined as the product colour set Thrt_Vuln_LHd, containing tokens for threats, vulnerabilities, and threat likelihood. The second place, Impact, is defined as the enumerated colour set Imp and contains three tokens: High_Impact, Medium_Impact, and Low_Impact. When the transition Identify Threat Vuln Likehd Imp fires, tokens are moved to place Risk Determination, and either transition Low Risk, Medium Risk, or High Risk becomes eligible for execution.
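As an illustration only (not part of the authors' model), the routing performed by the Assess transition can be mimicked in a few lines of Python; the token values below are hypothetical placeholders:

```python
# Illustration only (not the authors' CPN model): the Assess transition
# splits a record token (colour set VWrkplace) into its #Ben, #Barr and
# #Risk_Ass fields and routes each field to its output place, mirroring
# the arc expressions #Ben(V_Wrk), #Barr(V_Wrk), #Risk_Ass(V_Wrk).
# Token values are hypothetical placeholders.
v_wrk = {
    "Ben": "Reduced_office_cost",
    "Barr": "Poor_ICT_infrastructure",
    "Risk_Ass": "Data_security_risk",
}

places = {
    "VW_benefits_identified": [],
    "VW_barriers_identified": [],
    "Asset_Identified": [],
}

def assess(token, places):
    """Fire Assess: route each record field to the corresponding place."""
    places["VW_benefits_identified"].append(token["Ben"])
    places["VW_barriers_identified"].append(token["Barr"])
    places["Asset_Identified"].append(token["Risk_Ass"])

assess(v_wrk, places)
```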
A guard is placed on each of the three transitions to ensure that only the required threat likelihood and threat impact pass through. Place Risk level receives the token(s) immediately after the execution of transition Low Risk, Medium Risk, or High Risk, and triggers transition Make Decision. When transition Make Decision is triggered, the functions Acpt(RL1,RL2,VW_Ben,VW_Barr) and Rej(RL3,VW_Ben,VW_Barr) ensure that places Accept and Abandon meet the requirements of the output arc expressions.
3.2 Pre-implementation Phase Module of the Virtual Workplace CPN Model
The pre-implementation phase module shown in Fig. 4 consists of five sets of activities: the identification of needs, job suitability, cost analysis, determination of impact, and the identification of success factors. Two of these activities (job suitability and cost analysis) are modeled with substitution transitions, and the remaining three
Modeling and Analysis of Framework
435
Fig. 3. Conceptual Phase module of the virtual workplace CPN Model
activities (identification of needs, determination of impact, and identification of success factors) are modeled with elementary transitions. The port place Accept connects the conceptual phase module to the pre-implementation phase module. Transition Identify needs is enabled when the input port place Accept receives the desired token. When the transition Identify needs fires, place identified needs receives the tokens as specified by the output arc expression Nds,deci. The substitution transitions job suitability and cost analysis, with their associated submodules, model in more detail the activities involved in each submodule.
Fig. 4. Pre-implementation Phase module of the virtual workplace model
Port place Suitable Job serves as an interface between the Job suitability module and the cost Analysis module. The cost analysis module estimates the total cost and the expected savings. The transition Determine Impact models the expected impact of the implementation of a virtual workplace in the university, while transition Identify success factor identifies the critical success factors for successful virtual workplace implementation, as shown in Fig. 4.
3.3 Implementation Phase Module of the Virtual Workplace Model
Figure 5 shows the submodule associated with the implementation phase substitution transition. There are four sets of activities in this phase: planning, implementation strategy, and training, modeled by three substitution transitions, and program launch, modeled with an elementary transition.
Fig. 5. Implementation phase module of the virtual workplace model
The submodule of the planning substitution transition is shown in Fig. 6. Place Planning is defined as a record colour set containing “Policy_Devt”, “Eva_ICT_infra”, and “Sele_Remote_Access”. When the transition identify planning area fires, places Policy Development, Evaluate ICT Infra., and Selection of remote Access receive the desired token(s) according to the output arc expressions. Once the tokens are received, the transition Evaluate becomes enabled and ready to fire. When the transition Evaluate fires, tokens are moved from places Existing ICT Infra., Policy Guidelines, and Remote Access Technique, according to the functions defined by the output expressions DPG(PD,PG), EICT(EVAII,EIIE), and RATECH(SRA,rt), to places Developed Policy Guide, ICT Evaluation, and selected remote access, respectively. The transition finalize plan, with output arc expression Plan(PG,EIIE,rt), ensures that every part of the plan is captured. Place VW plan serves as a connection between the planning module and the implementation strategy module. The implementation phase module models all the activities required in the implementation phase, while the planning module handles all the planning activities. The CPN model of the virtual workplace cannot be presented in full in this paper due to page limitations.
Fig. 6. Planning submodule
3.4 Post-implementation Phase Module of the Virtual Workplace
The submodule of the post-implementation phase substitution transition is shown in Fig. 7. It contains two elementary transitions: conduct IMP Assmnt and Analyze Assmnt Result. This module models the program evaluation.
Fig. 7. Post-implementation phase module of virtual workplace model
Transition conduct IMP Assmnt models all activities relating to the conduct of the implementation assessment. When transition conduct IMP Assmnt fires, places Imp Assment Pilot, Imp Assment Phased, and Imp Assment full receive tokens concurrently for the conduct of the implementation assessment, as defined by the output arc expressions with functions PI_Assmnt(Assmnt,SSP), PH_Assmnt(Assmnt,GS), and FU_Assmnt(Assmnt,CAES). Transition Analyze Assmnt result immediately becomes enabled once places Imp Assment Pilot, Imp Assment Phased, and Imp Assment full receive valid tokens from transition Conduct Imp Assmnt. Transition Analyze Assmnt result ensures that the strengths and weaknesses of the virtual workplace program are critically examined.
4 Simulation and State Space Analysis
Simulation and state space analysis are the two methods of analysis provided by CPN Tools [19]. We conducted several simulations on the modules and submodules of the developed virtual workplace CPN model, using CPN Tools installed on an Intel(R) Pentium(R) CPU P6200 @ 2.13 GHz PC, to ascertain whether the modules and submodules work as expected. The simulations show that tokens were received and consumed in the correct order, and the model always terminates in the desired state. State space analysis was then applied to explore the standard behavioural properties of the virtual workplace CPN model. The analysis was carried out using the state space report, which revealed important information about the boundedness, home, liveness, and fairness properties of the CPN model. The state space of the virtual workplace CPN model has 12,889 nodes and 30,400 arcs and was generated in 36 s, while the SCC graph has 12,889 nodes and 30,400 arcs and was generated in 2 s. Since the SCC graph and the state space have the same number of nodes and arcs, no infinite occurrence sequences exist, and hence we can conclude that the model terminates appropriately. The liveness properties show that there are 24 dead markings in the developed CPN model. A query was used to investigate these dead markings and to determine whether they represent terminal states. The query result proves that these dead markings are end states of the model, that is, the states at which the CPN model will terminate. The fact that these dead markings are end states shows that the virtual workplace CPN model will produce the desired result when it terminates. The state space report also shows that there are no dead transitions in the CPN model. A transition is dead if there is no reachable marking in which it is enabled [19].
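The kind of check described above (generate the full state space, then verify that every dead marking is a desired end state) can be sketched for a toy net as follows. This is a simplified place/transition abstraction, not the authors' coloured model, and all place and transition names are invented:

```python
# Sketch: exhaustive state-space exploration of a tiny place/transition
# net, mirroring how a state-space report classifies dead markings.
from collections import deque

# Each transition: (tokens consumed, tokens produced), per place.
TRANSITIONS = {
    "setup_committee": ({"management": 1}, {"committee": 1}),
    "assess":          ({"committee": 1},  {"risk": 1}),
    "accept":          ({"risk": 1},       {"accepted": 1}),
    "abandon":         ({"risk": 1},       {"abandoned": 1}),
}

def fire(marking, consumed, produced):
    """Return the successor marking after firing one transition."""
    m = dict(marking)
    for p, n in consumed.items():
        m[p] -= n
    for p, n in produced.items():
        m[p] = m.get(p, 0) + n
    return frozenset((p, n) for p, n in m.items() if n > 0)

def state_space(initial):
    """Breadth-first exploration; returns (all markings, dead markings)."""
    seen, dead, queue = {initial}, set(), deque([initial])
    while queue:
        marking = queue.popleft()
        m = dict(marking)
        succs = [fire(m, c, pr) for c, pr in TRANSITIONS.values()
                 if all(m.get(p, 0) >= n for p, n in c.items())]
        if not succs:              # no enabled transition: dead marking
            dead.add(marking)
        queue.extend(s for s in succs if s not in seen)
        seen.update(succs)
    return seen, dead

markings, dead = state_space(frozenset({("management", 1)}))
# Query analogous to the paper's: every dead marking is a desired
# terminal state (the process ended in either "accepted" or "abandoned").
assert all(any(p in ("accepted", "abandoned") for p, _ in d) for d in dead)
```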
5 Conclusions
In this paper, we have presented a framework designed for the implementation of a virtual workplace in Nigerian universities. A CPN model was developed based on the framework for the implementation of a virtual workplace using CPN Tools. The CPN model covers the four phases and the activities involved in the virtual workplace implementation framework. The developed virtual workplace CPN model was verified using model simulation and state space analysis. The results of the analysis show that the CPN model works as expected. Therefore, the CPN model, being an executable model and also a graphical representation of the developed framework, will help to provide a better understanding of the virtual workplace implementation framework. Future work may be geared towards the development of an animation interfaced to the virtual workplace CPN model.
References
1. International Labour Organization: Teleworking during the COVID-19 Pandemic and Beyond: A Practical Guide, pp. 1–48 (2020)
2. Pauline, E.O., Abiodun, A., Hindu, J.A.: Telecommuting: a panacea to COVID-19 spread in Nigerian Universities. Int. J. Innov. Econ. Dev. 6(1), 47–60 (2020)
3. PwC S.A.: HR Quarterly, 1–16 (2015)
4. Richard, E., Carl, J.K., Christopher, G.P., Antony, I.S.: The new workplace: are you ready? How to capture business value. IBM Global Serv., 1–12 (2011)
5. Ogunode, N.J., Abigeal, I., Lydia, A.E.: Impact of COVID-19 on the higher institutions development in Nigeria. Electron. Res. J. Soc. Sci. Hum. 2(2), 126–135 (2020)
6. Cascio, W.F.: Managing a virtual workplace. Acad. Manag. Exec. 14(3), 81–90 (2000)
7. Caldow, J.: Working outside the box: a study of the growing momentum in telework. Inst. for Electronic Government, IBM Corp., 1–14 (2009)
8. Obisi, C.: The empirical validity of the adjustment to virtual work arrangement by business organisations in Anambra State, Nigeria. Int. J. Sci. Res. Educ. 9(3), 173–181 (2016)
9. Koenig, B.E., Henderson, D.K., Mokhtarian, P.L.: The travel and emissions impacts of telecommuting for the State of California telecommuting pilot project. Transp. Res. C 4(1), 13–32 (1996)
10. Choo, S., Mokhtarian, P.L., Salomon, I.: Does telecommuting reduce vehicle-miles traveled? An aggregate time series analysis for the U.S. Transportation 32(1), 37–64 (2005)
11. Henderson, D.K., Koenig, B.E., Mokhtarian, P.L.: Travel diary-based emissions analysis of telecommuting for the Puget Sound demonstration project. Research Report UCD-ITS-RR-94-26, 1–54 (1994)
12. Thompson, C., Caputo, P.: The Reality of Virtual Work: Is Your Organization Ready? AON Consulting, 1–12 (2009)
13. OnPoint Consulting: Success factors of top performing virtual teams. Research report, 1–14 (2015)
14. Madsen, S.R.: The benefits, challenges, and implications of teleworking: a literature review. Cult. Relig. J. 1(1), 148–158 (2011)
15. van der Aalst, W.M.P.: Modelling and analysis of production systems using a Petri net based approach. In: Boucher, T.O., Jafari, M.A., Elsayed, E.A. (eds.) Proceedings of the Conference on Computer Integrated Manufacturing in the Process Industry, pp. 179–193 (1994)
16. Kristensen, L.M., Mitchell, B., Zhang, L., Billington, J.: Modelling and initial analysis of operational planning processes using coloured Petri nets. In: Proceedings of the Workshop on Formal Methods Applied to Defence Systems, Conferences in Research and Practice in Information Technology, vol. 12, pp. 105–114. Australian Computer Society (2002)
17. Hafilah, D.L., Cakravastia, A., Lafdail, Y., Rakoto, N.: Modeling and simulation of Air France baggage handling system with colored Petri nets. IFAC-PapersOnLine, 2443–2448 (2019)
18. Jensen, K., Kristensen, L.M., Wells, L.: Coloured Petri Nets and CPN Tools for modelling and validation of concurrent systems. Int. J. Softw. Tools Technol. Transf. (STTT) 9(3–4), 213–254 (2007)
19. Jensen, K., Kristensen, L.M.: Coloured Petri Nets: Modelling and Validation of Concurrent Systems. Springer-Verlag (2009)
Modeling and Experimental Verification of Air-Thermal and Microwave-Convective Presowing Seed Treatment
Alexey A. Vasiliev, Alexey N. Vasiliev, Dmitry A. Budnikov, and Anton A. Sharko
Federal Agroengineering Scientific Center VIM, Moscow, Russian Federation [email protected]
Abstract. The use of electrophysical effects for presowing treatment of seeds is among the most effective methods of improving their sowing quality. However, application of such methods is limited by the fact that specialized new-type technological equipment is normally required for their implementation in seed processing lines. This problem can be solved in a simpler way, where processing lines designed for forced ventilation are applied for presowing treatment of grain crops. This work deals with the issues related to the use of aerated bins and convection-type microwave grain dryers for presowing processing of seeds. The requirement for homogeneity of external field distribution over a dense grain layer has to be met in the course of processing, and it is necessary to ensure both the preset values of temperature and the dynamics of its change. Computer simulations were carried out to check whether the applied processing facilities are physically capable of maintaining the required operation modes during these seed treatment procedures. For the purposes of modeling, the entire dense grain layer was divided into three sections, and the heat-and-moisture exchange processes developing during treatment of seeds were simulated in each section. The curves of change of temperature and moisture content in seeds during processing were calculated. Simulation results for heat exchange in the course of air-thermal treatment and for convective-microwave processing have proved the implementability of the required operation modes of presowing treatment. The results of field experiments on seed presowing processing in aerated bins and in convective-microwave grain dryers made it possible to evaluate the advantages and effectiveness of each method. Thus, air-thermal treatment stimulates the development of the secondary root system and intensive growth of the green-basis weight of plants. Application of convective-microwave processing of seeds makes it possible to increase the number of productive culms, as well as that of grain heads (ears) per plant. Therefore, specific modes of electrophysical exposure in the course of seed presowing treatment can be selected depending on the goals that have to be achieved in each particular case.
Keywords: Air-thermal treatment · Convective-microwave processing · Computer simulations · Presowing treatment · Seeds
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 440–451, 2021. https://doi.org/10.1007/978-3-030-68154-8_40
Modeling and Experimental Verification of Air
441
1 Introduction
Presowing treatment of seeds is an essential link in the chain of technological operations within the entire seeding cycle. Prior to chemical treatment with the use of chemical solutions for disinfection, seeds have to be calibrated and culled. Seeds are vernalized and hardened off depending on the particular plant species. All these operations are designed to improve the sowing quality of seeds and, therefore, to increase yields [1]. Electrophysical methods of seed treatment play an important role in presowing processing technologies [2–4]. Wide-scale experimental research [5–8] has proved the effectiveness of various methods of electrophysical treatment. However, their application at grain-growing farms is limited by the technological performance of existing processing equipment. It is necessary that a newly-designed processing plant or treatment technology be optimally integrated into the technological processes of seed presowing treatment adhered to at a particular agricultural enterprise. This requirement is, to a major extent, fulfilled for grain dryers, including aerated ones, as well as plants for drying and disinfection of grain with the use of microwave fields [9–11]. In case presowing treatment lines are used to implement convective-microwave processing of seeds, one has to be sure that the requirements for processing modes are complied with and no damage to seeds occurs. The objective of this research is to study, with the help of computer simulations, the possibility of applying aerated bins and convective-microwave grain dryers for seed presowing processing, and to evaluate the effectiveness of such presowing treatment.
2 Materials and Methods
In order to study the implementability of grain presowing treatment in such processing lines, the air-thermal and convective-microwave seed treatment processes were modeled. The basis of the simulation model was the following system of equations and transfer functions, which makes it possible to calculate heat-and-moisture exchange in an elementary grain layer [12, 13]:

$$T(p) = T_0(p)\,\frac{1}{p}\left(1 + p e^{-p\tau_1} - e^{-p\tau_1}\right) - A_1\,\theta(p)\,p - A_2\,W(p)\,p, \tag{1}$$

$$D(p) = D_0(p)\,\frac{1}{p}\left(1 + p e^{-p\tau_1} - e^{-p\tau_1}\right) - \frac{1}{A_3}\,W(p)\,p, \tag{2}$$

$$\theta(p) = T(p)\,\frac{1}{A_4} - T_0(p)\,\frac{1}{p A_4}\left((1 - A_4)\left(1 - e^{-p\tau_1}\right) + p e^{-p\tau_1}\right), \tag{3}$$

$$\theta(p) = \left(\theta_0 - T(p)\right)\frac{1}{T_1 p + 1} + \theta_0 + A_5\,W(p)\,p + A_5\,W_0\,\frac{1}{p} + A_6\,Q_v, \tag{3.1}$$

$$W(p) = W_{eq}\,\frac{K}{p + K}, \tag{4}$$

$$W_{eq} = \left(\frac{-\ln(1 - F)}{5.47\cdot 10^{-6}\,(T + 273)}\right)^{0.435}, \tag{5}$$

$$K = 7.4897 - 0.1022\,T - 0.6438\,W + 0.0134\,T V + 0.0148\,T W + 0.0029\,V W - 0.0026\,T^2 + 0.0071\,W^2, \tag{6}$$

$$F = \frac{745\,D}{(622 + D)\,e^{\,0.622 + \frac{7.5 T}{238 + T}}}, \tag{7}$$

where $A_1 = \frac{c_g \gamma_g}{\varepsilon c_a \gamma_a}$; $A_2 = \frac{\gamma_g r'}{100\,c_a \gamma_a \varepsilon}$; $A_3 = \frac{\varepsilon \gamma_a 10^3}{\gamma_g}$; $A_4 = \frac{\alpha_q s_v}{\varepsilon c_a \gamma_a}$; $A_5 = \frac{r'}{c_g \varepsilon}$; $A_6 = \frac{1}{c_g \rho_g}$.

Here $T$ is the air temperature, °C; $D$ is the air moisture content, g/kg; $W$ is the grain moisture content, %; $F$ is the relative humidity of air, a.u.; $W_{eq}$ is the equilibrium moisture content of grain, %; $K$ is the drying coefficient, 1/h; $\theta$ is the grain temperature, °C; $Q_v$ is the specific microwave power dissipated in the dielectric medium, W/m³; $V$ is the air velocity, m/s; $c_a$ is the specific heat capacity of air, kJ·kg⁻¹·°C⁻¹; $\rho_g$ is the volumetric density of grain on a dry basis, kg/m³; $\varepsilon$ is the pore volume of the grain layer; $c_g$ is the specific heat capacity of grain, kJ·kg⁻¹·°C⁻¹; $r'$ is the specific heat of evaporation of water, kJ/kg; $\gamma_g$ is the grain bulk weight, kg/m³; $\gamma_a$ is the volumetric density of air, kg/m³; $\tau$ is time, h.

Equation (3) was used in the computer model of air-thermal treatment, while Eq. (3.1) was applied to that of convective-microwave presowing treatment of seeds. The developed computer model consists of the models of the corresponding elementary grain layers. The thickness of an elementary layer equals the size of one seed. Therefore, the sequence of models for all elementary layers makes it possible to describe the process of heat-and-moisture exchange in a dense grain layer of any thickness, owing to the condition that the output parameters of each preceding layer are equal to the input parameters of the next one. In the process of air-thermal presowing treatment, it is important not only to maintain the parameters of the air within the required range but also to ensure the required grain temperature and exposure time over the entire grain layer. In air-thermal presowing treatment, seeds are exposed to heated air; heating of seeds by microwave fields is not provided for, so conventional electric heaters are used to heat the air. Therefore, electric heater control has to be considered while modeling the process. In this work, the applicability of aerated bins with radial air distribution for air-thermal presowing treatment was studied.
A grain layer 1.2 m thick is held in such a bin in a stable manner and is blown through with air directed from the central air duct towards the perforated outer wall. For the modeling purposes, the entire grain layer was divided into three zones of equal thickness. However, the air velocity differs in each zone as a result of the radial direction of the air flux from the center of the bin: the cross-section area of the grain layer increases while the aggregate air flow remains constant. While modeling the process, the average value of air velocity was assumed to be equal to 0.7 m/s, 0.5 m/s and 0.7 m/s for the first, second and third zone, respectively. Simulink software was used for developing the computer model. The diagram of the developed model is shown in Fig. 1.
Fig. 1. Simulink model for air-thermal presowing treatment of seeds in bins with radial air distribution.
Units ‘Interpreted MATLAB Fcn’ [14] (F = f(tau), Ta = f(tau)) serve for the definition of the temperature and relative humidity change profiles of ambient air, in correspondence with statistically processed data from meteorological services. Unit ‘Q0’ is designed for setting the initial value of grain temperature. The dependence of the relative humidity of air fed into the grain layer on the capacity of the electric heater is set with the help of unit ‘Interpreted MATLAB Fcn’ (F = f(Pk)). Unit ‘C’ is used to set the temperature level to which ambient air has to be heated in order to provide the required seed presowing processing algorithm. Oscillograph units (F, W, Q, T) make it possible to monitor the change of the parameters of seeds and air over the layer thickness in the course of treatment. In modeling presowing treatment with heated air, the requirements of the process control algorithm for aerated bins were taken into account [15]. In accordance with this algorithm, the following operations have to be performed:
– to measure the initial value of grain temperature,
– to heat the inlet ambient air, with the help of the electric heater, to a temperature equal to 1.3 times the initial grain temperature,
– to blow heated air through the grain layer during 1.1 h,
– to cool seeds with ambient air during 1.1 h,
– to blow air heated to 1.8 times the initial grain temperature through the layer during 1.1 h,
– to cool seeds with ambient air.
It is assumed that the initial grain temperature is 20 °C. The results of modeling enable monitoring of the change of seed temperature, that of the air blowing through the grain layer, as well as the moisture content of seeds. These results are presented in Figs. 2, 3 and 4.
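The control algorithm listed above can be sketched as a time-based setpoint schedule. The 1.3x and 1.8x setpoints, the 1.1 h intervals and the 20 °C initial grain temperature come from the text; the function name and the convention that None means "ambient air, heater off" are ours:

```python
# Sketch of the air-thermal treatment algorithm as a setpoint schedule.
# theta0 is the measured initial grain temperature (20 degC in the text).
def air_setpoint(t, theta0=20.0):
    """Target inlet-air temperature (degC) at time t (hours);
    None means cooling with ambient air (heater switched off)."""
    if t < 1.1:          # heat air to 1.3 x initial grain temperature
        return 1.3 * theta0
    elif t < 2.2:        # cool seeds with ambient air
        return None
    elif t < 3.3:        # heat air to 1.8 x initial grain temperature
        return 1.8 * theta0
    else:                # final cooling with ambient air
        return None

# Sample the schedule every 0.5 h over the whole treatment cycle.
schedule = [(0.5 * k, air_setpoint(0.5 * k)) for k in range(9)]
```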
3 Results and Discussion
3.1 Air-Thermal Treatment of Seeds
Heated air is the major external factor in this kind of presowing treatment of seeds. Therefore, it is important to ensure conditions under which the air blowing through the grain layer can heat seeds to the specific required temperature value. The analysis of the diagrams shown in Fig. 2 allows the conclusion that a slight (0.11 h) temperature delay inside the grain layer, and the temperature change over the grain layer compared to that at its input, do not considerably affect air temperature variations within the grain layer.
a) temperature in the input of grain layer, b) temperature in the output of Zone 1, c) temperature in the output of Zone 2, d) temperature in the output of Zone 3 Fig. 2. Temperature change of air blowing through the grain layer.
It means that the requirements for the temperature mode of seed presowing treatment are complied with. A short-term air temperature drop within inter-seed spaces can be explained by water transfer from the areas of the grain layer located close to the air inlet towards its downstream areas. This can be seen from the time dependences of grain moisture content presented in Fig. 3. Variations of seed moisture content do not exceed 3%, and the moisture content upon completion of the presowing process is identical to that at the beginning. This is an essential feature, since seeds must not be overdried, which would result in the loss of their internal water. Modeling results for temperature change in seeds under presowing treatment are presented in Fig. 4. The diagrams shown in Fig. 4 give reason to conclude that the requirements of the control algorithm for seed presowing treatment in aerated bins are fully complied with, and such processing lines can be applied for the purposes under this study.
a) moisture content in Zone 1, b) moisture content in Zone 2, c) moisture content in Zone 3 Fig. 3. Results of modeling the change of moisture content in grain layer during presowing treatment with heated air.
a) grain temperature in Zone 1, b) grain temperature in Zone 2, c) grain temperature in Zone 3 Fig. 4. Temperature change of seeds during air-thermal presowing treatment in aerated bins.
3.2 Dependence of Operation Modes of Seed Presowing Treatment on Parameters of Convective-Microwave Zone
Microwave fields have the advantage of the combined character of the effects they produce on seeds in the course of presowing processing, including thermal and electromagnetic effects. Combining the microwave method with thermal treatment enables various seed processing options that may yield specific useful reactions in plant vegetation processes. Seed presowing treatment of this kind is assumed to be a promising method; however, its implementation requires special equipment. Based on the analysis of electrophysical methods applicable to presowing treatment technologies, it was found that one of the critical aspects is providing homogeneous effects over the entire depth of the grain layer under processing. That is why it is advisable to model the process of air-thermal treatment of seeds with the use of microwave fields in order to specify the basic requirements for the processing line design. The equations described above were applied for modeling heat-and-moisture exchange in the grain layer. The thickness of the grain layer exposed to processing in the convective-microwave plant is 15 cm to 20 cm; therefore, the average thickness value of 18 cm was chosen for the simulations. For descriptive reasons, this conditionally thick grain layer was divided into three sections of 6 cm each. The computer model for convective-microwave presowing seed treatment, designed in Simulink, is presented in Fig. 5.
Fig. 5. Block diagram of computer model for convective-microwave presowing seed treatment
This computer model enables setting the values of the input air parameters (temperature and relative humidity); model units Tvh and Fvh serve this purpose. Magnetron operation control is performed with a two-step action controller represented by unit Relay. Operation control of the magnetron is organized so that it is switched off when grain temperature in the first zone attains 45 °C. This temperature value was selected in order to avoid overheating of seeds in the course of their thermal treatment. The graphs of seed temperature change in each layer (unit ‘Q’), grain moisture content (unit ‘W’), air relative humidity at the output of each layer (unit ‘Fvih’), and temperature of the drying agent at the output of each layer (unit ‘Tvih’) are displayed on oscillographs. The results of modeling are shown in Figs. 6 and 7.
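The two-step (relay) magnetron control described above can be sketched as follows; the 45 °C switch-off threshold is from the text, while the 2 °C hysteresis band is an assumed illustration value:

```python
# Minimal sketch of two-step (relay) magnetron control: switch off at
# 45 degC grain temperature in the first zone, switch back on below a
# lower threshold. The 2 degC hysteresis band is an assumption.
class MagnetronRelay:
    def __init__(self, t_off=45.0, hysteresis=2.0):
        self.t_off = t_off
        self.t_on = t_off - hysteresis
        self.on = True

    def update(self, grain_temp):
        """Return magnetron state (True = on) for the measured grain temp."""
        if self.on and grain_temp >= self.t_off:
            self.on = False
        elif not self.on and grain_temp <= self.t_on:
            self.on = True
        return self.on

relay = MagnetronRelay()
states = [relay.update(t) for t in (30.0, 44.0, 45.0, 44.5, 42.0, 43.0)]
```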
Fig. 6. Change of grain temperature under microwave heating, in three sublayers
From the diagrams presented in Fig. 6, it can be seen that different layers are heated to different temperatures. However, this heterogeneity disappears and the grain temperature becomes uniform over the thick layer as the grain passes through the entire dryer, because while moving inside the dryer the grain is mixed and processed by microwave fields of different intensities. Magnetron operation control ensures the required modes of presowing grain treatment without exceeding the set values of seed temperature. The diagrams of seed moisture content in each layer (see Fig. 7) show that no water loss occurs in seeds in the process of presowing treatment. Variations of grain moisture content during treatment do not exceed 0.2% and will not affect the sowing quality of seeds.
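The sublayer chaining used in the model, where the output air state of each elementary layer is the input of the next, can be illustrated with a deliberately simplified first-order heat-balance step. The exchange coefficients below are arbitrary illustration values, not taken from the model:

```python
# Simplified illustration of chaining three sublayer models: outlet air
# of each sublayer feeds the next. First-order heat balance only; the
# coefficients (dt, k_exchange, the 0.5 exchange ratio) are assumptions.
def layer_step(t_air_in, theta, dt=0.01, k_exchange=5.0):
    """One time step for one sublayer; returns (outlet air temp, grain temp)."""
    theta_new = theta + k_exchange * (t_air_in - theta) * dt
    t_air_out = t_air_in - 0.5 * (t_air_in - theta)  # air cools toward grain
    return t_air_out, theta_new

def simulate(t_inlet, thetas, steps=500):
    """March the three sublayers in time; thetas holds grain temperatures."""
    for _ in range(steps):
        t_air = t_inlet
        for i in range(len(thetas)):
            t_air, thetas[i] = layer_step(t_air, thetas[i])
    return thetas

# Inlet air at 45 degC, all sublayers initially at 20 degC: the first
# sublayer heats fastest, mirroring the heterogeneity seen in Fig. 6.
temps = simulate(t_inlet=45.0, thetas=[20.0, 20.0, 20.0])
```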
a) moisture content in Layer 1, b) moisture content in Layer 2, c) moisture content in Layer 3 Fig. 7. Change of grain moisture content in the process of presowing treatment with the use of microwave fields
3.3 Experimental Tests of Seed Presowing Treatment Modes
Experimental tests of seed presowing treatment modes were performed on seeds of winter barley of grade ‘Elita’. At the first step of the tests, presowing treatment was carried out with the use of air-thermal processing. Seeds were treated in aerated bins with radial distribution of air, adhering to the processing modes described above in this article.
1 – receiving container, 2 – microwave active chamber, 3 – magnetrons, 4 – airduct of magnetron cooling system, 5 – input airduct Fig. 8. Convective-microwave unit for presowing treatment of seeds.
The convective-microwave processing unit was applied whose layout is shown in Fig. 8. Grain is fed into the module from the receiving container (1). Waveguides provide the link between the microwave active chamber (2) and the magnetron unit (3). Air for cooling the power supplies of the magnetrons is blown through via airduct (4). Air for cooling seeds enters the grain layer through airduct (5). Two different physical methods were studied in the course of presowing treatment. The first one, called ‘air-thermal treatment’, heats grain in a dense layer by blowing hot air through it. The second one, called ‘convective-microwave treatment’, heats grain with microwave fields with simultaneous cooling by air. Field experiments were carried out for adequate interpretation of the results of presowing treatment. For this purpose, experimental working plots were used for seeding grain treated after a two-year lying period [16]. The results of treatment were evaluated from the day of seedling emergence and during the whole cycle of plant vegetation. In accordance with accepted practice in agricultural science, all necessary measurements were made during the whole period of plant vegetation, including the formation of stems and ears. Productivity indicators of plants, such as grain weight and the number of grains per ear, were evaluated after harvesting [17]. The results of observations are presented in Table 1.

Table 1. Experimental data on the results of pre-sowing seed treatment

Indicators for evaluating the effectiveness    Without      Air-thermal   Microwave field
of pre-sowing treatment                        processing   treatment     treatment
Seedling density                               516          491           486
Number of secondary roots                      2.7          5.4           3.5
Plant length                                   18.9         26.0          21.2
Number of leaves                               6.8          9.9           7.9
Weight of 20 green plants                      32.5         71.4          44.3
Number of plants per m2                        488          480           476
Number of stems per m2                         1030         871           984
Number of productive stems per m2              809          788           877
Number of ears per 1 plant                     1.58         1.60          1.79
Plant height, cm                               76.7         77.9          81.9
Ear length, cm                                 6.9          7.6           7.4
Number of grains per ear, pcs                  15.0         16.1          15.4
Weight of 1000 grains, g                       45.5         48.4          46.0
Grain weight per ear, g                        0.72         0.82          0.77
Productivity, c/ha                             57.5         64.0          67.0
The data presented in Table 1 make it possible to evaluate the effectiveness of each presowing treatment technology for particular purposes. For instance, air-thermal treatment is preferable for winter crop cultivation because it ensures better conditions
450
A. A. Vasiliev et al.
for secondary root development. Air-thermal presowing treatment is also advisable in agricultural technologies oriented toward the production of crops for green fodder. The best crop yield was obtained with convective-microwave treatment of seeds. This treatment technology made it possible to increase the number of ears per plant, and it is probably this indicator that enables productivity growth. It has to be taken into account that the results of experiments may differ depending on microclimate variations. Therefore, additional research has to be carried out to study the possible effects of climatic factors.
4 Conclusion

The results of both computer simulations and experimental research give strong grounds to claim that technological equipment designed for grain drying by forced ventilation can be effectively applied in presowing seed treatment technologies. Aerated bins make it possible to implement the air-thermal presowing processing technique. In convective-microwave grain dryers, the required conditions for electrophysical processes are implementable as well. The results of field experiments have shown that the application of various external factors in the process of presowing seed treatment is an effective tool for controlling the structure of grain yield. For example, the air-thermal method may be advisable for winter crops, for which the development of secondary roots is essentially important. The convective-microwave technique makes it possible to stimulate the development of multiple productive stems and ears per plant.
References

1. Isaeva, A.O., Kirilin, G.M., Iskakov, A.Y., Giniyatulina, A.K.: Effects of vernalizing and seeding method on output yield of carrot. In: Selection of Collected Papers: Scientific Society of the 21st Century Students. Technical Science. E-print selection of collected papers based on materials of the 67th International Students Scientific-Practical Conference, pp. 119–123 (2018). (Russ.)
2. Shabanov, N.I., Ksenz, N.V., Gazalov, V.S., et al.: The substantiation of dose for presowing treatment of cereal seeds in electromagnetic field of industrial frequency. Ambient Sci. 5(2), 20–24 (2018)
3. Dukic, V., Miladinov, Z., Dozet, G., et al.: Pulsed electro-magnetic field as a cultivation practice used to increase soybean seed germination and yield. Zemdirbyste Agric. 104(4), 345–352 (2017)
4. Badridze, G., Kacharava, N., Chkhubianishvili, E., et al.: Effect of UV radiation and artificial acid rain on productivity of wheat. Russ. J. Ecol. 47(2), 158–166 (2016)
5. Gilani, M.M., Irfan, A., Farooq, T.H., et al.: Effects of pre-sowing treatments on seed germination and morphological growth of Acacia nilotica and Faidherbia albida. Scientia Forestalis 47(122), 374–382 (2019)
6. Luna, B., Chamorro, D., Perez, B.: Effect of heat on seed germination and viability in species of Cistaceae. Plant Ecol. Divers. 12(2), 151–158 (2019)
7. Mildaziene, V., Aleknaviciute, V., Zukiene, R., et al.: Treatment of common sunflower (Helianthus annuus L.) seeds with radio-frequency electromagnetic field and cold plasma induces changes in seed phytohormone balance, seedling development and leaf protein expression. Sci. Rep. 9(1), 1–12 (2019). Article no. 6437
8. Nurieva, K.O.: Electrophysical factors for treating seeds. In: Collected Papers 'The Youth: Education, Science, Creating – 2019', based on materials of a regional scientific-practical conference, pp. 122–124 (2019). (Russ.)
9. Budnikov, D.A., Vasiliev, A.N., Vasilyev, A.A., Morenko, K.S., Mohamed, S.I., Belov, F.: Application of electrophysical effects in the processing of agricultural materials. In: Kharchenko, V., Vasant, P. (eds.) Advanced Agro-Engineering Technologies for Rural Business Development, pp. 1–27 (2019). https://doi.org/10.4018/978-1-5225-7573-3.ch001
10. Vasiliev, A.N., Ospanov, A.B., Budnikov, D.A.: Controlling reactions of biological objects of agricultural production with the use of electrotechnology. Int. J. Pharm. Technol. 8(4), 26855–26869 (2016)
11. Wang, S., Wang, J., Guo, Y.: Microwave irradiation enhances the germination rate of tartary buckwheat and content of some compounds in its sprouts. Pol. J. Food Nutr. Sci. 68(3), 195–205 (2018)
12. Ospanov, F.B., Vasilyev, A.N., Budnikov, D.A., Karmanov, D.K., Vasilyev, A.A., et al.: Improvement of grain drying and disinfection process in microwave field. Nur-Print, Almaty, 155 p. (2017)
13. Vasilyev, A.A., Vasilyev, A.N., Dzhanibekov, A.K., Samarin, G.N., Normov, D.A.: Theoretical and experimental research on pre-sowing seed treatment. IOP Conf. Ser.: Mater. Sci. Eng. 791(1), 012078 (2020). https://doi.org/10.1088/1757-899X/791/1/012078
14. Dabney, J., Harman, T.L.: Mastering Simulink 4. Prentice-Hall, Upper Saddle River, 412 p. (2001)
15. Udintsova, N.M.: Mathematical substantiation of electric heater capacity for presowing treatment of seeds. Deposited manuscript, All-Russian Institute of Scientific and Technical Information, No. 990-B2004, 10.06.2004. (Russ.)
16. Sabirov, D.C.: Improvement of seed presowing treatment effectiveness. In: Collected Papers 'Electric Equipment and Technologies in Agriculture', based on materials of the 4th International Scientific-Practical Conference, pp. 222–225 (2019). (Russ.)
17. Burlaka, G.A., Pertseva, E.V.: Effect of presowing treatment of seeds on germination readiness and germination ability of spring wheat. In: Proceedings of the International Scientific-Practical Conference 'Innovative Activities of Science and Education in Agricultural Production', pp. 301–305 (2019). (Russ.)
Modeling of Aluminum Profile Extrusion Yield: Pre-cut Billet Sizes

Jaramporn Hassamontr1 and Theera Leephaicharoen2

1 King Mongkut's University of Technology North Bangkok, Bangkok 11000, Thailand
[email protected]
2 Thai Metal Aluminum Company Limited, Samutprakarn 10280, Thailand
Abstract. In some manufacturing industries, the starting raw material size is critical to determining how much material will be scrapped by the end of a process. Process planners working in the aluminum profile extrusion industry, for example, need to select appropriate aluminum billet sizes to be extruded to meet specific customer orders while minimizing the resulting scrap. In this research, the extrusion process is classified according to how billets are extruded, namely multiple extrusions per billet and multiple billets per extrusion. Mass balance equations for each configuration are used to formulate a yield optimization problem where billets are pre-cut to specific sizes and kept in stock. Both models are non-linear and discrete. The solution procedure is developed using an enumeration technique to identify the optimal solution. It is validated with extrusion data from an industrial company. Its effectiveness is demonstrated using various case studies.

Keywords: Aluminum profile extrusion · Yield optimization · Mass balance equation · Billet selection problem · Scrap reduction
1 Introduction

Most manufacturing processes or operations can be optimized to save time and cost. Yet theory and practice are sometimes far apart. Many practitioners in industry deem optimization too idealistic, something that cannot be done in practice. Thus, many manufacturing processes are not yet optimized in the most effective manner. In this research, an attempt has been made to apply optimization techniques to the aluminum profile extrusion process. The aluminum extrusion process is used to manufacture long aluminum profiles that are widely used in construction, mechanical and electrical appliances. Obvious examples are window frames, heat sinks and steps on SUVs. The global aluminum extrusion market value was $78.9 billion in 2018 [1] and is still growing. The aluminum extrusion process typically involves three main steps. First, aluminum billets are cut into specific sizes, heated and then loaded into a container, one billet at a time. Then each billet is pushed through a die by a hydraulic press. An aluminum profile is formed on the other side of the die. Once the profile reaches a prespecified extrusion length, it is cut by the saw in front of the die. The profiles are then straightened by a stretcher and cut to customer-specified lengths.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 452–462, 2021. https://doi.org/10.1007/978-3-030-68154-8_41

Figure 1 illustrates an example of an extrusion plan where one billet is extruded
three times, i.e. three identical extrusion lengths. Due to stretcher capacity, this extrusion length is constrained to within a certain range, such as from Rmin to Rmax.
Fig. 1. Aluminum extrusion process: 3 extrusions per billet
Given a customer order, consisting of product length L and order quantity n, the process planner needs to select available pre-cut billet sizes bi and extrusion lengths LE1 to meet the individual demand whilst producing minimal scrap. Many short billets need to be extruded consecutively until the extrusion length exceeds Rmin before being cut by the saw. This type of configuration will be called multiple billets per extrusion in this paper. On the other hand, a large billet can be extruded and cut by the saw more than once, with each extrusion length between Rmin and Rmax. This will be called multiple extrusions per billet, though a single extrusion per billet is also included in this category. So far, in the literature, there is no explicit model that considers both extrusion setups simultaneously. The objective of this research is to formulate a yield optimization model for the extrusion process using mass balance equations. Available pre-cut billet sizes, process setup and customer order requirements are taken into account in the model. It is validated with actual data from an aluminum extrusion company. The model is then used to illustrate possible yield improvements through case studies. Specifically, it is shown how two objective functions, namely minimizing total billet weight used versus maximizing yield, provide different optimal solutions. The effect of extrusion die design on yield is also studied.
2 Literature Review

Aluminum extrusion yield was first investigated by Tabucanon [2]. A dynamic programming model was proposed to identify an optimal set of billets to be stocked that would minimize overall scrap. Masri and Warburton [3, 4] formulated yield optimization as a billet selection problem. Using past customer order data, the goal was to identify optimal billet sizes that should be stocked. Their model considered a global problem: multiple profile cross sections and lengths. Multiple pushes within the same billet were not considered, due to longer cycle time. Waste from each billet size-customer order pair was calculated a priori, though not explicitly shown. Hajeeh [5] formulated a mathematical model to optimize cutting aluminum logs into billets to fill customer orders. The scraps considered came from log cutting, butt scrap and profile cutting. The goal was to minimize overall scrap. Again, waste from each billet assignment was calculated before optimization. Another approach to scrap reduction was introduced by researchers in the mechanical engineering discipline. Reggiani, Segatori, Donati and Tomesani [6] used finite element simulation to predict charge welds between two consecutive billets that must be scrapped. Oza and Gotowala [7] used HyperXtrude, a commercial CAD software for extrusion, to study the effect of ram speed, billet and die temperatures on traverse weld length. Ferras, Almeida, Silva, Correia and Silva [8] conducted an empirical study on which process parameters contribute the most to material scrap. In this research, an alternative approach to extrusion yield optimization is proposed. Instead of considering a global problem where multiple customer orders are optimized simultaneously, the problem size can be greatly reduced by considering one customer order at a time. Secondly, mass balance equations are used explicitly in the model, allowing a clear indication of how billets are extruded in the optimal solution.
Thirdly, both general extrusion setups, including multiple pushes, are considered. As will be shown later, there are cases where multiple pushes can lead to lower scrap. Last, it is possible to identify the impact of improving hardware constraints. For example, how die design can improve the optimal solution.
3 Research Methodology

3.1 Problem Formulation
There are three types of material scrap in the aluminum extrusion process. The first is incurred from cutting the aluminum log into specific billet lengths. The second comes from in-process extrusion and is denoted by the billet's backend scrap e, in kg, and the aluminum material remaining in the die pot, P, also in kg, after the die is removed from the extrusion press. These are denoted in Fig. 1. The third type of waste is induced by profile cutting. It comes in several forms, as follows.

• The aluminum portion between the die and the cutting saw, Ls, is usually too short for customer use and is therefore discarded.
• The parts of the profile gripped by pullers for stretching are usually deformed and later cut off. These are denoted by the head loss h and tail loss t in Fig. 1. Generally, the portion of the profile that comes out first and is gripped by the puller, h, is less than Ls. Thus, the scrap h is already included in Ls and can be considered zero in most cases.
• An allowance, d, shown in Fig. 2, is made to avoid irregularities within the profiles. These irregularities may come from stop marks while the profile is being cut, or where material from two different billets merges within the same profile. They may lead to profile failures under large loads, so manufacturers cut this portion out.
• Another allowance, though not shown in either figure, is the material scrap remaining after the extruded profile is cut to a pre-determined number of pieces, say q1, of the customer order's length L.

Other input variables for the planning include the billet weight per length, b, the number of die openings (how many profiles can be extruded at one time), a, and the extruded profile's average weight per length, w. In the case of multiple extrusions per billet, the process planner must determine the billet sizes to be used as the first and consecutive billets, B1 and B2, respectively; for each billet, how many extrusions C1 and C2 are performed; and, in each extrusion, how many pieces of product with length L are made within each profile, q1 and q2. The extrusion lengths LE1 and LE2 from each billet should be constant for simplicity in profile cutting. All C's are at least 1. Figure 1, for example, demonstrates the case of three extrusions per billet (C1 = 3), whereas consecutive billets are not shown.
Fig. 2. Multiple billets per extrusion (3 billets per extrusion).
Schematics of multiple billets per extrusion are shown in Fig. 2. Here at most three different billet sizes, B1, B2 and B3, can be used to form a set of billets that results in a single extrusion. The second billet B2 can be used D2 times within each billet set, while the first billet B1 and the last billet B3 are used only once. Each billet results in q1, q2, and q3 pieces of product within each profile, respectively. As will be shown in the
solution procedure, the first billet in the next set, B1′, can be smaller than B1. The extrusion length LE can be computed for each billet set. Assumptions for the problem formulation are summarized as follows.

• Billets are pre-cut to specific sizes and available in stock in unlimited quantity.
• Extrusion length must be between Rmin and Rmax.
• The saw used for profile cutting on the runout table is at a fixed position Ls from the die.
• Material loss due to the cutting saw is negligible.
• The desired product length L and minimum required quantity n are known input variables for billet selection.
• Aluminum density w along the profile is constant.
• Process parameters such as extrusion speed, temperature, etc. are pre-determined. No significant adjustment is made during extrusion.
• For the multiple pushes per billet case, the extrusion length is uniform for one billet size. At most two billet sizes are used to meet a customer order.
• For the multiple billets per extrusion case, the extrusion length should be uniform for each set of billets. At most three billet sizes are used to meet a customer order.

3.2 Solution Procedure
Based on the mass balance equations for both extrusion setups, it is possible to devise a divide-and-conquer approach.

Step 1. Classify available billet sizes into two categories. For each billet size bi, the maximum extrudable length can be calculated from

L_{\max,i} = \frac{b_i - e - P}{a w} - L_s   (1)
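As an illustration of Step 1, the classification can be sketched in Python (a sketch with our own variable names, using the case-study parameters listed later in Table 1 of Sect. 4; the billet weight b_i is taken as b times the billet length in cm):

```python
# Step 1 sketch: classify billet sizes by maximum extrudable length, Eq. (1).
# Parameters from Table 1 of Sect. 4; lengths in m, weights in kg.
Ls, e, P = 1.5, 1.75, 1.25       # saw offset, backend scrap, die-pot material
Rmin = 25.0                      # minimum allowed extrusion length, m
b, a, w = 0.668, 2, 0.545        # billet kg/cm, die openings, profile kg/m
sizes_cm = [35, 40, 59, 64, 74, 78]

def l_max(size_cm):
    """Maximum extrudable length of one billet of weight b * size_cm, Eq. (1)."""
    return (b * size_cm - e - P) / (a * w) - Ls

U = [s for s in sizes_cm if l_max(s) >= Rmin]   # multiple extrusions per billet
V = [s for s in sizes_cm if l_max(s) < Rmin]    # multiple billets per extrusion
print(U, V)
```

With these numbers the 35- and 40-cm billets fall into V and the remaining four sizes into U, matching the classification used in the Sect. 4 case study.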
Let U be the set of billets bu whose maximum extrudable length Lmax,u ≥ Rmin, and V the set of billets bv whose maximum extrudable length Lmax,v < Rmin.

Step 2. For billets in U, solve the following optimization model for the multiple extrusions per billet setup. First, a few more decision variables are needed. Let

X1u = 1 if billet u is used as the first billet and zero otherwise, ∀u;
X2u = 1 if billet u is used as a consecutive billet and zero otherwise, ∀u;
nb2 = number of consecutive billets used.

The objective function is to minimize the total billet weight used. Equations (2) and (5) are mass balance equations that can be derived from Fig. 1. Equations (3), (4), (6) and (7) ensure that only one billet size is used as the first billet and only one for consecutive billets. Equations (8)–(11) impose constraints on the extrusion lengths for the first and consecutive billets, respectively. The number of consecutive billets, nb2, can be calculated from (12). All decision variables are denoted in (13)–(15).
minimize   B_1 + n_{b2} B_2

subject to

B_1 \ge e + P + a w \left[ C_1 (L_s + L q_1 + t) + L_s + h \right]   (2)

B_1 = \sum_u X_{1u} b_u   (3)

\sum_u X_{1u} = 1   (4)

B_2 \ge e + a w \left[ C_2 (t + L_s + L q_2) + h \right]   (5)

B_2 = \sum_u X_{2u} b_u   (6)

\sum_u X_{2u} = 1   (7)

L_{E1} = L_s + t + L q_1 + h   (8)

R_{\min} \le L_{E1} \le R_{\max}   (9)

L_{E2} = L_s + t + L q_2 + h   (10)

R_{\min} \le L_{E2} \le R_{\max}   (11)

a C_2 q_2 n_{b2} \ge n - a C_1 q_1   (12)

C_1, C_2 \ge 1, \text{integer}   (13)

q_1, q_2, X_{1u}, X_{2u}, n_{b2} \ge 0, \text{integer}   (14)

B_1, B_2, L_{E1}, L_{E2} \ge 0   (15)
Step 3. For billets in V, solve the following optimization model for the multiple billets per extrusion case. A few more variables are needed. Let

X1v = 1 if billet v is used as the first billet and 0 otherwise, ∀v;
X2v = 1 if billet v is used as a middle (consecutive) billet and 0 otherwise, ∀v;
X3v = 1 if billet v is used as the last billet and 0 otherwise, ∀v;
Y1v = 1 if billet v is used as the first billet of the next set and 0 otherwise, ∀v;
ns = number of consecutive billet sets used.

The objective is to minimize the total billet weight, which consists of the first billet set and the consecutive billet sets. Equations (16), (19), (22) and (25) represent mass balance equations that can be derived from Fig. 2. Equations (17), (18), (20), (21), (23), (24), (26) and (27) make sure that only one billet size is used for each billet type. The extrusion length for each billet set is computed in (28), while its limits are imposed by (29). The total number of consecutive billet sets is calculated by (30). Constraints on the decision variables are summarized in (31) and (32).
minimize   (B_1 + D_2 B_2 + B_3) + n_s (B_1' + D_2 B_2 + B_3)

subject to

B_1 \ge e + P + a w \left( L q_1 + L_s + h \right)   (16)

B_1 = \sum_v X_{1v} b_v   (17)

\sum_v X_{1v} = 1   (18)

B_2 \ge e + P + a w \left( L q_2 + d - \frac{P}{a w} \right)   (19)

B_2 = \sum_v X_{2v} b_v   (20)

\sum_v X_{2v} = 1   (21)

B_3 \ge e + P + a w \left( L_s + t + L q_3 + d - \frac{P}{a w} \right)   (22)

B_3 = \sum_v X_{3v} b_v   (23)

\sum_v X_{3v} = 1   (24)

B_1' \ge e + P + a w \left( L q_1 + d - \frac{P}{a w} \right)   (25)

B_1' = \sum_v Y_{1v} b_v   (26)

\sum_v Y_{1v} = 1   (27)

L_E = (L_s + L q_1) + D_2 (d + L q_2) + (d + L q_3 + t)   (28)

R_{\min} \le L_E \le R_{\max}   (29)

a (q_1 + D_2 q_2 + q_3) n_s \ge n - a (q_1 + D_2 q_2 + q_3)   (30)

B_1, B_1', B_2, B_3 \ge 0   (31)

D_2, q_1, q_2, q_3, X_{1v}, X_{2v}, X_{3v}, Y_{1v}, n_s \ge 0, \text{integer}   (32)
Step 4. Present the results from Steps 2 and 3 so that the better solution can be selected.

Even though both optimization models are non-linear, all independent variables are discrete. A computer program can be developed to enumerate all possible values of these variables and identify the optimal solution, if any, in each model. The solution procedure is implemented using Visual Basic for Applications in Microsoft Excel.
4 Case Studies

4.1 Case Study 1
Actual operating data from an aluminum extrusion company is used to validate the model. The input data, shown in Table 1, is used for all cases unless specified otherwise. In the company from which the case data is taken, a human process planner decides which billet sizes are used. Table 2 shows how this conventional extrusion plan compares with results from the enumeration program. To make 600 pieces of 3.1 m-long profile, the first billet used is 64 cm long, whereas consecutive billets are smaller at 59 cm. Each billet is extruded once (a single push per billet) to make 10 pieces of product in each extruded profile. Two profiles are extruded at the same time (a = 2), so each billet generates 20 pieces of product. With 29 consecutive billets, exactly 600 pieces of product are made. The total weight of all billets used is 1185.7 kg. Since material yield is defined as the ratio of total product weight to total billet weight, the resulting yield is 85.8%.

Table 1. Parameters used in case studies.

| Item | Parameters |
|---|---|
| Extrusion press setup | Ls = 1.5 m, h = 0 m, t = 1 m, d = 0 m, e = 1.75 kg |
| Billet | Rmin = 25 m, Rmax = 47 m, b = 0.668 kg/cm |
| Available billet sizes | 35, 40, 59, 64, 74, 78 cm |
| Die | P = 1.25 kg, a = 2 |
| Product | w = 0.545 kg/m, L = 3.1 m, n = 600 pcs |
Table 2. Conventional plan versus results from the enumeration program.

| | Result from industry | Min. billet weight, multiple billets/extrusion | Min. billet weight, multiple extrusions/billet | Max. yield, multiple extrusions/billet |
|---|---|---|---|---|
| First billet size (cm) | 64 | 40/35 | 59 | 78 |
| pcs/profile, pushes/billet q1, C1 | 10, 1 | 6, – | 9, 1 | 13, 1 |
| Extruded length LE1 (m) | 33.5 | – | 30.4 | 42.8 |
| Consecutive billet size (cm) | 59 | – | 78 | 78 |
| pcs/profile, pushes/billet q2, C2 | 10, 1 | – | 14, 1 | 14, 1 |
| Extruded length LE2 (m) | 33.5 | – | 45.9 | 45.9 |
| Number of consecutive billets nb2 | 29 | – | 21 | 21 |
| Last billet size (cm) | – | 40 | – | – |
| pcs/billet q3 | – | 6 | – | – |
| Extruded length LE3 (m) | – | 39.7 | – | – |
| Number of sets ns | – | 24 | – | – |
| Total no. of workpieces made (pcs) | 600 | 600 | 606 | 614 |
| Total billet weight (kg) | 1185.7 | 1255.84 | 1133.60 | 1146.29 |
| Yield (%) | 85.80 | 81.02 | 90.65 | 90.83 |
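The totals in Table 2 can be cross-checked directly from the billet counts, since the total billet weight is simply b = 0.668 kg/cm times the total billet length (an illustrative check of our own, not part of the paper's procedure):

```python
# Consistency check of Table 2 totals from billet counts.
b, a = 0.668, 2                  # billet kg/cm, die openings

# name: (total billet length in cm, total pieces made)
plans = {
    "industry":     (64 + 29 * 59,               a * 10 * (1 + 29)),
    "min_weight":   (59 + 21 * 78,               a * (9 + 21 * 14)),
    "max_yield":    (78 + 21 * 78,               a * (13 + 21 * 14)),
    "multi_billet": ((40 + 40) + 24 * (35 + 40), 25 * a * (6 + 6)),
}
for name, (cm, pcs) in plans.items():
    print(name, round(b * cm, 2), pcs)
```

The computed weights (1185.7, 1133.6, 1146.29 and 1255.84 kg) and piece counts (600, 606, 614 and 600) agree with the last rows of Table 2.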
With the enumeration program, the first two billet sizes, 35 and 40 cm, are put into set V, to be considered for multiple billets per extrusion. The program selects a first billet set consisting of a 40-cm billet as both the first and the last billet, with no billets extruded in between. Both billets yield 6 pieces per profile, or 24 pieces per billet set. The second set is similar to the first, except that the first billet size can be reduced to 35 cm while still producing the same number of products per billet set. The extrusion length for each billet set is 39.7 m. Making 600 pieces of product requires another 24 sets of billets besides the first set. Certainly, this extrusion setup requires more billet weight than the conventional case; the resulting yield is 81.02%. In this particular case, multiple billets per extrusion could not perform better than multiple extrusions per billet. When the program considers billets in U for multiple (or single) pushes per billet, it selects 59 cm for the first billet size and 78 cm for the consecutive billet size. The first billet creates 9 pieces of product per profile, while consecutive billets create 14 pieces. It should be noted that the extrusion length from the first billet is not the same as that from consecutive billets. Here, only 21 consecutive billets are needed. The total product quantity made is 606 pieces, while the total billet weight required is less than in the conventional case. The resulting yield, 90.65%, is significantly better than that of the conventional solution. The last column in Table 2 demonstrates how the objective function influences the optimal solution. If the objective function is changed from minimizing total billet weight to maximizing yield, the optimal solution will look to add more pieces of product, the numerator of yield, while keeping the total billet weight as low as possible.
Thus, the optimal solution under the yield-maximizing objective is to use the largest billet sizes for the first and consecutive billets. The resulting yield is 90.83%, with more products made. Which objective function to select will depend on whether customers allow over-shipment or whether there is a need to make extra pieces, for example, to compensate for product defects.

4.2 Case Study 2
As a common practice in industry, extrusion companies try to improve productivity by increasing the number of die openings, denoted by a in this paper, to as high a value as possible. For example, if the original number of die openings is 2 and a third opening can be added, the number of profiles extruded increases by 50% immediately. In this case study, the impact of increasing a is investigated. As a increases, the amount of material remaining in the die pot, P, also increases. There are other drawbacks to increasing a: notably, more material scrap is introduced in each profile, and it is generally more difficult to control the extrusion process (Table 3).
Table 3. Effects of increasing the number of die openings a within a die.

| | a = 1 | a = 2 | a = 3 | a = 4 | a = 5 |
|---|---|---|---|---|---|
| First billet size (cm) | 40 | 59 | 78 | 40/35 | 35/35 |
| pcs/profile, pushes/billet q1, C1 | 13, 1 | 9, 1 | 8, 1 | 3, – | 2, – |
| Extruded length LE1 (m) | 42.80 | 30.40 | 27.30 | – | – |
| Consecutive billet size (cm) | 78 | 78 | 78 | 40 | 78 |
| pcs/profile, pushes/billet q2, C2 | 14, 2 | 14, 1 | 9, 1 | 4, – | 6, – |
| Extruded length LE2 (m) | 45.90 | 45.90 | 30.40 | – | – |
| Number of consecutive billets nb2 (or D2) | 21 | 21 | 22 | 2 | 1 |
| Last billet size (cm) | – | – | – | 40 | 59 |
| pcs/billet q3 | – | – | – | 3 | 4 |
| Extruded length LE3 (m) | – | – | – | 45.90 | 39.70 |
| Number of sets ns | – | – | – | 10 | 9 |
| Total no. of workpieces made (pcs) | 601 | 606 | 618 | 616 | 600 |
| Total billet weight (kg) | 1120.9 | 1133.6 | 1198.4 | 1142.4 | 1149 |
| Yield (%) | 90.92 | 90.65 | 87.45 | 91.44 | 88.55 |
The enumeration program is used to investigate the extrusion process of case study 1 with various numbers of die openings, from a = 1 to 5, assuming that the extrusion press capacity is sufficient and the process is under control. Here, only the objective of minimizing total billet weight is used, and only the better of the two extrusion setups is shown for each a. When a = 1, 2 or 3, the optimal solutions come from multiple extrusions per billet. As a increases, the optimal yield decreases. The optimal solutions for a = 1 and a = 2 are very close to each other, as the main difference is only the first billet size used; the consecutive billets, when a = 1 or 2, create 28 pieces of product each. When a = 4 or 5, the optimal solution comes from multiple billets per extrusion with only a small difference in yield. At a = 5, all billet sizes are in V. Thus, multiple billets per extrusion can be beneficial when companies look to improve both yield and productivity at the same time.
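The switch in optimal setup is consistent with the classification rule of Eq. (1): sweeping a from 1 to 5 with the other Table 1 parameters held fixed (a simplification of our own, since the paper notes that P grows with a) shows set U emptying out as the die openings increase:

```python
# Eq. (1) classification swept over the number of die openings a.
# Table 1 parameters; P is held at its a = 2 value for simplicity.
Ls, e, P = 1.5, 1.75, 1.25
Rmin, b, w = 25.0, 0.668, 0.545
sizes_cm = [35, 40, 59, 64, 74, 78]

U_by_a = {
    a: [s for s in sizes_cm if (b * s - e - P) / (a * w) - Ls >= Rmin]
    for a in range(1, 6)
}
print(U_by_a)
```

Under this assumption every billet size is in U at a = 1, only the 74- and 78-cm billets remain at a = 3, and U is empty for a = 4 and a = 5, so only the multiple billets per extrusion setup remains applicable, in line with Table 3.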
5 Conclusion

In this research, an alternative yield optimization model is proposed for the aluminum profile extrusion process using mass balance equations. The model takes into account two general ways to extrude aluminum profiles, namely multiple billets per extrusion and multiple extrusions per billet. The proposed model has three distinct advantages. First, the problem size is drastically reduced, since customer orders are considered one at a time. Second, all possible ways to extrude billets are explored to find the optimal solution. Last, it allows process planners to investigate the effects of process parameters, such as the allowable runout length.
Based on limited case studies from the industry, preliminary conclusions can be drawn. • The choice of objective function will have profound effects on the optimal solution. Minimizing total billet weight seems to be a better objective than maximizing yield, at least from the overproduction quantity standpoint. • Multiple pushes, such as multiple extrusions per billet and multiple billets per extrusion, can be beneficial for material utilization. Both setups should be considered unless there are some physical constraints on the extrusion press. More work can be done to extend the model for online billet cutting where billets can be cut to lengths and extruded directly. This should provide even better yield.
References

1. Grand View Research. https://www.grandviewresearch.com/industry-analysis/aluminum-extrusion-market
2. Tabucanon, M.T., Treewannakul, T.: Scrap reduction in the extrusion process: the case of an aluminium production system. Appl. Math. Modelling 11, 141–145 (1987). https://doi.org/10.1016/0307-904X(87)90158-2
3. Masri, K., Warburton, A.: Optimizing the yield of an extrusion process in the aluminum industry. In: Tamiz, M. (ed.) Multi-Objective Programming and Goal Programming. LNE, vol. 432, pp. 107–115. Springer, Heidelberg (1996). https://doi.org/10.1007/978-3-642-87561-8_9
4. Masri, K., Warburton, A.: Using optimization to improve the yield of an aluminum extrusion plant. J. Oper. Res. Soc. 49(11), 1111–1116 (1998). https://doi.org/10.1057/palgrave.jors.2600616
5. Hajeeh, M.A.: Optimizing an aluminum extrusion process. J. Math. Stat. 9(2), 77–83 (2013). https://doi.org/10.3844/jmssp.2013.77.83
6. Reggiani, B., Segatori, A., Donati, L., Tomesani, L.: Prediction of charge welds in hollow profiles extrusion by FEM simulations and experimental validation. Int. J. Adv. Manuf. Technol. 69, 1855–1872 (2013). https://doi.org/10.1007/s00170-013-5143-2
7. Oza, V.G., Gotowala, B.: Analysis and optimization of extrusion process using Hyperworks. Int. J. Sci. Res. Dev. 2(8), 441–444 (2014)
8. Ferras, A.F., de Almeida, F., Silva, E.C., Correia, A., Silva, F.J.G.: Scrap production of extruded aluminum alloys by direct extrusion. Procedia Manufacturing 38, 1731–1740 (2019). https://doi.org/10.1016/j.promfg.2020.01.100
Models for Forming Knowledge Databases for Decision Support Systems for Recognizing Cyberattacks

Valery Lakhno1, Bakhytzhan Akhmetov2, Moldyr Ydyryshbayeva3, Bohdan Bebeshko4, Alona Desiatko4, and Karyna Khorolska4

1 National University of Life and Environmental Sciences of Ukraine, Kiev, Ukraine
[email protected]
2 Abai Kazakh National Pedagogical University, Almaty, Kazakhstan
[email protected]
3 Al Farabi Kazakh National University, Almaty, Kazakhstan
[email protected]
4 Kyiv National University of Trade and Economics, Kioto Street 19, Kyiv, Ukraine
{b.bebeshko,desyatko,k.khorolska}@knute.edu.ua
Abstract. Bayesian network patterns have been developed for the computing core of a decision support system used in predicting threats and stages of intrusion into information and communication networks of informatization objects. The proposed Bayesian network templates allow one to operate with a variety of random variables and to determine the probability of a cyber threat, or of a specific stage of an intrusion, under given conditions. Probabilistic models for detecting network intrusions based on dynamic Bayesian networks have been added. The parameters of the Bayesian networks were trained using the EM algorithm. In contrast to existing solutions, the proposed approach makes it possible not only to take into account the main stages of intrusions but also to make more reasonable decisions based on both typical intrusion patterns and newly synthesized patterns. All templates and models make up the computing core of the decision support system for intrusion detection. The effectiveness of the developed models was tested on samples that were not previously used in training.

Keywords: Decision support system · Intrusion recognition · Bayesian networks · Models
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 463–475, 2021. https://doi.org/10.1007/978-3-030-68154-8_42

1 Introduction

It is possible to resist the constant growth of the complexity of illegitimate impacts on objects of informatization (OBI), in particular, by using systems for intelligent recognition of anomalies and cyberattacks [1]. In the face of the increasing complexity of attack scenarios, many companies that develop intrusion detection systems began to
integrate intelligent decision support systems (DSS) into their products. Note that the basis of a modern DSS is formed by various models and methods, which together form its computational core [2, 3]. In the difficult task of guaranteeing the information security of various objects of informatization, decision-making takes place under the condition of active interaction between information security tools and cybersecurity systems, on the one hand, and experts, on the other. The above circumstances determined the relevance of our research in the field of synthesizing models for the computational core of a decision support system that is part of an intrusion detection system.
2 Review and Analysis of Previous Research

As noted in [4], a promising direction in this area has become the development of methods, models, and software systems for DSS [5] and expert systems (ES) [6] in the field of information security. In studies [6, 7, 24], Data Mining technologies for information security problems were considered. The emphasis in these studies was placed on the task of identifying the pattern of evolution of situations associated with providing information security for informatization objects. The considered works had no practical implementation in the form of a software system.

The studies [8, 9] analyzed the methodology of intelligent modeling in problems of informatization object information security. The methodology proposed by the authors is intended to provide analysis and decision-making under different scenarios of intrusion into informatization objects. However, these studies have not been brought to a hardware or software implementation.

The appearance of new classes of threats makes it difficult to analyze and support decision-making in information security tasks, because the tasks of providing information security themselves are poorly amenable to formalization and structuring [10, 11]. In such cases, the parameters of the information security state of an informatization object can be represented by qualitative indicators, which is not always advisable.

According to the authors of [12, 13], the analysis of the security level of informatization objects and the development of plans to counter targeted cyberattacks should be preceded by a stage of identifying the main threats. The authors of [14] point out that it is problematic to solve such a problem qualitatively without an appropriate DSS; however, the researchers did not describe the practical results of their development.
In studies [15, 16], it was shown that aspects of building information security tools can be taken into account in an approach based on applied Bayesian networks (hereinafter BN). Compared to existing data analysis methods, BNs provide a clear and intuitive explanation of their findings. A BN (often a dynamic BN, DBN) also admits a logical interpretation and modification of the structure of the relations among the variables of the problem. The representation of a BN in the form of a graph makes it a convenient tool for solving the problem of assessing the probabilities of possible cyber threat occurrences, for example, for the information and communication networks of informatization objects.
The computing core of the developed decision support system is based on probabilistic models that make up the knowledge base. Such probabilistic models are capable of describing processes under conditions when the data is poorly structured or insufficient for statistical analysis. Problems solved on the basis of the knowledge base's probabilistic models should also use procedures of probabilistic inference. The various processes taking place in the information and communication networks of informatization objects generate their own data sequences, and these data sequences should be reflected in the description of the course of each process. The above analysis showed that research in the field of developing new models for forming the knowledge bases of decision support systems in the process of recognizing cyberattacks is relevant.
3 Objectives of the Study The research aims to develop models based on Bayesian networks for the knowledge base of the computing core of the decision support system in the course of identifying complex cybernetic attacks.
4 Methods and Models

It should be noted that unauthorized actions of an intruder in an information and communication network, as a rule, generate corresponding anomalies in its operation. In this context, the information and communication network itself is not completely isolated but is located in a certain external environment (which includes personnel, competitors, legislative and regulatory bodies, etc.). Such an environment is usually rather weakly formalized, since the connections between its components are not always clearly defined, and to detect the cyberattacks that have generated anomalies in this environment, appropriate tools and methods are needed that recognize an intrusion based on a variety of different characteristic features.

The main selection criterion for intrusion signs and their parameters is the level of statistical significance. Since, following KDD99 [18], there are 41 signs of network intrusions (see Table 1 [17, 18]), a stage of selecting the most informative of these signs is necessary. As was shown in [19, 20], the number of informative features can be minimized. Such minimization allows one, as in the previous example, to build a Bayesian network that simulates the intrusion detection logic of an intrusion detection system. Thus, after the procedure of minimizing the total number of parameters and selecting the most informative characteristics, the number of parameters decreased from 41 to 8. Table 1 shows the 8 rows with the features that were recognized as the most informative for detecting network intrusions [20]. These 8 features make it possible to recognize with an accuracy of 99% the presence of typical network attacks such as Probe, U2R, R2L, and DoS/DDoS.
Note that some of the selected informative features (in particular, lines 23 and 28 of Table 1) are dynamic and change their value over a time interval of 0–2 s. Therefore, to
design a DSS and fill its knowledge base with appropriate Bayesian network patterns, it is better in this subtask to use a dynamic Bayesian network (DBN). Dynamic Bayesian networks are ordinary Bayesian networks related by variables at adjacent time steps [21]; in fact, a dynamic Bayesian network is a Markov process of the 1st order. The designed dynamic Bayesian network will consist of two networks. Let us designate the first BN as (B1) and the second as (B2). The network (B1) is the original one, and (B2) is the transition network. The original network (B1) defines the prior distributions of the available model, p(z(1)). The transition model (B2) determines the probabilities of transitions between time slices.

Table 1. Parameters of network traffic for building a BN (according to [18, 19])

№    Parameter                 Description
2    protocol_type             Protocol type (TCP, UDP, etc.)
3    service                   Attacked service
4    src_bytes                 Number of bytes from source to destination
23   count                     Number of connections per host in the current session for the last 2 s
26   same_srv_rate             % of connections with the same service
27   diff_srv_rate             % of connections for various services
28   srv_count                 Number of connections for the same service in the last 2 s
34   dst_host_same_srv_rate    % of connections to localhost established by the remote side and using the same service
Then, taking into account the above, one can build a dynamic Bayesian network for the variables marked in lines 2–4, 23, 26–28, and 34 of Table 1. One can also add a variable type to the DBN, which will display the moment of transitions between the networks B1 and B2. The GeNIe Modeler editor was used to design the dynamic Bayesian networks. The probabilistic model of the designed dynamic Bayesian network for transitions between the vertices of the joint distribution graph of the available models is presented as follows:

p(Z_t \mid Z_{t-1}) = \prod_{i=1}^{N} p(Z_t^i \mid Pa(Z_t^i)),    (1)

p(Z_{1:T}) = \prod_{t=1}^{T} \prod_{i=1}^{N} p(Z_t^i \mid Pa(Z_t^i)),    (2)

where Z_t is the BN slice for the time point t; Z_t^i is the BN node i at the moment in time t; Pa(Z_t^i) is the set of parent nodes of that BN node; and N is the number of nodes in a slice of the BN. Expression (2) describes the probabilities of transitions between BN nodes.
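The two-slice factorization in Eqs. (1)–(2) can be illustrated with a few lines of code. The sketch below is purely illustrative (the node names and CPT values are invented, and this is not the authors' GeNIe model): it computes p(Z_t | Z_{t-1}) as the product of per-node conditional probabilities.

```python
# Illustrative sketch of Eq. (1): p(Z_t | Z_{t-1}) = prod_i p(Z_t^i | Pa(Z_t^i)).
# Node names and CPT values below are hypothetical, not the authors' model.

def slice_transition_prob(z_prev, z_curr, cpts, parents):
    """z_prev/z_curr map node name -> state in slices t-1 and t;
    cpts maps node -> {(tuple of parent states, state): probability};
    parents maps node -> list of its parent nodes (living in slice t-1)."""
    p = 1.0
    for node, state in z_curr.items():
        pa_states = tuple(z_prev[pa] for pa in parents[node])
        p *= cpts[node][(pa_states, state)]
    return p

# Toy two-node slice with first-order Markov dependence between slices.
parents = {"s": ["s"], "e": ["s", "e"]}
cpts = {
    "s": {(("0",), "0"): 0.9, (("0",), "1"): 0.1,
          (("1",), "0"): 0.2, (("1",), "1"): 0.8},
    "e": {(("0", "0"), "0"): 0.95, (("0", "0"), "1"): 0.05,
          (("0", "1"), "0"): 0.3,  (("0", "1"), "1"): 0.7,
          (("1", "0"), "0"): 0.5,  (("1", "0"), "1"): 0.5,
          (("1", "1"), "0"): 0.1,  (("1", "1"), "1"): 0.9},
}
p = slice_transition_prob({"s": "0", "e": "0"}, {"s": "1", "e": "1"}, cpts, parents)
# p = 0.1 * 0.05 = 0.005
```

Multiplying such slice transitions over t = 1, ..., T (together with the prior of B1) yields the joint probability of Eq. (2).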
If the model is divided into a set of non-observable variables (V_t) and a set of recorded (observed) variables (U_t), then expression (1) should be supplemented by expression (2), which accordingly defines not only the state model but also the observation model. The redundancy problem in analyzing the traffic properties of the designed test network was solved by analyzing the informative characteristics of the traffic. In the course of this analysis, based on the publications [17–20], it was proposed to apply the information gain criterion I(V, U). According to this criterion, the information gain I(V, U) of one attribute with respect to another (e.g., line 3 as attribute V, line 23 as attribute U) indicates that the uncertainty about the values of U decreases when the values of V are known. Hence, I(V, U) = H(U) − H(U|V), where H(U) and H(U|V) are the entropy of the attribute U and of (U|V), respectively. Since the values of U and V are both discrete and can take values in the ranges {u_1, ..., u_k} and {v_1, ..., v_l}, respectively, the entropy of the attribute U was determined as follows:

H(U) = -\sum_{i=1}^{k} P(U = u_i) \log_2 P(U = u_i).    (3)

The conditional entropy can be found as:

H(U \mid V) = \sum_{j=1}^{l} P(V = v_j) \, H(U \mid V = v_j).    (4)

Because the calculation of I(V, U) was performed for discrete values, the values from the sets {u_1, ..., u_k} and {v_1, ..., v_l} had to be discretized. In the course of the test experiments, the method of equal-frequency intervals was used for discretization: the space of attribute values is split into an arbitrary number of partitions, each containing the same number of data points. The application of this method allowed us to achieve a better classification of data values. The information gain I(V, U) depends on the number of values of the corresponding attributes: as the number of values increases, the entropy of the attribute decreases. More informative attributes can be selected using the gain ratio [23]:

G(U \mid V) = \frac{I(U, V)}{H(V)} = \frac{H(U) - H(U \mid V)}{H(V)}.    (5)
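Equations (3)–(5) translate directly into code. The sketch below uses hypothetical helper names and only the standard library; the equal-frequency discretization described above is assumed to have already been applied to the attribute values.

```python
# Entropy H(U) (Eq. 3), conditional entropy H(U|V) (Eq. 4) and
# gain ratio G(U|V) (Eq. 5) for lists of discrete attribute values.
from collections import Counter
from math import log2

def entropy(values):
    """H(U) = -sum_i P(U=u_i) * log2 P(U=u_i)."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def conditional_entropy(u, v):
    """H(U|V) = sum_j P(V=v_j) * H(U | V=v_j)."""
    n = len(v)
    by_v = {}
    for ui, vi in zip(u, v):
        by_v.setdefault(vi, []).append(ui)
    return sum((len(grp) / n) * entropy(grp) for grp in by_v.values())

def gain_ratio(u, v):
    """G(U|V) = I(U,V) / H(V) = (H(U) - H(U|V)) / H(V)."""
    return (entropy(u) - conditional_entropy(u, v)) / entropy(v)

# Toy attributes: V determines U completely, so the gain ratio is maximal.
u = ["tcp", "tcp", "udp", "udp"]
v = ["a", "a", "b", "b"]
g = gain_ratio(u, v)  # 1.0
```

Ranking the 41 KDD99 attributes by such a score and keeping the top candidates is one way the reduction to the 8 features of Table 1 could be reproduced.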
The gain ratio G(U|V) takes into account not only the amount of information necessary to represent the result in the DSS but also the quantity H(V) necessary to separate the information by the current attribute (V).

Consider the following example of designing a dynamic Bayesian network for the knowledge base of a decision support system. We will proceed with the following
assumptions. Typically, intrusion into a network is implemented according to a three-stage scheme. In the first stage, the attacker performs a network scan (S). In the second stage, the impact on the vulnerabilities of the information and communication network occurs (E). At the final, third stage, the attacker seeks to gain access to the information and communication network, in particular, through a backdoor (B). Modern attacks most often occur remotely, and attackers do not have complete information about the attacked network. The attacker needs to collect as much data as possible about the target of the attack; otherwise, they would have to go through all the known vulnerabilities, which is a rather lengthy procedure. The network scanning process will certainly leave its mark on the network traffic. Once the attacker has gained this information, they are able to focus more specifically on the potential vulnerabilities of network devices, services, operating systems, and application software; in this case the traffic will change accordingly. The sequence of attackers' actions has already been described elsewhere in sufficient detail, so, without delving into the details and techniques of the invasion, we will focus on the construction of a dynamic Bayesian network that describes a probabilistic model of network states at the different stages of the invasion. Note that if the stages (S), (E), (B) were not detected by the protection services, then they will be considered hidden. Figure 1 shows a general model of a dynamic Bayesian network for the invasion stages (S), (E), (B). The model shows two slices that correspond to the invasion process.
Fig. 1. General DBN model for the invasion of S, E, B
Fig. 2. An example of the relationship between the observed parameters (lines 3–6, 12, 25, 26, 29, 30, 38, 39 of Table 1)
Invasion stages are hidden; accordingly, it is necessary to collect statistics on the observed parameters of the network traffic (Table 1). An example of the relationship between the observed parameters (lines 3–6, 12, 25, 26, 29, 30, 38, 39 of Table 1) is shown in Fig. 2. Having a state graph of the attacked network, one can describe it with the following model:

P(V(n) \mid Pa(V(n))) = \prod_{i=1}^{3} P(v_i(n) \mid Pa(v_i(n))),    (6)

\prod_{i=1}^{3} P(v_i(n) \mid Pa(v_i(n))) = P(s(n) \mid s(n-1)) \cdot P(e(n) \mid e(n-1), s(n-1)) \cdot P(b(n) \mid b(n-1), e(n-1)).    (7)

For example, for the Bayesian network variant shown in Fig. 2, the model that describes the relationship between the observed variables for the network traffic and the probabilities of transition from state to state of the graph vertices looks as follows:

P(U(n) \mid Pa(U(n))) = \prod_{j=1}^{11} P(u_j^N(n) \mid Pa(u_j^N(n))),    (8)

\prod_{j=1}^{11} P(u_j^N(n) \mid Pa(u_j^N(n))) = \prod_{j=1}^{4} P(u_j^N(n) \mid Pa(s(n))) \cdot \prod_{j=5}^{8} P(u_j^N(n) \mid Pa(e(n))) \cdot \prod_{j=9}^{11} P(u_j^N(n) \mid Pa(b(n))),    (9)

\prod_{j=1}^{4} P(u_j^N(n) \mid Pa(s(n))) = P(u_1^3(n) \mid s(n)) \cdot P(u_2^4(n) \mid s(n)) \cdot P(u_3^5(n) \mid s(n)) \cdot P(u_4^6(n) \mid s(n)),    (10)

\prod_{j=5}^{8} P(u_j^N(n) \mid Pa(e(n))) = P(u_5^{25}(n) \mid e(n)) \cdot P(u_6^{26}(n) \mid e(n)) \cdot P(u_7^{38}(n) \mid e(n)) \cdot P(u_8^{39}(n) \mid e(n)),    (11)

\prod_{j=9}^{11} P(u_j^N(n) \mid Pa(b(n))) = P(u_9^{12}(n) \mid b(n)) \cdot P(u_{10}^{29}(n) \mid b(n)) \cdot P(u_{11}^{30}(n) \mid b(n)),    (12)
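The observation model of Eqs. (8)–(12) factorizes over the groups of traffic parameters attached to each hidden stage. The toy sketch below is hypothetical (the emission probabilities are invented, and the structure is simplified to one parameter per stage), but it shows how the product over per-stage groups is evaluated.

```python
# Sketch of Eqs. (8)-(9): P(U(n) | Pa(U(n))) factorizes into per-stage groups
# of observed parameters. Emission CPT values below are made up.

def observation_prob(obs, stage_states, emission, groups):
    """obs: parameter name -> discretized observed value;
    stage_states: hidden stage name ("s", "e", "b") -> 0/1;
    emission: stage -> parameter -> {(stage_state, value): probability};
    groups: stage -> list of parameters whose parent is that stage."""
    p = 1.0
    for stage, params in groups.items():
        for name in params:
            p *= emission[stage][name][(stage_states[stage], obs[name])]
    return p

groups = {"s": ["service"], "e": ["srv_count"], "b": ["dst_host_same_srv_rate"]}
emission = {
    "s": {"service": {(0, "http"): 0.7, (1, "http"): 0.2,
                      (0, "other"): 0.3, (1, "other"): 0.8}},
    "e": {"srv_count": {(0, "low"): 0.9, (1, "low"): 0.1,
                        (0, "high"): 0.1, (1, "high"): 0.9}},
    "b": {"dst_host_same_srv_rate": {(0, "low"): 0.8, (1, "low"): 0.3,
                                     (0, "high"): 0.2, (1, "high"): 0.7}},
}
obs = {"service": "other", "srv_count": "high", "dst_host_same_srv_rate": "high"}
lik_attack = observation_prob(obs, {"s": 1, "e": 1, "b": 1}, emission, groups)
lik_normal = observation_prob(obs, {"s": 0, "e": 0, "b": 0}, emission, groups)
# Under these made-up CPTs, the attack hypothesis explains the traffic better.
```

Comparing such likelihoods across hidden-stage combinations is what the DBN inference does, slice by slice, when deciding whether an intrusion stage is in progress.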
where j = 1, ..., 11 is the index of the variables observed for the network state graph (see Fig. 2), and N = 1, ..., 41 denotes the corresponding property (traffic parameter) from Table 1. Therefore, for the previously selected informative signs of a DDoS attack (lines 2–4, 23, 26–28, and 34 of Table 1), the dynamic Bayesian network will look as shown in Fig. 3.
Fig. 3. An example of the relationship between the observed parameters (lines 2–4, 23, 26–28, 34 of Table 1)
Accordingly, for this attack pattern, the model that describes the relationship between the observed variables for the network traffic and the probabilities of transition from state to state of the graph vertices looks as follows:

P(U(n) \mid Pa(U(n))) = \prod_{j=1}^{8} P(u_j^N(n) \mid Pa(u_j^N(n))),    (13)

\prod_{j=1}^{8} P(u_j^N(n) \mid Pa(u_j^N(n))) = \prod_{j=1}^{3} P(u_j^N(n) \mid Pa(s(n))) \cdot \prod_{j=4}^{7} P(u_j^N(n) \mid Pa(e(n))) \cdot P(u_8^{34}(n) \mid Pa(b(n))),    (14)

\prod_{j=1}^{3} P(u_j^N(n) \mid Pa(s(n))) = P(u_1^2(n) \mid s(n)) \cdot P(u_2^3(n) \mid s(n)) \cdot P(u_3^4(n) \mid s(n)),    (15)

\prod_{j=4}^{7} P(u_j^N(n) \mid Pa(e(n))) = P(u_4^{23}(n) \mid e(n)) \cdot P(u_5^{26}(n) \mid e(n)) \cdot P(u_6^{27}(n) \mid e(n)) \cdot P(u_7^{28}(n) \mid e(n)),    (16)

P(u_8^N(n) \mid Pa(b(n))) = P(u_8^{34}(n) \mid b(n)).    (17)
Based on the above calculations, it became possible to compose Bayesian network templates and corresponding models that describe the relationships of the observable variables of network traffic for the typical attacks Probe, U2R, R2L, and DoS/DDoS, and to simulate the probabilities of transitions between states of the graph vertices. These Bayesian network templates and the corresponding models form the basis of the decision support system's knowledge base.
5 Computational Experiments

Below we consider the results of testing the Bayesian network templates for the decision support system knowledge base (see Figs. 2 and 3). The test samples included 30 entries for each template. Testing of the samples was implemented on a test network (see Fig. 1). The PC and EM algorithms [15, 16, 21] were used to train the networks. The experimental results are shown in Figs. 4 and 5. The graphs show the results of modeling the probabilities of the correct definition and interpretation of DoS/DDoS attacks (errors of the 1st and 2nd kind, respectively) for networks that were trained using the PC algorithm (line 1) and the EM algorithm (line 2). The analysis of the simulation results for the obtained Bayesian networks (see Figs. 2, 3) was carried out by estimating the errors of the 1st and 2nd kind. Of the 60 records that were used to support decision-making and did not take part in training the BN, the first 30 records were used to check whether a test attack was missed (a type 2 error); the remaining 30 entries were used to check for false positives (type 1 errors).
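The scoring of the 60 held-out records can be sketched as follows. The code is hypothetical (not the authors' implementation) and assumes the first 30 records correspond to attacks used to catch misses and the last 30 to benign traffic used to catch false alarms.

```python
# Type 1 (false positive) and type 2 (missed attack) error rates for the
# 60-record hold-out split described in the text. Predictions are invented.

def error_rates(predictions, labels):
    """predictions/labels are lists of 0 (benign) or 1 (attack)."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    return fp / labels.count(0), fn / labels.count(1)  # type 1, type 2

labels = [1] * 30 + [0] * 30                    # 30 attacks, then 30 benign
predictions = [1] * 29 + [0] + [0] * 29 + [1]   # one miss and one false alarm
type1, type2 = error_rates(predictions, labels)
```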
1- EM algorithm; 2 - PC algorithm Fig. 4. Probability of correct determination of type 1 error when interpreting a DDoS attack for BN that was trained using various algorithms
1-EM algorithm; 2 - PC algorithm Fig. 5. The probability of correctly detecting a DDoS type attack (type 2 errors) for BNs that were trained using various algorithms
6 Discussion of the Results of the Computational Experiment

Figures 4 and 5 show the errors of the 1st and 2nd kind, respectively. The PC and EM algorithms [15, 16, 21] confirmed their effectiveness: BN testing showed that the developed templates correctly interpret the attack with a probability of 95–96%. Like the previous series of experiments described above, the experiments carried out confirmed that Bayesian reasoning makes it possible to determine the likely representation of each component in the diagram. The laboriousness of drawing up the templates is compensated by the possibility of their repeated use without significant modification once all the developed templates become part of the decision support system's knowledge base.

Hypothetically, a Bayesian network could be constructed by simple enumeration of the sets of all possible acyclic models. However, as shown in [15] and [16], this approach is not optimal: with more than 7 vertices, a full search requires significant computational resources and takes quite a long time. Therefore, for the developed decision support system and its knowledge base, the preferable option is the preliminary creation of attack patterns, the minimization of hidden variables, and the elimination of uninformative variables that do not have a determining effect on the accuracy of intrusion detection. Bayesian network training then consists of adjusting the parameters of individual nodes for a specific task, for example, identifying a certain type of network attack.

The prospect for the development of this research is the software implementation of the decision support system in high-level algorithmic languages and the subsequent testing of this decision support system and its knowledge base on segments of real informatization object networks.
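The infeasibility of exhaustive structure search can be made concrete: the number of candidate DAG structures on n labeled vertices grows super-exponentially. A short, self-contained illustration (not part of the authors' system) using Robinson's recurrence:

```python
# Count labeled DAGs on n vertices via Robinson's recurrence:
# a(n) = sum_{k=1..n} (-1)^(k+1) * C(n, k) * 2^(k*(n-k)) * a(n-k), a(0) = 1.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def n_dags(n):
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * n_dags(n - k)
               for k in range(1, n + 1))

counts = {n: n_dags(n) for n in range(1, 9)}
# Already at 7 vertices there are 1,138,779,265 candidate structures,
# which is why preliminary attack patterns beat brute-force structure search.
```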
7 Further Research

Further development of the proposed method has two standalone directions. The first direction will focus on the development of threat simulation tools that work in an adjustable environment with behavior as close as possible to that of real attackers. Since the proposed model is an artificial emulation, it cannot truly mirror the psychology and unpredictability of real threats. However, the development of these tools will provide an unlimited pool of threats that could be launched against the system, and therefore provide synthetic datasets on which models can be trained to detect those threats during their lifecycle without the system actually being under real attack. The second direction is to create a hardware platform for the threat detection system that can work in conjunction with the prediction system and will be able to block suspicious threats. The hardware will be installed right after the main gateway, transferring all the data through itself. As a result of the described directions, we plan to implement this system on a real working network.
8 Conclusions

Patterns of Bayesian networks have been developed for the computational core of the decision support system used in predicting threats and stages of intrusion into the information and communication networks of OBI. The constructed BN templates allow one to operate with a variety of random variables and to determine the probability of a cyber threat, or of a specific stage of an invasion, under given conditions. To improve the efficiency of intrusion forecasting, the network parameters were trained using the EM algorithm and the PC algorithm, as well as the available statistics for the test network. Probabilistic models for detecting network intrusions based on the use of BNs are described. In contrast to the existing models, the proposed approach makes it possible not only to take into account the main stages of intrusions but also to make more reasonable decisions based on the use of both typical intrusion patterns and newly synthesized patterns. All templates and models make up the computing core of the decision support system for intrusion detection. The effectiveness of the developed models was tested on samples that were not previously used in training. The results obtained indicate the feasibility of using the EM algorithm to obtain a high-quality result in the recognition of cyber threats to information and communication networks.
References

1. Elshoush, H.T., Osman, I.M.: Alert correlation in collaborative intelligent intrusion detection systems – a survey. Appl. Soft Comput. 11(7), 4349–4365 (2011)
2. Shenfield, A., Day, D., Ayesh, A.: Intelligent intrusion detection systems using artificial neural networks. ICT Express 4(2), 95–99 (2018)
3. Rees, L.P., Deane, J.K., Rakes, T.R., Baker, W.H.: Decision support for cybersecurity risk planning. Decis. Support Syst. 51(3), 493–505 (2011)
4. Akhmetov, B., Lakhno, V., Boiko, Y., Mishchenko, A.: Designing a decision support system for the weakly formalized problems in the provision of cybersecurity. Eastern-Eur. J. Enterp. Technol. 1(2), 4–15 (2017)
5. Fielder, A., Panaousis, E., Malacaria, P., Hankin, C., Smeraldi, F.: Decision support approaches for cybersecurity investment. Decis. Support Syst. 86, 13–23 (2016)
6. Atymtayeva, L., Kozhakhmet, K., Bortsova, G.: Building a knowledge base for expert system in information security. In: Soft Computing in Artificial Intelligence, Advances in Intelligent Systems and Computing, vol. 270, pp. 57–76 (2014)
7. Dua, S., Du, X.: Data Mining and Machine Learning in Cybersecurity, p. 225. CRC Press (2016)
8. Buczak, A.L., Guven, E.: A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surv. Tutor. 18(2), 1153–1176 (2016)
9. Zhang, L., Yao, Y., Peng, J., Chen, H., Du, Y.: Intelligent information security risk assessment based on a decision tree algorithm. J. Tsinghua Univ. Sci. Technol. 51(10), 1236–1239 (2011)
10. Ben-Asher, N., Gonzalez, C.: Effects of cybersecurity knowledge on attack detection. Comput. Hum. Behav. 48, 51–61 (2015)
11. Goztepe, K.: Designing fuzzy rule based expert system for cyber security. Int. J. Inf. Secur. Sci. 1(1), 13–19 (2012)
12. Gamal, M.M., Hasan, B., Hegazy, A.F.: A security analysis framework powered by an expert system. Int. J. Comput. Sci. Secur. (IJCSS) 4(6), 505–527 (2011)
13. Chang, L.-Y., Lee, Z.-J.: Applying fuzzy expert system to information security risk assessment – a case study on an attendance system. In: International Conference on Fuzzy Theory and Its Applications (iFUZZY), pp. 346–351 (2013)
14. Kanatov, M., Atymtayeva, L., Yagaliyeva, B.: Expert systems for information security management and audit: implementation phase issues. In: Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and Advanced Intelligent Systems (ISIS), pp. 896–900 (2014)
15. Lakhno, V.A., Lakhno, M.V., Sauanova, K.T., Sagyndykova, S.N., Adilzhanova, S.A.: Decision support system on optimization of information protection tools placement. Int. J. Adv. Trends Comput. Sci. Eng. 9(4), 4457–4464 (2020)
16. Xie, P., Li, J.H., Ou, X., Liu, P., Levy, R.: Using Bayesian networks for cybersecurity analysis. In: 2010 IEEE/IFIP International Conference on Dependable Systems & Networks (DSN), pp. 211–220. IEEE (2010)
17. Shin, J., Son, H., Heo, G.: Development of a cybersecurity risk model using Bayesian networks. Reliab. Eng. Syst. Saf. 134, 208–217 (2015)
18. Özgür, A., Erdem, H.: A review of KDD99 dataset usage in intrusion detection and machine learning between 2010 and 2015. PeerJ Preprints 4, e1954v1 (2016)
19. Elkan, C.: Results of the KDD'99 classifier learning. ACM SIGKDD Explorat. Newsl. 1(2), 63–64 (2000)
20. Lakhno, V.A., Kravchuk, P.U., Malyukov, V.P., Domrachev, V.N., Myrutenko, L.V., Piven, O.S.: Developing of the cybersecurity system based on clustering and formation of control deviation signs. J. Theor. Appl. Inf. Technol. 95(21), 5778–5786 (2017)
21. Lakhno, V.A., Hrabariev, A.V., Petrov, O.S., Ivanchenko, Y.V., Beketova, G.S.: Improving of information transport security under the conditions of destructive influence on the information-communication system. J. Theor. Appl. Inf. Technol. 89(2), 352–361 (2016)
22. Heckerman, D.: A tutorial on learning with Bayesian networks. Technical report, Microsoft Research, Redmond (1995), 58 p.
23. Raileanu, L.E., Stoffel, K.: Theoretical comparison between the Gini index and information gain criteria. Ann. Math. Artif. Intell. 41(1), 77–93 (2004)
24. Alhendawi, K.M., Al-Janabi, A.A.: An intelligent expert system for management information system failure diagnosis. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_26
Developing an Intelligent System for Recommending Products

Md. Shariful Islam1, Md. Shafiul Alam Forhad1, Md. Ashraf Uddin2, Mohammad Shamsul Arefin1, Syed Md. Galib3, and Md. Akib Khan4
1 Department of CSE, CUET, Chattogram, Bangladesh
[email protected], {forhad0904063,sarefin}@cuet.ac.bd
2 Information and Communication Technology Division, Dhaka, Bangladesh
[email protected]
3 Department of CSE, JUST, Jessore, Bangladesh
[email protected]
4 Department of EEE, AUST, Dhaka, Bangladesh
[email protected]
Abstract. When it comes to deciding which product to buy, knowing the overall opinion of other users is very helpful, and evaluating this from user ratings is simple. Although a machine can evaluate recommendations simply by aggregating user ratings, it is sometimes difficult to obtain accurate and efficient results this way. Evaluating users' comments, on the other hand, usually means assigning humans to read all the comments one by one and then decide how useful the product seems; this is a tedious process that wastes valuable time and resources when it cannot be automated. Likewise, selecting the most valuable product from an enormous number of reviews is a hectic task for consumers. Considering all of the above, we have developed a machine learning based intelligent system which not only evaluates ratings from users' reviews but also provides a reflection of which products are popular simply by analyzing those reviews.

Keywords: Sentiment analysis · Machine learning · Random forest classifier · K-nearest neighbors · Support vector machine · Deep learning
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 476–490, 2021. https://doi.org/10.1007/978-3-030-68154-8_43

1 Introduction

With the ever-increasing growth of the internet, websites all around the world contain a continuously growing amount of user-generated text. Depending on the type of website, much of this text consists of reviews of various products. If processed properly, these texts can be used to understand public opinion on different products so that manufacturing companies can adjust their products accordingly. On the other hand, this can also help customers make proper decisions when it comes to buying products. While it is easy to figure out the utility of a product from its average rating, it becomes
difficult when ratings are not provided on the website. Finding the best product then involves reading a large number of reviews on several items before finally making a decision. In either case, performing sentiment analysis on online reviews of various products can save both companies and customers valuable time.

Supervised learning requires suitable labels for efficient results. However, the polarity of an individual text review is highly unlikely to be available with the text itself; even when such labels are available, they are either machine generated or human annotated, and in both cases they do not necessarily reflect the reviewer's original intention. The rating provided with a review, by contrast, can be considered a valid label, since a rating is the originally intended quantitative representation of the review. Hence, in this paper, we propose an intelligent system to predict ratings for different products and to evaluate which product is best depending on users' reviews. After predicting ratings with different machine learning models, various performance measurements were checked to compare and evaluate the overall efficiency.

The rest of the paper is organized as follows: Sect. 2 presents the literature review related to our work. A detailed description of our proposed system is given in Sect. 3. Experimental results are shown in Sect. 4. Finally, a conclusion is provided in Sect. 5.
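The idea of treating the rating as the label for the review text can be illustrated with a deliberately tiny sketch. All data and helper names below are invented, and the actual system uses the models listed in the keywords, not this toy bag-of-words Naïve Bayes.

```python
# Toy bag-of-words Naive Bayes: train on (review text, rating) pairs and
# predict the rating of unseen text. Reviews and names are made up.
from collections import Counter, defaultdict
from math import log

def train(reviews):
    word_counts, class_counts = defaultdict(Counter), Counter()
    for text, rating in reviews:
        class_counts[rating] += 1
        word_counts[rating].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict(text, model):
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for rating, n in class_counts.items():
        lp = log(n / total)  # class prior
        denom = sum(word_counts[rating].values()) + len(vocab)
        for w in text.lower().split():
            lp += log((word_counts[rating][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = rating, lp
    return best

reviews = [("great product loved it", 5), ("excellent quality great value", 5),
           ("terrible broke quickly", 1), ("awful waste of money", 1)]
model = train(reviews)
pred_pos = predict("great value loved it", model)  # 5
pred_neg = predict("terrible waste", model)        # 1
```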
2 Literature Review

Some research has previously been done on similar topics. For example, Ahmed et al. proposed a deep learning framework [1] to predict ratings from online reviews in the Yelp and Amazon datasets. They achieved very good performance using various models; however, their analysis of the review text appears to lack the use of linguistic rules.

Due to the ever-growing availability of online reviews, it becomes difficult to draw chronological insights from them. For this reason, in the paper [1] by Murtadha et al., chronological sentiment analysis using dynamic and temporal clustering was presented. In their research, they also considered consistency in order to assess the performance of their algorithms, and they utilized a unified feature set of window sequential clustering to enhance the ensemble learning. Murtadha et al. also proposed, in another paper [2], an automated unsupervised method of sentiment analysis. Their method involves contextual analysis and unsupervised ensemble learning, with SentiWordNet deployed in both phases. The research datasets include Australian Airlines and HomeBuilders reviews for domain performance; their algorithm appears to be the most accurate on some datasets, even though Support Vector Machine (SVM) is the most accurate on average. Applying deeper contextual analysis might have resulted in a higher accuracy score.

Long Mai and Bac Le took a slightly different approach towards rating prediction of product reviews. In their paper [3], they proposed a unique framework for the automatic collection and processing of YouTube comments on various products, which are then used for sentiment analysis of those products. Various models
Md. S. Islam et al.
have been used on their reviews, with accuracy ranging from 83% to 92%. However, the domain of their dataset needs to be expanded to make the models more effective in the real world. In [4], the authors presented a novel approach that uses aspect-level sentiment detection. They tested their model on Amazon customer reviews, and their cross-validation score seems to be somewhere near 87%, which is 10% less than the score claimed in the paper. More challenging areas like spam, sarcasm, etc. have not been touched in their current research. Rishith et al. introduce in their paper [5] a sentiment analysis model that uses Long Short Term Memory (LSTM) for analyzing movie reviews. Their model seems to have very high accuracy on both the training and the test data; however, it might achieve better performance if applied to more varied datasets. The paper [6] reviews the most recent studies that have used deep learning to tackle sentiment analysis problems, such as sentiment polarity. When performing sentiment analysis, it is better to combine deep learning techniques with word embeddings than with Term Frequency-Inverse Document Frequency (TF-IDF). Finally, a comparative study was conducted on the results obtained for the different models and input features. In [7], the authors applied machine learning algorithms to detect the polarity of reviews. Their models seem to have very high accuracy with Naïve Bayes; however, their algorithm cannot detect from a review why people like or dislike a product. In [8], Isolation Forest was used for sentiment analysis on data recovered from Amazon for products of different brands. While their project flow diagram shows the sentiment detection process, no prediction measurement score is present in their paper.
The use of convolutional neural networks is also common in text-based prediction. In [9], the authors proposed an approach for predicting the helpfulness of different online reviews in the form of a helpfulness score using a Convolutional Neural Network (CNN). However, their dataset is limited to Amazon and Snapdeal only, and they do not appear to use any model for obtaining metadata information. Mamtesh et al. analyzed customer reviews for a number of movies using K-Nearest Neighbors (KNN), Logistic Regression and Naïve Bayes in their paper [10]. They were able to predict the sentiment of the reviews with very high accuracy, and might achieve even higher accuracy with a hybrid approach. In [11], the author analyzes user sentiments of Windows Phone applications and classifies their polarity using a Naïve Bayes model. The model seems able to achieve high accuracy, but it can classify into only two categories: positive and negative. Rintyarna et al. proposed a method for extracting features from texts by focusing on word semantics in their paper [12]. To achieve this, they used an extended Word Sense Disambiguation method. From their experiment, it seems that their proposed method boosts the performance of the ML algorithms considerably. In [13], a data pre-processing method was proposed with three machine learning techniques, Support Vector Machine (SVM), Naïve Bayes Multinomial (NBM) and C4.5, for sentiment analysis in both the English and Turkish languages.
Developing an Intelligent System for Recommending Products
Finally, they concluded that although SVM worked well for the English language, NBM performed well for the Turkish language. In [14], an appropriate and accurate algorithm for classifying movie reviews was sought among several machine-learning-based algorithms. In [15], a hybrid sentiment analysis method was proposed for analyzing Amazon Canon camera reviews in order to classify them into two polarity classes, positive and negative. Compared to SVM, their approaches seem to achieve very high scores on different performance measures, including accuracy, precision and recall. In [16], Haque et al. used supervised learning methods on large-scale Amazon product reviews. Their models achieve performance around 86% to 94%, varying across models and datasets; their main limitation is the scarcity of a standard dataset required for 10-fold cross-validation. Sentiment analysis is a demanding task comprising natural language processing, data mining, and machine learning; to tackle this challenge, deep learning models are often merged with these techniques. In [17], the authors emphasized the performance of different deep learning techniques in sentiment analysis. In [18], the assessment was done using 10-fold cross-validation, with precision estimated via a confusion matrix and the Receiver Operating Characteristic (ROC) curve. The results showed an increase in SVM accuracy from 82.00% to 94.50%, from which it can be concluded that Support Vector Machine-based Particle Swarm Optimization provides more accurate classification of smartphone product reviews.
3 Proposed Methodology

The proposed methodology consists of the following steps for polarity prediction through sentiment analysis. Figure 1 shows a block diagram of the entire process.

3.1 Dataset Description
Two different datasets have been used here. The first dataset contains over 400,000 reviews of unlocked phones for sale on the Amazon website; 40,000 of these reviews have been used for this paper. This dataset has 6 columns: Product Name, Brand Name, Price, Rating, Reviews and Review Votes. It was collected from [19] in CSV format. The second dataset has more than 34,000 reviews provided by Datafiniti on Amazon products such as Kindle e-readers, the Echo smart speaker, and so on. This dataset contains 24 columns: id, dateAdded, dateUpdated, primaryCategories, imageURLs, keys, reviews.id, reviews.rating, reviews.text, reviews.title, etc. However, it does not contain the product type, which has been added to the dataset manually. It has reviews mostly on batteries and tablets, which is why only these two product types have been used for this paper. This dataset [20] was also acquired in CSV format.
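The paper does not show loading code; as an illustrative sketch in Python, reviews can be read with the standard csv module and rare products filtered as described later in Sect. 3.2.1. The column names follow the first dataset's schema, but the sample rows and the threshold of 2 are made up for this tiny demonstration (the paper uses thresholds of 25 and 15).

```python
import csv
import io
from collections import Counter

# Hypothetical sample mimicking the first dataset's schema.
SAMPLE = """Product Name,Brand Name,Price,Rating,Reviews,Review Votes
Phone A,BrandX,199,5,Great phone,1
Phone A,BrandX,199,4,Good value,0
Phone B,BrandY,99,2,Screen broke,3
"""

def load_reviews(f):
    """Read review rows from a CSV file object into a list of dicts."""
    return list(csv.DictReader(f))

def filter_rare_products(rows, min_reviews=25):
    """Drop products with fewer than min_reviews reviews (Sect. 3.2.1)."""
    counts = Counter(r["Product Name"] for r in rows)
    return [r for r in rows if counts[r["Product Name"]] >= min_reviews]

rows = load_reviews(io.StringIO(SAMPLE))
kept = filter_rare_products(rows, min_reviews=2)  # low threshold for the tiny sample
```

With the sample above, only "Phone A" survives filtering, since "Phone B" has a single review.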
3.2 Pre-processing

This step involves preparing the data from the datasets for training the models. It consists of several sub-steps.

3.2.1 Initial Database-Based Pre-processing
Since the first dataset is very large, only its first 40,000 reviews have been selected for training, while 26,467 reviews were selected from the second dataset. Since the product pages are written by the sellers, almost all of the product names in the first dataset include their descriptions. As a result, the same product ended up with multiple different unique product names. For this reason, the names of products with such issues have been replaced by the common name of the product in the first dataset, for better product suggestion. Moreover, among those reviews, some products have a very small number of user reviews, indicating very low popularity among users. With so few reviews, a small number of high ratings (either because only the satisfied customers ended up reviewing the products or because of fake reviews with high ratings) would give such products higher average ratings than most of the actually popular products. This is why products with fewer than 25 reviews have been filtered out. For the second dataset, an extra column was added before filtering in order to identify which product belongs to which product type. Since this dataset is smaller, products with fewer than 15 reviews were filtered out. Next, only the reviews of the most common product types were selected for machine learning. The next part is text cleaning.

3.2.2 Cleaning Text
This step transforms the text into a much simpler form of data so that the result can easily be crunched into numbers. It involves the following sub-steps.

3.2.2.1. Punctuation Removal
In this step, all punctuation symbols are removed from the texts, since they usually do not have a significant impact on the final rating.

3.2.2.2. Normalization
Here, all uppercase letters in the texts are replaced by their lowercase counterparts for further simplicity.

3.2.2.3. Stop-Word Removal
There are some words in every language that are so common that their presence in a sentence does not influence its meaning very much; articles, prepositions and conjunctions are common stop-words. Since their influence on the meaning of a sentence is very small, removing them results in a more condensed text.
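The three cleaning sub-steps can be sketched in Python as follows. The stop-word list here is a tiny illustrative subset, not the full list a real pipeline (or the paper) would use.

```python
import string

# Tiny illustrative stop-word list; a real system would use a full list
# (e.g. NLTK's), which is an assumption not stated in the paper.
STOP_WORDS = {"a", "an", "the", "is", "and", "of", "to", "in", "on", "it"}

def clean_text(text):
    """Apply Sects. 3.2.2.1-3.2.2.3: strip punctuation, lowercase,
    and remove stop-words."""
    # 3.2.2.1 Punctuation removal
    text = text.translate(str.maketrans("", "", string.punctuation))
    # 3.2.2.2 Normalization (lowercasing)
    text = text.lower()
    # 3.2.2.3 Stop-word removal
    return " ".join(w for w in text.split() if w not in STOP_WORDS)

cleaned = clean_text("The battery life is GREAT, and it charges fast!")
```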
3.2.2.4. Lemmatization
Lemmatization involves resolving words to their dictionary form. Hence, a dictionary search is usually used in order to properly lemmatize the words into their dictionary-accurate base forms. The purpose of lemmatization is to make sure that the algorithm does not distinguish between a word's different forms. This also reduces the word index of the tokenizer and hence makes the result more efficient.
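A dictionary-lookup lemmatizer can be sketched as below; the lemma table is a toy stand-in for the dictionary search the paper describes (real systems typically use a WordNet-backed lemmatizer, which the paper does not name).

```python
# Toy dictionary-lookup lemma table; illustrative entries only.
LEMMA_TABLE = {
    "charges": "charge", "charged": "charge", "charging": "charge",
    "phones": "phone", "batteries": "battery",
}

def lemmatize(tokens):
    """Map each token to its dictionary base form; unknown words pass through."""
    return [LEMMA_TABLE.get(t, t) for t in tokens]

lemmas = lemmatize(["batteries", "charging", "fast"])
```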
3.2.3 Final Pre-processing and Training
In this step, the labels are adjusted and the dataset is split in the ratio of 3:1. The training data is then used for tokenization, and all out-of-vocabulary words are represented by the "<OOV>" token. Next, the texts of both the training and test data are converted to sequences according to the tokenizer. Finally, all the reviews are padded to ensure equal length.
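The tokenization, <OOV> handling and padding described above can be sketched without any deep learning library (the paper's wording matches the Keras Tokenizer API, but that is an assumption, so a dependency-free version is shown):

```python
def fit_tokenizer(train_texts):
    """Build a word index from the training texts only; index 1 is <OOV>."""
    vocab = {"<OOV>": 1}
    for text in train_texts:
        for w in text.split():
            vocab.setdefault(w, len(vocab) + 1)
    return vocab

def texts_to_padded(texts, vocab, maxlen):
    """Convert texts to index sequences (unknown words map to <OOV>) and
    post-truncate/post-pad to maxlen, as in the 15/30-word limits of Sect. 5."""
    out = []
    for text in texts:
        seq = [vocab.get(w, vocab["<OOV>"]) for w in text.split()]
        seq = seq[:maxlen] + [0] * max(0, maxlen - len(seq))
        out.append(seq)
    return out

vocab = fit_tokenizer(["great phone", "bad battery"])
padded = texts_to_padded(["great price"], vocab, maxlen=4)  # "price" is unseen
```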
3.3 Training
In this step, a K-Nearest Neighbors (KNN) classifier, a Random Forest Classifier (RFC) and a Support Vector Machine (SVM) classifier are trained using the padded training data as features and the ratings as labels. Moreover, a deep learning classifier is also developed for the same purpose. For the second dataset, rather than being trained separately on Battery reviews and Tablet reviews, the Machine Learning (ML) models are trained on 75% of the entire combined dataset for better generalization.
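The paper does not give training code; scikit-learn's classifiers would be the usual choice, but that is an assumption. As one hedged, self-contained example, a minimal from-scratch KNN classifier over padded sequences looks like this:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict a rating for feature vector x by majority vote among the
    k nearest training vectors (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy padded sequences as features, ratings as labels (illustrative values).
X = [[2, 1, 0, 0], [2, 3, 0, 0], [9, 8, 0, 0], [9, 7, 1, 0]]
y = [5, 5, 1, 1]
pred = knn_predict(X, y, [2, 2, 0, 0], k=3)
```

The query vector is closest to the two rating-5 examples, so the 3-nearest majority vote predicts a rating of 5.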
3.4 Testing and Rating Prediction
When training is complete, these classifiers are tested against the padded test data for performance measurement. Even though the ML models are trained on 75% of the combined dataset, they are tested individually against the Battery and Tablet test sets. The values predicted by the classifiers are stored as the predicted ratings.
3.5 Product Recommendation
This step is performed once training is complete for each classifier. Here, the average predicted rating for each product is calculated separately. These average ratings are used for recommending products.
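The averaging step can be sketched as follows (product names are made up for the example):

```python
from collections import defaultdict

def top_products(product_names, predicted_ratings, n=5):
    """Average the predicted ratings per product and return the n
    highest-rated products (Sect. 3.5)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for name, rating in zip(product_names, predicted_ratings):
        sums[name] += rating
        counts[name] += 1
    averages = {name: sums[name] / counts[name] for name in sums}
    return sorted(averages, key=averages.get, reverse=True)[:n]

names = ["Phone A", "Phone A", "Phone B", "Phone B", "Phone C"]
preds = [5, 4, 3, 2, 5]
best = top_products(names, preds, n=2)
```

Here "Phone C" averages 5.0 and "Phone A" averages 4.5, so those two are recommended.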
[Figure 1 block diagram: a storage module holds the unlocked phone reviews and Amazon product reviews; the data flows through dataset splitting (training set, test set), pre-processing (eliminating unnecessary reviews, sentence splitting, special character removal, normalization, stop-word removal, lemmatization), tokenization, text sequencing and padding, then training and testing, rating prediction, and recommendation generation.]
Fig. 1. An intelligent system for recommending products
4 Experimental Results

The dataset and number of reviews used for machine learning for the different types of products is shown in Table 1.

Table 1. Review count for different types of products
Product types     Dataset no.   Number of reviews
Unlocked Phones   1             40000
Batteries         2             12071
Tablets           2             14396
Table 2. Accuracy, precision and recall for several classifiers on different products
Classifier   Metric      Unlocked Phones   Batteries   Tablets
KNN          Accuracy    82.75%            69.18%      84.7%
             Precision   78.67%            47.26%      67.54%
             Recall      76.37%            41.12%      73.3%
RFC          Accuracy    86.63%            77.5%       88.86%
             Precision   90.35%            77.92%      95.81%
             Recall      77.44%            40.12%      73.57%
SVM          Accuracy    81.80%            78.66%      89.52%
             Precision   91.93%            89.48%      97.26%
             Recall      72.27%            36.17%      74.57%
DL Model     Accuracy    86.90%            77.80%      87.21%
             Precision   80.86%            53.28%      78.34%
             Recall      79.54%            47.22%      77.38%
From Table 1, we can see that 40,000 unlocked-phone reviews from the first dataset have been used. The reviews for batteries and tablets both come from the second dataset and contain 12071 and 14396 reviews respectively. The score types used for measuring the performance of the different classifiers are accuracy, precision, and recall:

Accuracy = (Number of Correct Predictions) / (Total Number of Predictions)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)
Here, TP, FP & FN denote True Positive, False Positive & False Negative respectively. The overall precision for a multi-class classification is the average of the precision scores of every individual class. The same goes for the overall recall score as well. Table 2 shows different performance scores for certain classifiers on several types of products. Figure 2 shows a graphical representation of accuracy, precision and recall for several classifiers on different product types. From this figure, it can be seen that Deep Learning based classifier has both the highest accuracy and the highest recall score for unlocked phone reviews, with an accuracy score of 86.90% and a recall score of 79.54%. However, Support Vector Machine (SVM) has the highest precision score of 91.93% for the same reviews.
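As a self-contained sketch, these scores, with precision and recall macro-averaged over classes as described above, can be computed directly from the true and predicted labels:

```python
def scores(y_true, y_pred):
    """Accuracy plus macro-averaged precision and recall."""
    classes = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    return acc, sum(precs) / len(precs), sum(recs) / len(recs)

# Tiny two-class example with illustrative labels.
acc, prec, rec = scores([1, 1, 2, 2], [1, 2, 2, 2])
```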
Fig. 2. Graphical presentation of performance measurement scores of different classifiers
The second product type is batteries. For these reviews, Support Vector Machine (SVM) seems to have both the highest accuracy score and the highest precision, with an accuracy of 78.66% and a precision of 89.48%; however, the Deep Learning (DL) based classifier has the highest recall, at 47.22%. Finally, tablets are the last product type. For the selected tablet reviews, SVM again has the highest accuracy and precision scores, at 89.52% and 97.26% respectively; however, the DL-based model again seems to have the highest recall, at 77.38%.
Table 3 shows the training time for all four classifiers. From it, it can be seen that the classifiers sorted in ascending order of training time are KNN, RFC, SVM and DL for either dataset, with KNN's training time being less than 1 second in both cases and the DL model's training time being greater than 300 seconds in both cases. It can be noted that the training time is higher on the second dataset for all classifiers, despite the second dataset being smaller; this is because the first dataset is very large, so, compared to the second dataset, less training was required for good generalization.

Table 3. Training time (in seconds) for different classifiers
Classifier   Dataset 1 (Unlocked Phones)   Dataset 2 (Batteries and Tablets)
KNN          0.0568621                     0.0903794
RFC          0.8510487                     1.2277246
SVM          41.0242428                    52.5817226
DL Model     372.0272711                   519.7536614
Figure 3 shows the top 5 phones according to the average of their individual ratings. This can be used as a benchmark for comparing the effectiveness of the recommendation systems implemented with the ML models.
Fig. 3. Graphical presentation of the average of the actual ratings of Top 5 phones
From Fig. 4, we can see that the DL (Deep Learning) model managed to predict 4 items from the top 5 phones ranked by average actual rating (Fig. 3), in mostly the same order.
Fig. 4. Graphical presentation of average of predicted scores for Top 5 phones recommended by different machine learning models
Similar to Fig. 3, Fig. 5 also shows the average of the actual ratings, but for the batteries. From Fig. 6, we can see that all four machine learning models predicted the order correctly. Moreover, both the KNN and DL models were able to predict averages close to the actual values.
Fig. 5. Graphical presentation of the average of the actual ratings of the types of batteries
Fig. 6. Graphical presentation of average of predicted scores for the types of batteries
Figure 7 shows a graphical representation of the top 5 tablets based on the average of their actual ratings.
Fig. 7. Graphical presentation of the average of the actual ratings of Top 5 tablets
From Fig. 8, it can be seen that the KNN model was able to predict 4 items from Fig. 7, with the order and the values also being mostly accurate.
Fig. 8. Graphical presentation of average of predicted scores for top 5 tablets recommended by different machine learning models
5 Conclusion

In this paper, we have developed an intelligent system which predicts ratings from different product reviews and provides recommendations to customers. From the experimental results, it has been found that SVM gives better results based on the performance scores. However, in terms of training time, the Random Forest Classifier is more efficient than SVM, while its performance is close to that of SVM for the three product types. Overall, the performance scores for all four classifiers are quite good. Bag of Words tends to increase the dimensionality of the feature matrix as the vocabulary grows; moreover, since the context of the words cannot be recovered from the Bag of Words approach, semantic meaning is often not preserved. In order to address these issues, we decided to post-truncate and post-pad the review texts with a maximum word length of 15 and 30 for Dataset 1 and Dataset 2 respectively. In future, we will try to improve the performance of our system by using some rule-based pre-processing techniques.
References
1. Al-sharuee, M.T.: Sentiment analysis: dynamic and temporal clustering of product reviews (2020)
2. Al-sharuee, M.T., Liu, F., Pratama, M.: Sentiment analysis: an automatic contextual analysis and ensemble clustering approach and comparison. Data Knowl. Eng. (2018)
3. Mai, L., Le, B.: Joint sentence and aspect-level sentiment analysis of product comments. Ann. Oper. Res. (2020)
4. Nandal, N., Tanwar, R., Pruthi, J.: Machine learning based aspect level sentiment analysis for Amazon products. Spat. Inf. Res. (2020)
5. Rishith, U.T., Rangaraju, G.: Usense: a user sentiment analysis model for movie reviews by applying LSTM. Int. J. Res. Eng. Appl. Manag. 01, 369–372 (2020)
6. Dang, N.C., Moreno-García, M.N., De la Prieta, F.: Sentiment analysis based on deep learning: a comparative study. Electronics 9(3) (2020)
7. Jagdale, R.S., Shirsat, V.S., Deshmukh, S.N.: Sentiment analysis on product reviews using machine learning techniques. In: Proceedings of CISC 2017. Springer, Singapore (2019)
8. Salmiah, S., Sudrajat, D., Nasrul, N., Agustin, T., Harani, N.H., Nguyen, P.T.: Sentiment analysis for Amazon products using Isolation Forest (6), 894–897 (2019)
9. Saumya, S., Singh, J.P., Dwivedi, Y.K.: Predicting the helpfulness score of online reviews using convolutional neural network. Soft Comput. (2019)
10. Mamtesh, M., Mehla, S.: Sentiment analysis of movie reviews using machine learning classifiers. Int. J. Comput. Appl. 182(50), 25–28 (2019)
11. Normah, N.: Naïve Bayes algorithm for sentiment analysis windows phone store application reviews. SinkrOn 3(2), 13 (2019)
12. Rintyarna, B.S., Sarno, R., Fatichah, C.: Semantic features for optimizing supervised approach of sentiment analysis on product reviews. Computers 8(3), 55 (2019)
13. Parlar, T., Özel, S.A., Song, F.: Analysis of data pre-processing methods for sentiment analysis of reviews. Comput. Sci. 20(1), 123–141 (2019)
14. Dwivedi, S., Patel, H., Sharma, S.: Movie reviews classification using sentiment analysis. Indian J. Sci. Technol. 12(41), 1–6 (2019)
15. Chhabra, I.K., Prajapati, G.L.: Sentiment analysis of Amazon Canon camera review using hybrid method. Int. J. Comput. Appl. 182(5), 25–28 (2018)
16. Haque, T.U., Saber, N.N., Shah, F.M.: Sentiment analysis on large scale Amazon product reviews. In: 2018 IEEE International Conference on Innovative Research and Development, pp. 1–6 (2018)
17. Kalaivani, A., Thenmozhi, D.: Sentimental analysis using deep learning techniques, pp. 600–606 (2018)
18. Wahyudi, M., Kristiyanti, D.A.: Sentiment analysis of smartphone product review using support vector machine algorithm-based particle swarm optimization. J. Theor. Appl. Inf. Technol. 91(1), 189–201 (2016)
19. Amazon Reviews: Unlocked Mobile Phones | Kaggle. https://www.kaggle.com/PromptCloudHQ/amazon-reviews-unlocked-mobile-phones
20. Consumer Reviews of Amazon Products | Kaggle. https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products
Branch Cut and Free Algorithm for the General Linear Integer Problem

Elias Munapo
Department of Statistics and Operations Research, School of Economic Sciences, North West University, Mafikeng, South Africa
[email protected]
Abstract. The paper presents a branch, cut and free algorithm for the general linear integer problem. Like most exact algorithms for the general linear integer problem, the proposed algorithm relies on the usual strategy of relaxing the linear integer problem and solving it to obtain a continuous optimal solution. If the solution is integer, the optimal solution has been found; otherwise the largest basic variable in the continuous optimal solution is selected and freed of integral restrictions, and the branch and cut algorithm is then used to search for the optimal integer solution. The main and obvious challenge with branch and bound related algorithms is that the number of nodes generated to verify the optimal solution can sometimes explode to unmanageable levels. Freeing a selected variable of integral restrictions, as proposed in this paper, can significantly reduce the complexity of the general linear integer problem.

Keywords: Linear integer problem · Continuous optimal solution · Variable freeing · Branch and cut algorithm · Computational complexity · Optimality verification
1 Introduction

The general linear integer programming (LIP) problem has many important applications. These include interactive multimedia systems [1], home energy management [15], cognitive radio networks [7], mining operations [22], relay selection in secure cooperative wireless communication [11], electrical power allocation management [18, 20], production planning [4], selection of renovation actions [2], waste management, formulation and solution methods for tour conducting, and optimization of content delivery networks [16]. The main challenge with branch and bound related algorithms is that the number of nodes required to verify the optimal solution can sometimes explode to unmanageable levels. Freeing a variable of all its integral restrictions, as proposed in this paper, can significantly reduce the complexity of the general linear integer problem. In fact, the LIP problem is NP-hard; it is very difficult to solve, and heuristics are still used to approximate optimal solutions in reasonable times. An efficient, consistent exact method for the general LIP is still at large. The coefficient matrix of the general linear integer problem is not unimodular [14].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 491–505, 2021. https://doi.org/10.1007/978-3-030-68154-8_44

A branch, cut and free algorithm which combines the branch and cut algorithm and
freeing of a selected variable is proposed in this paper. This algorithm is similar to [17], but differs in that it can be used for any linear integer model with any number of variables and linear constraints. The method proposed in [17], on the other hand, requires the calculation of variable sum limits before solving, works only for a singly-constrained linear integer problem, and frees no variables in the solving process.
2 The General LIP

Maximize Z = c1x1 + c2x2 + … + cnxn,
such that:
a11x1 + a12x2 + … + a1nxn ≤ b1,
a21x1 + a22x2 + … + a2nxn ≤ b2,
…
am1x1 + am2x2 + … + amnxn ≤ bm.   (1a)

Minimize Z = c1x1 + c2x2 + … + cnxn,
such that:
a11x1 + a12x2 + … + a1nxn ≥ b1,
a21x1 + a22x2 + … + a2nxn ≥ b2,
…
am1x1 + am2x2 + … + amnxn ≥ bm.   (1b)

Where aij, bi and cj are constants and xj is integer, ∀ i = 1, 2, …, m and j = 1, 2, …, n.
3 Variable Integral Restriction Freeing Theorem

Given n variables in an equality constraint such as that given in (2), one of the n variables can be freed of integral restrictions. In other words, one of the n variables need not be restricted to integer values.

a1x1 + a2x2 + … + ajxj + … + anxn = b.   (2)

Where aj and b are integral constants and the unknown variables are integer, ∀ j = 1, 2, …, n. If all the other variables are integers, then one variable (xj) out of the n variables does not need to be restricted to integer values.
Proof. From (2), the variable xj can be made the subject, as given in (3):

ajxj = b − (a1x1 + a2x2 + … + anxn), where the sum on the right omits the j-th term.   (3)

If a sum of integers is subtracted from an integer, the difference is also an integer. Hence, according to constraint (2), the variable xj may be treated as a free variable.
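A quick numeric check of the theorem, sketched in Python with illustrative names. Note that solving for xj divides by aj, so the freed value is guaranteed to be integral when aj = ±1; this holds for the surrogate constraints (14) and (15) used later, whose coefficients are all 1.

```python
def freed_value(coeffs, values, b, j):
    """Solve a1*x1 + ... + an*xn = b for x_j, given integer values for
    every other variable (values[j] is ignored)."""
    rest = sum(a * v for i, (a, v) in enumerate(zip(coeffs, values)) if i != j)
    return (b - rest) / coeffs[j]

# With unit coefficients (as in the surrogate constraint
# x1 + x3 + x4 + x5 + x6 = 64 used later), integer values for the other
# variables force the freed variable x6 to be integer automatically.
x6 = freed_value([1, 1, 1, 1, 1], [31, 22, 9, 0, 0], 64, j=4)
```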
4 Continuous Optimal Solution

The general continuous optimal tableau is given in Table 1. The arrangement of variables is done in this way just for convenience, as any optimal continuous tableau can be arranged in many ways.

Table 1. Continuous optimal tableau.

Basic variables (x)   Non-basic variables (s)     rhs
0 0 0 … 0             ω1  ω2  ω3  …  ωm           γ
1 0 0 … 0             α11 α12 α13 …  α1m          β1
0 1 0 … 0             α21 α22 α23 …  α2m          β2
0 0 1 … 0             α31 α32 α33 …  α3m          β3
…                     …                           …
0 0 0 … 1             αm1 αm2 αm3 …  αmn          βm
Since this is the continuous optimal tableau, (4) and (5) are valid:

x1, x2, x3, …, xm ≥ 0.   (4)

β1, β2, β3, …, βm ≥ 0.   (5)
In addition, γ is the objective value and αij is a constant; both γ and αij can be either negative or positive. In this paper, Zcopt denotes the continuous optimal solution. Other examples of general continuous optimal tableaux are given in Table 2 and Table 3. The specific continuous optimal solutions for the numerical illustrations are given in Table 4 and Table 5.
5 Surrogate Constraint

The surrogate constraint, or clique constraint, is obtained by adding all the rows of the original variables that are basic at optimality, as given in (6).
Table 2. Sub-Problem 1.

Table 3. Sub-Problem 2.
x1 + α11s1 + α12s2 + α13s3 + … + α1msm = β1,
x2 + α21s1 + α22s2 + α23s3 + … + α2msm = β2,
x3 + α31s1 + α32s2 + α33s3 + … + α3msm = β3,
…
xm + αm1s1 + αm2s2 + αm3s3 + … + αmmsm = βm.   (6)

This simplifies to (7):

x1 + x2 + x3 + … + xm + k1s1 + k2s2 + k3s3 + … + kmsm = bc.   (7)
Where

kj = α1j + α2j + α3j + … + αmj, ∀ j = 1, 2, 3, …, m.   (8)

bc = β1 + β2 + β3 + … + βm.   (9)

Since bc is not necessarily integer, we have (10):

bc = I + f.   (10)

Where I is the integer part and f is the fractional part. Since at optimality the non-basic variables are zero, as given in (11), i.e.

s1 = s2 = s3 = … = sm = 0,   (11)

then (12) and (13) are valid.

Sub-Problem a:
x1 + x2 + x3 + … + xm ≤ I.   (12)

Sub-Problem b:
x1 + x2 + x3 + … + xm ≥ I + 1.   (13)

In other words, we add (14) to Sub-Problem a instead of (12):

x1 + x2 + x3 + … + xj + … + xm + x_{n+1} = I.   (14)

Similarly, (13) becomes (15):

x1 + x2 + x3 + … + xj + … + xm − x_{n+2} = I + 1.   (15)
As a diagram, the original problem and the two sub-problems are related as shown in Fig. 1.
[Node 0 (the original problem) branches into Node a, with the added constraint x1 + x2 + x3 + … + xj + … + xm + x_{n+1} = I, and Node b, with the added constraint x1 + x2 + x3 + … + xj + … + xm − x_{n+2} = I + 1.]

Fig. 1. The two initial sub-problems of an LIP.
Where Node 0 is the original problem, Node a is given by Table 2, which is the original problem after adding constraint (14), and Node b is given by Table 3, which is obtained after adding constraint (15) to the original problem. The unrestricted variable is selected as the basic variable satisfying (16):

βℓ = max{β1, β2, β3, …, βm}.   (16)
Justification: the larger a variable's range, the more branches we are likely to obtain, so it makes sense not to subject such a variable to the integral restriction. The equality constraint (14) is added to the continuous optimal tableau to obtain Sub-Problem a. Similarly, (15) is added to the continuous optimal tableau to obtain Sub-Problem b. The variable xj is now unrestricted by the variable integral restriction freeing theorem. In this paper, Zopt denotes the optimal integer solution.
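The surrogate construction and variable selection of (8)-(16) can be sketched numerically (function and variable names are illustrative):

```python
import math

def build_subproblems(basic_values):
    """Sum the basic-variable values to get the surrogate rhs b_c, split it
    into integer part I and fraction f (Eq. 10), and pick the variable to
    free as the one with the largest value (Eq. 16)."""
    b_c = sum(basic_values)
    I = math.floor(b_c)
    f = b_c - I
    freed = max(range(len(basic_values)), key=lambda i: basic_values[i])
    # Sub-Problem a then adds: sum of basics + x_{n+1} = I
    # Sub-Problem b then adds: sum of basics - x_{n+2} = I + 1
    return I, f, freed

# Values of the basic variables x1, x3, x4, x5 at the continuous optimum
# of Numerical Illustration 1 below.
I, f, freed = build_subproblems([31.5636, 22.7229, 9.8910, 0.1264])
```

For the illustration's values, b_c = 64.3039, so I = 64 and f = 0.3039, and the first entry (x1 = 31.5636) is the variable to be freed.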
6 Measuring Complexity Reduction

The percentage complexity reduction (ρ) when using the proposed algorithm is given in (17):

ρ = ((R − r) / R) × 100%.   (17)
Where R is the number of branch and bound nodes before using the proposed algorithm and r is the number of nodes after freeing a selected variable.
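Equation (17) in code, checked against the 459-to-22 node reduction reported in Numerical Illustration 1:

```python
def complexity_reduction(R, r):
    """Percentage reduction in branch and bound nodes, Eq. (17)."""
    return (R - r) / R * 100.0

rho = complexity_reduction(459, 22)  # Numerical Illustration 1
```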
6.1 Numerical Illustration 1
Maximize Z = 111x1 + 211x2 + 171x3 + 251x4 + 151x5,
such that:
110x1 + 210x2 + 115x3 + 112x4 − 31x5 ≤ 7189,
50x1 + 183x2 + 261x3 − 79x4 + 259x5 ≤ 6780,
142x1 + 244x2 − 140x3 + 139x4 + 153x5 ≤ 2695,
224x1 − 87x2 + 128x3 + 129x4 + 133x5 ≤ 12562,
155x1 + 252x2 + 258x3 + 156x4 + 157x5 ≥ 2533.   (18)

Where x1, x2, x3, x4, x5 ≥ 0 and integer. The continuous optimal solution for Numerical Illustration 1 is presented in Table 4.
Table 4. Continuous optimal tableau (18).
Where s1, s2, s3, s4, s5 ≥ 0 are slack variables, and these satisfy (19):

110x1 + 210x2 + 115x3 + 112x4 − 31x5 + s1 = 7189,
50x1 + 183x2 + 261x3 − 79x4 + 259x5 + s2 = 6780,
142x1 + 244x2 − 140x3 + 139x4 + 153x5 + s3 = 2695,
224x1 − 87x2 + 128x3 + 129x4 + 133x5 + s4 = 12562,
155x1 + 252x2 + 258x3 + 156x4 + 157x5 − s5 = 2533.   (19)

Solving (18) directly by the automated branch and bound algorithm takes 459 nodes to verify the optimal solution given in (20):

x1 = 31, x2 = 0, x3 = 22, x4 = 9, x5 = 0, Zopt = 9462.   (20)

Using the simplex method gives (21) as the continuous optimal solution:

x1 = 31.5636, x2 = 0, x3 = 22.7229, x4 = 9.8910, x5 = 0.1264, Zcopt = 9890.8977.   (21)
6.1.1 Freeing the Selected Variable
The surrogate constraint, or clique equality, becomes (22):

x1 + x3 + x4 + x5 = 64.3039.   (22)

The variable to be freed of integer restrictions is determined by (23):

βℓ = max{31.5636, 0, 22.7229, 9.8910, 0.1264} = 31.5636.   (23)

The largest value, 31.5636, comes from variable x1, which implies that this variable is to be freed of the integral restriction.
Sub-Problem a - additional constraint:

x1 + x3 + x4 + x5 + x6 = 64.   (24)

Sub-Problem b - additional constraint:

x1 + x3 + x4 + x5 − x7 = 65.   (25)
Note that we consider only the original variables (x1, x3, x4 and x5) that are in the optimal basis. Here x6 and x7 are the additional variables, which are also restricted to integers. In diagram form, the original problem and the two sub-problems (a) and (b) are related as shown in Fig. 2.

[Figure 2: Node 0 branches into Node a, with the added constraint x1 + x3 + x4 + x5 + x6 = 64, and Node b, with the added constraint x1 + x3 + x4 + x5 − x7 = 65.]
Fig. 2. The two initial sub-problems of LIP for Numerical Illustration 1
Solving Sub-Problem a using the automated branch and bound algorithm takes 21 nodes to verify the optimal solution given in (26):

x1 = 31, x2 = 0, x3 = 22, x4 = 9, x5 = 0, Z = 9462.   (26)

Solving Sub-Problem b using the automated branch and bound algorithm takes 1 node to verify infeasibility. The complexity of this problem is thus reduced from 459 nodes to just (21 + 1) = 22 nodes, i.e. the complexity reduction is 95.2%, as given in (27):

ρ = ((459 − 22) / 459) × 100% = 95.2%.   (27)
Mere addition of the cut given in (28) for Sub-Problem a and the cut given in (29) for Sub-Problem b does not reduce the total number of nodes required to verify an optimal solution using the automated branch and bound algorithm:

x1 + x3 + x4 + x5 ≤ 64.   (28)

x1 + x3 + x4 + x5 ≥ 65.   (29)
In fact it takes 461 nodes for Sub-Problem (a) and 1 node for Sub-Problem (b), which gives a total of 462 nodes to verify optimality.
Branch Cut and Free Algorithm
A set of 100 randomly generated linear integer problems has shown that complexity decreases significantly if the largest basic variable in the continuous optimal solution is freed of integral restrictions. As a result, freeing of variables is now combined with cuts to form what is proposed in this paper as the branch, cut and free algorithm for the general linear integer problem.
7 Branch and Cut Algorithm

Previously, cuts have been used to enhance the performance of the branch and bound algorithm [3, 17, 22] to form what is now known as the branch and cut algorithm [6, 8, 15, 19]. In addition, pricing has been used in a branch and bound setting to produce the well-known branch and price algorithm [5, 21]. Branch and cut and branch and price are well-known ideas and have been combined to form a hybrid now called branch, cut and price [9, 10]. Despite all these impressive ideas, the general linear integer problem is still NP-hard and very difficult to solve, and heuristics are still used to this day to approximate optimal solutions to this difficult problem [11–13]. Freeing a variable from integral restriction is now used in the context of branch and cut to give birth to the branch, cut and free algorithm for the general integer problem.

7.1 Proposed Branch, Cut and Free Algorithm
The proposed algorithm is made up of the following steps.

Step 1: Relax the given LIP and use linear programming techniques to obtain a continuous optimal solution. If the optimal solution is integer then it is also optimal to the original problem; else go to Step 2.
Step 2: Use the continuous optimal tableau to construct Sub-Problem a and Sub-Problem b. Determine the variable xj to be freed of integral restriction.
Step 3: From Sub-Problems a and b, use branch and cut to search for the smallest i that satisfies (30) and (31).

x1 + x2 + x3 + ... + xj + ... + xm = I − i, i = 0, 1, 2, ... (30)

x1 + x2 + x3 + ... + xj + ... + xm = I + 1 + i, i = 0, 1, 2, ... (31)

where xj is the freed variable and the rest are integers, ∀j = 1, 2, 3, ..., m.
Step 4: Call the integer solution from (30) Za and that from (31) Zb. The optimal solution (Zopt) is given by (32) for a maximization problem and by (33) for a minimization problem.

Zopt = Max[Za, Zb]. (32)

Zopt = Min[Za, Zb]. (33)
Step 5: Verify optimality to determine the actual optimal integer solution.
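As a minimal runnable sketch of this search, consider a hypothetical two-variable maximization LIP (not an example from the paper): maximize Z = 5x1 + 4x2 subject to 6x1 + 4x2 ≤ 24, x1 + 2x2 ≤ 6, x1, x2 ≥ 0 and integer. The LP relaxation gives x1 = 3, x2 = 1.5, so the largest basic variable x1 is freed and the hyperplane sum is 4.5 (I = 4); Sub-Problems a and b then scan the hyperplanes I − i and I + 1 + i as in (30) and (31).

```python
def feasible(x1, x2):
    # Structural constraints of the toy LIP (hypothetical instance).
    return x1 >= 0 and x2 >= 0 and 6 * x1 + 4 * x2 <= 24 and x1 + 2 * x2 <= 6

def best_on_hyperplane(total):
    # Best objective with x1 + x2 == total, x2 integer and x1 freed.
    # Since total and x2 are integers, the freed x1 = total - x2 comes
    # out integer automatically, as in the paper's illustrations.
    best = None
    for x2 in range(total + 1):
        x1 = total - x2
        if feasible(x1, x2):
            val = 5 * x1 + 4 * x2
            best = val if best is None else max(best, val)
    return best

def first_integer_point(totals):
    # Step 3: return the objective at the first feasible hyperplane.
    for t in totals:
        val = best_on_hyperplane(t)
        if val is not None:
            return val
    return None

I = 4                                         # floor of the continuous sum 4.5
za = first_integer_point(range(I, -1, -1))    # Sub-Problem a: I - i, eq. (30)
zb = first_integer_point(range(I + 1, 7))     # Sub-Problem b: I + 1 + i, eq. (31)
z_opt = max(z for z in (za, zb) if z is not None)   # Step 4, eq. (32)
print(z_opt)   # prints 20, attained at the integer point (x1, x2) = (4, 0)
```

Here Sub-Problem b is infeasible on every hyperplane (x1 + 2x2 ≤ 6 forces the sum below 5), mirroring the fathoming seen in the paper's illustrations.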
7.2 Verification of Optimality

We assume the continuous optimal solution and integer optimal solution are known, as given in Fig. 3.
Fig. 3. Proof of optimality
where:

I. ℓcopt is when the hyperplane x1 + x2 + x3 + ... + xj + ... + xm is at Zcopt,
II. ℓopt is when the hyperplane x1 + x2 + x3 + ... + xj + ... + xm is at Zopt,
III. ℓs is when the hyperplane x1 + x2 + x3 + ... + xj + ... + xm has its smallest value,
IV. ℓm is when the hyperplane x1 + x2 + x3 + ... + xj + ... + xm has its largest value,
V. c1x1 + c2x2 + ... + cnxn is the objective plane (shown in green in Fig. 3), which moves from the continuous optimal point (Zcopt) to the optimal integer point (Zopt).

From Fig. 3, it can be noted that there are no other integer points in the shaded region, i.e. in the region between the continuous optimal point (Zcopt) and the optimal integer point (Zopt), besides that optimal integer point.

7.2.1 Searching for the Optimal Integer Point

Armed with these important ideas we can now develop the branch, cut and free algorithm. The first stage is to assume that Zopt is not known. From the diagram we know that the optimal integer point (Zopt) can be on the left hand side or the right hand side of ℓcopt. Since Zopt is not known, ℓs and ℓm are also not known. However, we do not need ℓs and ℓm to obtain ℓopt: this is done by searching from ℓcopt in both directions until ℓopt is obtained. After obtaining ℓopt there is a need to verify this value for optimality.
7.2.2 Optimality Verification

Let Zopta be the optimal solution from Sub-Problem a. Optimality can be verified by adding constraint set (34) to Sub-Problem a.

x1 + x2 + x3 + ... + xj + ... + xm ≤ ℓopta − 1, c1x1 + c2x2 + ... + cnxn ≤ Zopta − 1. (34)

Let Zoptb be the optimal solution coming from Sub-Problem b. Optimality in this case can be verified by adding constraint set (35) to Sub-Problem b.

x1 + x2 + x3 + ... + xj + ... + xm ≥ ℓoptb + 1, c1x1 + c2x2 + ... + cnxn ≤ Zoptb − 1. (35)
The optimal integer solution Zopt is optimal to the original ILP if both (34) and (35) are infeasible.

Mixed Linear Integer Problem

In the case of a mixed linear integer problem, only those variables that are required to be integer in the original mixed linear integer problem can be freed of integral restriction.
7.3 Numerical Illustration 2 [18]

Minimize Z = 162x1 + 38x2 + 26x3 + 301x4 + 87x5 + 5x6 + 137x7,
Such that 165x1 + 45x2 + 33x3 + 279x4 + 69x5 + 6x6 + 122x7 ≥ 18773,
Where x1, x2, ..., x7 ≥ 0 and are integers.

The continuous optimal solution for Numerical Illustration 2 is given in Table 5.
Table 5. Continuous optimal tableau (36).
The variable x3 is the only original variable that is in the optimal basis. This variable must be freed from integral restrictions. The continuous optimal solution for Numerical Illustration 2 is given in (37).

x1 = x2 = x4 = x5 = x6 = x7 = 0, x3 = 568.8788, Zcopt = 14790.8495. (37)

bℓ = Max{0, 0, 568.8788, 0, 0, 0, 0} = 568.8788. (38)
The largest value 568.8788 comes from variable x3 which implies that this variable is to be freed of the integral restriction.
Sub-Problem a (x3 ≤ 568): x3 = 568, x6 = 4.8333, Z = 14792.1667.
Sub-Problem b (x3 ≥ 569): x3 = 569, Z = 14794 (fathomed).

Fig. 4. The two initial sub-problems of the LIP in Numerical Illustration 2.
The branch and cut algorithm is now used to search Sub-Problem a for integer points:

x3 = 568 − i, i = 0, 1, 2, ...

i = 0: Add x3 = 568 to the problem, where x3 is a free variable. Solving by branch and cut we have (39).

x3 = 568, x6 = 5.0000, Z = 14793. (39)
There is no need to search Sub-Problem b since it is fathomed already, i.e.

x3 = 569, Z = 14794. (40)

Zopt = Min[14793, 14794] = 14793. (41)
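The arithmetic behind (39)–(41) can be checked with a short brute-force search. Note the right-hand side 18773 of the covering constraint is an assumption recovered from the continuous solution (x3 = 18773/33 = 568.8788 in (37), and x6 = 29/6 = 4.8333 in Fig. 4):

```python
from itertools import product

# Costs and covering-constraint coefficients of Numerical Illustration 2.
# Assumed RHS: 18773 (recovered from x3 = 18773/33 = 568.8788 in (37)).
cost = [162, 38, 26, 301, 87, 5, 137]
weight = [165, 45, 33, 279, 69, 6, 122]
rhs = 18773

# Sub-Problem a at i = 0 fixes the freed variable x3 = 568, which covers
# 33 * 568 = 18744; the remaining shortfall must be covered by the other
# (integer) variables at minimum extra cost.
x3 = 568
shortfall = rhs - weight[2] * x3          # 29 units of coverage needed
others = [0, 1, 3, 4, 5, 6]               # indices of the remaining variables

best = None
for combo in product(range(6), repeat=len(others)):   # small bounded search
    cover = sum(weight[j] * v for j, v in zip(others, combo))
    if cover >= shortfall:
        extra = sum(cost[j] * v for j, v in zip(others, combo))
        best = extra if best is None else min(best, extra)

z = cost[2] * x3 + best
print(best, z)   # cheapest top-up is x6 = 5 at cost 25, so Z = 14793
```

The search confirms (39): the cheapest integer top-up for the 29-unit shortfall is x6 = 5 (cost 25), giving Z = 26 × 568 + 25 = 14793.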
This solution can be verified by adding (42) to Sub-Problem a.

x3 ≤ 568 − 1 = 567, 162x1 + 38x2 + 26x3 + 301x4 + 87x5 + 5x6 + 137x7 ≤ 14792. (42)
Solving Sub-Problem a using the branch and cut algorithm with x3 as a free variable, we obtain an infeasible solution, which shows that there are no other integer points on the left hand side of ℓcopt besides (39). Similarly, we add (43) to Sub-Problem b.

x3 ≥ 569 + 1 = 570, 162x1 + 38x2 + 26x3 + 301x4 + 87x5 + 5x6 + 137x7 ≤ 14792. (43)
Adding (43) to Sub-Problem b results in infeasibility and this verifies that (39) is optimal.
8 Computational Experience

One hundred randomly generated pure LIPs with variables ranging from 10 to 110 were used in the computational analysis. The branch and cut algorithm was compared with the proposed branch, cut and free algorithm, using the same number of cuts for each of the two algorithms. What emerged from the computations is that freeing a basic variable in a branch and cut algorithm is more effective than the plain branch and cut algorithm in solving pure LIPs.
9 Conclusions

In this paper we presented a way of selecting the variable to be freed. We also presented a way of dividing the problem into simpler parts, as given in (30) and (31). It is easier to search the separate divisions (i = 0, 1, 2, ...) of Sub-Problem a or (i = 1, 2, 3, ...) of Sub-Problem b than the original ILP problem as a whole. In addition, we presented optimality verification of the optimal integer solution (Zopt). This is a new avenue for research that will attract the attention of many researchers in the area of linear integer programming. Much has been done for linear integer programming in terms of exact methods such as branch and cut, branch and price and the hybrid branch, cut and price, yet we are not aware of prior work in which variables are freed of integral restrictions and this concept is used in solving the linear integer problem. Variable freeing may provide an answer to the difficult general linear integer problem: large numbers of branches are prevented, as the variables with large ranges can be identified and not subjected to integral restrictions. In this paper only one variable was freed; there is a need to explore ways to free more than one variable. Variable freeing is an area in its early stages of development.
Acknowledgments. We are grateful to the anonymous reviewers and conference organizers.
References

1. Abdel-Basset, M., El-Shahat, D., Faris, H., Mirjalili, S.: A binary multi-verse optimizer for 0-1 multidimensional knapsack problems with application in interactive multimedia systems. Comput. Ind. Eng. 132, 187–206 (2019)
2. Alanne, A.: Selection of renovation actions using multi-criteria "knapsack" model. Autom. Constr. 13, 377–391 (2004)
3. Alrabeeah, M., Kumar, S., Al-Hasani, A., Munapo, E., Eberhard, A.: Computational enhancement in the application of the branch and bound method for linear integer programs and related models. Int. J. Math. Eng. Manage. Sci. 4(5), 1140–1153 (2019). https://doi.org/10.33889/IJMEMS.2019.4.5-090
4. Amiri, A.: A Lagrangean based solution algorithm for the knapsack problem with setups. Expert Syst. Appl. 143 (2020)
5. Barnhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P., Vance, P.H.: Branch-and-price: column generation for solving huge integer programs. Oper. Res. 46, 316–329 (1998)
6. Brunetta, L., Conforti, M., Rinaldi, G.: A branch-and-cut algorithm for the equicut problem. Math. Program. 78, 243–263 (1997)
7. Dahmani, I., Hifi, M., Saadi, T., Yousef, L.: A swarm optimization-based search algorithm for the quadratic knapsack problem with conflict graphs. Expert Syst. Appl. 148 (2020)
8. Fomeni, F.D., Kaparis, K., Letchford, A.N.: A cut-and-branch algorithm for the quadratic knapsack problem. Discrete Optim. (2020)
9. Fukasawa, R., Longo, H., Lysgaard, J., Poggi de Aragão, M., Uchoa, E., Werneck, R.F.: Robust branch-and-cut-and-price for the capacitated vehicle routing problem. Math. Program. Ser. A 106, 491–511 (2006)
10. Ladányi, L., Ralphs, T.K., Trotter, L.E.: Branch, cut, and price: sequential and parallel. In: Jünger, M., Naddef, D. (eds.) Computational Combinatorial Optimization. Springer, Berlin (2001)
11. Lahyani, R., Chebil, K., Khemakhem, M., Coelho, L.C.: Matheuristics for solving the multiple knapsack problem with setup. Comput. Ind. Eng. 129, 76–89 (2019)
12. Lai, X., Hao, J.K., Fu, Z.H., Yue, Y.: Diversity-preserving quantum particle swarm optimization for the multidimensional knapsack problem. Expert Syst. Appl. 149 (2020)
13. Lai, X., Hao, J.K., Fu, Z.H., Yue, D.: Diversity-preserving quantum particle swarm optimization for the multidimensional knapsack problem. Expert Syst. Appl. 149 (2020)
14. Micheli, G., Weger, V.: On rectangular unimodular matrices over the algebraic integers. SIAM J. Discrete Math. 33(1), 425–437 (2019)
15. Mitchell, J.E.: Branch-and-cut algorithms for integer programming. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization. Kluwer Academic Publishers (2001)
16. Munapo, E.: Network reconstruction – a new approach to the traveling salesman problem and complexity. In: Intelligent Computing and Optimization: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019), pp. 260–272 (2020)
17. Munapo, E.: Improvement of the branch and bound algorithm for solving the knapsack linear integer problem. Eastern-Eur. J. Enterp. Technol. 2(4), 59–69 (2020)
18. Munapo, E.: Improving the optimality verification and the parallel processing of the general knapsack linear integer problem. In: Research Advancements in Smart Technology, Optimization, and Renewable Energy (2020)
19. Oprea, S.V., Bâra, A., Ifrim, G.A., Coroianu, L.: Day-ahead electricity consumption optimization algorithms for smart homes. Comput. Ind. Eng. 135, 382–401 (2019)
20. Padberg, M., Rinaldi, G.: A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Rev. 33(1), 60–100 (1991)
21. Savelsbergh, M.W.P.: A branch-and-price algorithm for the generalized assignment problem. Oper. Res. 45, 831–841 (1997)
22. Taha, H.A.: Operations Research: An Introduction, 10th edn. Pearson (2017)
Resilience in Healthcare Supply Chains

Jose Antonio Marmolejo-Saucedo and Mariana Scarlett Hartmann-González

Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, Ciudad de México 03920, Mexico
{jmarmolejo,0172952}@up.edu.mx
Abstract. The recent COVID-19 pandemic that the world is experiencing should be the catalyst for companies to reflect on the processes of their supply chains. Global supply chains, regardless of the type of industry, will need to adopt changes in their operations strategy. The implementation of mathematical models of optimization and simulation will allow the adoption of proposals for the design of resilient supply chains to respond to the immediate challenge. This work proposes the use of optimization-simulation techniques to reduce the impact of interruptions in the workforce, the closure of facilities and transportation. A hypothetical case study is presented where various disruption scenarios are tested and the best strategies to achieve the recovery of the desired service levels are analyzed.

Keywords: Epidemic outbreaks · COVID-19 · Supply chain design · Simulation · Resilient · Optimization

1 Introduction
Many solutions have been proposed by different societies to improve the way they carry out their activities and to adopt measures that allow them to prepare for and mitigate disasters or catastrophes that could happen at any time. Historical data and previous events are very relevant to study in order to mitigate disasters [7]. Interruptions of daily activities caused by fires, earthquakes or floods, for example, can be considered disasters or catastrophes. Therefore, governments seek to be prepared to minimize the damage that could occur through proper disaster management [1,3]. It should be noted that such disasters affect a country, a government, or a society in a specific manner, so their effects can be controlled and mitigated over short periods, and other countries can support the affected country by sending medical supplies, rescue support personnel and basic necessities, among other things. Resilience plays an extremely important role, since interruptions in the health sector are a reality and very common: not only a pandemic but also natural disasters and social, economic and political conflicts can affect the sector, for which a rapid response is necessary.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 506–519, 2021. https://doi.org/10.1007/978-3-030-68154-8_45
A platform that connects hospitals with suppliers and distributors would therefore be valuable. It would expose existing inventories and the time necessary to place orders, all in real time and against current demand. The platform would also allow suppliers to share planned and forecast orders for the optimization of operations, and to identify potential problems in advance so they can react, either by increasing production or by managing existing stock, achieving a balance of supply and demand with accurate real-time information. The healthcare industry currently relies on the just-in-time (JIT) distribution model and has operated in the same way for several decades. JIT is known to have helped the industry control costs and reduce waste, but the processes and technology systems that support it failed to meet the demands of a global pandemic. With COVID-19, suppliers found it necessary to design interim measures to address the shortage of personal protective equipment (PPE) and other key items, and these approaches are expected to be applied long after activity returns to pre-pandemic levels. As a consequence, the virus has challenged the industry to rethink its definition of supply chain resilience.
2 Literature Review
The new way of managing knowledge is based on Digital Twins, and one of the main fields to which it is being applied is health, since digital twins make it possible to draw impartial conclusions about patients. Healthcare supply chains are currently considered different from the usual supply chains due to their high level of complexity, the presence of high-value medical materials and, finally, the fact that they deal with human lives. International health systems are under constant and increasing pressure to reduce waste and eliminate unnecessary costs while improving the quality and consistency of care provided to the patient, as well as providing a 100% service level to avoid shortages in hospital wards. For these reasons, authors have been developing different ways to manage medical knowledge. MyHealthAvatar and Avatar Health [5] are two clear examples of how health knowledge can be created; they aim to collect and track people's lifestyle and health data. MyHealthAvatar is a project whose main feature is data analysis and presentation that doctors can visualize: results are presented in a 3D avatar, a health status dashboard, disease risks, a clock view and daily events. Avatar Health collects information through health monitoring devices to check that the patient's habits and parameters are correct, acting as an equivalent of a human being, considering physical state, living conditions and habits. This information could be used not only individually but also collectively, which could be very useful for Digital Twins focused on health issues, because diseases, drugs that the population needs, emergencies and epidemics could be predicted. With the development of big data, the cloud and the Internet of Things, the use of digital twins as a precision simulation technology of reality has been enhanced.
Simulation is essential in the field of health and research, in order to plan and allocate medical resources and to predict medical activities, among other things. When digital twins and health care are combined, a new and efficient way to provide more accurate and faster services results. Digital twins act as a digital replica of the physical object or service they represent in the healthcare industry, providing monitoring and evaluation. They can provide a secure environment to test the impact of changes on a system's performance, so problems, and how and when they might occur, can be predicted with time to implement the necessary changes or procedures, allowing for optimal solutions and risk reduction. Digital twins also play a crucial role in both hospital design and patient care. As an example, some authors seek to manage the health of the life cycle of elderly patients, in order to have and use their information both physically and virtually, aiming to monitor, diagnose and predict health issues through portable medical devices that send patient information to the Digital Twin. There is also a supply chain concern associated with pharmaceuticals, which is essential for customer service and the supply of drugs to patients in pharmacies, considering that supply represents between 25 and 30% of costs for hospitals; it is therefore vital to be able to maintain cost and service objectives. Today, society is facing a pandemic that is not only causing deaths but also severely affecting supply chains, because it is characterized by long-term disruption existence, disruption propagation (i.e., the domino effect), and high uncertainty.
While governments and agencies seek to stop the spread of COVID-19 and provide treatment to infected people, manufacturers are constantly fighting to control the growing impact of the epidemic on their supply chains, and several authors have written about the topic. As an example, one author considered it extremely important to carry out simulations of the impact of COVID-19 on supply chains, in order to investigate and reduce the impact of future epidemic outbreaks. Taking this into consideration, Digital Twins could help mitigate this global impact by providing important data for business decision-making. Other authors developed a practical decision support system based on physician knowledge and the Fuzzy Inference System (FIS), which helps manage demand in the healthcare supply chain to reduce stress in the community, break down the COVID-19 chain of spread and, in general, mitigate outbreaks due to disruptions in the healthcare supply chain. They divide the residents of the community into four groups according to the risk level of their immune system, using two indicators: age and pre-existing diseases. These individuals are then classified and required to comply with regulations depending on which group they are in. Finally, the efficiency of the proposed approach was measured in the real world using information from four users, and the results showed the effectiveness and precision of the proposed approach. It is important to recognize that some companies are better prepared than others to mitigate the impact, because they have developed and implemented business continuity and supply chain risk management strategies. They have also
diversified their supply chains from a geographical perspective to reduce risks. Another important factor is that they usually have a diversified portfolio of suppliers, so as not to compromise their key products and to reduce their dependence on any single supplier, and they have adopted inventory strategies to avoid interruption of the supply chain. Meanwhile, logistics functions have sought to better understand risks and drive specific actions based on priorities, so agility has been developed within production and distribution networks to quickly reconfigure and maintain supply to global demand.
3 Health Care Supply Chain Design
Today the world population is affected by an infectious and highly contagious disease known as COVID-19, and each country is trying to serve its population and safeguard its own people. The scale of the disaster complicates decision-making by top management, the interaction of societies and the activities of supply chains. So far, the full effect of COVID-19 is unknown and its research very expensive; the disease is considered highly dangerous once contracted, because its contagion capacity is very high. Many countries are faced with the fact that they do not have the medical and human resources necessary to combat this virus, even less so considering the outbreak and contagion rate of this disease. The healthcare supply chain is being severely affected, because "everyone" needs it at the same time and it does not have such responsiveness. It is therefore important that governments look for the correct way to prioritize health personnel in order to provide better service to the community, and find the best way to manage the healthcare supply chain so as to avoid its interruption and mitigate harm to the population in the best possible way. The proposed model has as its main objective the use of mitigation activities to reduce the effects of COVID-19, that is, to reduce interruptions in healthcare supply chains and provide better service to communities that do not have access to health care.

Several different solutions have been described by [6], among which are increasing the flexibility of companies and supply chains, such as: the postponement of production; the implementation of strategic stocks to supply various demands; the use of a flexible supplier base to be able to react more quickly; the use of the make-or-buy approach; planning of transportation alternatives; and the active management of income and prices, directing consumption to products with greater availability. Figure 1 shows the possible combinations to design a supply chain considering the most important factors that occur in the industry. Figure 2 presents the process diagram for designing resilient supply chains, which consists of a prior analysis of possible disruptions.
Fig. 1. Resilience in health care supply chains
Fig. 2. Flow diagram for resilience in health care supply chains
4 Mathematical Model
In this section, we apply the supply chain design model for a resilient supply network, considering the model on a generalized network. The model is a mixed-integer linear problem. Let K be the set of manufacturing plants; an element k ∈ K identifies a specific plant of the company. Let I be the set of potential warehouses; an element i ∈ I is a specific warehouse. Finally, let J be the set of current distribution centers; a specific distribution center is any j ∈ J. Let Z denote the binary set {0, 1}.
4.1 Parameters

Qk = capacity of plant k.
βi = capacity of warehouse i.
Fi = fixed cost of opening a warehouse at location i.
Gki = transportation cost per unit of the product from plant k to warehouse i.
Cij = cost of shipping the product from warehouse i to distribution center (CeDis) j.
dj = demand of distribution center j.
4.2 Decision Variables

We have the following sets of binary variables to make the decisions about the opening of warehouses and the assignment of the cross-docking warehouses to the distribution centers:

Yi = 1 if location i is used as a warehouse, 0 otherwise.
Xij = 1 if warehouse i supplies the demand of CeDis j, 0 otherwise.
Wki = amount of product sent from plant k to warehouse i, represented by continuous variables.

We can now state the mathematical model as problem (P), based on [2].
min_{Wki, Yi, Xij} Z = Σ_{k∈K} Σ_{i∈I} Gki Wki + Σ_{i∈I} Fi Yi + Σ_{i∈I} Σ_{j∈J} Cij dj Xij  (1)

Subject to constraints:

Capacity of the plant:
Σ_{i∈I} Wki ≤ Qk, ∀k ∈ K  (2)

Balance of product:
Σ_{j∈J} dj Xij = Σ_{k∈K} Wki, ∀i ∈ I  (3)

Single warehouse to distribution center:
Σ_{i∈I} Xij = 1, ∀j ∈ J  (4)

Warehouse capacity:
Σ_{j∈J} dj Xij ≤ βi Yi, ∀i ∈ I  (5)

Demand of items:
p Yi ≤ Σ_{k∈K} Wki, ∀i ∈ I  (6)

p = min_{j∈J} {dj}  (7)

Wki ≥ 0, ∀i ∈ I, ∀k ∈ K  (8)

Yi ∈ Z, ∀i ∈ I  (9)

Xij ∈ Z, ∀i ∈ I, ∀j ∈ J  (10)
The objective function (1) considers in the first term the cost of shipping the product from plant k to warehouse i. The second term contains the fixed cost to open and operate warehouse i. The last term incorporates the cost of fulfilling the demand of distribution center j. Constraint (2) implies that the output of plant k does not violate the capacity of plant k. Balance constraint (3) ensures that the amount of product sent from the plants to a warehouse i is the same as the amount shipped on to its assigned distribution centers. The demand of each distribution center j is satisfied by a single warehouse i, which is achieved by constraint (4). Constraint (5) bounds the amount of product that can be sent to distribution centers from an opened cross-docking warehouse i. Constraint (6) guarantees that any opened warehouse i receives at least the minimum amount of demand requested by a given distribution center, and constraint (7) defines this minimum demand. Finally, constraints (8), (9) and (10) are the non-negativity and integrality conditions.
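A tiny instance of model (1)-(10) can be solved by brute-force enumeration; all numbers below are hypothetical and serve only to make the constraint logic concrete. With a single plant, the balance constraint (3) fixes W[0][i] to the demand assigned to warehouse i:

```python
from itertools import product

# Hypothetical data: one plant (K), two candidate warehouses (I),
# two distribution centers (J).
Q = [100]                      # plant capacity, eq. (2)
beta = [60, 60]                # warehouse capacities, eq. (5)
F = [10, 12]                   # fixed opening costs
G = [[1.0, 2.0]]               # plant -> warehouse unit transport cost
C = [[1.0, 3.0], [2.0, 1.0]]   # C[i][j]: warehouse i -> CeDis j unit cost
d = [30, 25]                   # CeDis demands
p = min(d)                     # eq. (7)

best = None
for Y in product([0, 1], repeat=2):
    for assign in product([0, 1], repeat=2):     # assign[j] = warehouse of j
        if any(Y[assign[j]] == 0 for j in range(2)):
            continue                             # only open warehouses serve, (4)
        load = [sum(d[j] for j in range(2) if assign[j] == i)
                for i in range(2)]               # W[0][i], forced by eq. (3)
        if any(load[i] > beta[i] * Y[i] for i in range(2)):
            continue                             # warehouse capacity, (5)
        if sum(load) > Q[0]:
            continue                             # plant capacity, (2)
        if any(Y[i] and load[i] < p for i in range(2)):
            continue                             # minimum-flow condition, (6)
        z = (sum(G[0][i] * load[i] for i in range(2))         # first term of (1)
             + sum(F[i] * Y[i] for i in range(2))             # second term
             + sum(C[assign[j]][j] * d[j] for j in range(2))) # third term
        best = z if best is None else min(best, z)

print(best)   # opening both warehouses and serving each CeDis locally: 157.0
```

On this data the optimum opens both warehouses and assigns each CeDis to its cheapest warehouse, illustrating how the fixed costs in (1) trade off against the delivery costs.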
5 Case Study
The case study considers a company that manufactures medical supplies that has its base of operations in central Mexico, and wants to locate new warehouses of finished product to reduce delivery times. Likewise, it is intended to use a multistore strategy that allows increasing the resilience of the supply chain. To simulate the case of an epidemic or pandemic, a hypothetical situation is presented where the various potential warehouse locations will be affected depending on the distribution of the disease. In other words, the government establishes different times for closing operations depending on the geographic area. Decisions are required to locate new facilities for the production and distribution of injectable products. Likewise, various operating scenarios (under disruption conditions) of the supply, manufacturing, inventory and product distribution process are modeled and analyzed, see Fig. 3.
Fig. 3. Current health care supply chain
The customer demand presented in this paper follows a uniform distribution, with different maximum and minimum order parameters for each of the two products modeled. For the Carsilaza product there is a maximum order per customer of 24,000 pieces and a minimum of 320 pieces, with an average of 65,000 pieces sold per month, see Fig. 4. For the Nuverasa product there is a maximum order per customer of 12,000 pieces and a minimum of 110 pieces, with an average of 28,000 pieces sold per month, see Fig. 5.
Fig. 4. Carsilaza’s demand
This section compares the performance of several inventory policies used in the design of supply chain networks, in order to determine which is the best option according to what is needed. The first approach considers disruption scenarios during supply chain design decisions, among which are location and allocation decisions, whereas the second approach makes decisions about the design of the supply chain without considering the disruption scenarios. Each one will give its own result. In this way, when comparing the total
Fig. 5. Nuverasa’s demand
profits under these two approaches, the benefits of considering disruptions in the supply chain design model become apparent. It can therefore be concluded that not only facility location costs affect location decisions; facility disruption rates are also important factors in determining where the warehouses will be located. For example, despite a low facility location cost, a decision may be made not to open a warehouse at a candidate site due to high disruption rates. Additionally, the number of clients to be served and client assignments to distribution centers may depend on disruption rates in warehouses. Therefore, a significant increase in total profit can be achieved by considering facility disruptions in the supply chain design model. To define inventory levels, an (s, S) policy with safety stock is modeled, better known as the Min-Max policy with safety stock. It assumes that both the time between orders and the order quantity are variable, where the latter varies between the order-up-to level S and the reorder point (ROP) s. The parameters to be defined are the safety stock (SS), the minimum inventory value (s) and the maximum level (S):

SS = z · σ · √LT  (11)

s = d · LT + SS  (12)

S = 2 · s  (13)
Here z is the z-value obtained from the normal distribution tables, σ is the standard deviation of demand, LT is the supply lead time, and d is the demand. Weekly consumption has been considered for the calculation of the inventory parameters, and a service level of 99.9% has been established for all product classes; thus, z is set to 3.5. The initial stock when the simulation starts has been set equal to the maximum value (S). Regarding the suppliers, it can be assumed that they have sufficient capacity
Resilience in Healthcare Supply Chains
515
to always satisfy the demand without any problem. In other words, they have a very intense production rhythm and can produce for both the national and international markets, so their inventory levels are modeled as infinite. The model is implemented in anyLogistix software (ALX) and simulated over a one-year period (Jan 1, 2020 to Dec 31, 2020) considering different scenarios; see [4]. First, the starting point is analyzed without any disruption. Next, a disruption is inserted into the model, and supply chain performance with and without different recovery policies is evaluated to mitigate the impact of the disruption. For the disruption, the model introduces a complete closure of the supplier. Recovery policies are based on different strategies. The first requires increasing the inventory levels in the main warehouse for all products with high demand. The second consists of activating lateral transshipment between facilities (plants, warehouses, and distribution centers); however, to be more effective, in some cases this action also requires an increase in inventory levels.
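The Min-Max parameters in Eqs. (11)–(13) can be computed directly. A minimal sketch follows; the demand and lead-time figures used below are illustrative, not taken from the case study.

```python
import math

def min_max_parameters(z, sigma, lead_time_weeks, weekly_demand):
    """Compute (s, S) policy parameters with safety stock.

    SS = z * sigma * sqrt(LT)   (Eq. 11)
    s  = d * LT + SS            (Eq. 12)
    S  = 2 * s                  (Eq. 13)
    """
    ss = z * sigma * math.sqrt(lead_time_weeks)
    s = weekly_demand * lead_time_weeks + ss
    return ss, s, 2 * s

# Illustrative values (not from the paper); z = 3.5 matches the 99.9% service level.
ss, s, S = min_max_parameters(z=3.5, sigma=500.0, lead_time_weeks=1, weekly_demand=10_000.0)
print(round(ss), round(s), round(S))  # 1750 11750 23500
```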
6 Computational Results
From the analysis of product demand, inventory policies were established in order to offer the best possible customer service level, intended to be greater than 90%. Different inventory policies were tested, among them:
• Min-max policy: Products are ordered when the inventory level falls below a fixed replenishment point (s). The order quantity is set so that the resulting inventory equals S.
• Min-max policy with safety stock: Products are ordered when the inventory level falls below a fixed replenishment point (s + safety stock). The order quantity is set so that the resulting inventory equals S + safety stock.
• RQ policy: When the inventory level falls below a fixed replenishment point (R), a fixed replenishment quantity (Q) of products is ordered.
The time horizon considered is one year of customer orders. The model was simulated several times in order to observe different scenarios and choose the best possible option. The policy that obtained the best result across the KPIs analyzed is the Min-Max policy, with different minimum and maximum ranges per product. For Carsilaza, a minimum of 40,000 pieces and a maximum of 60,000 pieces was stipulated, while for Nuverasa the minimum was 15,000 pieces and the maximum 25,000 pieces. The resulting KPIs are shown in Fig. 6:
Fig. 6. The proposed inventory policy
When evaluating another scenario with the Min-Max policy with safety stock, it showed a total cost of $3,273,810 USD, but the service level offered was 90%, with a profit of $321,112,234 USD. It was therefore decided to choose the previously presented scenario, whose profit was higher by more than 20M USD and whose service level was 95%. It must be considered that the higher the service level, the higher the costs, since costs are directly proportional to both the service level and the inventory level. The following results show the service level, the average available inventory, and the lead time for a recovery strategy based on Min-Max inventory levels in a disruptive scenario; see Figs. 7, 8, 9, 10, 11 and 12. Likewise, a comparison of KPIs is presented, corroborating that the proposed inventory strategy substantially improves profit and other indicators.
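The (s, S) replenishment rule described above can be sketched with a toy weekly simulation. The demand stream, thresholds, and instantaneous-replenishment assumption below are illustrative, not the case-study data or the anyLogistix model.

```python
import random

def simulate_min_max(s, S, demands, initial_stock=None):
    """Simulate a Min-Max (s, S) policy with immediate replenishment.

    Returns (service_level, final_inventory). A period counts as 'served'
    only if inventory covers that period's demand in full.
    """
    inventory = S if initial_stock is None else initial_stock
    served = 0
    for d in demands:
        if inventory >= d:
            served += 1
            inventory -= d
        else:
            inventory = 0
        if inventory < s:        # below the min level: reorder up to S
            inventory = S
    return served / len(demands), inventory

random.seed(0)
demands = [random.randint(8_000, 12_000) for _ in range(52)]   # one year, weekly
level, _ = simulate_min_max(s=40_000, S=60_000, demands=demands)
print(level)  # 1.0 (thresholds comfortably cover the illustrative demand)
```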
Fig. 7. Service level by product for proposed inventory strategy
Fig. 8. Service level by product for current inventory strategy
Fig. 9. Available inventory for proposed inventory strategy
Fig. 10. Available inventory for current inventory strategy
Fig. 11. Lead time for proposed inventory strategy
Fig. 12. Lead time for current inventory strategy
7 Conclusions
In this paper, the effect of different disruptions on the resilient design of a healthcare supply chain is studied. The disruptions considered model those that can occur during global pandemics such as COVID-19. The design of resilient healthcare chains is proposed through mathematical modeling of the problem and dynamic simulation of various inventory policies. Likewise, different KPIs are used to evaluate the performance of the proposals under the different disruption scenarios. Specialized supply chain analysis software is used to implement the proposals, and the alternative resilient designs are contrasted for different service levels. The presented study offers an alternative for reducing the severe impacts on supply chains dedicated to healthcare.
References
1. Acar, M., Kaya, O.: A healthcare network design model with mobile hospitals for disaster preparedness: a case study for Istanbul earthquake. Transp. Res. Part E Logist. Transp. Rev. 130, 273–292 (2019). https://doi.org/10.1016/j.tre.2019.09.007
2. Marmolejo, J., Rodríguez, R., Cruz-Mejia, O., Saucedo, J.: Design of a distribution network using primal-dual decomposition. Math. Probl. Eng. 2016, 9 (2016)
3. Rezaei-Malek, M., Tavakkoli-Moghaddam, R., Cheikhrouhou, N., Taheri-Moghaddam, A.: An approximation approach to a trade-off among efficiency, efficacy, and balance for relief pre-positioning in disaster management. Transp. Res. Part E Logist. Transp. Rev. 93, 485–509 (2016). https://doi.org/10.1016/j.tre.2016.07.003
4. anyLogistix supply chain software: supply chain digital twins, February 2020. https://www.anylogistix.com/resources/white-papers/supply-chain-digital-twins/
5. Spanakis, E.G., Kafetzopoulos, D., Yang, P., Marias, K., Deng, Z., Tsiknakis, M., Sakkalis, V., Dong, F.: myHealthAvatar: personalized and empowerment health services through Internet of Things technologies. In: 2014 4th International Conference on Wireless Mobile Communication and Healthcare (MOBIHEALTH), pp. 331–334 (2014). https://doi.org/10.1109/MOBIHEALTH.2014.7015978
6. Tang, C.S.: Robust strategies for mitigating supply chain disruptions. Int. J. Logist. Res. Appl. 9(1), 33–45 (2006). https://doi.org/10.1080/13675560500405584
7. Yan, Y., Hong, L., He, X., Ouyang, M., Peeta, S., Chen, X.: Pre-disaster investment decisions for strengthening the Chinese railway system under earthquakes. Transp. Res. Part E Logist. Transp. Rev. 105, 39–59 (2017). https://doi.org/10.1016/j.tre.2017.07.001
A Comprehensive Evaluation of Environmental Projects Through a Multiparadigm Modeling Approach

Roman Rodriguez-Aguilar1, Luz María Adriana Reyes Ortega2, and Jose-Antonio Marmolejo-Saucedo3

1 Facultad de Ciencias Económicas y Empresariales, Universidad Panamericana, Augusto Rodin 498, 03920 Mexico City, México
[email protected]
2 Facultad de Ingeniería, Universidad Anáhuac, Huixquilucan, México
3 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, 03920 Mexico City, México
Abstract. The evaluation of environmental projects has in most cases been structured around financial profitability indicators in order to obtain private and public financing; however, the environmental performance measures of the evaluated projects have been left aside. The present work proposes the evaluation of environmental projects using cost-effectiveness criteria, which take into account the environmental results of the project and its implementation costs; additionally, the uncertainty analysis of the projects is integrated through simulation methods and real options. The results show that the cost-effectiveness evaluation approach allows environmental result measures to be integrated into the decision to implement a project, beyond financial indicators alone.

Keywords: Environmental projects · Cost-effectiveness · Discrete simulation · Dynamic simulation · Real options
1 Introduction

The impact of human activities on the environment has become more relevant worldwide in recent years. This has generated the need to evaluate technical proposals that reduce the environmental impact derived mainly from production and consumption: reducing waste generation, increasing recycling, and producing through clean energy. In this transition process, the United Nations determined a set of sustainable development objectives with a horizon to 2030, among which those related to energy and care for the environment stand out. In this regard, the aim is to have clean energy sources accessible to the entire world population, as well as to maintain systemic balance and ensure the care of natural resources for future generations.

In this transformation process towards more sustainable production and consumption, the collaboration of public and private institutions is necessary for the design of public policies and proposals that allow achieving the objectives set.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 520–529, 2021. https://doi.org/10.1007/978-3-030-68154-8_46
An
A Comprehensive Evaluation of Environmental Projects
521
essential point in this transition towards more sustainable approaches is the evaluation of the proposals from the public and private points of view. Since in most cases investments are made by private initiative with public support, the financial feasibility of environmental projects is evaluated once the technical evaluation stage is approved. It is in this area that the need to innovate through new approaches is observed, in order to define whether an environmental project is viable taking into account both financial and environmental result metrics. The classic methodologies for evaluating projects to access financing consider the financial profitability of the project and the investment recovery period, leaving the evaluation of environmental results in second place, as part of an ex-post evaluation of the project. However, through quantitative tools such as discrete and dynamic simulation models, the integration of project uncertainty through real options, and the definition of cost-effectiveness criteria in ex-ante evaluations, it is possible to project and evaluate the results of an environmental project in a feasibility study.

The accelerated depletion of renewable and non-renewable natural resources in recent years has generated the need to implement policies focused on sustainable development that guarantee access to natural resources for future generations. As part of the implementation of these policies, the need arose to develop financing schemes suited to the specific characteristics of projects focused on environmental conservation. Emphasis has been placed on promoting the generation of clean energy and projects focused on reducing the environmental impact of productive activity [1].
A fundamental factor in evaluating the financial viability of these projects is determining the degree of effectiveness of the desired environmental results, which is not necessarily compatible with classic investment project evaluation approaches; in many cases, the presence of negative net benefits is justifiable when intangible benefits are taken into account. It is necessary to consider intangible results in intertemporal problems related to the management and conservation of natural resources [2]. The need thus arises to establish a framework suited to the characteristics of environmental projects and, above all, to emphasize the evaluation of expected results that cannot be measured through the profitability of the investment alone. Comprehensive methods are needed for evaluating environmental projects that assess financial viability, operational feasibility, and the expected environmental impact. The integration of environmental aspects as requirements in the operation of contemporary companies, as well as the development of sustainable companies, has generated the need to objectively assess the technical, financial, and especially environmental viability of proposals. Evaluation proposals exist, but most are focused on a single aspect of the evaluation; likewise, classic project evaluation approaches do not adapt efficiently to environmental projects due to their particularities and objectives [3, 4]. At the national and international level, schemes have been adopted to promote sustainable development through policies to reduce environmental impact, specialized regulation, and financing schemes for clean energy projects, as well as a decrease in the environmental impact of productive activity [5, 6]. Consistent with these policies, project evaluation schemes beyond traditional financial or cost-benefit evaluations are needed, since an environmental project can be beneficial
522
R. Rodriguez-Aguilar et al.
in environmental terms despite operating with negative net results. An additional factor to consider is the treatment of uncertainty, since in the case of environmental projects multiple factors can impact feasibility. Until now, the approach to evaluating environmental projects has been analytical, focusing on specific project segments, which is why there is a need for an eclectic approach based on robust methods that support decision-making. The work is structured as follows: section two presents the methodological framework of the proposal, briefly describing the quantitative tools that will be used; section three addresses the application of the proposed methodology in a case study; and finally the conclusions and recommendations are presented.
2 Methodology

The proposal integrates a multiparadigm modeling approach that allows addressing the evaluation of each key stage in the development of an environmental project; the theoretical framework therefore combines various quantitative approaches, specified according to the stage of the environmental project.

2.1 Discrete Event Simulation
By simulating discrete events it is possible to evaluate the operational feasibility of the proposal, whether in existing operational processes or in the design of new products or processes. One of the advantages of discrete simulation is that it allows simulating the operation of a system in a controlled environment and identifying its possible failures through the analysis of feasible scenarios [7]. The discrete simulation approach is based on mapping an operational process as activities, seeking to consider a standard process and its possible behavior in a controlled environment (Fig. 1).
Fig. 1. Discrete event simulation. Source: AnyLogic®.
In the specific case of environmental projects, the use of discrete simulation will allow evaluating the feasibility of the process to be implemented as well as the expected results according to different scenarios considered. One of the advantages of using discrete simulation is that it allows considering the uncertainty in the behavior of the system to be simulated.
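Discrete-event simulation advances a clock from event to event rather than in fixed time steps. The single-server process below is a generic sketch of that mechanism, not the paper's AnyLogic model; the arrival and service rates are illustrative.

```python
import heapq
import random

def single_server_des(arrival_rate, service_rate, horizon, seed=0):
    """Minimal discrete-event simulation of an M/M/1-style process.

    Events are (time, kind) pairs kept in a heap; the clock jumps to the
    next scheduled event instead of ticking uniformly.
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    queue = 0          # jobs waiting or in service
    completed = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            queue += 1
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if queue == 1:   # server was idle: start service now
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:  # departure
            queue -= 1
            completed += 1
            if queue > 0:    # start the next queued job immediately
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
    return completed

print(single_server_des(arrival_rate=1.0, service_rate=2.0, horizon=1000.0))
```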
2.2 Dynamic Systems Modeling
For environmental projects, it is of great relevance to know the long-term expected behavior of the variables of interest, such as CO2 emissions, energy generation, or the generation of pollutants. Dynamic systems modeling is a tool that takes into account the behavior of a system and the interaction between its parts to robustly project expected trajectories of the variables of interest, as well as to design intervention policies to achieve the desired objectives in the final trajectory. The foundation of dynamic simulation is the resolution of differential equations that represent the behavior of a variable of interest over time, taking into account its interaction with a set of auxiliary variables, causal relationships, and flows. The general form of a dynamic model is a differential equation with given initial conditions:

dy/dx = f(x, y), with y(x0) = y0. (1)
The objective is to identify the expected trajectory of a state variable of the differential equation. Since not all differential equations have an analytical solution procedure, it is necessary to resort to numerical methods. For systems related to human activity, the approach known as System Dynamics was developed, a methodology for analysis and temporal modeling in complex environments [8] (Fig. 2).
Fig. 2. The Systems Dynamics approach. Source: AnyLogic®.
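When Eq. (1) has no closed-form solution, it can be integrated numerically. The explicit Euler sketch below uses dy/dx = −0.5·y as an illustrative stock-depletion dynamic; it is not a model taken from the paper.

```python
def euler(f, x0, y0, h, steps):
    """Explicit Euler integration of dy/dx = f(x, y) with y(x0) = y0."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        y += h * f(x, y)      # one Euler step: y_{k+1} = y_k + h * f(x_k, y_k)
        x += h
        trajectory.append((x, y))
    return trajectory

# Illustrative dynamic: exponential decay toward zero (e.g. a depleting stock).
path = euler(lambda x, y: -0.5 * y, x0=0.0, y0=100.0, h=0.1, steps=100)
print(path[-1])  # exact y(10) = 100 * exp(-5) ≈ 0.67; Euler with h = 0.1 gives ≈ 0.59
```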
The Systems Dynamics methodology considers the behavior over time of a level variable, flows, and the related environment. Applying this methodology makes it possible to consider the interactions in the modeled system over a defined time horizon, generating environmental outcome variables for the financial and cost-effectiveness evaluation of the project.

2.3 Real Options
Real options are an approach widely used in recent years for the financial evaluation of projects whose behavior differs from classical standards, making it necessary to capture the uncertainty of the variables considered, as well as to evaluate decision-making
524
R. Rodriguez-Aguilar et al.
in the project development process. The project is treated as a financial option that can have the following statuses:
a) Expand
b) Collapse
c) Stop and restart, or temporary closure of operations
There are several option valuation approaches; among the most widely used are the binomial model and the Black-Scholes model [9]. One of the advantages of applying the real options approach in project evaluation is that it allows managing uncertainty to increase the value of the project over time.

2.4 Cost-Effectiveness Evaluation
Cost-effectiveness studies evaluate a project or policy based on the expected results considered effective according to the objectives set. Unlike cost-benefit studies, the cost-effectiveness approach considers outcome variables that are directly related to the project objectives, so a project may be financially unprofitable yet cost-effective. It is an evaluation mechanism that goes beyond financial data and focuses on the fulfillment of the expected results of the project. Applications of cost-effectiveness studies are generally focused on the evaluation of health interventions, but their application in other sectors has been explored in recent years with positive results [10, 11]. The cost-effectiveness result is expressed as a relationship between the associated cost and the gains in the expected result, using the cost-effectiveness ratio:

Cost-effectiveness ratio = Cost of the intervention / Measure of effectiveness. (2)
The results are shown in what is known as the cost-effectiveness plane, which allows locating the alternatives according to the trade-off between costs and effectiveness of the intervention (Fig. 3).
Fig. 3. Cost-effectiveness plane
The desired results are those located in plane II, being the most effective and least expensive options.
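Equation (2) and the quadrant logic of the cost-effectiveness plane can be sketched as follows. The quadrant labels follow the text's convention that quadrant II (more effective, less costly) is the desired region, and the example numbers are purely illustrative.

```python
def ce_ratio(cost, effectiveness):
    """Cost-effectiveness ratio: cost per unit of effect (Eq. 2)."""
    return cost / effectiveness

def plane_quadrant(delta_cost, delta_effect):
    """Locate an alternative on the cost-effectiveness plane vs. a comparator."""
    if delta_effect > 0 and delta_cost < 0:
        return "II: dominant (more effective, cheaper)"
    if delta_effect > 0 and delta_cost > 0:
        return "I: more effective but costlier (trade-off)"
    if delta_effect < 0 and delta_cost < 0:
        return "III: cheaper but less effective (trade-off)"
    return "IV: dominated (less effective, costlier)"

# Hypothetical alternative vs. status quo: 5.73 tCO2e avoided at a cost saving.
print(ce_ratio(cost=214_957, effectiveness=5.73))        # cost per tCO2e avoided
print(plane_quadrant(delta_cost=-10_000, delta_effect=5.73))
```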
2.5 Multiparadigm Evaluation Proposal
The integration of these three methodologies allows evaluating the feasibility of the environmental project in cost-effectiveness terms across several dimensions, prioritizing above all the generation of environmental results. The objective is to have sufficiently robust evidence to accept a project that generates environmental benefits, so that the evaluation does not focus exclusively on financial profitability variables. It is an iterative process in which operational feasibility is first evaluated with discrete simulation models that simulate the operation of the project over a time horizon and evaluate its performance measures. The second stage models the behavior of the environmental result variable using a dynamic model that determines an expected trajectory over the life of the project. Finally, based on the information from the previous stages, the financial and outcome evaluations are carried out through real options and the estimation of a cost-effectiveness outcome measure (Fig. 4).
Fig. 4. Multiparadigm evaluation
The results of each evaluation stage focus on one element of the project, and each result generated is integrated into the final evaluation, which compares the results and costs of the alternatives on a cost-effectiveness plane.
3 Case Study

The information used corresponds to a small service company seeking to implement an energy efficiency project. A set of project-specific parameters was defined to evaluate the project's results in environmental and economic terms (Table 1). The service company seeks to evaluate the feasibility of replacing heaters that use natural gas with solar heaters in its production process.
Table 1. Parameters considered for the evaluation of the project.

Parameter | Units | Value
Total investment | Thousands of dollars | 500.00
GHG emissions | tCO2e/MWh | 0.527
Cash flows volatility | % | 40
Discount rate | % | 10
Reference rate | % | 7
Evaluation horizon (real options) | Years | 15
Real options scenarios | Probability per scenario | [0, 1]
These general parameters were used as inputs to evaluate the project with the multiparadigm approach. The elaboration of each model is not detailed, because the objective of the study is to present the methodological structure of the multiparadigm evaluation. The discrete simulation model is based on the daily operations of the company; its objective is to evaluate the feasibility of the technology change from gas heaters to solar heaters. Company operations were simulated for 8 business hours, six days a week. Due to the intermittency of photovoltaic generation, the simultaneous operation of gas and solar heaters is evaluated, seeking to minimize the use of gas heaters as much as possible, which reduces operating costs and the emissions generated by the combustion of this fuel. For its part, the dynamic model makes it possible to model the emissions generated by the operation of the company with the integration of photovoltaic energy. It is important to note that the emissions generated are compared with the status quo and, likewise, in the cost-effectiveness evaluation the new technology is compared with the status quo. Table 2 shows the main results of the discrete and dynamic simulation models over a simulation horizon of one year; in the case of the dynamic model, the horizon is long term.

Table 2. Discrete and dynamic simulation results.

Indicator | Value
Average production per hour | 100 units
Average layup buffer | 15%
Average production cycle time | 25 min
Average system utilization | 85%
Average CO2 emissions per year | 308.75 tCO2e
Dynamic system stabilization time | 3 years
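As a consistency check on Table 2, annual emissions can be related to energy use through the emission factor in Table 1 (0.527 tCO2e/MWh). The energy figure below is back-calculated for illustration; it is not reported in the paper.

```python
EMISSION_FACTOR = 0.527  # tCO2e per MWh (Table 1)

def emissions_tco2e(energy_mwh, factor=EMISSION_FACTOR):
    """Annual GHG emissions implied by an annual energy consumption."""
    return energy_mwh * factor

# Energy consistent with the reported 308.75 tCO2e/year (back-calculated):
energy = 308.75 / EMISSION_FACTOR
print(round(energy, 1))                    # 585.9 (MWh/year)
print(round(emissions_tco2e(energy), 2))   # 308.75
```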
With the information generated through the simulation models, the financial evaluation is carried out using real options. In this case, three scenarios are considered within the life of the project: the expansion, the contraction of the project, and
its closure. The evaluation is carried out in year 10 with the following considerations:
a) Increase the investment amount by 30%, with expenses of 30%.
b) Reduce the investment amount by 25%, with 28% savings.
c) Settle the project with a recovery value of 50%.
Once the probability that the initial Net Present Value (NPV) of the project will rise or fall is calculated using the binomial up-down model, the formula corresponding to each scenario is applied to each probability of year (n); all periods are then discounted, eliminating the effect of the up and down probabilities, and brought to present value by removing the effect of the interest rate. From the amount obtained, the exercise price (initial investment) is subtracted to obtain the present value of the project with real options (Table 3).
Table 3. Evaluation formulas for each type of real option.

Type of option | Value
Option to increase E% by investing I | FC_t = FC_0 + max(E·FC_0 − I, 0)
Option to reduce by C%, reducing investment from I1 to I2 | FC_t = max(FC_0 − I1, C·FC_0 − I2)
Option to defer or wait a period | FC_t = max(FC_n − I, 0)
Option to close or abandon with a liquidation value | FC_t = max(FC_t, L_t)
Option for temporary closure or abandonment | FC_t = max(FC_n − cf − cv, E·FC_n − cf)
Selection option (choose among alternatives) | FC_t = max(E·FC_n − I − C, FC_n + A, L)
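The binomial up-down valuation with an expansion option can be sketched with a Cox-Ross-Rubinstein lattice. This is a generic illustration of the technique, not the paper's exact calculation; all parameter values below are assumptions.

```python
import math

def expansion_option_value(pv, invest, expand_factor, extra_invest,
                           sigma, rate, years, steps):
    """Strategic NPV of a project with an option to expand (CRR binomial lattice).

    At each terminal node the firm takes max(V, E*V - I_e); the lattice is
    then rolled back with the risk-neutral probability and discounting.
    """
    dt = years / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(rate * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-rate * dt)

    # Terminal project values with the expansion decision applied.
    values = [max(v := pv * u**j * d**(steps - j),
                  expand_factor * v - extra_invest)
              for j in range(steps + 1)]
    # Backward induction through the lattice.
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0] - invest    # expanded ("strategic") NPV

npv = expansion_option_value(pv=600.0, invest=500.0, expand_factor=1.3,
                             extra_invest=150.0, sigma=0.4, rate=0.07,
                             years=10, steps=50)
print(round(npv, 1))
```

With a worthless option (expand_factor = 1, no extra investment), the lattice collapses back to the static NPV, which is a handy sanity check on the risk-neutral probabilities.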
The financial evaluation of the project in the three scenarios generated the following results (Table 4).

Table 4. Option value in each scenario.

Scenario | Option value ($)
Scenario 1 | 17,919
Scenario 2 | −179,250
Scenario 3 | −112,961
With these results, it is concluded that in year 10 it is not convenient to reduce the size of the project or to liquidate it, but rather to expand it, since not only does it present a higher NPV, but the value of the option is also positive, indicating that it should be exercised. Once the result measures and the costs incurred to achieve environmental benefits have been determined, it is necessary to evaluate whether these actions are cost-effective, not only against the initial situation but also against other options available
in the market, to ensure that the choice made is the best for the company. To determine which option is the most cost-effective, we must determine the costs associated with the measure, but also the elements related to the measurement of results: the greenhouse gas (GHG) emissions, the emissions avoided, and the costs of achieving them. In the cost-effectiveness study, different proposals are evaluated in order to have a comparator (Table 5).

Table 5. Cost-effectiveness results.

Indicator | Supplier A vs status quo | Supplier B vs status quo
Costs Supplier A | $214,957 | $228,761
Costs Supplier B | $289,834 | $289,834
Emissions | 14.29 | 17.44
Emissions status quo | 20.02 | 20.02
Cost of avoided emissions | $13,063 | $23,684

Two providers of the technology to be implemented were evaluated, and the costs of each option were estimated to determine which is the most cost-effective; additionally, the outcome measures were considered for each alternative. Equipment A is cheaper than equipment B and also generates fewer GHG emissions. What makes it more cost-effective is that its cost of avoided emissions is also lower, so the company can be sure that choosing supplier A is better than choosing supplier B.
4 Conclusions

The world economies are gradually moving towards more environmentally friendly production models, as a result of trying to harmonize the economic, social, and environmental axes to achieve sustainable growth. The integration of environmental objectives as millennium development goals is a great advance towards the protection of the environment and the pursuit of sustainable development. One of the great challenges for countries is the evaluation and selection of environmental projects with the greatest possible impact; until now, only financial profitability criteria were considered, which contemplated the recovery of the investment in a defined time, especially in projects with greater participation of private capital. However, this approach limits the environmental impact of many projects that could be profitable in environmental, though not necessarily financial, terms. The use of a multiparadigm evaluation approach allows a comprehensive evaluation of various aspects of the project from an operational, technical, environmental, and financial perspective. The integration of discrete and dynamic simulation methods, as well as the financial evaluation of project uncertainty through real options, provides information of greater added value for decision-making on environmental projects. The integration of a results evaluation approach in terms of cost-effectiveness makes it possible to weigh the environmental results against the investment made in the projects.
The case study presented shows the application of the proposed methodology through the evaluation of a reference environmental project that integrates the methodological approaches addressed. The results show that the integration of simulation methodologies, the use of real options, and the cost-effectiveness approach provide robust and reliable information for decision-making. This is a first methodological proposal that seeks to build comprehensive approaches to the evaluation of environmental projects and, above all, to prioritize environmental criteria over financial ones.
References
1. Panwar, N.L., Kaushik, S.C., Surendra, K.: Role of renewable energy sources in environmental protection: a review. Renew. Sustain. Energy Rev. 15(3), 1513–1524 (2011)
2. Kronbak, L.G., Vestergaard, N.: Environmental cost-effectiveness analysis in intertemporal natural resource policy: evaluation of selective fishing gear. J. Environ. Manage. (2013)
3. Manzini, F., Islas, J., Macías, P.: Model for evaluating the environmental sustainability of energy projects. Technol. Forecast. Soc. Chang. 78(6), 931–944 (2011)
4. Torres-Machi, C., Chamorro, A., Yepes, V., Pellicer, E.: Current models and practices of economic and environmental evaluation of sustainable network-level pavement management. J. Constr. 13(2), 49–56 (2014)
5. SEMARNAT: Guía de Programas de Fomento a la Generación de Energía con Recursos Renovables (2015). Available at https://www.gob.mx/cms/uploads/attachment/file/47854/Guia_de_programas_de_fomento.pdf
6. SENER: Prospectiva de Energías Renovables 2016–2030 (2016). Available at https://www.gob.mx/cms/uploads/attachment/file/177622/Prospectiva_de_Energ_as_Renovables_2016-2030.pdf
7. Schriber, T.J., Brunner, D.T.: How discrete-event simulation software works. In: Banks, J. (ed.) Handbook of Simulation (2007)
8. Rodríguez Ulloa, R., Paucar-Caceres, A.: Soft system dynamics methodology: combining soft systems methodology and system dynamics. Syst. Pract. Action Res. 18(3) (2015)
9. Calle, A., Tamayo, V.: Decisiones de inversión a través de opciones reales. Estudios Gerenc. 25(111), 7–26 (2009)
10. Finnveden, G., et al.: Recent developments in Life Cycle Assessment. J. Environ. Manage. 91(1), 1–21 (2009)
11. Uchida, E., Rozelle, S.: Grain for green: cost-effectiveness and sustainability of China's conservation set-aside program. Land Econ. 81(2), 247–264 (2005)
Plant Leaf Disease Recognition Using Histogram Based Gradient Boosting Classifier

Syed Md. Minhaz Hossain (1,2) and Kaushik Deb (1)

(1) Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh, [email protected]
(2) Premier University, Chattogram 4000, Bangladesh, [email protected]
Abstract. Current plant leaf disease (PLD) recognition techniques lack proper segmentation and fail to locate similar disorders because of overlapping features across different plants. For this reason, we propose a framework that overcomes the challenges of tracing Regions of Interest (ROIs) under different image backgrounds, uneven orientations, and illuminations. Initially, a modified Adaptive Centroid Based Segmentation (ACS) is applied to find the optimal value of K from PLDs and then detect ROIs accurately, irrespective of the background. Later, features are extracted using a modified Histogram Based Local Ternary Pattern (HLTP) that performs well for PLDs with uneven illumination and orientation, capitalizing on linear interpolation and a statistical threshold over neighbors. Finally, histogram-based gradient boosting is utilized to reduce bias among similar features while detecting disorders. The proposed framework recognizes twelve PLDs with an overall accuracy of 99.34%, while achieving 98.51% accuracy for PLDs with more than one symptom, for instance, fungal and bacterial symptoms.

Keywords: Plant leaf disease recognition · Modified adaptive centroid-based segmentation · Histogram-based local ternary pattern · Histogram-based gradient boosting classifier

1 Introduction
Diagnosis and detection of various plant diseases through leaves' symptoms are complicated tasks for farmers and agronomists. The complexity arises from varied symptoms within the same plant and similar symptoms across diseases of different plants, which can mislead conclusions about the status of plants and their proper treatment. Automated plant diagnosis through a mobile application, using images captured in the real field, helps agronomists and farmers make better decisions on plant health monitoring. With the growth of Graphical Processing Unit (GPU) embedded processors, machine learning and artificial intelligence make it possible to incorporate new models and methods to detect the appropriate ROIs and hence identify plant diseases correctly. However, memory space (number of parameters) remains a consideration for mobile-based PLD recognition.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 530–545, 2021. https://doi.org/10.1007/978-3-030-68154-8_47
Machine learning techniques mainly address localizing the ROIs, feature extraction, and classification, as in [9,11]. Limitations of learning-based techniques are: (a) lack of sensitivity to proper segmentation across different image backgrounds and capture conditions, and (b) failure to trace similar symptoms in different plant disorders. Convolutional Neural Networks (CNNs), the recent trend, learn complex patterns from large amounts of data. State-of-the-art CNN architectures such as VGG in [5,13], GoogleNet in [8], ResNet50, ResNet101, ResNet152, and Inception V4 in [13], student-teacher CNN in [4], AlexNet in [3,5,8], and DenseNet in [13] have been applied to recognizing PLDs. Though CNNs achieve better results, tuning the parameters depends on the CNN architecture, to an extent. Furthermore, the space (memory) needed to support such a high volume of network parameters, especially on handheld devices, is not considered. Last but not least, when exposed to a new dataset, CNNs fail to generalize and their accuracy drops drastically [5,8]. Our primary emphasis is to modify K-means clustering to overcome the lack of sensitivity to proper segmentation in [10] and to remove noise, including unwanted objects beside the plant leaf or leaves. The modified ACS suggested here finds an optimal K such that it can (a) segment the appropriate disease symptoms under different background images and uneven illumination, and (b) identify disorders having similar symptoms. This work also employs a modified HLTP to alleviate the limitations of the traditional local ternary pattern (LTP), outperforming it under uneven illumination and orientation. Detecting ROIs from complex backgrounds and extracting histogram features under various health states generalizes better when exposed to an unseen dataset.
As memory space is a significant factor for mobile devices, we propose a framework that recognizes PLDs using a histogram-based gradient boosting classifier instead of a CNN. It improves the PLD recognition rate over various machine learning algorithms on histogram features and reduces the memory cost compared to CNNs. The remainder of the paper is organized as follows. Section 2 presents the literature review, including related work; the proposed framework for recognizing plant leaf diseases is described in Sect. 3; experiments, performance evaluation, and observations are presented in Sect. 4; and lastly, the conclusion is given in Sect. 5.
2 Related Work
Plant/crop-related machine learning works are categorized in [1] into PLD recognition, prediction of crop production based on weather parameters, and post-harvest monitoring of grains. A study has been conducted in [2] to predict the correlations between weather parameters (temperature, rainfall, evaporation, and humidity) and crop production; for this, the authors in [2] design a fuzzy rule-based system using the Takagi-Sugeno-Kang approach. Besides, machine learning and image processing based PLD recognition frameworks have
several parts: the localization of disease symptoms (region of interest), feature extraction, and classification. Before localization, an image enhancement technique is used in [11]. However, it is not always necessary to improve the intensity of plant leaf images; plant image intensities change under different capture conditions and uneven illumination. Two conditions based on statistical features are used to trace the changing pattern of plant images, which makes PLD detection robust and avoids unnecessary image enhancement. The GrabCut algorithm in [9], the genetic algorithm in [11], and K-means clustering in [10] have been used to extract the proper disease region in leaf images. Besides, [10] has a couple of limitations: lack of sensitivity to proper segmentation in K-means clustering due to improper initialization of K, and difficulty localizing multiple disorders in a PLD image. In [11], there are some misclassifications between two leaf spot conditions because of similar features. Our modified ACS overcomes the lack of sensitivity of segmentation using auto initialization of K from the plant leaf images; it also makes segmentation effective under critical environments and in different backgrounds. Texture features have been extracted by a histogram-based local binary pattern (LBP) in [9] and by a color co-occurrence matrix (local homogeneity, contrast, cluster shade, energy, and cluster prominence) in [11]. In [9], the histogram-based local binary pattern extracts better features under different orientations and uneven illumination. We use HLTP as the feature extraction method, with linear interpolation and a dynamic threshold: the neighbors found using interpolation make the method sensitive to orientations, and the variation of the gray level of neighbors makes it invariant to illumination in recognizing PLD. Moreover, multiple classifiers have been used to recognize the correct PLD in various works.
One-Class Support Vector Machine (OCSVM) is used in [9] and SVM in [11] for recognizing PLD; further, the Minimum Distance Criterion (MDC) is used in [11]. Though better accuracy is achieved in all of these works, there is still a lack of evidence of better recognition in the case of similar symptoms in different disorders. There are also many works recognizing various plant diseases using CNNs, but PLD recognition frameworks using CNN models still have some limitations that affect their performance. Some works are restricted to plain backgrounds, e.g., [5,8,13], or are inconsistent with image capture conditions by not applying data augmentation, as in [7]. Finally, plant leaf disease models sometimes have a generalization problem on an independent dataset [5,8]. Using ensemble learning classifiers, we can reduce classifier bias and improve accuracy over single machine learning models, and we can reduce the number of parameters compared to state-of-the-art CNN PLD recognition models. Though random forests take less time to build trees, gradient boosting classifiers achieve better benchmark results. Especially for histogram features, histogram-based gradient boosting classifiers perform well with respect to memory cost and recognition rate compared to the plain gradient boosting classifier.
Fig. 1. The proposed framework for recognizing plant leaf disease.
We can conclude that auto initialization of K in this framework's segmentation phase overcomes the lack of sensitivity to proper segmentation in [10] using modified Adaptive Centroid Based Segmentation (ACS). The automatic initialization of K defined using ACS can effectively detect changes in image characteristics under different orientations and illuminations and improves generalization. This paper also explores the histogram-based local ternary pattern (HLTP) to alleviate the limitations of the traditional local ternary pattern (LTP), outperforming it under uneven illumination and orientation. Finally, a histogram-based gradient boosting classifier is used to classify PLDs because of its histogram-over-features classification phenomenon; this classifier is more suitable than a CNN for memory-restricted devices such as mobile phones. Besides, histogram-based features make this framework useful for recognizing the health status of newly added plant images, increasing generalization. So, accuracy does not fall on newly added diverse plant images, which overcomes the drastic validation drop of CNNs on new plant leaf images in [5,8].
3 Proposed Framework for Recognizing Plant Leaf Diseases
In this section, the proposed framework is demonstrated in detail. Initially, the disease recognition framework optionally enhances the plant leaves' RGB image, and then modified adaptive centroid-based segmentation (ACS) is applied to trace the ROIs. After that, feature selection from the grayscale image is executed using a histogram-based local ternary pattern. At last, the plant leaf disease is classified using a histogram-based gradient boosting classifier. The proposed PLD recognition framework is exhibited in Fig. 1.

3.1 Dataset
In the experiment, 403 images of size 256 × 256 pixels, comprising eight different plants (rice, corn, potato, pepper, grape, apple, mango, and cherry) and twelve diseases, are used to train the proposed framework. The images are
collected from the PlantVillage dataset¹, except the rice disease images, which are gathered from the Rice Diseases Image Dataset on Kaggle², the International Rice Research Institute (IRRI)³, and the Bangladesh Rice Research Institute (BRRI)⁴. We vary the image backgrounds among natural, plain, and complex to trace a disease properly in different backgrounds. Our framework covers six fungal diseases, two bacterial diseases, two diseases having both fungal and bacterial symptoms, one viral disease, and one from a miscellaneous category. Further, the framework considers various symptoms, such as small, massive, isolated, and spread. Twelve samples of eight plants, covering different symptoms and image backgrounds, are shown in Fig. 2. For generalization, 235 independent images (excluding the training dataset) from the twelve classes are used during the test phase. Complete information regarding the plant leaf disease dataset is given in Table 1.

Table 1. Dataset description for recognizing plant leaf disease.

| Health-wise condition | Plant | Disease | # training images | # test images | # training (health-wise) | # test (health-wise) |
|---|---|---|---|---|---|---|
| Fungal | Rice | Blast | 54 | 30 | 208 | 134 |
| | Potato | Early-blight | 42 | 39 | | |
| | Potato | Late-blight | 21 | 10 | | |
| | Corn | Northern-blight | 50 | 30 | | |
| | Mango | Sooty-mould | 19 | 12 | | |
| | Cherry | Powdery-mildew | 22 | 13 | | |
| Bacterial | Rice | Bacterial leaf-blight | 65 | 30 | 115 | 60 |
| | Pepper | Bacterial-spot | 50 | 30 | | |
| Fungal/Bacterial | Rice | Sheath-rot | 20 | 10 | 35 | 17 |
| | Apple | Black-rot | 15 | 7 | | |
| Virus | Rice | Tungro | 10 | 5 | 10 | 5 |
| Miscellaneous | Grape | Black-measles | 35 | 19 | 35 | 19 |
| Total | | | 403 | 235 | 403 | 235 |
¹ https://www.kaggle.com/emmarex/plantdisease
² https://www.kaggle.com/minhhuy2810/rice-diseases-image-dataset
³ https://www.irri.org/
⁴ http://www.brri.gov.bd/

3.2 Enhancing Image

If images are not captured precisely due to hostile conditions, image enhancement is needed to increase the PLD image quality. The enhancement step is optional, as it depends on the magnitude of degradation. Two enhancement conditions are defined using statistical features of a plant leaf image: the mean (µ), the median (x̃), and the mode (M0). The first condition for image enhancement is devised as in Eq. 1:

µ < x̃ < M0    (1)
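A minimal sketch of the two statistical gates follows (our rendering; the actual enhancement operation applied when a gate fires is not specified in the paper, and the mode here is computed over 8-bit intensities):

```python
import numpy as np

def needs_enhancement(gray):
    """Check the paper's gates: Eq. 1 (mean < median < mode, ROI color close
    to the background) or Eq. 2 (median above both mean and mode, leaf
    shadow on the background)."""
    mean = gray.mean()
    median = np.median(gray)
    mode = np.bincount(gray.ravel(), minlength=256).argmax()
    eq1 = mean < median < mode
    eq2 = mean < median and median > mode
    return bool(eq1 or eq2)

skewed = np.array([[0, 0, 100], [200, 200, 200]], dtype=np.uint8)
print(needs_enhancement(skewed))  # Eq. 1 fires: mean 116.7 < median 150 < mode 200
```

When neither gate fires, the image would pass straight to the L*a*b conversion, mirroring the "otherwise" branch in the text.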
Fig. 2. Samples of plant leaf disease images under numerous health conditions in various backgrounds and having different symptoms: (a) Rice Sheath-rot (natural background, spread symptoms), (b) Rice Tungro (natural background, spread symptoms), (c) Rice Bacterial leaf-blight (complex background, spread symptoms), (d) Rice Blast (complex background, isolated small symptoms), (e) Potato Early-blight (plain background, isolated small symptoms), (f) Potato Late-blight (plain background, isolated small symptoms), (g) Pepper Bacterial-spot (plain background, small symptoms), (h) Grape Black-measles (plain background, small symptoms), (i) Corn Northern Leaf-blight (plain background, spread spot symptoms), (j) Apple Black-rot (plain background, small symptoms), (k) Mango Sooty-mould (natural background, spread symptoms), and (l) Cherry Powdery-mildew (natural background, small symptoms).
According to Eq. 1, the image enhancement condition performs effectively in tracing ROIs whose color is identical to the background, as shown in Fig. 3(a1–c2). The second statistical condition for image enhancement is formulated in Eq. 2; it is effective when there is a shadow of the leaf on the background, as shown in Fig. 3(a2–c4). Otherwise, the leaf image is converted directly to the L*a*b color space without enhancement.

µ < x̃ > M0    (2)

3.3 Clustering by Adaptive Centroid Based Segmentation
The modified adaptive centroid-based segmentation (ACS) is applied once the PLD image quality has been enhanced. At the beginning, the RGB (PLD) image is converted to L*a*b color space for better perceptual linearity in differentiating colors. Conversion from RGB to L*a*b color space significantly increases K-means clustering performance, especially when the distinctions among symptom colors of different plant leaf disorders are narrow. Differentiating among color intensities when the ROI and background colors are identical is nontrivial; another challenge is distinguishing the base color of ROIs under the same sunlight shade and a shadowed background. To overcome these challenges, we
perform the L*a*b color conversion before segmentation. Figure 3(c2, c4) shows the improvement in segmentation compared with Fig. 3(c1, c3), which has extra noise from the PLD RGB image. Our modified ACS focuses on initializing the optimal K automatically from the leaf image, to eliminate the lack of sensitivity to K in [10]. In traditional K-means, the Euclidean distance between each point and a centroid is calculated to check whether the point belongs to that cluster. In the modified ACS, data points are first checked for eligibility using a statistical threshold; we then calculate the distance between only these eligible points and the centroids, comparatively reducing the effort to form clusters and restricting mis-clustering of data points. The statistical threshold (ST) value is calculated by Eq. 3:

ST = (1/N) · Σ_{i=1}^{N} (X_i − C)²    (3)

where X_i, C, and N stand for the data points, the centroid of the data points, and the total number of data points, respectively. The automatic initialization of K defined using ACS can effectively detect image characteristics under different orientations and illuminations. ACS also increases the scalability of the proposed segmentation technique, as shown in Fig. 3(c2, c4) versus Fig. 3(c1, c3). A few examples under different circumstances, such as same-colored reflection on ROIs, a shadow behind the ROIs, overlapped blur images, and varied leaf orientations such as shrunk and rotated ROIs, are shown in Fig. 4(b1–b5).

3.4 Selecting Features Using HLTP
Once the PLD image's ROIs have been traced, the RGB segments are converted to grayscale images. Then HLTP is applied to extract the features of leaf disease. We perform two variants of feature extraction, namely HLTP-1 (8 pixels with radius 1) and HLTP-2 (8 pixels with radius 2). Firstly, four neighboring points are determined using Eq. 7–Eq. 10; the other four points are calculated using the linear interpolation coefficient for 45° in both HLTPs, as formulated in Eq. 11–Eq. 14.

a = r − √r    (4)
b = 1 − a    (5)
f(n + a) = a · f(n + 1) + b · f(n)    (6)
d0 = A(r0, c0 − r) − I    (7)
d2 = A(r0, c0 + r) − I    (8)
d4 = A(r0 − r, c0) − I    (9)
d6 = A(r0 + r, c0) − I    (10)
d1 = a · A(r0 + r − 1, c0 − r + 1) + b · A(r0 + r, c0 − r) − I    (11)
d3 = a · A(r0 + r − 1, c0 + r − 1) + b · A(r0 + r, c0 + r) − I    (12)
d5 = a · A(r0 − r + 1, c0 + r − 1) + b · A(r0 − r, c0 + r) − I    (13)
d7 = a · A(r0 − r + 1, c0 − r + 1) + b · A(r0 − r, c0 − r) − I    (14)
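Read literally, Eqs. 4–14 produce an eight-element derivative vector per pixel. The sketch below is our reading of the garbled PDF equations; in particular, Eq. 4 is reconstructed as a = r − √r, so treat the coefficient as an assumption:

```python
import numpy as np

def hltp_derivatives(A, r0, c0, r):
    """Neighbor derivatives d0..d7 around pixel (r0, c0) of gray image A,
    following Eqs. 7-14; diagonal neighbors use interpolation weights a, b."""
    I = float(A[r0, c0])
    a = r - np.sqrt(r)   # Eq. 4 (reconstructed); degenerates to 0 at r = 1
    b = 1.0 - a          # Eq. 5
    d = np.empty(8)
    d[0] = A[r0, c0 - r] - I                                           # Eq. 7
    d[2] = A[r0, c0 + r] - I                                           # Eq. 8
    d[4] = A[r0 - r, c0] - I                                           # Eq. 9
    d[6] = A[r0 + r, c0] - I                                           # Eq. 10
    d[1] = a * A[r0 + r - 1, c0 - r + 1] + b * A[r0 + r, c0 - r] - I   # Eq. 11
    d[3] = a * A[r0 + r - 1, c0 + r - 1] + b * A[r0 + r, c0 + r] - I   # Eq. 12
    d[5] = a * A[r0 - r + 1, c0 + r - 1] + b * A[r0 - r, c0 + r] - I   # Eq. 13
    d[7] = a * A[r0 - r + 1, c0 - r + 1] + b * A[r0 - r, c0 - r] - I   # Eq. 14
    return d

A = np.arange(25, dtype=float).reshape(5, 5)
print(hltp_derivatives(A, 2, 2, 1))
```

Per the surrounding text, each pixel's vector is then thresholded against its mean threshold (MT) to yield lower and upper patterns, which are histogrammed into two 1 × 256 vectors.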
where a and b are the interpolation coefficients and r is the radius. A(r0, c0) is the matrix of the PLD gray image I, considering each neighbor of position (r0, c0). In Eq. 6, f(n + a) is the unknown pixel, and f(n) and f(n + 1) are two known pixels. The unknown pixels in Eq. 11–Eq. 14 are formulated by Eq. 6 using Eq. 7–Eq. 10. In Eq. 7–Eq. 14, d0, d1, d2, d3, d4, d5, d6, and d7 are the neighboring pixels' derivatives. These derivatives are put into a 1 × 8 vector d for each pixel Pi, i = 0, 1, 2, 3, ...; i.e., an (m × n) × 8 matrix is obtained, where m is the width and n is the height of the plant leaf disease image. Then, the mean threshold (MT) for each pixel Pi is determined using the surrounding eight pixels of that pixel. We thereby get two values: one containing the lower pattern values and another containing the upper pattern values, as formulated in [12]. From these, using histograms, we get two 1 × 256 vectors, one from the lower values and another from the upper values. Traditional LTP has the limitation of uneven illumination and orientation in leaf images. In our modified HLTP, the mean threshold (MT) in [12] is considered instead of a fixed threshold to overcome LTP's drawback: it handles the variation of the gray level of neighbors and makes the pattern invariant to illumination. Using linear interpolation in determining the derivatives helps increase the ability to extract features from differently oriented plant leaf images. It outperforms traditional LTP, as shown in Fig. 3(d2–e2) and Fig. 3(d4–e4) compared with Fig. 3(d1–e1) and Fig. 3(d3–e3). Our modified HLTP functions effectively under same-colored reflection on ROIs, shadow behind the ROIs, overlapped blur images, and varied leaf orientations such as shrunk and rotated ROIs, as shown in Fig. 4(c1–f5).

3.5 Classifying Using Histogram-Based Gradient Boosting Classifier
Finally, a histogram-based gradient boosting classifier [6] is used to recognize PLD. The feature vectors developed by HLTP-1 and HLTP-2 are applied to the histogram-based gradient boosting classifier. This classifier is chosen for its benchmark accuracy on histogram features and its lower computational cost compared to the plain gradient boosting classifier. Unlike the gradient boosting classifier, the histogram-based variant finds optimal splitting points from feature histograms, so the histogram data structure reduces computational complexity. Moreover, it has a memory cost of O(#features × #data × 1 byte). In the histogram-based gradient boosting classifier, we build a histogram with 255 bins for every feature. Then the gradient and Hessian are calculated based
on the loss. As we classify 12 PLDs, we use categorical cross-entropy. Trees are expanded based on the information gain from every feature; information gain is evaluated using the gradient and Hessian of each feature. The maximum depth of each tree is 20, each leaf includes a minimum of 30 samples of PLD images, and each tree has 30 leaf nodes. As the histogram-based boosting classifier (inspired by LightGBM) in [6] adds each best-split tree level-wise, a new gradient and Hessian are calculated to predict the next tree. The boosting process is examined up to maximum iterations of 10 to 1000, with learning rates from 0.1 to 1; our classification method attains the minimum loss with a learning rate of 0.2 and a maximum of 100 iterations. The best-tuned parameters used to train the histogram-based gradient boosting classifier are presented in Table 2.

Table 2. Parameters used in the histogram-based gradient boosting classifier for plant leaf disease recognition.

| Parameter | Value(s) |
|---|---|
| Loss function | Categorical cross-entropy |
| Max iterations | 100 |
| Minimum samples in leaf node | 30 |
| Maximum leaf nodes | 30 |
| Max depth | 20 |
| Max bins | 255 |
| Learning rate | 0.2 |
4 Results and Observations
In this section, the results of our experiments for recognizing plant leaf diseases are presented.

Environment. The experiments are executed on an Intel(R) Core i5 7200U at 2.5 GHz with 4 GB RAM. The proposed framework is implemented in Python (with the sklearn package) and MATLAB.

Dataset for Training and Test. In this experiment, 403 images of eight plants, of size 256 × 256 pixels, are used for training, and 235 PLD images are used for testing across twelve classes from different sources. The statistics of the train and test images are shown in Table 3.

Effect of Image Enhancement Conditions. From Fig. 3, it is observed that without image enhancement there is some noise in segmentation, which further impacts feature extraction in critical cases. The two image enhancement conditions perform effectively for ROIs with
Table 3. Dataset description according to the sources.

| Source | Plant | Disease | # training images | # test images | # training (source-wise) | # test (source-wise) |
|---|---|---|---|---|---|---|
| PlantVillage | Pepper | Bacterial-spot | 50 | 30 | 254 | 160 |
| | Potato | Early-blight | 42 | 39 | | |
| | Potato | Late-blight | 21 | 10 | | |
| | Corn | Northern-blight | 50 | 30 | | |
| | Mango | Sooty-mould | 19 | 12 | | |
| | Apple | Black-rot | 15 | 7 | | |
| | Cherry | Powdery-mildew | 22 | 13 | | |
| | Grape | Black-measles | 35 | 19 | | |
| Kaggle | Rice | Blast | 54 | 30 | 119 | 60 |
| | Rice | Bacterial leaf-blight | 65 | 30 | | |
| IRRI/BRRI | Rice | Sheath-rot | 20 | 10 | 30 | 15 |
| | Rice | Tungro | 10 | 5 | | |
| Total | | | 403 | 235 | 403 | 235 |

Table 4. Comparison among experiments using traditional K-means clustering, the local ternary pattern, modified adaptive centroid-based segmentation, and the modified histogram-based local ternary pattern.

| Framework | Accuracy | F1-score |
|---|---|---|
| Traditional K-means clustering + local ternary pattern | 90% | 88% |
| Traditional K-means clustering + HLTP | 92.76% | 89.4% |
| Modified ACS + local ternary pattern | 94.89% | 90.4% |
| Our PLD framework (modified ACS + HLTP) | 99.34% | 94.10% |
the same color background (due to the higher mode), as shown in Fig. 3(c2), and for a shadow of the leaf on the background (due to the median being higher than the other two statistics), as shown in Fig. 3(c4).

Effect of Modified Adaptive Centroid Based Clustering. The automatic initialization of K defined using ACS can effectively detect image characteristics under different orientations and illuminations. ACS also increases the scalability of our modified segmentation technique, as shown in Fig. 3(c2, c4) versus Fig. 3(c1, c3). In various critical circumstances, such as same-colored reflection on ROIs, background and ROIs of the same color, and ROIs in natural backgrounds with shrunk, rotated, and overlapped blur images, modified ACS outperforms, as shown in Fig. 4(b1–b5).

Effect of Our HLTP on Feature Extraction. One thousand twenty-four (1024) histogram features (512 each from HLTP-1 and HLTP-2) are extracted using HLTP. The dynamic mean threshold handles the variation of
Fig. 3. Effect of image enhancement on recognizing plant leaf disease in critical situations: (a1) rice blast disease image and (a2) apple black-rot disease image; (b1) and (b2) are the leaf image histograms of a1 and a2, respectively; (c1) and (c3) are the color segmentation results of a1 and a2 in traditional K-means clustering, showing extra noise without image enhancement, while (c2) and (c4) are the segmentation results of a1 and a2 in our modified color segmentation algorithm with image enhancement; (d1), (d3) and (e1), (e3) are the lower and upper features of traditional LTP, respectively; (d2), (d4) and (e2), (e4) are the lower and upper features of modified HLTP, respectively.
neighbors' gray level and makes the features invariant to illumination. Linear interpolation in determining the derivatives helps increase the ability to extract features from differently oriented plant leaf images. HLTP outperforms traditional LTP, as shown in Fig. 3(d2–e2) and Fig. 3(d4–e4) compared with Fig. 3(d1–e1) and Fig. 3(d3–e3), and functions effectively under same-colored reflection on ROIs, shadow behind the ROIs, overlapped blur images, and varied leaf orientations such as shrunk and rotated ROIs, as shown in Fig. 4(c1–f5). From Table 4, it is observed that our proposed PLD recognition using HLTP achieves a comparatively better accuracy of 99.34% and F1-score of 94.10%.

Effect of Histogram-Based Gradient Boosting Classifier. A total of 1024 features are applied to the histogram-based gradient boosting classifier, which reduces computational complexity due to its histogram data structure. It also reduces the bias of similar features in various PLDs because of its histogram classification phenomena over features; variance in histograms differentiates classes comparatively well. It improves accuracy over the other machine learning algorithms and requires less memory
Fig. 4. Processing examples of rice images in our proposed PLD framework under different critical environments: (a1–a5) are the RGB PLD samples; (b) segmented ROIs after applying adaptive centroid-based segmentation; (c) HLTP-1 lower features; (d) HLTP-1 upper features; (e) HLTP-2 lower features; and (f) HLTP-2 upper features.
Fig. 5. ROC curve of each plant leaf disease recognized by our framework. Fig. 6. Confusion matrix for recognizing plant leaf diseases.
space than a CNN. So, it is useful and reliable for recognizing PLDs in a mobile application.

Performance Analysis. Two hundred thirty-five (235) plant leaf disease images of twelve classes are used to evaluate our PLD recognition framework's
performance. The recognition rate of each class is shown in the confusion matrix in Fig. 6. The summary of performance metrics, including accuracy, precision, recall, and F1-score, is shown in Table 5. Our PLD recognition framework achieves accuracy, precision, recall, and F1-score of 99.34%, 94.66%, 93.54%, and 94.10%, respectively. For measuring the degree of separability among classes, the ROC curve is shown in Fig. 5. The AUC (area under the ROC curve) for our proposed framework is 0.97. The minimum AUC is 0.85, for rice sheath-rot, and the maximum AUC is 1, for five classes: pepper bacterial-spot, grape black-measles, rice blast, cherry powdery-mildew, and rice tungro.

Table 5. Performance evaluation of each class using our proposed plant leaf disease recognition framework.

| Class | TP | FP | FN | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|---|---|---|
| Corn northern-blight | 29 | 0 | 1 | 99.57% | 100% | 96.67% | 98.30% |
| Pepper bacterial-spot | 30 | 0 | 0 | 100% | 100% | 100% | 100% |
| Grape black-measles | 19 | 0 | 0 | 100% | 100% | 100% | 100% |
| Rice blast | 30 | 0 | 0 | 100% | 100% | 100% | 100% |
| Potato early-blight | 39 | 4 | 0 | 98.29% | 90.69% | 100% | 95.12% |
| Apple black-rot | 7 | 1 | 1 | 99.15% | 87.5% | 87.5% | 87.5% |
| Mango sooty-mould | 11 | 0 | 1 | 99.57% | 100% | 91.67% | 95.65% |
| Cherry powdery-mildew | 13 | 0 | 0 | 100% | 100% | 100% | 100% |
| Rice bacterial leaf-blight | 29 | 1 | 1 | 99.14% | 96.67% | 96.67% | 96.67% |
| Potato late-blight | 8 | 0 | 2 | 99.15% | 100% | 80% | 88.89% |
| Rice sheath-rot | 7 | 2 | 3 | 97.87% | 77.78% | 70% | 73.69% |
| Rice Tungro | 5 | 1 | 0 | 99.57% | 83.33% | 100% | 90.91% |
| Average | | | | 99.34% | 94.66% | 93.54% | 94.10% |
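The Table 5 columns follow from the standard one-vs-rest definitions; as a worked check, the potato early-blight row (TP = 39, FP = 4, FN = 0 out of 235 test images):

```python
tp, fp, fn, total = 39, 4, 0, 235      # potato early-blight row of Table 5
precision = tp / (tp + fp)             # 39/43
recall = tp / (tp + fn)                # 39/39
f1 = 2 * precision * recall / (precision + recall)
accuracy = (total - fp - fn) / total   # TN = total - tp - fp - fn
print(f"{precision:.4f} {recall:.4f} {f1:.4f} {accuracy:.4f}")
```

This reproduces the row's 90.69%, 100%, 95.12%, and 98.29% entries (the table appears to truncate rather than round some trailing digits).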
For further evaluation, we compare the performance of our PLD recognition with the benchmark methods proposed by Pantazi et al. [9] and Singh et al. [11] on our dataset. The method in [9] is significant for its high generalization using histogram features and its ability to overcome the intrinsic challenges (segmentation, and different disorders with similar symptoms) under uncontrolled capture conditions. To compare with [9], the GrabCut algorithm for segmentation, the histogram-based local binary pattern for feature extraction, and a one-class SVM for classification are executed on our dataset. The method in [11] is significant for its auto initialization of clustering centers and its generalization. To compare with [11], the genetic algorithm for segmentation, the color co-occurrence method for feature extraction, and an SVM for classification are executed on our dataset. From Table 6, it is evident that our proposed PLD recognition framework performs relatively better than the methods proposed in [9] and [11], achieving accuracy and F1-score of 99.34% and 94.10%, respectively.
Table 6. Comparison of performance evaluation with other state-of-the-art plant leaf disease recognition frameworks.

| Class | Ours: Accuracy | Ours: F1-score | [9]: Accuracy | [9]: F1-score | [11]: Accuracy | [11]: F1-score |
|---|---|---|---|---|---|---|
| Corn northern-blight | 99.57% | 98.30% | 95.74% | 84.85% | 97.02% | 87.96% |
| Pepper bacterial-spot | 100% | 100% | 100% | 100% | 99.15% | 96.67% |
| Grape black-measles | 100% | 100% | 97.87% | 85.71% | 97.45% | 85% |
| Rice blast | 100% | 100% | 99.15% | 96.77% | 99.58% | 96.77% |
| Potato early-blight | 98.29% | 95.12% | 98.72% | 96.30% | 97.46% | 92.86% |
| Apple black-rot | 99.15% | 87.5% | 99.15% | 83.33% | 97.02% | 60% |
| Mango sooty-mould | 99.57% | 95.65% | 97.00% | 55.55% | 98.30% | 76.13% |
| Cherry powdery-mildew | 100% | 100% | 99.15% | 91.72% | 98.30% | 84.62% |
| Rice bacterial leaf-blight | 99.14% | 96.67% | 97% | 89.66% | 96.60% | 85.19% |
| Potato late-blight | 99.15% | 88.89% | 99.15% | 84.21% | 97.02% | 58.83% |
| Rice sheath-rot | 97.87% | 73.69% | 96.59% | 60% | 94.46% | 31.58% |
| Rice Tungro | 99.57% | 90.91% | 99.57% | 83.33% | 98.72% | 63.31% |
| Average | 99.34% | 94.10% | 97.59% | 76.57% | 98.26% | 85.02% |
These evaluations are superior to the accuracy achieved by the state-of-the-art methods. Moreover, we compare the results of the PLD recognition framework using histogram-based gradient boosting with a CNN-based PLD recognition model. As we have a small number of PLD images, we augment the PLD images using rotation, shifting, scaling, flipping, and changes in brightness and contrast. Then, considering the number of network parameters, we execute the state-of-the-art convolutional architecture AlexNet (input image of 224 × 224) using ImageNet weights, which achieves 99.25% accuracy, as shown in Table 7.

Table 7. Comparison between PLD recognition using a histogram-based gradient boosting classifier and a state-of-the-art CNN model

| Method/Network | Accuracy | #Network/learning parameters | Storage required |
|---|---|---|---|
| Our proposed framework | 99.34% | 6 | 0.62 MB |
| AlexNet | 99.25% | 6.4 M | 25.6 MB |
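The storage column of Table 7 is consistent with 4-byte (32-bit) parameters: 6.4 M parameters × 4 bytes = 25.6 MB. A quick check, assuming 32-bit storage:

```python
# Storage footprint implied by a parameter count, assuming 4-byte parameters.
def storage_mb(n_params, bytes_per_param=4):
    return n_params * bytes_per_param / 1e6   # decimal megabytes

print(storage_mb(6.4e6))   # AlexNet row: 6.4 M params -> 25.6 MB
```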
Critical Analysis. Our framework recognizes diseases well under different illumination and in natural and complex backgrounds. However, there are still some misclassifications, as shown in Fig. 7(a–h). By analyzing these misclassifications, we found that PLD images are misclassified due to multiple disease symptoms and changed symptom features, such as shape. These challenges are left for future work, in which not only the color and intensity information of ROIs in spatial order but also geometric features will be considered.
544
S. Md. Minhaz Hossain and K. Deb
Fig. 7. Some misclassified images: (a), (b), (c) are some false positive rice sheath-rot images; (d) is a rice bacterial leaf blight image; (e) and (f) are some false positive potato late blight images; (g) is a false positive apple black-rot image; and (h) is a false positive corn northern leaf-blight image.
5 Conclusion and Future Work
In our PLD recognition framework, ROIs are first detected by a modified ACS with automatic initialization of K. Features are then extracted by HLTP, and classification is finally performed by a histogram-based gradient boosting classifier. Our proposed PLD framework overcomes existing PLD recognition limitations, such as varied image backgrounds, similar features in different disorders, and uneven illumination and orientation in uncontrolled captured images. ACS eliminates the sensitivity to k of K-means clustering [10] and performs effectively irrespective of the image background and of similar features in different disorders. HLTP overcomes the other challenges of PLD detection under uncontrolled capturing: using linear interpolation and a dynamic mean threshold, it handles the orientation and the variation of neighbors' grey levels. In this work, some diseases having fungal and bacterial symptoms, such as rice sheath-rot and apple black-rot, are recognized at a better rate of, on average, 98.51%, as shown in Table 5. Our PLD recognition framework achieves an average accuracy of 99% for PLDs with similar symptoms, such as potato early-blight, potato late-blight, and corn northern-blight, as shown in Table 5. Although the proposed framework performs well and has high generalization ability, it is still limited in detecting multiple diseases; this can be addressed by concatenating the ROIs of multiple diseases.
References

1. Vasilyev, A.A., Vasilyev, G.N.S.: Processing plants for post-harvest disinfection of grain. In: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization (ICO 2019), Advances in Intelligent Systems and Computing 1072, pp. 501–505 (2019)
2. Borse, K., Agnihotri, P.G.: Prediction of crop yields based on fuzzy rule-based system (FRBS) using the Takagi-Sugeno-Kang approach. In: Proceedings of the International Conference on Intelligent Computing and Optimization (ICO 2018), Advances in Intelligent Systems and Computing 866, pp. 438–447 (2018)
3. Boulent, J., Foucher, S., Théau, J., St-Charles, P.L.: Convolutional neural networks for the automatic identification of plant diseases. Front. Plant Sci. 10 (2019)
4. Brahimi, M., Mahmoudi, S., Boukhalfa, K., Moussaoui, A.: Deep interpretable architecture for plant diseases classification. In: Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 111–116. IEEE (2019)
5. Ferentinos, K.P.: Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 145, 311–318 (2018)
6. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., Liu, T.Y.: LightGBM: a highly efficient gradient boosting decision tree. In: 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, pp. 1–3 (2017)
7. Liang, W.J., Zhang, H., Zhang, G.F., Cao, H.X.: Rice blast disease recognition using a deep convolutional neural network. Sci. Rep. 9(1), 1–10 (2019)
8. Mohanty, S.P., Hughes, D.P., Salathé, M.: Using deep learning for image-based plant disease detection. Front. Plant Sci. 7, 1419 (2016)
9. Pantazi, X.E., Moshou, D., Tamouridou, A.A.: Automated leaf disease detection in different crop species through image feature analysis and one class classifiers. Comput. Electron. Agric. 156, 96–104 (2019)
10. Sharma, P., Berwal, Y.P.S., Ghai, W.: Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. (2019)
11. Singh, V., Misra, A.: Detection of plant leaf diseases using image segmentation and soft computing techniques. Inf. Process. Agric. 4, 41–49 (2017)
12. Rassem, T.H., Khoo, B.E.: Completed local ternary pattern for rotation invariant texture classification. Sci. World J. 2014 (2014)
13. Too, E.C., Yujian, L., Njuki, S., Yingchun, L.: A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 161, 272–279 (2019)
Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Breast Cancer and Its Recurrence

Anika Islam Aishwarja1, Nusrat Jahan Eva1, Shakira Mushtary1, Zarin Tasnim1, Nafiz Imtiaz Khan2, and Muhammad Nazrul Islam2

1 Department of Information and Communication Engineering, Bangladesh University of Professionals, Dhaka, Bangladesh [email protected]
2 Department of Computer Science and Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh [email protected]

Abstract. Every year, around one million women are diagnosed with breast cancer. Conventionally it seems like a disease of the developed countries, but the fatality rate in low- and middle-income countries is preeminent. Early detection of breast cancer turns out to be beneficial for clinical and survival outcomes. Machine learning algorithms have been effective in detecting breast cancer. In the first step, four distinct machine learning algorithms (SVM, KNN, Naive Bayes, Random Forest) were implemented to show how their performance varies on different datasets having different sets of attributes or features, keeping the same number of data instances, for predicting breast cancer and its recurrence. In the second step, the different sets of attributes related to the performance of the classification algorithms were analyzed to select cost-effective attributes. As outcomes, the most desirable performance was observed by KNN in breast cancer prediction and SVM in the recurrence of breast cancer. Again, Random Forest predicts better for the recurrence of breast cancer and KNN for breast cancer prediction when fewer attributes are considered in both cases.

Keywords: Breast Cancer · Prediction · Recurrence · Attributes selection · Data mining · Machine learning

1 Introduction
Breast cancer is the most common cancer among females: around 10% of females are affected by breast cancer at some stage of their life. Again, among cancer-affected women, 34.3% are affected by breast cancer, which shows high mortality around the world [4]. Most breast cancers begin in the cells that line the ducts, while fewer start in the cells lining the lobules [4]. The causes of breast cancer are multi-factorial, involving family history, obesity, hormones, radiation therapy, and even reproductive factors [1].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 546–558, 2021. https://doi.org/10.1007/978-3-030-68154-8_48

The diagnosis
Prediction of Breast Cancer and Its Recurrence
547
of breast cancer is carried out by classifying the tumor. Tumors can be either benign or malignant. In a benign tumor, the cells grow abnormally, form a lump, and do not spread to other parts of the body [5]. Malignant tumors are more harmful than benign ones. Unfortunately, not all physicians are expert in distinguishing benign from malignant tumors, and the classification of tumor cells may take up to two days [3]. Information and Communication Technologies (ICT) can play potential roles in cancer care. For example, data mining approaches applied to medical science are rising rapidly due to their high performance in predicting outcomes, reducing the cost of medicine, promoting patients' health, improving healthcare value and quality, and making real-time decisions to save people's lives [2]. Again, Machine Learning (ML) algorithms gain insight from labeled samples and make predictions for unknown samples; they are greatly used in health informatics [11,14–16], predicting autism [24], and the like. ML has become very impactful in several fields, such as disease diagnosis, disease prediction, biomedicine, and other engineering fields [25,26]. It is an application of artificial intelligence (AI) that utilizes the creation and evaluation of algorithms to facilitate prediction, pattern recognition, and classification [9]. Machine learning algorithms like Support Vector Machine (SVM), Random Forest (RF), Naive Bayes (NB), and k-Nearest Neighbours (KNN) are frequently used in medical applications such as detecting the type of cancerous cells. The performance of each algorithm generally varies in terms of accuracy, sensitivity, specificity, precision, and recall for predicting the possibility of being affected by diseases like diabetes, breast cancer, etc. Over the last few years, a number of studies have been conducted on the prediction of breast cancer, focusing on either the existence of the cancer or its recurrence.
Therefore, this study has the following objectives: first, to explore the performance of different classifier models considering different sets of data having an equal number of data objects for predicting breast cancer and its recurrence; second, to analyze the performance of the predictors while considering different sets of attributes. This paper is organized into five sections as follows. Section 2 highlights the published literature on breast cancer prediction models using data mining techniques. Section 3 explains the detailed description of the data and the various prediction algorithms, and measures their performance. The prediction results of all the classification and regression algorithms, along with accuracy, sensitivity, and specificity, are presented in Sect. 4. Section 5 concludes with a summary of results and future directions.
2 Literature Review
Machine learning algorithms have been used for a long time in predicting and diagnosing breast cancer. Several ML-related studies were conducted using the Breast Cancer Wisconsin (Original) dataset. For example, Asri et al. [2] compared four different algorithms, namely SVM, Decision Tree, Naive Bayes,
548
A. I. Aishwarja et al.
and KNN, to find the best-performing classifier for breast cancer prediction. As an outcome, SVM showed the best accuracy of 97.13%. Karabatak and Ince [13] proposed an automatic diagnosis system for detecting breast cancer using Association Rules (AR) and a Neural Network (NN), and then compared it with the Neural Network model alone. In the test stage, they used 3-fold cross-validation, and the classification rate was 95.6%. Islam et al. [12] proposed a model based on SVM and KNN, compared it with other existing SVM-based models, namely St-SVM, LPS-SVM, LP-SVM, LSVM, SSVM, and NSVM, and found that their model shows the highest accuracy, sensitivity, and specificity. In addition to this dataset, Huang et al. [10] used another dataset having 102294 data samples with 117 different attributes to compare the performance of SVM classifiers and SVM ensembles. They constructed the SVM classifiers using kernel functions (i.e., linear, polynomial, and RBF) and the SVM ensembles using bagging and boosting algorithms, and found that for both datasets the SVM ensembles performed better than the single classifiers. Khourdifi and Bahaj [18] implemented Random Forest, Naive Bayes, SVM, KNN, and a Multilayer Perceptron (MLP) in WEKA to select the most effective algorithm, with and without Fast Correlation-Based Feature selection (FCBF); FCBF was used to filter irrelevant and redundant characteristics to improve the quality of cancer classification. The experimental results showed that SVM provided the highest accuracy of 97.9% without FCBF. Khourdifi and Bahaj [17] also used several machine learning algorithms, including Random Forest, Naive Bayes, Support Vector Machines, and K-Nearest Neighbors, to determine the best algorithm for the diagnosis and prediction of breast cancer using WEKA tools. Out of 699 cases, they took 569 cases with 30 attributes and found SVM to be the most accurate classifier, with an accuracy of 97.9%.
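Comparisons of this style can be reproduced with the Wisconsin (Diagnostic) data bundled with scikit-learn; the scores below are illustrative and will not exactly match the figures cited above:

```python
# Comparing four classifiers on the bundled Wisconsin (Diagnostic) data with
# 5-fold cross-validation (illustrative; not the papers' exact numbers).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:13s} mean accuracy = {results[name]:.3f}")
```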
Again, to classify recurrent or non-recurrent cases of breast cancer, Ojha and Goel [27] used the Wisconsin Prognostic Breast Cancer dataset (194 cases with 32 attributes). Four clustering algorithms (K-means, Expectation-Maximization (EM), Partitioning Around Medoids (PAM), and Fuzzy c-means) and four classification algorithms (SVM, C5.0, Naive Bayes, and KNN) were used. The study showed that SVM and C5.0 achieved the best accuracy of 81%, and that the classification algorithms are better predictors than the clustering algorithms. Li et al. [20] detected the risk factors of breast cancer and multiple common risk factors by adopting the Implementing Association Rule (IAR) algorithm and the N-IAR algorithm (for n items), using a dataset of Chinese women having 83 attributes and 2966 samples. Experimental results showed that the model based on the ML algorithm is more suitable than the classic Gail model. Stark et al. [25] proposed a new model with Logistic Regression, Gaussian Naive Bayes, Decision Tree, Linear Discriminant Analysis, SVM, and an Artificial Neural Network (ANN), trained using a dataset of 78,215 women (aged 50–78). These models predicted better than the previously used BCRAT (Gail model). Delen et al. [7] applied an Artificial Neural Network (ANN), a Decision Tree, and Logistic Regression to predict the survivability rate of breast cancer patients in WEKA using the SEER
Table 1. Summary of related studies

| Ref | Objective | Features | ML technique | Accuracy | Specificity | Sensitivity | Precision |
|---|---|---|---|---|---|---|---|
| [2] | Estimating the definiteness in classifying data | 11 | C4.5 | 95.13 | | 0.96 (Benign) / 0.94 (Malignant) | 0.95 (Benign) / 0.91 (Malignant) |
| | | | SVM | 97.13 | | 0.97 (Benign) / 0.96 (Malignant) | 0.98 (Benign) / 0.95 (Malignant) |
| | | | NB | 95.99 | | 0.95 (Benign) / 0.97 (Malignant) | 0.98 (Benign) / 0.91 (Malignant) |
| | | | k-NN | 95.27 | | 0.97 (Benign) / 0.91 (Malignant) | 0.95 (Benign) / 0.94 (Malignant) |
| [17] | Predicting breast cancer detection and the risk of death analysis | 11 | K-NN | 96.1 | | 0.961 | 0.961 |
| | | | SVM | 97.9 | | 0.979 | 0.979 |
| | | | RF | 96 | | 0.960 | 0.960 |
| | | | NB | 92.6 | | 0.926 | 0.926 |
| [13] | An automatic diagnosis system for detecting BC | 11 | NN | 95.2 | | | |
| | | | AR1 + NN | 97.4 | | | |
| | | | AR2 + NN | 95.6 | | | |
| [12] | Developing a classification model for breast cancer prediction | 11 | LPSVM | 97.1429 | 95.082 | 98.2456 | |
| | | | LSVM | 95.4286 | 93.33 | 96.5217 | |
| | | | SSVM | 96.5714 | 96.5517 | 96.5812 | |
| | | | PSVM | 96 | 93.4426 | 97.3684 | |
| | | | NSVM | 96.5714 | 96.5517 | 96.5812 | |
| | | | St-SVM | 94.86 | 93.33 | 95.65 | |
| | | | Proposed model using SVM | 98.57 | 95.65 | 100 | |
| | | | Proposed model using K-NN | 97.14 | 92.31 | 100 | |
| [20] | Finding the best classifier | 83 | Logistic Regression | 0.8507 | 0.8403 | 0.8594 | |
| | | | Decision Tree | 0.9174 | 0.9262 | 0.9124 | |
| | | | Random Forest | 0.8624 | 0.8577 | 0.8699 | |
| | | | XGBoost | 0.9191 | 0.9275 | 0.9142 | |
| | | | LightGBM | 0.9191 | 0.9248 | 0.9164 | |
| | | | MLP | 0.8128 | 0.8289 | 0.8059 | |
| | | | Gail | 0.5090 | 0.1880 | 0.5253 | |
| [27] | Detecting and predicting breast cancer | 10 | SVM | 70% | | | |
| | | | KNN | 68% | | | |
| | | | RF | 72% | | | |
| | | | Gradient Boosting | 75% | | | |
| [18] | Filtering irrelevant and redundant data in order to improve quality of cancer classification | 11 | KNN | 94.2% | | 0.942 | 0.942 |
| | | | SVM | 96.1% | | 0.96 | 0.961 |
| | | | RF | 95.2% | | 0.953 | 0.952 |
| | | | NB | 94% | | 0.94 | 0.94 |
| | | | Multilayer Perceptron | 96.3% | | 0.963 | 0.963 |
| [21] | Building an integration decision tree model for predicting breast cancer survivability | 17 | AUC: 0.8805 | | | | |
| | | | AUC with undersampling (ratio of 15%): 0.7422, 0.7570, 0.2325, 0.7399, 0.9814 | | | | |
| | | | Bagging algorithm: 0.7659, 0.7859, 0.7496 | | | | |
| [7] | Developing prediction models for breast cancer survivability | 72 | ANN | 0.9121 | 0.8748 | 0.9437 | |
| | | | Decision trees | 0.9362 | 0.9066 | 0.9602 | |
| | | | Logistic regression | 0.892 | 0.8786 | 0.9017 | |
Cancer Incidence Public-Use Database (433272 cases with 72 attributes), and the Decision Tree showed the best result [21]. The summary of the literature review is shown in Table 1. A few important concerns are observed through this literature review. Firstly, machine learning has played a significant role in predicting the possibility of having breast cancer and its recurrence, and in the diagnosis of breast cancer. Secondly, several studies have focused on the performance and comparison of algorithms. Thirdly, both the SVM and Naive Bayes algorithms showed comparatively better prediction accuracy than other ML algorithms. Fourthly, though a number of studies used different datasets having various attributes, little attention has been paid to exploring how the attributes of each dataset impact the overall performance of the algorithms. Thus, this research focuses on analyzing machine learning algorithms on different datasets having different sets of attributes.
3 Methodology
This section discusses the implementation of four machine learning algorithms. The overview of the study methodology is presented in Fig. 1.
Fig. 1. The overview of the study methodology
3.1 Data Acquisition
In the first stage, four datasets were selected from the UCI machine learning repository [8]. The "Breast Cancer Wisconsin (Original) dataset" (dataset 1), having 11 attributes and 699 instances, and the "Breast Cancer Wisconsin (Diagnostic) dataset" (dataset 2), having 32 attributes and 569 instances, were used
for predicting breast cancer. Both of these datasets include two classes, namely Benign (B) and Malignant (M). As these datasets have different sets of attributes and different numbers of instances, the smaller number of instances, 569, was also taken from dataset 1. Similarly, the "Breast Cancer Wisconsin (Prognostic) dataset" was used as dataset 3 and the "Breast Cancer Data Set" as dataset 4 for predicting the recurrence of breast cancer. Dataset 3 has 33 attributes and 198 instances, while dataset 4 has 10 attributes and 286 instances; both include two classes, namely recurrence and non-recurrence. There are 30 common attributes between dataset 2 and dataset 3. However, some instances were deleted from dataset 4 due to noisy data, leaving 180 instances; similarly, 180 instances were taken from dataset 3 (see Table 2).

Table 2. Summary of the selected datasets

| Dataset | Source | Classes | Attributes | Instances |
|---|---|---|---|---|
| Dataset 1 | Breast Cancer Wisconsin (Original) Data Set | Benign, Malignant | 11 | 569 |
| Dataset 2 | Breast Cancer Wisconsin (Diagnostic) Data Set | Benign, Malignant | 32 | 569 |
| Dataset 3 | Breast Cancer Wisconsin (Prognostic) Data Set | Recurrence, Non-recurrence | 33 | 180 |
| Dataset 4 | Breast Cancer Data Set | Recurrence, Non-recurrence | 10 | 180 |

3.2 Data Pre-processing
Data pre-processing has a huge impact on the performance of ML algorithms, as irrelevant and redundant data can lead to erroneous outputs. As part of data pre-processing, the following tasks were performed meticulously: (a) duplicate rows were removed from each dataset; (b) missing values were handled properly: missing numerical attribute values were replaced with the mean value of the particular column, whereas missing categorical attribute values were replaced with the most frequent value in the particular column; (c) attributes having text/string inputs were encoded to numerical class values, as machine learning algorithms cannot deal with strings; (d) numerical values were normalized to values between zero and one, as it is easier for ML algorithms to deal with small values; and (e) a random 80-20 train-test split was applied, where 80% of the data was used as the training set and the rest as the test set. Since ML models may be biased towards a particular class if there is not an equal number of class instances in the training data [19], the Synthetic Minority Over-sampling Technique (SMOTE) [6], which is capable of removing class imbalance issues, was applied.

3.3 Model Building and Analysis
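The pre-processing steps (a)-(e) of Sect. 3.2 can be sketched as follows. The column names are hypothetical, and SMOTE itself lives in the separate imbalanced-learn package (imblearn.over_sampling.SMOTE); plain random oversampling of the minority class stands in here so the sketch needs only pandas and scikit-learn:

```python
# Sketch of steps (a)-(e); hypothetical column names, toy data.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "clump_thickness": rng.integers(1, 11, 20).astype(float),
    "cell_uniformity": rng.integers(1, 11, 20).astype(float),
    "class": ["B"] * 14 + ["M"] * 6,
})
df.loc[3, "clump_thickness"] = np.nan                 # a missing value to impute

df = df.drop_duplicates()                             # (a) remove duplicates
num = ["clump_thickness", "cell_uniformity"]
df[num] = df[num].fillna(df[num].mean())              # (b) mean imputation
df["class"] = df["class"].map({"B": 0, "M": 1})       # (c) encode string labels
df[num] = MinMaxScaler().fit_transform(df[num])       # (d) normalize to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(            # (e) random 80-20 split
    df[num], df["class"], test_size=0.2, stratify=df["class"], random_state=1)

# Naive random oversampling of the minority class (SMOTE would instead
# synthesise new minority samples between neighbours).
minority = y_tr[y_tr == 1].index
extra = rng.choice(minority, y_tr.value_counts()[0] - len(minority))
X_bal = pd.concat([X_tr, X_tr.loc[extra]])
y_bal = pd.concat([y_tr, y_tr.loc[extra]])
print(y_bal.value_counts().to_dict())                 # classes now balanced
```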
3.3.1 Random Forest The Random Forest is a supervised learning algorithm. It builds multiple decision trees and merges them to get a more accurate and stable prediction. Some interesting findings can be observed by using Random Forest as the prediction model. Considering the predefined class attributes, the prediction results using Random Forest are shown in Fig. 2, where the datasets are shown on the X-axis and the performance measures on the Y-axis. Different datasets come with different attributes, and the result of the data analysis is shown separately for each class attribute. Considering the benign-malignant class attribute, the highest accuracy was obtained on dataset 2; similarly, for the recurrence-non-recurrence class attribute, the highest accuracy was obtained on dataset 3. The overall performance on dataset 2 was the best, as the accuracy, precision, sensitivity, and specificity were 95.9, 97.2, 96.3, and 96.2, respectively.
Fig. 2. Result of data analysis using Random Forest: (a) predicting using dataset 1 and dataset 2; (b) predicting using dataset 3 and dataset 4
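The four metrics reported above (accuracy, precision, sensitivity, specificity) can all be derived from a confusion matrix. An illustrative sketch with a Random Forest on scikit-learn's bundled Diagnostic data follows; the values will not match the paper's figures:

```python
# Deriving accuracy, precision, sensitivity, and specificity for a Random
# Forest from its confusion matrix (illustrative values only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)     # here 1 = benign is "positive"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"precision   = {tp / (tp + fp):.3f}")
print(f"sensitivity = {tp / (tp + fn):.3f}")
print(f"specificity = {tn / (tn + fp):.3f}")
```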
3.3.2 KNN Among the supervised machine learning algorithms, K-nearest neighbors (KNN) is one of the most effective techniques; it performs classification on given data points [23]. KNN can be used for both classification and regression predictive problems. It uses 'attribute similarity' to predict the values of new data points: a new data point is assigned a value based on how closely it matches the points in the training set. This algorithm has been applied based on different sets of
attributes of different datasets, and the results are shown in Fig. 3. The highest accuracy was acquired on dataset 1 (for the benign-malignant class attribute) and dataset 3 (for the recurrence-non-recurrence class attribute). Moreover, the best performance was obtained on dataset 1, as the accuracy, precision, sensitivity, and specificity were 95.9, 96, 98.5, and 93.42, respectively.
Fig. 3. Result of data analysis using KNN: (a) predicting using dataset 1 and dataset 2; (b) predicting using dataset 3 and dataset 4
3.3.3 SVM SVMs are a set of related supervised learning methods that analyze data to recognize patterns and are used for classification and regression analysis [22]. An SVM attempts to find a linear separator (hyper-plane) between the data points of two classes in multidimensional space. The result of adopting SVM on the predefined class attributes is shown in Fig. 4. The highest accuracy was found on dataset 2 (for the benign-malignant class attribute) and dataset 4 (for the recurrence-non-recurrence class attribute), but the overall best performance was observed on dataset 2, as the accuracy, precision, sensitivity, and specificity were 97.2, 97.3, 99.07, and 95.2, respectively.
3.3.4 Naïve Bayes Naive Bayes is a quick method for the creation of statistical predictive models based on the Bayesian theorem [27]. This classification technique analyses the relationship between each attribute and the class for each instance to derive a conditional probability for the relationships between the attribute values and the class. The findings from this algorithm vary across datasets because of the selection of attributes, as shown in Fig. 5. For the benign-malignant class attribute, the highest accuracy was found on dataset 2, and for the recurrence-non-recurrence class attribute, the highest accuracy was observed on dataset 3. The overall best performance can be observed on dataset 2, as the accuracy, precision, sensitivity, and specificity were 92.4, 92.6, 95.5, and 97.6, respectively.
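The hyper-plane description above can be made concrete with a linear-kernel SVM on two synthetic clusters; scikit-learn exposes the learned coefficients w and intercept b of the separator w·x + b = 0:

```python
# A linear SVM finds the separating hyper-plane w.x + b = 0 between two
# classes; the data here are two synthetic Gaussian clusters.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="linear").fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]
print("hyper-plane coefficients:", w, "intercept:", b)
print("prediction for (3, 3):", svm.predict([[3, 3]])[0])
```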
Fig. 4. Result of data analysis using SVM: (a) predicting using dataset 1 and dataset 2; (b) predicting using dataset 3 and dataset 4
Fig. 5. Result of data analysis using Naive Bayes: (a) predicting using dataset 1 and dataset 2; (b) predicting using dataset 3 and dataset 4
4 Discussion
The analysis and results discussed in Sect. 3 highlight the diverse performance of the attributes used in the different datasets, keeping the same number of data instances and applying different machine learning classification algorithms. The summary of these findings is shown in Table 3. Here, dataset 1 has 11 attributes and dataset 2 has 32 attributes, but both have 569 instances. Different levels of performance were observed, as shown in Fig. 6, although the two datasets differ only in the number of attributes; the highest accuracy for dataset 1 and dataset 2 was obtained by KNN and SVM, respectively. Similarly, for predicting the recurrence-non-recurrence of breast cancer, dataset 3 and dataset 4 showed different performance, as shown in Fig. 6. Dataset 3 consists of 33 attributes and dataset 4 of 10 attributes. Again, different levels of performance were observed due to the different sets of attributes, while the highest accuracy for dataset 3 and dataset 4 was acquired by KNN and Random Forest, respectively.
Table 3. Accuracy obtained by different machine learning algorithms on various datasets

| Algorithm | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 |
|---|---|---|---|---|
| Random Forest | 95.20% | 95.91% | 87% | 86% |
| KNN | 95.90% | 91.77% | 88.90% | 83.60% |
| Naïve Bayes | 92.40% | 92.54% | 83.30% | 54.50% |
| Support Vector Machine | 94.50% | 97.22% | 68.50% | 80.00% |
Fig. 6. Accuracy comparison: (a) malignant and benign; (b) recurrence and non-recurrence
Again, the study results indicated that dataset 2, having 32 attributes (for benign-malignant cancer), and dataset 3, having 33 attributes (for recurrence-non-recurrence), showed better performance across the different algorithms. Hence, the attributes used in dataset 2 are the best for benign-malignant breast cancer prediction, while the attributes used in dataset 3 are the best for predicting the recurrence of breast cancer. Furthermore, KNN and SVM performed best on dataset 1 and dataset 2, respectively; the difference in accuracy between them was below 2%, while dataset 1 has 11 attributes and dataset 2 has 32. Similarly, KNN and Random Forest performed best on dataset 3 and dataset 4, respectively, with a difference in accuracy below 3%, while dataset 3 has 33 attributes and dataset 4 has 10. In both cases, increasing the number of attributes does not make a large difference, whereas collecting more attributes increases the diagnosis or medical cost. From a cost-effective point of view, fewer attributes can therefore be used in both cases: the 11 attributes of dataset 1 for predicting breast cancer with KNN, and the 10 attributes of dataset 4 for predicting the recurrence of breast cancer with the Random Forest algorithm.
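The cost argument suggests checking how much accuracy is lost when fewer attributes are used. A sketch using univariate feature selection (SelectKBest, an illustrative choice rather than the paper's method) on the bundled Diagnostic data:

```python
# Comparing cross-validated accuracy with all 30 features vs the 10 best
# univariate features (illustrative values, not the paper's).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
full = make_pipeline(StandardScaler(), KNeighborsClassifier())
reduced = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10),
                        KNeighborsClassifier())
full_acc = cross_val_score(full, X, y, cv=5).mean()
red_acc = cross_val_score(reduced, X, y, cv=5).mean()
print(f"30 features: {full_acc:.3f}")
print(f"10 features: {red_acc:.3f}")
```

If the reduced model's accuracy stays close to the full model's, the cheaper attribute set is the cost-effective choice, mirroring the argument above.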
5 Conclusion
In this research, different datasets with various attributes were analyzed using four algorithms, aiming to find which attributes and algorithms tend to be more effective. Machine learning approaches have been spreading rapidly in the medical field due to their monumental performance in predicting and classifying disease. Research on which algorithms and attributes perform better in breast cancer prediction has been done before, but the reasons for their better performance were not explored. In this research, considering the performance of the algorithms, the best accuracy was observed by KNN in breast cancer prediction and by SVM in the recurrence of breast cancer. This result indicates that, when detecting breast cancer, a dataset with diverse attributes tends to yield more accurate predictions. One limitation of this work is that it considers datasets having a limited number of instances; in future, larger datasets can be considered for exploring different insights. Besides, more algorithms can be implemented to validate and generalize the outcomes of this research.
References

1. Aaltonen, L.A., Salovaara, R., Kristo, P., Canzian, F., Hemminki, A., Peltomäki, P., Chadwick, R.B., Kääriäinen, H., Eskelinen, M., Järvinen, H., et al.: Incidence of hereditary nonpolyposis colorectal cancer and the feasibility of molecular screening for the disease. N. Engl. J. Med. 338(21), 1481–1487 (1998)
2. Asri, H., Mousannif, H., Al Moatassime, H., Noel, T.: Using machine learning algorithms for breast cancer risk prediction and diagnosis. Procedia Comput. Sci. 83, 1064–1069 (2016)
3. Bharat, A., Pooja, N., Reddy, R.A.: Using machine learning algorithms for breast cancer risk prediction and diagnosis. In: 2018 3rd International Conference on Circuits, Control, Communication and Computing (I4C), pp. 1–4. IEEE (2018)
4. Chaurasia, V., Pal, S.: Data mining techniques: to predict and resolve breast cancer survivability. Int. J. Comput. Sci. Mob. Comput. (IJCSMC) 3(1), 10–22 (2014)
5. Chaurasia, V., Pal, S., Tiwari, B.: Prediction of benign and malignant breast cancer using data mining techniques. J. Algorithms Comput. Technol. 12(2), 119–126 (2018)
6. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)
7. Delen, D., Walker, G., Kadam, A.: Predicting breast cancer survivability: a comparison of three data mining methods. Artif. Intell. Med. 34(2), 113–127 (2005)
8. Frank, A., Asuncion, A., et al.: UCI machine learning repository (2010). http://archive.ics.uci.edu/ml
9. Gokhale, S.: Ultrasound characterization of breast masses. Indian J. Radiol. Imaging 19(3), 242 (2009)
10. Huang, M.W., Chen, C.W., Lin, W.C., Ke, S.W., Tsai, C.F.: SVM and SVM ensembles in breast cancer prediction. PLoS ONE 12(1), e0161501 (2017)
11. Inan, T.T., Samia, M.B.R., Tulin, I.T., Islam, M.N.: A decision support model to predict ICU readmission through data mining approach. In: Pacific Asia Conference on Information Systems (PACIS), p. 218 (2018)
12. Islam, M.M., Iqbal, H., Haque, M.R., Hasan, M.K.: Prediction of breast cancer using support vector machine and k-nearest neighbors. In: 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), pp. 226–229. IEEE (2017)
13. Karabatak, M., Ince, M.C.: An expert system for detection of breast cancer based on association rules and neural network. Expert Syst. Appl. 36(2), 3465–3469 (2009)
14. Khan, N.S., Muaz, M.H., Kabir, A., Islam, M.N.: Diabetes predicting mHealth application using machine learning. In: 2017 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 237–240. IEEE (2017)
15. Khan, N.S., Muaz, M.H., Kabir, A., Islam, M.N.: A machine learning-based intelligent system for predicting diabetes. Int. J. Big Data Anal. Healthcare (IJBDAH) 4(2), 1–20 (2019)
16. Khan, N.I., Mahmud, T., Islam, M.N., Mustafina, S.N.: Prediction of cesarean childbirth using ensemble machine learning methods. In: 22nd International Conference on Information Integration and Web-Based Applications and Services (iiWAS 2020) (2020)
17. Khourdifi, Y., Bahaj, M.: Applying best machine learning algorithms for breast cancer prediction and classification. In: 2018 International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS), pp. 1–5. IEEE (2018)
18. Khourdifi, Y., Bahaj, M.: Feature selection with fast correlation-based filter for breast cancer prediction and classification using machine learning algorithms. In: 2018 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), pp. 1–6. IEEE (2018)
19. Kotsiantis, S., Kanellopoulos, D., Pintelas, P.: Handling imbalanced datasets: a review. GESTS Int. Trans. Comput. Sci. Eng. 30, 25–36 (2006)
20. Li, A., Liu, L., Ullah, A., Wang, R., Ma, J., Huang, R., Yu, Z., Ning, H.: Association rule-based breast cancer prevention and control system. IEEE Trans. Comput. Soc. Syst. 6(5), 1106–1114 (2019)
21. Liu, Y.Q., Wang, C., Zhang, L.: Decision tree based predictive models for breast cancer survivability on imbalanced data. In: 2009 3rd International Conference on Bioinformatics and Biomedical Engineering, pp. 1–4. IEEE (2009)
22. Mangasarian, O.L., Musicant, D.R.: Lagrangian support vector machines. J. Mach. Learn. Res. 1, 161–177 (2001)
23. Miah, Y., Prima, C.N.E., Seema, S.J., Mahmud, M., Kaiser, M.S.: Performance comparison of machine learning techniques in identifying dementia from open access clinical datasets. In: Advances on Smart and Soft Computing, pp. 79–89. Springer (2020)
24. Omar, K.S., Mondal, P., Khan, N.S., Rizvi, M.R.K., Islam, M.N.: A machine learning approach to predict autism spectrum disorder. In: 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1–6. IEEE (2019)
25. Stark, G.F., Hart, G.R., Nartowt, B.J., Deng, J.: Predicting breast cancer risk using personal health data and machine learning models. PLoS ONE 14(12), e0226765 (2019)
558
A. I. Aishwarja et al.
Exploring the Machine Learning Algorithms to Find the Best Features for Predicting the Risk of Cardiovascular Diseases

Mostafa Mohiuddin Jalal, Zarin Tasnim, and Muhammad Nazrul Islam

Department of Information and Communication Engineering, Bangladesh University of Professionals, Dhaka, Bangladesh
[email protected]
Department of Computer Science and Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh
[email protected]
Abstract. Nowadays, cardiovascular diseases are considered one of the main and most fatal causes of mortality around the globe. The mortality or high-risk rate can be reduced if an early detection system for cardiovascular disease is introduced. Healthcare organizations collect massive amounts of data, and a proper and careful study of these data can extract important and interesting insights that may help professionals. Keeping that in mind, in this paper, six distinct machine learning algorithms (Logistic Regression, SVM, KNN, Naïve Bayes, Random Forest, Gradient Boosting) were first applied to four different datasets encompassing different sets of features to show their performance. Secondly, the prediction accuracy of the ML algorithms was analyzed to find the best set of features and the best algorithm for predicting cardiovascular diseases. The results identified the best-suited eleven features and also showed that Random Forest performs best in terms of accuracy in predicting cardiovascular diseases.

Keywords: Prediction · Machine learning · Cardiovascular disease · Classification · Healthcare · Feature identification

1 Introduction
Cardiovascular diseases have been one of the noteworthy causes of mortality all over the world. According to WHO reports, every year more than 18 million people die of cardiovascular diseases, which covers almost 31% of global deaths [23]. Damage to parts or all of the heart or coronary artery, or an inadequate supply of nutrients and oxygen to this organ, results in cardiovascular disease. Several lifestyle factors can increase the risk of heart disease, including, for example, high blood pressure and cholesterol, smoking, overweight and obesity, and diabetes.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 559–569, 2021. https://doi.org/10.1007/978-3-030-68154-8_49
560
M. M. Jalal et al.
Disease detection generally depends on the experience and expertise of doctors [24,26], though a decision support system could be a more feasible choice in diagnosing cardiovascular diseases through prediction [3]. Healthcare organizations and hospitals all around the world collect data from patients regarding various health-related issues. This collected data can be analyzed with various machine learning classification techniques to draw effective insights that are overwhelming for human minds to comprehend. Data mining applications can be used to rectify hospital errors and to support prevention, early detection of diseases, better health policy-making, and the reduction of preventable hospital deaths [10–12,19]. In the same vein, prediction of cardiovascular disease using machine learning can efficiently assist medical professionals [5,20,23].

A number of studies focusing on ML and cardiovascular diseases have considered a specific dataset (with specific features) and applied several ML algorithms to compare their performance. Again, only very few studies have considered the performance analysis of several algorithms across different datasets. Thereby, this research aims to identify the most suited data mining classification techniques and the best set of features used in classification. Six ML techniques (Logistic Regression, Support Vector Machine, K-NN, Naïve Bayes, Random Forest, and Gradient Boosting) were applied to generate prediction models on four distinct datasets, and the accuracy of the different ML algorithms was compared on the individual datasets.

There are five distinct sections in this paper: Sect. 2 highlights previous works focusing on cardiovascular disease prediction using machine learning. Section 3 illustrates the details of the data, the ML algorithms, and the performance evaluation of the models in disease prediction. Section 4 explains the results and the overall performance of all the classification algorithms in terms of accuracy, precision, sensitivity, and specificity. Section 5 concludes by highlighting the main outcomes, limitations, and future work.
2 Literature Review
Several studies have been conducted using the UCI machine learning repository heart disease dataset to classify the presence of cardiovascular disease in the human body. This multivariate dataset involves 303 instances along with 75 features. For example, Rajyalakshmi et al. [22] explored different machine learning algorithms including Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), and a hybrid model named HRFLM that combines Random Forest and a Linear method for predicting cardiovascular diseases. The hybrid model showed the best accuracy among the implemented algorithms. Similarly, Sen et al. [25] implemented the Naïve Bayes, SVM, Decision Tree, and KNN algorithms and found that SVM shows the best accuracy (83%). Again, Dinesh et al. [6] predicted the possibility of cardiovascular diseases using six different algorithms, including Logistic Regression, Random Forest, SVM, Gradient Boosting, and an ensemble model, with Logistic Regression showing the best accuracy (91%). In another study, Maji et al. [17] proposed the use of hybridization
techniques to reduce the test cases when predicting the outcome using ANN, C4.5, and hybrid Decision Tree algorithms. The results also showed that the hybrid Decision Tree performed better in terms of accuracy.

Prediction of cardiovascular diseases has also been conducted on other datasets to bring out the best accuracy, such as the UCI Statlog dataset, having 270 instances along with 13 features. Dwivedi et al. [7] proposed an automatic medicinal system using advanced data mining techniques like the multilayer perceptron model and applied several classification techniques like Naïve Bayes, SVM, Logistic Regression, and KNN. Another study, conducted by Georga et al. [9], explored AI methods (Random Forest, Logistic Regression, FRS, GAM, and GBT algorithms) to find the most effective predictors of Coronary Artery Disease (CAD). In that study, a more coherent and clustered dataset along with hybridization were thought to be incorporated to find the best result.

UCI Cleveland data has also been used in many studies to classify and predict the presence of cardiovascular disease in the human body. For example, Pouriyeh et al. [21] investigated and compared the accuracy of different classification algorithms including Decision Tree, Naïve Bayes, SVM, MLP, KNN, Single Conjunctive Rule Learner, and Radial Basis Function. Here, the hybrid model of SVM and MLP produced the best result. Latha [16] suggested a comparative analytical approach to determine the performance accuracy of ensemble techniques and found that ensembling was a good strategy to improve accuracy. Again, Amin et al. [2] identified significant features and improved the prediction accuracy by using different algorithms including KNN, Decision Tree, Naïve Bayes, Logistic Regression, SVM, Neural Network, and an ensemble model that combines Naïve Bayes and Logistic Regression.
That study used the 9 most impactful features to compute the prediction, and Naïve Bayes gave the best accuracy. Alaa et al. [1] used UK Biobank data to develop an algorithmic tool, based on AutoPrognosis, that automatically selects features and tunes ensemble models. Five ML algorithms were used in that study, namely SVM, Random Forest, Neural Network, AdaBoost, and Gradient Boosting. AutoPrognosis showed a higher AUC-ROC compared to all the other standard ML models.

In sum, the literature review showed that most of the studies focusing on cardiovascular diseases and ML were striving to find the algorithm that shows the best performance in predicting the possibility of cardiovascular diseases. A summary of the literature review is shown in Table 1. Again, although different studies have been conducted using different datasets with different numbers of features, no study has explored how the performance accuracy of different algorithms varies with the different sets of features used in different datasets. Thus, this study focuses on this issue.
3 Methodology

This section describes the overall working procedure used to obtain the research objectives. The overall methodology of this study is shown in Fig. 1.
Table 1. Summary of related studies

| Ref | No. of features | Objective | ML technique | Accuracy | Specificity | Sensitivity | Precision |
|---|---|---|---|---|---|---|---|
| [22] | 13 | Predicting whether a person has heart disease or not by applying several ML algorithms and providing diagnosis | Logistic Regression | 87.00% | | | |
| | | | Random Forest | 81.00% | | | |
| | | | Naive Bayes | 84.00% | | | |
| | | | Gradient Boosting | 84.00% | | | |
| | | | SVM | 78.00% | | | |
| [23] | 13 | Presenting a survey of various models based on algorithm and their performance | Naive Bayes | 84.16% | | | |
| | | | SVM | 85.77% | | | |
| | | | KNN | 83.16% | | | |
| | | | Decision Tree | 77.55% | | | |
| | | | Random Forest | 91.60% | | | |
| [7] | 13 | Evaluating six potential ML algorithms based on eight performance indices and finding the best algorithm for prediction | ANN | 84% | 79% | 87% | 84% |
| | | | SVM | 82% | 89% | 77% | 90% |
| | | | Logistic Regression | 85% | 81% | 89% | 85% |
| | | | KNN | 80% | 76% | 84% | 81% |
| | | | Classification Tree | 77% | 73% | 79% | 79% |
| | | | Naive Bayes | 83% | 80% | 85% | 84% |
| [20] | 13 | Comparing different algorithms of decision tree classification in heart disease diagnosis | J48 with Reduced Error Pruning | 57% | | | |
| | | | Logistic model tree | 56% | | | |
| | | | Random forest | – | | | |
| [21] | 14 | Applying traditional ML algorithms and ensemble models to find out the best classifier for disease prediction | Decision Tree | 78% | 83% | 77% | |
| | | | Naive Bayes | 83% | 87% | 84% | |
| | | | KNN, k = 1 | 76% | 78% | 78% | |
| | | | KNN, k = 3 | 81% | 84% | 82% | |
| | | | KNN, k = 9 | 83% | 84% | 85% | |
| | | | KNN, k = 15 | 83% | 84% | 85% | |
| | | | MLP | 83% | 82% | 82% | |
| | | | Radial basis function | 84% | 86% | 85% | |
| | | | Single conjunctive rule learner | 70% | 70% | 73% | |
| | | | SVM | 84% | 90% | 83% | |
| [2] | 14 | Identifying significant features and mining techniques to improve CVD prediction | SVM | 85% | | | |
| | | | Vote | 86% | | | |
| | | | Naive Bayes | 86% | | | |
| | | | Logistic Regression | 86% | | | |
| | | | NN | 85% | | | |
| | | | KNN | 83% | | | |
| | | | Decision Tree | 83% | | | |
| [6] | 13 | Finding significant features and introducing several combinations with ML techniques | KNN | 72% | | | |
| | | | SVM | 77% | | | |
| | | | Logistic Regression | 59% | | | |
| | | | Naive Bayes | 70% | | | |
| | | | Random Forest | 74% | | | |
| [25] | 14 | Comparing performance of various ML algorithms and predicting CVD | Naive Bayes | 83% | | | |
| | | | SVM | 84% | | | |
| | | | Decision Tree | 76% | | | |
| | | | KNN | 76% | | | |
| [17] | 13 | Proposing hybridization technique and validating using several performance measures to predict CVD | ANN | 77% | | | |
| | | | C4.5 | 77% | | | |
| | | | Hybrid-DT | 78% | | | |
Fig. 1. The overview of research methodology
3.1 Data Acquisition

At first, we acquired data from various sources, namely the UCI machine learning repository [8] and Kaggle. Four different datasets were used in this research, having different numbers of instances and dissimilar features. As Dataset 1, we used the “Kaggle Cardiovascular Disease Dataset”, having 13 features and 70,000 instances. The “Cleveland dataset” from Kaggle, having 14 attributes and 303 instances, was used as Dataset 2. Likewise, we used the “Kaggle Integrated Heart Disease dataset” as Dataset 3, which contains 76 features and 899 instances. As Dataset 4, we used the “UCI Framingham Dataset”, having 14 features and 4,240 instances. However, we considered only 11 and 14 features for computational purposes from Dataset 3 and Dataset 4, respectively. There are 8 common features in each of the datasets.
3.2 Data Pre-processing
The performance of ML algorithms relies hugely on data pre-processing. Datasets often contain redundant and inappropriate data, which leads to inaccurate outcomes. Thus, the noise, ambiguity, and redundancy of the data need to be reduced for better classification accuracy [14,28]. The following operations were performed: (a) duplicate rows in each of the datasets were removed; (b) rows having ambiguous data were removed; (c) missing numerical values in each of the datasets were replaced with the mean value of the particular column; (d) columns having text or string values were converted to numeric data in order to apply the ML algorithms; (e) numeric values were normalized to obtain values between zero and one, since ML algorithms show better outcomes when the values of numeric columns are brought to a common scale without distorting differences in the ranges of values; (f) training and testing were performed with a random 70–30 train-test split; (g) the outcomes of ML algorithms can be biased if an equal number of class instances does not exist in the training set [13], so to eradicate this class imbalance problem, the Synthetic Minority Over-sampling Technique (SMOTE) [4] was used.

3.3 Analyzing ML Algorithms
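To make steps (c), (e), (f), and (g) concrete, here is a minimal plain-Python sketch. The column values are invented for illustration, and the one-line interpolation only stands in for the core idea of SMOTE [4], which in full picks the neighbor among the k nearest minority points; a real pipeline would use library implementations such as scikit-learn and imbalanced-learn.

```python
import random

def impute_mean(column):
    """Step (c): replace missing entries (None) with the column mean."""
    known = [v for v in column if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in column]

def min_max_normalize(column):
    """Step (e): rescale values into [0, 1] without distorting relative gaps."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def train_test_split(rows, test_ratio=0.3, seed=7):
    """Step (f): a random 70-30 split."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

def smote_point(a, b, rng):
    """Step (g), core idea of SMOTE: a synthetic minority sample on the
    segment between a minority point a and one of its minority neighbors b."""
    gap = rng.random()
    return [ai + gap * (bi - ai) for ai, bi in zip(a, b)]

chol = impute_mean([210, None, 180, 250])        # missing value -> mean of the rest
scaled = min_max_normalize(chol)                 # values now span exactly [0, 1]
train, test = train_test_split(list(range(10)))  # 7 training rows, 3 testing rows
synthetic = smote_point([1.0, 0.2], [0.0, 0.6], random.Random(42))
```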
The objectives of the analysis are to find the optimum set of features and the best algorithm for the prediction of cardiovascular diseases. For this, the machine learning algorithms most used for predicting cardiovascular disease were chosen: Logistic Regression, Support Vector Machine, K-Nearest Neighbor, Naïve Bayes, Random Forest, and Gradient Boosting. These algorithms were applied to the selected four datasets by splitting each dataset into a 70% training set and a 30% testing set.

3.3.1 Logistic Regression
Logistic Regression is one of the most widely used ML algorithms. It is used in modeling and prediction because of its low computational complexity [18]. LR is considered the standard statistical approach to modeling binary data. It is an alternative to linear regression that fits a linear model for each class and predicts unseen instances based on a majority vote of the models [15]. Some interesting findings can be observed by using the Logistic Regression method as the prediction model on the predefined classes of features; the results are shown in Fig. 2, where the datasets are shown on the X axis and the performance measures on the Y axis. Different datasets come with different features and outcomes. The highest accuracy, 93.8%, was attained on Dataset 3, with 93.7% precision, 100% sensitivity, and 0% specificity.
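To ground the description above, here is a from-scratch toy sketch of binary logistic regression trained by gradient descent on the log loss. The feature vectors and labels are invented; a real experiment would use an optimized implementation such as scikit-learn's LogisticRegression.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights w and bias b by gradient descent on the log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy, linearly separable data: label 1 when the (normalized) feature values are high.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]        # separates the toy set perfectly
```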
3.3.2 Support Vector Machine (SVM)
SVM is a supervised pattern classification model which is used as a training algorithm for learning classification and regression rules from gathered data. The purpose of this method is to separate the data until a hyperplane with a high minimum distance is found. SVM is used to classify two or more data types [27]. The results of applying SVM to the predefined classes of features are shown in Table 2. Here, the best performance was observed for Dataset 3, with accuracy, precision, sensitivity, and specificity of 96.39%, 93.79%, 100%, and 0%, respectively.

3.3.3 K-Nearest Neighbor
KNN is a supervised learning method which has been used, for example, for diagnosing and classifying cancer. In this method, the computer is trained in a specific field and new data is given to it; the machine then finds the K nearest neighbors of the unknown data among similar data [27]. This algorithm was applied to the different sets of features of the different datasets, and its performance was found to be less effective than that of the previous algorithms. The best performance was again observed on Dataset 3 (see Table 2): the accuracy, precision, sensitivity, and specificity were 88.37%, 92.57%, 99.67%, and 0%, respectively.

3.3.4 Naïve Bayes
Naïve Bayes refers to a probabilistic classifier that applies Bayes' theorem with strong independence assumptions. In this model, all properties are considered separately to detect any existing relationship between them.
Fig. 2. Performance of Logistic regression algorithm on different datasets
It assumes that the predictive features are conditionally independent given the class, and that the values of the numeric features are distributed within each class. NB is fast and performs well even with a small dataset [27]. The findings from this algorithm vary across datasets because of the selection of features; the results are shown in Table 2. The best accuracy was obtained on Dataset 3, with accuracy, precision, sensitivity, and specificity of 92.3%, 93.9%, 100%, and 0%, respectively.

3.3.5 Random Forest (RF)
The RF algorithm is used at the regularization point where the model quality is highest. RF builds a large number of Decision Trees using random samples with replacement to overcome the problems of single Decision Trees. RF can also be used in unsupervised mode for assessing proximities among data points [27]. Python scripts were run on the four different datasets with the predefined features; the results are shown in Table 2. The best accuracy of this algorithm was observed for Dataset 3: the accuracy, precision, sensitivity, and specificity for this particular dataset were 93.85%, 94.67%, 100%, and 37.5%, respectively.

3.3.6 Gradient Boosting
Gradient boosting is a machine learning technique for regression and classification [6]. Different datasets with different sets of features were considered when computing the performance of this algorithm; the results are shown in Table 2. The best outcome in terms of accuracy came from Dataset 3: the accuracy, precision, sensitivity, and specificity were 89.8%, 91.65%, 96.36%, and 13.7%, respectively.

Table 2. Performance of the selected ML techniques for different datasets
| Dataset | Performance measure | Logistic Regression | SVM | K-NN | Naïve Bayes | Random Forest | Gradient Boosting |
|---|---|---|---|---|---|---|---|
| Dataset 1 | Accuracy | 71.36% | 72.7% | 66.9% | 58.43% | 72.78% | 72.79% |
| | Precision | 81.09% | 70.58% | 67.37% | 55.18% | 71.94% | 72.37% |
| | Sensitivity | 61.44% | 77.09% | 75.7% | 90.08% | 81.66% | 80.58% |
| | Specificity | 71.36% | 67.82% | 56.46% | 26.76% | 61.55% | 63.55% |
| Dataset 2 | Accuracy | 86.89% | 88.52% | 86.89% | 88.52% | 88.52% | 80.33% |
| | Precision | 85.18% | 88.46% | 85.18% | 88.46% | 88.46% | 75.86% |
| | Sensitivity | 85.18% | 85.18% | 85.18% | 85.18% | 85.18% | 81.48% |
| | Specificity | 88.23% | 91.18% | 88.23% | 91.18% | 91.18% | 79.41% |
| Dataset 3 | Accuracy | 93.8% | 96.39% | 88.37% | 92.3% | 94.85% | 89.8% |
| | Precision | 93.7% | 93.79% | 92.57% | 93.9% | 94.68% | 91.65% |
| | Sensitivity | 100% | 100% | 99.67% | 100% | 100% | 96.36% |
| | Specificity | 0% | 0% | 5% | 0% | 37.5% | 13.7% |
| Dataset 4 | Accuracy | 84.46% | 84.01% | 80.69% | 83.54% | 84.54% | 85.89% |
| | Precision | 84.81% | 84.06% | 86.38% | 88.34% | 84.88% | 86.5% |
| | Sensitivity | 99.15% | 99.89% | 92.11% | 93.3% | 99.25% | 99.07% |
| | Specificity | 7.18% | 1.1% | 9.62% | 20.79% | 7.73% | 3.85% |
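The four measures reported in Table 2 all derive from the confusion matrix of a binary classifier. A minimal sketch follows; the actual/predicted labels below are invented, not drawn from any of the four datasets.

```python
def confusion_counts(actual, predicted):
    """Count true/false positives and negatives for binary labels (1 = diseased)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(actual, predicted):
    tp, tn, fp, fn = confusion_counts(actual, predicted)
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),   # of predicted positives, how many are real
        "sensitivity": tp / (tp + fn),   # recall on the diseased class
        "specificity": tn / (tn + fp),   # recall on the healthy class
    }

actual    = [1, 1, 1, 0, 0, 1, 0, 1]
predicted = [1, 1, 0, 0, 1, 1, 0, 1]
m = metrics(actual, predicted)   # accuracy 0.75, precision 0.8, sensitivity 0.8
```

A useful sanity check: 100% sensitivity together with 0% specificity, as reported for several models on Dataset 3, is exactly what these formulas produce when a classifier predicts the positive class for every instance (fn = tn = 0).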
4 Discussion
The study results highlight that different datasets, with different features, show diversity in the results. The datasets used here for computation consist of different numbers of features; as they have different features, this could be one reason why the prediction accuracy varies. Again, the different datasets show differences in accuracy, as shown in Table 2. For Datasets 1, 2, 3, and 4, comparatively better accuracy was obtained using Gradient Boosting; SVM, Naïve Bayes, and Random Forest; Random Forest; and Random Forest and Gradient Boosting, respectively. The results thus indicate that Random Forest shows the best accuracy on most of the datasets. Across all the algorithms applied to each particular dataset, the best performance was observed for Dataset 3, which has 11 features. Thus, the results indicate that for predicting cardiovascular disease in the human body, the features used in Dataset 3 are most likely the best recommended attributes. The features considered in Dataset 3 are Age, Sex, Chest Pain Type, Resting Blood Pressure, Smoking Year, Fasting Blood Sugar, Diabetes History, Family History Coronary, ECG, Pulse Rate, and Presence of Disease.
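The per-dataset comparison can be checked mechanically against Table 2. The snippet below transcribes the accuracy rows (algorithm names abbreviated) and picks the top scorer per dataset; note that several datasets have near-ties (Dataset 2 has a three-way tie at 88.52%, and `max` simply returns the first of them), which is why the text groups several comparably good algorithms per dataset.

```python
# Accuracy values (percent) transcribed from Table 2.
accuracy = {
    "Dataset 1": {"LR": 71.36, "SVM": 72.70, "KNN": 66.90, "NB": 58.43, "RF": 72.78, "GB": 72.79},
    "Dataset 2": {"LR": 86.89, "SVM": 88.52, "KNN": 86.89, "NB": 88.52, "RF": 88.52, "GB": 80.33},
    "Dataset 3": {"LR": 93.80, "SVM": 96.39, "KNN": 88.37, "NB": 92.30, "RF": 94.85, "GB": 89.80},
    "Dataset 4": {"LR": 84.46, "SVM": 84.01, "KNN": 80.69, "NB": 83.54, "RF": 84.54, "GB": 85.89},
}

# Highest-accuracy technique per dataset; ties resolve to the first key encountered.
best = {ds: max(scores, key=scores.get) for ds, scores in accuracy.items()}
```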
5 Conclusion
In this research, six different ML algorithms were implemented on four different datasets having different sets of features to predict cardiovascular disease. The results showed that the 11 features of Dataset 3 are the most efficient features, while Random Forest showed the best accuracy for most of the datasets with their different sets of features. While existing work has primarily focused on implementing several algorithms on a particular dataset and then comparing their performance, this study demonstrated a performance comparison among multiple datasets having different sets of features, along with evaluating multiple machine learning algorithms on them. One limitation of this work is that it considers only traditional and ensemble ML algorithms; hybrid and different ensemble models could be developed for further insight. In the future, an app or tool can be developed using ML algorithms to detect cardiovascular disease. Besides, the algorithms can also be applied to new datasets to validate and generalize the outcomes of this research regarding the best features for predicting cardiovascular diseases.
References

1. Alaa, A.M., Bolton, T., Di Angelantonio, E., Rudd, J.H., van der Schaar, M.: Cardiovascular disease risk prediction using automated machine learning: a prospective study of 423,604 UK Biobank participants. PLoS ONE 14(5), e0213653 (2019)
2. Amin, M.S., Chiam, Y.K., Varathan, K.D.: Identification of significant features and data mining techniques in predicting heart disease. Telematics Inform. 36, 82–93 (2019)
3. Bhatt, A., Dubey, S.K., Bhatt, A.K.: Analytical study on cardiovascular health issues prediction using decision model-based predictive analytic techniques. In: Soft Computing: Theories and Applications, pp. 289–299. Springer (2018)
4. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002)
5. Dangare, C.S., Apte, S.S.: Improved study of heart disease prediction system using data mining classification techniques. Int. J. Comput. Appl. 47(10), 44–48 (2012)
6. Dinesh, K.G., Arumugaraj, K., Santhosh, K.D., Mareeswari, V.: Prediction of cardiovascular disease using machine learning algorithms. In: 2018 International Conference on Current Trends towards Converging Technologies (ICCTCT), pp. 1–7. IEEE (2018)
7. Dwivedi, A.K.: Performance evaluation of different machine learning techniques for prediction of heart disease. Neural Comput. Appl. 29(10), 685–693 (2018)
8. Frank, A., Asuncion, A., et al.: UCI machine learning repository, 2010, vol. 15, p. 22 (2011). http://archive.ics.uci.edu/ml
9. Georga, E.I., Tachos, N.S., Sakellarios, A.I., Kigka, V.I., Exarchos, T.P., Pelosi, G., Parodi, O., Michalis, L.K., Fotiadis, D.I.: Artificial intelligence and data mining methods for cardiovascular risk prediction. In: Cardiovascular Computing–Methodologies and Clinical Applications, pp. 279–301. Springer (2019)
10. Inan, T.T., Samia, M.B.R., Tulin, I.T., Islam, M.N.: A decision support model to predict ICU readmission through data mining approach. In: PACIS, p. 218 (2018)
11. Khan, N.S., Muaz, M.H., Kabir, A., Islam, M.N.: Diabetes predicting mHealth application using machine learning. In: 2017 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 237–240. IEEE (2017)
12. Khan, N.S., Muaz, M.H., Kabir, A., Islam, M.N.: A machine learning-based intelligent system for predicting diabetes. Int. J. Big Data Anal.
Healthcare (IJBDAH) 4(2), 1–20 (2019)
13. Kotsiantis, S., Kanellopoulos, D., Pintelas, P.: Handling imbalanced datasets: a review. GESTS Int. Trans. Comput. Sci. Eng. 30, 25–36 (2006)
14. Krak, I., Barmak, O., Manziuk, E., Kulias, A.: Data classification based on the features reduction and piecewise linear separation. In: International Conference on Intelligent Computing & Optimization, pp. 282–289. Springer (2019)
15. Kumar, G.R., Ramachandra, G., Nagamani, K.: An efficient prediction of breast cancer data using data mining techniques. Int. J. Innov. Eng. Technol. (IJIET) 2(4), 139 (2013)
16. Latha, C.B.C., Jeeva, S.C.: Improving the accuracy of prediction of heart disease risk based on ensemble classification techniques. Inform. Med. Unlocked 16, 100203 (2019)
17. Maji, S., Arora, S.: Decision tree algorithms for prediction of heart disease. In: Information and Communication Technology for Competitive Strategies, pp. 447–454. Springer (2019)
18. Miah, Y., Prima, C.N.E., Seema, S.J., Mahmud, M., Kaiser, M.S.: Performance comparison of machine learning techniques in identifying dementia from open access clinical datasets. In: Advances on Smart and Soft Computing, pp. 79–89. Springer (2020)
19. Omar, K.S., Mondal, P., Khan, N.S., Rizvi, M.R.K., Islam, M.N.: A machine learning approach to predict autism spectrum disorder. In: 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1–6. IEEE (2019)
20. Patel, J., TejalUpadhyay, D., Patel, S.: Heart disease prediction using machine learning and data mining technique. Heart Disease 7(1), 129–137 (2015)
21. Pouriyeh, S., Vahid, S., Sannino, G., De Pietro, G., Arabnia, H., Gutierrez, J.: A comprehensive investigation and comparison of machine learning techniques in the domain of heart disease. In: 2017 IEEE Symposium on Computers and Communications (ISCC), pp. 204–207. IEEE (2017)
22. Rajyalakshmi, P., Reddy, G.S., Priyanka, K.G., Sai, V.L.B.S., Anveshini, D.: Prediction of cardiovascular disease using machine learning. Entropy 23, 24
23. Ramalingam, V., Dandapath, A., Raja, M.K.: Heart disease prediction using machine learning techniques: a survey. Int. J. Eng. Technol. 7(2.8), 684–687 (2018)
24. Rani, K.U.: Analysis of heart diseases dataset using neural network approach. arXiv preprint arXiv:1110.2626 (2011)
25. Sen, S.K.: Predicting and diagnosing of heart disease using machine learning algorithms. Int. J. Eng. Comput. Sci. 6(6) (2017)
26. Soni, J., Ansari, U., Sharma, D., Soni, S.: Predictive data mining for medical diagnosis: an overview of heart disease prediction. Int. J. Comput. Appl. 17(8), 43–48 (2011)
27. Tahmooresi, M., Afshar, A., Rad, B.B., Nowshath, K., Bamiah, M.: Early detection of breast cancer using machine learning techniques. J. Telecommun. Electron. Comput. Eng. (JTEC) 10(3–2), 21–27 (2018)
28. Vasant, P., Zelinka, I., Weber, G.W. (eds.): Intelligent Computing and Optimization. Springer International Publishing (2020). https://doi.org/10.1007/978-3-030-33585-4
Searching Process Using Boyer Moore Algorithm in Digital Library

Laet Laet Lin and Myat Thuzar Soe

Faculty of Computer Science, University of Information Technology, Yangon, Myanmar
{laetlaetlin,myatthuzarsoe}@uit.edu.mm
Abstract. Reading helps to learn more vocabulary and its usage, and people read books to gain knowledge. Nowadays, digital libraries are used to search for various books. If an internet connection is available, digital libraries can be accessed easily via computers, laptops, and mobile phones. Students and readers can search for desired books in a digital library by typing substrings of a book title that match parts of the original title. To make such a searching process effective, string searching algorithms can be applied, and there are many sophisticated and efficient algorithms for string searching. In this paper, the Boyer Moore (BM) string-searching algorithm is applied to search for desired books effectively in the digital library. The BM algorithm is then compared with Horspool's algorithm to demonstrate its performance efficiency for the searching process.

Keywords: Boyer Moore (BM) algorithm · Horspool's algorithm · Bad-symbol shift · Good-suffix shift · String matching · Digital library
1 Introduction

Reading helps students and readers to gain knowledge. Users can use digital libraries to obtain knowledge or information by reading educational or non-educational books. Most frequently, users find or track the desired books available in the library via computers or mobile phones. The effectiveness of a library rests on how easy it makes finding books for its users, and a good computerized library system helps users search easily and effectively. To achieve this, string searching algorithms are required. String searching, sometimes called string matching, is the act of checking the existence of a substring of m characters, called the pattern, in a string of n characters, called the text (where m ≤ n), and finding its location in that text [1–3]. String searching algorithms are basic components of existing applications for text processing, intrusion detection, information retrieval, and computational biology [4]. Better string-matching algorithms can improve these applications' efficiency, so fast string matching is an important area of research. There are several sophisticated and efficient algorithms for string searching. The most widely known of them is the BM algorithm, suggested by R. Boyer and J. Moore. The BM algorithm is among the most efficient string-matching algorithms because many character comparisons are skipped during the search. It performs the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 570–579, 2021. https://doi.org/10.1007/978-3-030-68154-8_50
pattern's character comparisons in reverse order, from right to left, and, in the case of a mismatch, it does not need to compare the entire pattern [5, 6]. This paper aims to recommend the efficient BM algorithm for the string processing of digital libraries. The remainder of this paper is organized as follows: Section 2 presents related work focusing on using the BM algorithm in various applications. Section 3 explains the background theory of the BM algorithm and Horspool's algorithm in detail. Section 4 highlights the searching processes of the BM and Horspool algorithms and a comparative analysis between them. Finally, the conclusion of this paper is given in Sect. 5.
2 Related Work The authors of [7] developed a detection method for web application vulnerabilities using the BM string-matching algorithm. Their proposed method performed well in terms of detection accuracy measured by false negatives, produced no false positives, and had low processing time. Using the BM algorithm for web-based search, the authors of [8] implemented a dictionary application for midwifery. The authors of [9] applied the BM algorithm in an Android-based baby name dictionary application. The authors of [10] proposed a new regular expression pattern matching algorithm. This new algorithm was derived from the BM algorithm and is an efficient generalized Boyer-Moore-type pattern matching algorithm for regular languages. The authors of [11] studied different kinds of string matching algorithms and observed their time and space complexities. The performance of these algorithms was tested with biological sequences such as DNA and proteins. Based on their study, the BM algorithm is extremely fast for large sequences, it avoids many needless comparisons of the pattern relative to the text, and its best-case running complexity is sub-linear. The researcher in [12] analyzed the efficiency of four string-matching algorithms for different pattern lengths and pattern placements, and showed that among the four algorithms, the Horspool algorithm, a simplified version of the BM algorithm, is the fastest regardless of pattern length and placement.
3 Background Theory In this section, the BM algorithm and Horspool’s algorithm are presented in detail.
3.1 Boyer Moore (BM) Algorithm
BM algorithm is the most well-known string-matching algorithm because of its efficient nature. Based on BM’s concept, many string-matching algorithms were developed [13, 14]. BM algorithm compares a pattern’s characters with their partners in the text
572
L. L. Lin and M. T. Soe
by moving from right to left [1, 2, 5, 15]. It determines the size of the shift by using a bad-symbol shift table and a good-suffix shift table.
3.1.1 Bad-Symbol Shift
The bad-symbol shift is guided by the character c in the text T that fails to match its partner in the pattern P. If c does not occur in P at all, P is shifted past this c. The shift size is calculated by the following equation:

txt(c) − r.   (1)

where txt(c) is an entry in the bad-symbol shift table and r is the number of matched characters. The table is indexed by the characters that can occur in the text; texts may contain punctuation marks, spaces, and other special characters. The entry txt(c) is computed by the formula:

txt(c) = m (the length of the pattern P), if the character c is not among P's first m − 1 characters;
txt(c) = the distance from the rightmost occurrence of c among P's first m − 1 characters to P's last character, otherwise.   (2)

Let us build a bad-symbol shift table for searching for the pattern PKSPRS in some text. As shown in Table 1, all entries of the table are equal to 6, except for K, P, R, and S, which are 4, 2, 1, and 3, respectively.

Table 1. Bad-symbol shift table

c      | K | P | R | S | other characters
txt(c) | 4 | 2 | 1 | 3 | 6
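As an illustration (the paper itself gives no code), formula (2) can be sketched in Python:

```python
def bad_symbol_table(pattern):
    """Bad-symbol shift table of formula (2): for each character c among
    the first m - 1 pattern characters, store the distance from its
    rightmost occurrence there to the pattern's last character; every
    other character takes the default shift m (look up with table.get(c, m))."""
    m = len(pattern)
    table = {}
    for i, c in enumerate(pattern[:-1]):
        # Later (more rightward) occurrences overwrite earlier ones, so
        # each entry ends up measuring from the rightmost occurrence.
        table[c] = m - 1 - i
    return table
```

For the pattern PKSPRS this yields txt(K) = 4, txt(P) = 2, txt(R) = 1, txt(S) = 3, and 6 for every other character, matching Table 1.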
An example of shifting the pattern using Table 1 is shown in Fig. 1. In this example, the pattern PKSPRS is searched for in some text. Before the comparison failed on the text's character Z, the last two characters had been matched. So the pattern can be moved 4 positions to the right, because the shift size is txt(Z) − 2 = 6 − 2 = 4.

Fig. 1. Shifting the pattern using the bad-symbol table
The bad-symbol shift dist1 is calculated as txt(c) − r if this value is greater than zero, and as 1 if it is less than or equal to zero. This is described by the formula:

dist1 = max{txt(c) − r, 1}.   (3)
3.1.2 Good-Suffix Shift
The good-suffix shift is directed by a match of the pattern's last r > 0 characters. The pattern's ending portion is referred to as its suffix of size r, denoted suf(r). If another occurrence of suf(r), not preceded by the same character as its rightmost occurrence, is contained in P, then P can be shifted by the distance dist2 between such a second rightmost occurrence of suf(r) and its rightmost occurrence. If P contains no other occurrence of suf(r), the longest prefix of size k < r that matches the suffix of the same size k must be found. If such a prefix exists, the shift size dist2 is the distance between this prefix and the corresponding suffix; otherwise, dist2 is set to P's length m. Table 2 shows a sample good-suffix shift table for the pattern QLVLQL.
Table 2. Good-suffix shift table

r | pattern | dist2
1 | QLVLQL  | 2
2 | QLVLQL  | 4
3 | QLVLQL  | 4
4 | QLVLQL  | 4
5 | QLVLQL  | 4
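The good-suffix rule can likewise be sketched (an illustrative implementation of the description above, not code from the paper):

```python
def good_suffix_table(pattern):
    """Good-suffix shift table dist2 for r = 1 .. m - 1: prefer another
    occurrence of suf(r) not preceded by the same character as the
    rightmost occurrence; otherwise fall back to the longest prefix
    matching a suffix of the same size, else the pattern length m."""
    m = len(pattern)
    dist2 = {}
    for r in range(1, m):
        suffix = pattern[m - r:]
        shift = None
        # Scan right to left for another occurrence of suf(r).
        for k in range(m - r - 1, -1, -1):
            if pattern[k:k + r] == suffix and \
               (k == 0 or pattern[k - 1] != pattern[m - r - 1]):
                shift = (m - r) - k
                break
        if shift is None:
            shift = m
            for l in range(r - 1, 0, -1):
                if pattern[:l] == pattern[m - l:]:
                    shift = m - l
                    break
        dist2[r] = shift
    return dist2
```

For QLVLQL this reproduces the dist2 column of Table 2.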
3.1.3 Algorithm Steps
The steps of the BM string-matching algorithm are as follows:
Step 1: Build a bad-symbol shift table, as discussed earlier, for the given pattern P and the alphabet used in both the text T and P.
Step 2: Build a good-suffix shift table, as discussed earlier, using P.
Step 3: Align P against the beginning of T.
Step 4: Repeat the following step until either a matched substring is found or P passes the final character of T. Beginning with P's last character, compare the corresponding characters of P and T until either all m pairs of characters match or a mismatching character pair is found after r ≥ 0 pairs of characters have matched. In the mismatch situation, fetch txt(c) from the bad-symbol shift table, where c is T's mismatched character. When r > 0, additionally fetch the
corresponding dist2 from the good-suffix shift table. Move P to the right by the number of positions calculated by the formula:

dist = dist1, if r = 0;
dist = max{dist1, dist2}, if r > 0;
where dist1 = max{txt(c) − r, 1}.   (4)

3.2 Horspool's Algorithm
Horspool's algorithm for string matching is a simplified version of the BM algorithm. Both algorithms use the same bad-symbol shift table; the BM algorithm also uses the good-suffix shift table [2]. The steps of this algorithm are as follows:
Step 1: Build a bad-symbol shift table as in the BM algorithm.
Step 2: Align the pattern P against the beginning of the text T.
Step 3: Repeat the following step until either a matching substring is found or P passes the final character of the text. Starting with P's final character, compare the corresponding characters of P and T until either all m characters match or a mismatching pair is found. In the mismatch case, fetch the entry txt(c) of the shift table, where c is the character of T currently aligned against P's final character, and move P by txt(c) characters to the right along the text.
Of these two algorithms, Horspool's has a worst-case time complexity in O(nm), but it is not necessarily less efficient than the BM algorithm on random text. The worst-case time complexity of the BM algorithm, on the other hand, is linear if only the very first occurrence of the pattern is searched for [2]. The BM algorithm takes O(m) comparisons when the pattern string is absent from the text string. The best-case time efficiency of the BM algorithm is O(n/m) [16, 17].
4 Discussion and Result
In this section, the searching process of the BM algorithm is illustrated by comparing it with Horspool's algorithm, its simplified version. Consider the problem of searching for a desired book in a digital library. A book title is represented by a text that comprises English letters and spaces, and the book title or a segment of the book title is the pattern.
Text: VIP_AZWIN_ZAWZAZA
Pattern: ZAWZAZ
Table 3. Bad-symbol shift table for the above sample

c      | A | W | Z | other characters
txt(c) | 1 | 3 | 2 | 6
Table 4. Good-suffix shift table for the above sample

r | pattern | dist2
1 | ZAWZAZ  | 2
2 | ZAWZAZ  | 5
3 | ZAWZAZ  | 5
4 | ZAWZAZ  | 5
5 | ZAWZAZ  | 5

Fig. 2. First step
4.1 Searching Process with the BM Algorithm
First, construct the bad-symbol shift and good-suffix shift tables. The bad-symbol shift table used to find the dist1 value and the good-suffix shift table with the dist2 values are shown in Table 3 and Table 4, respectively.
Fig. 3. Second step
As shown in Fig. 2, the pattern string is first aligned with the starting characters of the text string. As shown in Fig. 3, after two pairs of characters match, the pattern's character 'Z' fails to match its partner '_' in the text. So the algorithm retrieves txt(_) = 6 from the bad-symbol table to compute dist1 = txt(_) − 2 = 6 − 2 = 4 and also retrieves dist2 = 5 from the good-suffix shift table. The pattern is then moved to the right by max{dist1, dist2} = max{4, 5} = 5.
Fig. 4. Third step
As shown in Fig. 4, after one pair of Z's matches and the next comparison fails on the text's space character, the algorithm fetches txt(_) = 6 from the bad-symbol table to compute dist1 = 6 − 1 = 5 and also fetches dist2 = 2 from the good-suffix shift table. The pattern is then moved to the right by max{dist1, dist2} = max{5, 2} = 5. Lastly, after all the pattern's characters match their partners in the text, a matching substring is found in the text string. Here, the total number of character comparisons is 11.
Fig. 5. First step
Fig. 6. Second step
4.2 Searching Process with Horspool's Algorithm
For searching for the pattern ZAWZAZ in a given text comprising English letters and spaces, the shift table to be constructed is the same as shown in Table 3. As shown in Fig. 5, the pattern is first aligned with the beginning of the text and the characters are compared from right to left. A mismatch occurs after comparing the character 'Z' in the pattern with the character '_' in the text. The algorithm fetches txt(Z) = 2 from the bad-symbol shift table and shifts the pattern 2 positions to the right along the text. In the next step, as shown in Fig. 6, after the last 'Z' of the pattern fails to match its partner 'I' in the text, the algorithm fetches txt(I) = 6 from the bad-symbol shift table and shifts the pattern 6 positions to the right along the text.
Fig. 7. Third step
In the next step, as shown in Fig. 7, after the second comparison fails on the character 'W' in the text, the algorithm fetches txt(Z) = 2 from the bad-symbol shift table and shifts the pattern 2 positions to the right along the text. Finally, after all the pattern's characters match their partners in the text, a matched substring is found in the text. Here, the total number of character comparisons is 12.
4.3 Comparison Between BM and Horspool Algorithms
In this section, the BM algorithm is compared with its simplified version, Horspool's algorithm, based on the number of character comparisons. The two algorithms were implemented in the Java language and compared by searching for patterns of different sizes (3, 5, 7, 9, 11, 13, and 15) in a text string of 947 characters. The total number of character comparisons of each algorithm for the different pattern lengths is shown as a graph in Fig. 8. The number of comparisons for both algorithms can vary depending on the pattern string.
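The walkthroughs of Sects. 4.1 and 4.2 can be reproduced with a small instrumented sketch of both algorithms (an illustration under the shift rules of Sect. 3; the paper's own implementation is in Java):

```python
def bad_symbol_table(p):
    # txt(c): distance from the rightmost occurrence of c among the first
    # m - 1 pattern characters to the last character; default shift is m.
    return {c: len(p) - 1 - i for i, c in enumerate(p[:-1])}

def good_suffix_table(p):
    m, d2 = len(p), {}
    for r in range(1, m):
        suf, shift = p[m - r:], None
        for k in range(m - r - 1, -1, -1):   # another occurrence of suf(r)
            if p[k:k + r] == suf and (k == 0 or p[k - 1] != p[m - r - 1]):
                shift = (m - r) - k
                break
        if shift is None:                    # longest matching prefix, else m
            shift = next((m - l for l in range(r - 1, 0, -1)
                          if p[:l] == p[m - l:]), m)
        d2[r] = shift
    return d2

def bm_count(text, p):
    """Boyer-Moore search; returns (match index or -1, total comparisons)."""
    m = len(p)
    if m == 0 or m > len(text):
        return -1, 0
    t1, d2 = bad_symbol_table(p), good_suffix_table(p)
    i, comparisons = m - 1, 0
    while i < len(text):
        r = 0
        while r < m:
            comparisons += 1
            if p[m - 1 - r] != text[i - r]:
                break
            r += 1
        if r == m:
            return i - m + 1, comparisons
        d1 = max(t1.get(text[i - r], m) - r, 1)   # formula (3)
        i += d1 if r == 0 else max(d1, d2[r])     # formula (4)
    return -1, comparisons

def horspool_count(text, p):
    """Horspool search; returns (match index or -1, total comparisons)."""
    m = len(p)
    if m == 0 or m > len(text):
        return -1, 0
    t1 = bad_symbol_table(p)
    i, comparisons = m - 1, 0
    while i < len(text):
        r = 0
        while r < m:
            comparisons += 1
            if p[m - 1 - r] != text[i - r]:
                break
            r += 1
        if r == m:
            return i - m + 1, comparisons
        i += t1.get(text[i], m)   # shift by the char under the pattern's end
    return -1, comparisons
```

On the Sect. 4 sample, bm_count("VIP_AZWIN_ZAWZAZA", "ZAWZAZ") returns (10, 11) and horspool_count returns (10, 12), matching the comparison counts reported in the walkthroughs.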
Fig. 8. Comparison between BM and Horspool algorithms based on the number of character comparisons
According to the results in Fig. 8, the BM algorithm requires fewer character comparisons than Horspool's algorithm. So, if the BM algorithm is used in the searching process of a digital library system, the performance of the library system will be improved. A further experiment on the searching time and the accuracy of the searching process in the digital library will be carried out.
5 Conclusion In text processing, finding a specific string in the least time is a basic requirement, and string-searching algorithms play a critical role in this. Many other string-searching algorithms use the basic concepts of the BM algorithm because of its low time complexity. In this paper, the BM algorithm is used for the process of finding desired books in a digital library. The result, based on the number of comparisons, also shows that the BM algorithm is more efficient than Horspool's algorithm. Therefore, if text processing applications such as a digital library system use the BM algorithm, searching will be fast and their performance will be significantly improved. Acknowledgments. We are grateful to the advisors from the University of Information Technology, who gave us valuable remarks and suggestions throughout this project.
References
1. Rawan, A.A.: An algorithm for string searching. In: International Journal of Computer Applications, vol. 177, no. 10, pp. 0975–8887 (2019)
2. Anany, L.: Introduction to the Design and Analysis of Algorithms. Villanova University, Philadelphia (2012)
3. Robert, S., Kevin, W.: Algorithms. Princeton University, Princeton (2011)
4. Bi, K., Gu, N.J., Tu, K., Liu, X.H., Liu, G.: A practical distributed string-matching algorithm architecture and implementation. In: Proceedings of World Academy of Science, Engineering and Technology, vol. 1, no. 10, pp. 3261–3265 (2007)
5. Robert, S.B., Strother, M.J.: A fast string searching algorithm. Assoc. Comput. Mach. 20(10), 762–772 (1977)
6. Abdulellah, A.A., Abdullah, H.A., Abdulatif, M.A.: Analysis of parallel Boyer-Moore string search algorithm. Glob. J. Comput. Sci. Technol. Hardware Comput. 13, 43–47 (2013)
7. Ain, Z.M.S., Nur, A.R., Alya, G.B., Kamarularifin, A.J., Fakariah, H.M.A., Teh, F.A.R.: A method for web application vulnerabilities detection by using Boyer-Moore string matching algorithm. In: 3rd Information Systems International Conference, vol. 72, pp. 112–121 (2015)
8. Rizky, I.D., Anif, H.S., Arini, A.: Implementasi Algoritma Boyer Moore Pada Aplikasi Kamus Istilah Kebidanan Berbasis Web. Query: J. Inf. Syst. 2, 53–62 (2018)
9. Ayu, P.S., Mesran, M.: Implementasi algoritma boyer moore pada aplikasi kamus nama bayi beserta maknanya berbasis android. Pelita Informatika: Informasi dan Informatika 17, 97–101 (2018)
10. Bruce, W.W., Richard, E.W.: A Boyer-Moore-style algorithm for regular expression pattern matching. Sci. Comput. Program. 48, 99–117 (2003)
11. Pandiselvam, P., Marimuthu, T., Lawrance, R.: A comparative study on string matching algorithms of biological sequences. In: International Conference on Intelligent Computing (2014)
12. DU, V.: A comparative analysis of various string-matching algorithms. In: 8th International Research Conference, KDU (2015)
13. Robbi, R., Ansari, S.A., Ayu, P.A., Dicky, N.: Visual approach of searching process using Boyer-Moore algorithm. In: Journal of Physics, vol. 930 (2017)
14. Mulyati, I.A.: Searching process using Boyer Moore algorithm in medical information media. In: International Journal of Recent Technology and Engineering (IJRTE), vol. 8 (2019)
15. Michael, T.G., Roberto, T.: Algorithm Design and Applications. John Wiley and Sons, Hoboken (2015)
16. Abd, M.A., Zeki, A., Zamani, M., Chuprat, S., El-Qawasmeh, E. (eds.): Informatics Engineering and Information Science. New York (2011)
17. Yi, C.L.: A survey of software-based string matching algorithms for forensic analysis. In: Annual ADFSL Conference on Digital Forensics, Security and Law (2015)
Application of Machine Learning and Artificial Intelligence Technology
Gender Classification from Inertial Sensor-Based Gait Dataset Refat Khan Pathan1 , Mohammad Amaz Uddin1 , Nazmun Nahar1 , Ferdous Ara1 , Mohammad Shahadat Hossain2(&) , and Karl Andersson3 1
BGC Trust University Bangladesh Bidyanagar, Chandanaish, Bangladesh [email protected], [email protected], [email protected], [email protected] 2 University of Chittagong, Chittagong 4331, Bangladesh [email protected] 3 Lulea University of Technology, 931 87 Skellefteå, Sweden [email protected]
Abstract. The identification of people's gender and activities by means of gait knowledge is becoming important in everyday applications such as security, safety, entertainment, and billing. Many technologies can be used to monitor people's gender and activities, but existing solutions are limited by privacy concerns, implementation costs, and the accuracy they achieve. For instance, CCTV or Kinect sensor technology violates people's privacy, since most people do not want photos or videos of themselves taken during their daily activities. A recent addition to the gait analysis field is the inertial sensor-based gait dataset. Therefore, in this paper, we classify people's gender from an inertial sensor-based gait dataset collected at Osaka University. Four machine learning algorithms, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Bagging, and Boosting, have been applied to identify people's gender. Further, we have extracted 104 useful features from the raw data. After feature selection, the experimental outcome shows that the accuracy of gender identification via Bagging stands at around 87.858%, while it is about 86.09% via SVM. This will in turn form the basis for supporting human wellbeing by using gait knowledge.
Keywords: Gait · Inertial sensor · Gender classification · Bagging · Boosting
1 Introduction Gender is one of the most understandable and straightforward pieces of human information, yet it opens the door to facts used in various practical applications. In the process of gender determination, an individual's gender is determined by assessing the diverse particularities of femaleness and maleness [1]. Automatic human gender categorization is an interesting subject in pattern recognition, since gender carries very important and rich knowledge about the social activities of individuals © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 583–596, 2021. https://doi.org/10.1007/978-3-030-68154-8_51
584
R. K. Pathan et al.
[2]. In particular, information on gender can be employed by professional and intelligent frameworks that are part of applications for health services, smart spaces, and biometric entrance control. In recent years, the identification of demographic characteristics of people, including age, sex, and ethnicity, using computer vision has received growing attention. It would be beneficial if a computer system or device could properly identify a given person, and a great number of potential areas have been identified where gender identity is very important. Although a person can easily differentiate between male and female, it remains a challenge for computer vision techniques. Many psychological and medical tests [3, 4] showed that gait characteristics are able to reveal people's gender. Several biometrics have been developed for human identification and authentication by checking the face, fingerprint, palm print, iris, gait, or a combination of these characteristics [5–7]. Human gait is widely used in surveillance, because gait characteristics reflect how a person walks and indicate physical ability, and it is hard to mimic the gait of others. The development of sensing technologies and sensor signal processing techniques paved the way for the use of sensors to identify gender. An Inertial Measurement Unit (IMU), which contains a 3D accelerometer, a 3D gyroscope, and a magnetometer, has been utilized to assess physical movement by differentiating human activity [8]. This paper deals with the issue of human gender recognition using a gait dataset. Inertial sensor-based gait data is utilized for gender prediction. The Institute of Scientific and Industrial Research (OU-ISIR) of Osaka University created the gait inertial sensor data [9]. In the field of gait analysis, the inertial sensor-based gait dataset represents a relatively new addition.
Therefore, most investigational work that applies machine learning algorithms to gait datasets is founded on images, and the majority of walking-step and gait-manner datasets were assessed for gait identification. A few research works on personal verification with sensor-based inertial gait datasets have been conducted. The reason is that personal verification consists of a number of different attributes, of which gender is very hard to predict. The goal of this work is to develop a way to effectively identify gender from inertial sensor-based gait data. In this paper, we have studied several supervised machine learning models for gender classification. For classification, we first extracted statistical features. During the feature extraction process, a primary collection of raw variables is compressed into more useful features, while the feature data still describes the original dataset thoroughly and accurately. We extracted 104 features from the original dataset, but not all of these features are necessary for the classification process. For that reason, we apply a feature selection technique known as NCA (Neighborhood Component Analysis). In this paper, we also compare the classification process before and after feature selection.
The remainder of the paper is organized as follows. Related work is discussed in section two. The methodology of the study is described in section three. Section four discusses the results of the experiment. The final section presents the conclusion and future work.
2 Related Work Many techniques and methods have been used to determine a person's gender from gait data. Kanij Mehtanin Khabir et al. [10] explored twelve types of time-domain features: average, median, maximum, minimum, variance, root mean square, mean absolute deviation, standard error of the mean, standard deviation, skewness, kurtosis, and vector sum from an inertial sensor dataset. They computed 88 features suitable for classification and regression problems. SVM provided the highest accuracy among the classifiers, but the proposed model has some overfitting problems, because the accuracy on the training set is much higher than on the test set. Statistical features such as global minimum, global maximum, step duration, step length, mean, root mean square, standard deviation, entropy, energy, and amplitude from different components of accelerations and angular velocities were used in [11]. They estimated 50 features in total for every single step; the variety of features was too small, so that experiment only works as a proof of concept for now. Makihara et al. [12] describe gender identification by video-based gait feature analysis with the help of a multi-view gait database, applying deep machine learning to predict gender from their own multi-view gait database. Tim Van hamme et al. [13] explored the best solution for extracting gender information from IMU sensor-based gait traces. They compared distinctive feature engineering and machine learning algorithms, including both conventional and deep machine learning techniques. ThanhTrung Ngo et al. [14] organized a challenging competition on gender detection using the OU-ISIR inertial sensor dataset. Several processing and feature extraction steps were carried out with deep learning methods, conventional classification methods, and sensor orientation handling methods; the number of features was not enough to build a model usable in real time.
Ankita Jain et al. [15] used accelerometer and gyroscope sensor readings for gender identification on smartphones, combining the data collected from both sensors to improve experimental performance; the Bagging classifier gave the best accuracy in their experiment. Rosa Andrie Asmara et al. [17] used Gait Energy Image (GEI) and Gait Information Image (GII) processes for gender recognition. The GII method performed better than GEI using SVM. The accuracy of those works is low because of the shortage of features.
Jang-Hee Yoo et al. [16] used a sequential set of 2D stick figures to characterize the gait signature, with each gait signature discriminated by the 2D sticks and the joint angles of the hip, knee, and ankle. A support vector machine (SVM) was used for gender recognition in this method. Other researchers extracted 2D and 3D gait features based on silhouette and shape descriptors and combined them for gender classification [18]; the combined feature gives higher accuracy on the DGait dataset, using a kernel SVM. In [13, 14], deep and shallow architectures were used for gender classification; deep learning consists of several representational levels, while a shallow model has few levels. In [15], behavioral biometric gait information was used for gender classification on smartphones, with data collected from 42 subjects. In [16], motion-based features and joint angle features were extracted from gait data, but the dataset is limited to medical purposes. In [17], the data was collected from only 20 subjects, which is why the accuracy is low. In [18], gait data was collected from 53 subjects walking in different directions. These datasets clearly show distortion of the age and gender ratios.
3 Dataset Description Osaka University has developed the OU-ISIR inertial gait dataset, which is relatively well developed and is the biggest inertial sensor-based gait database [19]. The dataset was captured with three IMU (inertial measurement unit) sensors, known as IMUZ, each comprising a triaxial accelerometer and a triaxial gyroscope. One sensor was placed on the left, one on the right, and one at the center-back of a belt, mounted with various orientations (90° for center-left and center-right, and 180° for the left-right pair). Gait data was collected over five days from 745 visitors, each of whom entered and departed the designated data capture tool only once. The dataset includes an equal number of each gender (384 males and 384 females). From each IMUZ sensor, triaxial accelerometer and triaxial gyroscope signal sequences are captured, so 6D data is collected per signal. Data for five activities was collected: slope-up walk, slope-down walk, level walk, step-up walk, and step-down walk. For each subject, data was extracted for only the level walk, slope-down walk, and slope-up walk. The data has four labels, namely ID, age, gender, and activity. Figure 1 shows an example of signal sequences for accelerometer and gyroscope data.
Fig. 1. Example of signals for gyroscope and accelerometer data
4 Methodology This section presents the proposed methodology framework, consisting of data collection, data pre-processing, feature extraction, and machine learning classifiers for gender classification, as illustrated in Fig. 2. The dataset collected with 3-axis accelerometer and gyroscope sensors at Osaka University was considered in this research. The dataset has been preprocessed to extract features, which have been divided into training and testing datasets. The training dataset was used to train the machine learning classifiers. Below is a description of each component of the proposed methodology shown in Fig. 2.
Fig. 2. Graphical illustration of proposed methodology
4.1 Feature Extraction
We have obtained important features that are given as classification input; feature extraction is the main part of the classification. The walking patterns of men and women are biologically different. We take advantage of statistical and energy motion features, since we are trying to classify patterns on different surfaces such as level ground and stairs. Therefore, to obtain a precise representation of the walking pattern for gender classification, we have computed both time-domain and frequency-domain features for the 6D components. The time-domain features include maximum, minimum, mean, median, mean absolute deviation, skewness, kurtosis, variance, standard error of the mean, standard deviation, root mean square, entropy, and the vector sums of these quantities; the frequency-domain features include energy, magnitude, and the vector sums of energy and magnitude. The Fast Fourier Transform (FFT) has been used to compute the frequency-domain features. The total number of features is 104. Table 1 shows the names of the time-domain and frequency-domain features.
Table 1. Features for gender classification

Domain    | Sensor type              | Axis    | Feature name
Time      | Accelerometer, Gyroscope | x, y, z | Mean, Median, Minimum, Maximum, Standard deviation, Mean absolute deviation, Standard error of mean, Skewness, Kurtosis, Variance, Root mean square, Entropy, Vector sum of mean, Vector sum of median, Vector sum of maximum, Vector sum of minimum, Vector sum of standard deviation, Vector sum of standard error of mean, Vector sum of mean absolute deviation, Vector sum of skewness, Vector sum of kurtosis, Vector sum of variance, Vector sum of root mean square, Vector sum of entropy
Frequency | Accelerometer, Gyroscope | x, y, z | Energy, Magnitude, Vector sum of energy, Vector sum of magnitude
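A few of the listed features can be sketched in Python (an illustrative subset; a naive DFT stands in for the FFT here, and the paper's pipeline extracts all 104 features over the 6 axes):

```python
import cmath
import math
import statistics as st

def time_domain_features(x):
    """A subset of the time-domain features listed in Table 1 for one axis."""
    n = len(x)
    mean = st.mean(x)
    sd = st.pstdev(x)  # population standard deviation
    return {
        "mean": mean,
        "median": st.median(x),
        "min": min(x),
        "max": max(x),
        "std": sd,
        "mad": st.mean(abs(v - mean) for v in x),     # mean absolute deviation
        "rms": math.sqrt(sum(v * v for v in x) / n),  # root mean square
        "skewness": sum((v - mean) ** 3 for v in x) / n / sd ** 3 if sd else 0.0,
        "kurtosis": sum((v - mean) ** 4 for v in x) / n / sd ** 4 if sd else 0.0,
    }

def frequency_energy(x):
    """Frequency-domain energy: mean squared magnitude of the (naive) DFT.
    By Parseval's theorem this equals the sum of squares of the samples."""
    n = len(x)
    spectrum = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    return sum(abs(X) ** 2 for X in spectrum) / n
```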
Table 2 shows some of the feature values extracted from the gait data.

Table 2. Some feature values after feature extraction

ax-mean | ax-median | ax-max | ax-min | ax-mad | ax-skew | ax-kurtosis
−0.041  | 0.0093    | 1.514  | −1.719 | 0.296  | −0.290  | 4.483
−0.045  | −0.057    | 1.998  | −1.207 | 0.296  | 0.568   | 5.729
−0.021  | −0.003    | 1.264  | −0.882 | 0.234  | 0.261   | 4.093
−0.020  | 0.039     | 1.381  | −1.249 | 0.292  | −0.213  | 3.342
−0.017  | −0.034    | 2.599  | −1.485 | 0.303  | 1.425   | 10.554
−0.020  | −0.004    | 1.997  | −0.846 | 0.264  | 0.902   | 7.443
−0.019  | −0.024    | 0.598  | −0.567 | 0.162  | 0.228   | 2.844
0.002   | 0.017     | 0.587  | −0.621 | 0.179  | −0.102  | 2.945

4.2 Feature Selection
All the features are not need to classify gender. Increasing features includes multiple dimensions and therefore this can lead to an overfitting problem. A smart feature selection methodology has been implemented to find the relevant features to eliminate the overfitting problem caused by unnecessary features. It is fundamental for the learning algorithm to concentrate on the applicable subset of features and ignore the remainder of the highlights. The specific learning algorithm takes a shot at the training set to choose the right subset of the extracted feature that can be applied to test set. Neighborhood component Analysis (NCA) is one of the popular techniques that are used for feature selection process. NCA is a non-parametric approach to identify features with view to optimizing regression and classification algorithm predictability. NCA uses a quadratic distance calculation of k-nearest neighbor (KNN) supervised classification algorithm by reducing Leave-one-leave out (LOO) error. Quadratic distance metrics can be represented by using symmetric positive and semi- define metrics. A linear transformation of the input feature, denoted by matrix A can result in higher KNN classification performance. Let Q ¼ AT A is matric, two points x1 and x2’s distance can be calculated by using the following terms. d ðx1 ; x2 Þ ¼ ðx1 x2 ÞT Qðx1 x2 Þ ¼ ðAx1 Ax2 ÞT ðAx1 Ax2 Þ To prevent a discontinuity of the LOO classification error, the soft cost function of the neighbor assignments in the transformed space can be used. In the transformed space, the probability pij that point j is as the nearest point I can be described as 2 exp Axi Axj pij ¼ P Axi Axj 2 exp k6¼j
590
R. K. Pathan et al.
where \( p_{ii} = 0 \). The transformation matrix \( A \) is obtained by maximizing the expected number of correctly classified points:

\[ A = \arg\max_A \; \sum_i \sum_{j \in C_i} p_{ij} - \lambda \lVert A \rVert_F^2 \]
where the regularization parameter \( \lambda \) trades off maximizing the NCA probability against minimizing the Frobenius norm. The resulting optimization problem can be solved with the conjugate gradient method. If A is confined to a diagonal matrix, the diagonal entries represent the weight of every input feature; feature selection can therefore be performed on the basis of the magnitudes of these weights. The MATLAB Statistics and Machine Learning Toolbox function fscnca is used for the NCA feature selection process. After applying the NCA feature selection method, the number of features is reduced to 84.
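As a rough illustration of the quantities above (not the MATLAB fscnca implementation the authors used), the soft neighbor probabilities \( p_{ij} \) and the regularized NCA objective can be sketched in NumPy:

```python
import numpy as np

def nca_objective(A, X, y, lam=0.0):
    """NCA objective: expected number of correctly classified points
    minus a Frobenius-norm penalty lam * ||A||_F^2."""
    Z = X @ A.T                                          # transformed points
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
    np.fill_diagonal(d2, np.inf)                         # enforces p_ii = 0
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)                    # soft probabilities p_ij
    same_class = (y[:, None] == y[None, :])              # indicator of j in C_i
    return (P * same_class).sum() - lam * (A ** 2).sum()
```

With A restricted to a diagonal matrix, the learned diagonal entries play the role of per-feature weights, which is the variant exploited for feature selection.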
4.3 Classification of Gender
The classification problem consists of two classes: male and female. The main goal of our research is to classify human gender with high accuracy. For this purpose, we have used four classification algorithms, namely KNN (k-nearest neighbor), SVM (support vector machine) [21, 22], Bagging [24], and Boosting [25, 26].
4.4 Model Training
We have divided the entire dataset into two sections, for training and evaluation. We separated 30% of the data from the dataset before preprocessing and treated it as the test data; the remaining 70% was used for training. There are two contradictory factors when dividing a dataset: with limited training data, the parameter estimates have greater variance, and with limited test data, the performance figures have greater variance. The data should therefore be split so that neither portion is too small, which depends on the volume of data. Since our dataset is not large, no single split ratio will give the best result, so we also performed cross-validation. After preprocessing, we obtained 1,556 training samples and 456 test samples. For the training phase, we trained our model with a training set of 1,100 samples. Finally, 456 samples were used to test the models; this test dataset was never seen by the model.
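The split and training procedure can be sketched as follows, using scikit-learn in place of the MATLAB toolbox the authors used, with placeholder X and y arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))            # placeholder feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder binary gender labels

# 30% of the data is held out as a test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Bagging": BaggingClassifier(random_state=0),
    "Boosting": AdaBoostClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)                 # train only on the 70% split
    scores[name] = model.score(X_te, y_te)
```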
Gender Classification from Inertial Sensor-Based Gait Dataset
5 Result and Discussion

Once model training is completed, the learned models are used to classify the gender of people. We have also examined k-fold cross-validation so that the model does not overfit the training dataset. Different parameters for each model and the maximum accuracy of each model have been observed. First, we construct the models using 104 features derived from the sensor data. To minimize the number of features, we have used a feature selection method. Our primary objective is to select the most discriminative time-domain and frequency-domain features for gender classification and to find the best accuracy among the four classification algorithms. For this purpose, we have used four well-known classifiers: KNN (k-nearest neighbor), SVM (support vector machine), Bagging, and Boosting. For the experiments, we have used accuracy, MAE, and RMSE to measure the performance of the algorithms, and we also compared classification model performance with three metrics [27]: precision, recall, and F1-score. MATLAB 2018 was used for calculating all the results; the Statistics and Machine Learning Toolbox was used for both the feature selection and the classification.
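The evaluation metrics named above can be computed, for example, with scikit-learn on a toy pair of label vectors (illustrative only; these are not the paper's results):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, mean_absolute_error,
                             mean_squared_error, precision_score,
                             recall_score, f1_score)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # toy labels (1 = male, 0 = female)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # toy predictions

acc = accuracy_score(y_true, y_pred)                # 0.75
mae = mean_absolute_error(y_true, y_pred)           # 0.25 (error rate for 0/1 labels)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # 0.5
prec = precision_score(y_true, y_pred)              # 0.75
rec = recall_score(y_true, y_pred)                  # 0.75
f1 = f1_score(y_true, y_pred)                       # 0.75
```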
Table 3. Classification accuracy before and after feature selection

Classifier            Accuracy before feature selection   Accuracy after feature selection
SVM                   83.898%                             86.091%
K-Nearest Neighbor    79.661%                             81.456%
Bagging               83.615%                             87.854%
Boosting              81.425%                             84.105%
Table 3 depicts the comparison of classification accuracy before and after the feature selection process. 1,556 samples have been used for these classifications, with each class containing the same number of samples. It can be seen from the table that the accuracy of all classifiers is higher than 80% after feature selection. The Bagging algorithm offers the best result, 87.854%, compared to the other classifiers, because bagging reduces the variance of a single estimate by combining several estimates from different models. The k-nearest neighbor classifier shows comparatively lower accuracy. The results also show that the accuracy of each algorithm increases by roughly 2–4 percentage points when the neighborhood component analysis (NCA) dimension reduction method is applied to the original features. This is because NCA decreases the dimensionality of the data; reducing the dimensionality makes the classification process simpler and increases the classification accuracy rate.
Fig. 3. Graphical representation of accuracy (in %) before and after feature selection for the SVM, KNN, Bagging, and Boosting classifiers
Figure 3 shows the graphical representation of classification accuracy before and after feature selection. It can be seen from the graph that the accuracy of every classifier increases after the feature selection method is applied. From this observation, the Bagging algorithm gives the best accuracy both before and after the selection process. Bagging performs best because it combines weak learners and aggregates them so that the output is the average of the weak learners; the Bagging algorithm therefore has lower variance and gives better results than the other algorithms.

Table 4. Performance metrics for gender classification

Classifier            MAE     RMSE    Precision   Recall   F1-score
SVM                   0.139   0.373   0.867       0.855    0.860
K-Nearest Neighbor    0.185   0.430   0.827       0.798    0.813
Bagging               0.121   0.348   0.899       0.855    0.876
Boosting              0.159   0.399   0.858       0.820    0.839
From Table 4, it can be seen that the Bagging algorithm shows the best performance among the four algorithms: it gives the lowest MAE and RMSE and the highest precision, recall, and F1-score.
Fig. 4. ROC curve for model evaluation
Figure 4 shows the ROC curves of all classifiers for the male class; it can be noticed that the Bagging, Boosting, and KNN curves converge quickly, whereas the SVM curve converges slowly. Figure 5 illustrates an example confusion matrix for the Bagging classifier. The input of this classification is the data after the feature selection process (NCA-based features). The confusion matrix is computed between the two classes, male and female. Each row represents the real class and each column represents the predicted class; the values indicate the probability of the actual class being recognized. The matrix shows that the male class is correctly recognized 87.4% of the time and the female class 85.3% of the time.
Fig. 5. Confusion matrix
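A row-normalized confusion matrix of the kind shown in Fig. 5 can be computed, for instance, with scikit-learn (toy labels, not the paper's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # toy labels: 0 = female, 1 = male
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 0])

# normalize='true' divides each row by the number of true samples of that
# class, so the diagonal holds the per-class recognition rates
cm = confusion_matrix(y_true, y_pred, normalize='true')
# cm[0, 0] and cm[1, 1] are the fractions of correctly recognized
# female and male samples, respectively (0.75 each here)
```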
We have performed a comparative evaluation of our methodology against existing work. The experimental results are presented in Table 5. From the results we can see that our proposed methodology outperforms the other existing methods.
Table 5. Comparative study of the proposed approach with existing work
Research study           Approach   Accuracy
Ankita Jain et al. [15]  Bagging    76.83%
Khabir et al. [10]       SVM        84.76%
R. A. Asmara [17]        SVM        70%
Proposed algorithm       Bagging    87.85%
6 Conclusion and Future Work

In this research, we have tried to identify the best machine learning algorithm to classify gender from an inertial sensor-based gait dataset. The largest inertial sensor-based gait dataset, OU-ISIR, has been analyzed for this experiment. The use of time-domain and frequency-domain features is an essential part of our paper, and we also select the features that are most important for the classification of gender. These extracted features are used successfully to train our selected classification models. From the results, it has been observed that after selecting features from the 104 extracted features, the accuracy of the classifiers increases, which in turn could be used to ensure the safety and security of human beings. In the future, we will use some deep learning methods for gender classification [28–31] and also apply methodologies to remove uncertainty [32–37]. In this study, we used only 1,100 samples for the training set; in the future, we will use more data for the training phase, and we will also use other feature selection methods such as PCA and t-SNE for training on our dataset.
References
1. Wu, Y., Zhuang, Y., Long, X., Lin, F., Xu, W.: Human gender classification: a review. arXiv preprint arXiv:1507.05122 (2015)
2. Udry, J.R.: The nature of gender. Demography 31(4), 561–573 (1994)
3. Murray, M.P., Drought, A.B., Kory, R.C.: Walking patterns of normal men. JBJS 46(2), 335–360 (1964)
4. Murray, M.P.: Gait as a total pattern of movement: including a bibliography on gait. Am. J. Phys. Med. Rehabilitation 46(1), 290–333 (1967)
5. Xiao, Q.: Technology review - biometrics-technology, application, challenge, and computational intelligence solutions. IEEE Comput. Intell. Magazine 2(2), 5–25 (2007)
6. Wong, K.Y.E., Sainarayanan, G., Chekima, A.: Palmprint based biometric system: a comparative study on discrete cosine transform energy, wavelet transform energy, and sobel-code methods. Int. J. Biomed. Soft Comput. Human Sci. Official J. Biomed. Fuzzy Syst. Assoc. 14(1), 11–19 (2009)
7. Hanmandlu, M., Gupta, R.B., Sayeed, F., Ansari, A.Q.: An experimental study of different features for face recognition. In: 2011 International Conference on Communication Systems and Network Technologies, pp. 567–571. IEEE, June 2011
8. Sasaki, J.E., Hickey, A., Staudenmayer, J., John, D., Kent, J.A., Freedson, P.S.: Performance of activity classification algorithms in free-living older adults. Med. Sci. Sports Exercise 48(5), 941 (2016)
9. Ngo, T.T., Makihara, Y., Nagahara, H., Mukaigawa, Y., Yagi, Y.: The largest inertial sensor-based gait database and performance evaluation of gait-based personal authentication. Pattern Recogn. 47(1), 228–237 (2014)
10. Khabir, K.M., Siraj, M.S., Ahmed, M., Ahmed, M.U.: Prediction of gender and age from inertial sensor-based gait dataset. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 371–376. IEEE, May 2019
11. Riaz, Q., Vögele, A., Krüger, B., Weber, A.: One small step for a man: estimation of gender, age and height from recordings of one step by a single inertial sensor. Sensors 15(12), 31999–32019 (2015)
12. Makihara, Y., Mannami, H., Yagi, Y.: Gait analysis of gender and age using a large-scale multi-view gait database. In: Asian Conference on Computer Vision, pp. 440–451. Springer, Heidelberg, November 2010
13. Garofalo, G., Argones Rúa, E., Preuveneers, D., Joosen, W.: A systematic comparison of age and gender prediction on IMU sensor-based gait traces. Sensors 19(13), 2945 (2019)
14. Ngo, T.T., Ahad, M.A.R., Antar, A.D., Ahmed, M., Muramatsu, D., Makihara, Y., Hattori, Y.: OU-ISIR wearable sensor-based gait challenge: age and gender. In: Proceedings of the 12th IAPR International Conference on Biometrics, ICB (2019)
15. Jain, A., Kanhangad, V.: Investigating gender recognition in smartphones using accelerometer and gyroscope sensor readings. In: 2016 International Conference on Computational Techniques in Information and Communication Technologies (ICCTICT), pp. 597–602. IEEE, March 2016
16. Yoo, J.H., Hwang, D., Nixon, M.S.: Gender classification in human gait using support vector machine. In: International Conference on Advanced Concepts for Intelligent Vision Systems, pp. 138–145. Springer, Heidelberg, September 2005
17. Asmara, R.A., Masruri, I., Rahmad, C., Siradjuddin, I., Rohadi, E., Ronilaya, F., Hasanah, Q.: Comparative study of gait gender identification using gait energy image (GEI) and gait information image (GII). In: MATEC Web of Conferences, vol. 197, p. 15006. EDP Sciences (2018)
18. Borràs, R., Lapedriza, A., Igual, L.: Depth information in human gait analysis: an experimental study on gender recognition. In: International Conference Image Analysis and Recognition, pp. 98–105. Springer, Heidelberg, June 2012
19. Lu, J., Tan, Y.-P.: Gait-based human age estimation. IEEE Transactions
20. Trung, N.T., Makihara, Y., Nagahara, H., Mukaigawa, Y., Yagi, Y.: Performance evaluation of gait recognition using the largest inertial sensor-based gait database. In: 2012 5th IAPR International Conference on Biometrics (ICB), pp. 360–366. IEEE (2012)
21. Burges, C.J.: A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discovery 2(2), 121–167 (1998)
22. Szegedy, V.V.O.: Processing images using deep neural networks. US Patent 9,715,642, 25 July 2017
23. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)
24. Johnson, R.W.: An introduction to the bootstrap. Teach. Stat. 23(2), 49–54 (2001)
25. Rahman, A., Verma, B.: Ensemble classifier generation using non-uniform layered clustering and genetic algorithm. Knowledge-Based Syst. 43, 30–42 (2013)
26. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: International Conference on Machine Learning, pp. 148–156 (1996)
27. Sokolova, M., Lapalme, G.: A systematic analysis of performance measures for classification tasks. Inf. Process. Manage. 45(4), 427–437 (2009)
28. Chowdhury, R.R., Hossain, M.S., ul Islam, R., Andersson, K., Hossain, S.: Bangla handwritten character recognition using convolutional neural network with data augmentation. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 318–323. IEEE, May 2019
29. Ahmed, T.U., Hossain, M.S., Alam, M.J., Andersson, K.: An integrated CNN-RNN framework to assess road crack. In: 2019 22nd International Conference on Computer and Information Technology (ICCIT), pp. 1–6. IEEE, December 2019
30. Ahmed, T.U., Hossain, S., Hossain, M.S., ul Islam, R., Andersson, K.: Facial expression recognition using convolutional neural network with data augmentation. In: 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), pp. 336–341. IEEE, May 2019
31. Islam, M.Z., Hossain, M.S., ul Islam, R., Andersson, K.: Static hand gesture recognition using convolutional neural network with data augmentation, May 2019
32. Biswas, M., Chowdhury, S.U., Nahar, N., Hossain, M.S., Andersson, K.: A belief rule base expert system for staging non-small cell lung cancer under uncertainty. In: 2019 IEEE International Conference on Biomedical Engineering, Computer and Information Technology for Health (BECITHCON), pp. 47–52. IEEE, November 2019
33. Kabir, S., Islam, R.U., Hossain, M.S., Andersson, K.: An integrated approach of belief rule base and deep learning to predict air pollution. Sensors 20(7), 1956 (2020)
34. Monrat, A.A., Islam, R.U., Hossain, M.S., Andersson, K.: A belief rule based flood risk assessment expert system using real time sensor data streaming. In: 2018 IEEE 43rd Conference on Local Computer Networks Workshops (LCN Workshops), pp. 38–45. IEEE, October 2018
35. Karim, R., Hossain, M.S., Khalid, M.S., Mustafa, R., Bhuiyan, T.A.: A belief rule-based expert system to assess bronchiolitis suspicion from signs and symptoms under uncertainty. In: Proceedings of SAI Intelligent Systems Conference, pp. 331–343. Springer, Cham, September 2016
36. Hossain, M.S., Monrat, A.A., Hasan, M., Karim, R., Bhuiyan, T.A., Khalid, M.S.: A belief rule-based expert system to assess mental disorder under uncertainty. In: 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), pp. 1089–1094. IEEE, May 2016
37. Hossain, M.S., Habib, I.B., Andersson, K.: A belief rule based expert system to diagnose dengue fever under uncertainty. In: 2017 Computing Conference, pp. 179–186. IEEE, July 2017
Lévy-Flight Intensified Current Search for Multimodal Function Minimization

Wattanawong Romsai, Prarot Leeart, and Auttarat Nawikavatan
Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem 10160, Bangkok, Thailand
[email protected], [email protected], [email protected]
Abstract. This paper proposes a novel trajectory-based metaheuristic algorithm named the Lévy-flight intensified current search (LFICuS) for multimodal function minimization. The proposed LFICuS is a new modified version of the intensified current search (ICuS), which was inspired by the electrical current flowing through electric networks. Random numbers drawn from the Lévy-flight distribution and an adjustable search radius mechanism are employed to improve the search performance. To demonstrate its effectiveness, the proposed LFICuS is tested against ten selected standard multimodal benchmark functions for minimization. Results obtained by the LFICuS are compared with those obtained by the ICuS. The simulation results show that the proposed LFICuS is much more efficient for function minimization than the ICuS.

Keywords: Lévy-flight intensified current search · Intensified current search · Function minimization · Metaheuristic algorithm
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 597–606, 2021. https://doi.org/10.1007/978-3-030-68154-8_52

1 Introduction

In modern optimization, many metaheuristic algorithms have been launched for solving several real-world optimization problems under complex constraints [1, 2]. From the literature, the current search (CuS) is one of the most interesting trajectory-based metaheuristic algorithms [3]. The development of the CuS began in 2012, when it was first proposed as an optimizer for optimization problems [3]. The CuS algorithm mimics the behavior of an electric current in electric circuits and networks. It showed search performance superior to the genetic algorithm (GA), tabu search (TS), and particle swarm optimization (PSO) [3]. The CuS was successfully applied to control engineering [4] and signal processing [5]. During 2013–2014, the adaptive current search (ACuS) was launched [6] as a modified version of the conventional CuS. The ACuS consists of the memory list (ML), used to escape from local entrapment caused by any local solution, and the adaptive radius (AR), conducted to speed up the search process. The ACuS was successfully applied to industrial engineering [6] and energy resource management [7]. For some particular problems, both the CuS and ACuS are trapped by local optima and consume much search time. In 2014, the intensified current search (ICuS) was proposed to improve its
search performance [8]. The ICuS algorithm consists of the ML, AR, and adaptive neighborhood (AN) mechanisms. The ML, regarded as the exploration strategy, is used to store the ranked initial solutions at the beginning of the search process, record the solution found along each search direction, and contain all local solutions found at the end of each search direction. The ML is also applied to escape the local entrapments caused by local optima. The AR and AN mechanisms, regarded as the exploitation strategy, are conducted together to speed up the search process. The ICuS was successfully applied to many control engineering problems, including single-objective and multi-objective optimization problems [8–11]. For some optimization problems, especially large-space multimodal problems, the ICuS might still be trapped by local optima. This is probably because the uniformly distributed random numbers used in the ICuS algorithm are not efficient enough for such problems. Thus, the algorithm needs to be modified to enhance its search performance and to speed up the search process. In this paper, a new trajectory-based metaheuristic algorithm called the Lévy-flight intensified current search (LFICuS) is proposed. The proposed LFICuS is the newest modified version of the ICuS. Random numbers drawn from the Lévy-flight distribution and an adjustable search radius mechanism are conducted to improve the search performance. This paper consists of five sections. An introduction is given in Sect. 1. The ICuS algorithm is briefly described and the proposed LFICuS is illustrated in Sect. 2. The ten selected standard multimodal benchmark functions used in this paper are detailed in Sect. 3. Results and discussions of the performance evaluation of the ICuS and LFICuS are provided in Sect. 4. Finally, conclusions follow in Sect. 5.
2 ICuS and LFICuS Algorithms

The ICuS algorithm is briefly described in this section. Then, the proposed LFICuS algorithm is elaborately illustrated as follows.

2.1 ICuS Algorithm
The ICuS algorithm is based on an iterative random search using random numbers drawn from the uniform distribution [8]. The ICuS possesses the ML, regarded as the exploration strategy, and the AR and AN mechanisms, regarded as the exploitation strategy. The ML is used to escape from local entrapment caused by any local solution. The ML consists of three levels: low, medium, and high. The low-level ML is used to store the ranked initial solutions at the beginning of the search process, the medium-level ML stores the solution found along each search direction, and the high-level ML stores all local solutions found at the end of each search direction. The AR mechanism, conducted to speed up the search process, is activated when a current solution is relatively close to a local minimum, and properly reduces the search radius. The radius is thus decreased in accordance with the best cost function found so far: the smaller the cost function, the smaller the search radius. The AN mechanism, also applied to speed up the search process, is invoked once a current solution is relatively close to a local minimum. The number of neighborhood members is decreased in accordance with the best cost function found: the smaller the cost function, the fewer the
neighborhood members. With ML, AR and AN, a sequence of solutions obtained by the ICuS very rapidly converges to the global minimum. Algorithms of the ICuS can be described by the pseudo code shown in Fig. 1.
Fig. 1. ICuS algorithm.
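A schematic of one exploitation phase with AR-style radius shrinking and AN-style neighborhood reduction might look as follows (an illustrative sketch, not the authors' exact pseudo-code in Fig. 1):

```python
import random

def icus_style_search(f, x0, radius, n_neighbors=20, iters=100, shrink=0.5, tol=1e-9):
    """Illustrative exploitation loop: sample uniform neighbors, keep
    improvements, and shrink the radius (AR) and the neighborhood size (AN)
    when the search stalls near a local minimum."""
    x, fx = x0, f(x0)
    k = n_neighbors
    for _ in range(iters):
        candidates = [x + random.uniform(-radius, radius) for _ in range(k)]
        best = min(candidates, key=f)
        if f(best) < fx:
            x, fx = best, f(best)      # move to the improved solution
        else:
            radius *= shrink           # AR: smaller radius near a minimum
            k = max(2, k // 2)         # AN: fewer neighborhood members
        if radius < tol:
            break
    return x, fx

random.seed(1)
x, fx = icus_style_search(lambda v: (v - 3.0) ** 2, x0=0.0, radius=2.0)
```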
2.2 Proposed LFICuS Algorithm
Referring to Fig. 1, the ICuS employs random numbers drawn from the uniform distribution for generating the neighborhood members as feasible solutions. The probability density function (PDF) f(x) of the continuous uniform distribution can be expressed as in (1), where a and b are the lower and upper bounds of the random process:

\[ f(x) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases} \quad (1) \]

The mean \( \mu \) and variance \( \sigma^2 \) of the continuous uniform distribution are limited by the bounds of the random process. The random number drawn from the uniform distribution therefore has non-scale-free characteristics. A random number with scale-free characteristics is one drawn from the Lévy-flight distribution [12], whose PDF is stated in
(2), where c is the scale parameter:

\[ f(x) = \sqrt{\frac{c}{2\pi}} \, \frac{e^{-c/(2x)}}{x^{3/2}}, \quad x > 0 \quad (2) \]

The random number with the Lévy-flight distribution has an infinite mean and infinite variance [12]; it is therefore more efficient for exploration than the uniformly distributed random number. Many metaheuristics, including the cuckoo search (CS) [13] and the flower pollination algorithm (FPA) [14], utilize random numbers with the Lévy-flight distribution for exploring feasible solutions.
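Lévy-distributed step lengths with scale c can be sampled via the standard result that c/Z² follows the Lévy distribution when Z is standard normal; a brief NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
c = 1.0                        # scale parameter of the Lévy distribution
z = rng.standard_normal(100_000)
levy = c / z ** 2              # Lévy(0, c) samples: positive and heavy-tailed

# The heavy tail drags the sample mean far above the sample median,
# reflecting the infinite theoretical mean and variance.
```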
Fig. 2. Proposed LFICuS algorithm.
UniqueSpeeds = list of unique values of Speed in Signal
for speed in UniqueSpeeds:
    secondarySignal = Signal where Speed == speed
    fragments = secondarySignal divided by window
    for fragment in fragments:
        spectrum = amplitude spectrum of fragment
        spectrum = spectrum - idle spectrum
        av = weighted average of the spectrum
Fig. 4. A pseudo-code that describes the data manipulation process.
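The pseudo-code of Fig. 4 might be realized in Python roughly as follows (the array layout and the interpretation of the weighted average as a spectral centroid over frequency bins are assumptions, not specified by the paper):

```python
import numpy as np

def spectral_features(acc, speed, idle_spectrum, window=256):
    """For each distinct speed: take the matching samples, cut them into
    windows, compute the amplitude spectrum, subtract the idling spectrum,
    and reduce each window to a spectral centroid over frequency bins."""
    features = {}
    bins = np.arange(window // 2 + 1)            # frequency-bin indices
    for s in np.unique(speed[~np.isnan(speed)]):
        segment = acc[speed == s]
        vals = []
        for i in range(len(segment) // window):
            frame = segment[i * window:(i + 1) * window]
            spectrum = np.abs(np.fft.rfft(frame))             # amplitude spectrum
            spectrum = np.clip(spectrum - idle_spectrum, 0.0, None)
            if spectrum.sum() > 0:
                vals.append(np.average(bins, weights=spectrum))
        features[s] = vals
    return features
```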
4.4 Road Quality Classifier Creation
The classifier used to assess road quality is in fact a set of KMeans classifier instances. A separate model has been created for each of the speed values, which makes it easy to immunize the classifier against differences in vibration related to machine speed. This structure also allows for easy further development of the classifier.
784
A. Skoczylas et al.
New models can be created when the speed range of the vehicle extends, while those already created can be developed further. Each of the models in the classifier is a separate KMeans instance working on one-dimensional data. These data are the weighted average spectra calculated from signal segments for the currently processed speed. Each of the KMeans models divides the spectrum data into 3 groups (corresponding to bad, medium, and good road quality). This is also partly the reason for choosing this grouping algorithm: as one of the few, it allows grouping data into a specific number of groups, and of the algorithms that allow this, KMeans seemed to work best. With this classifier design, there is also the problem of different labels being assigned each time the model is trained. The solution is to sort the assigned labels using the group centers: the lowest group always gets the label 0, the medium one 1, and the highest 2. Figure 5 presents the classifier obtained from the collected data. The colors mark the detected road qualities (and their boundaries): white (lowest) - good, gray - medium, black (highest) - bad. The division of groups in the classifier is not simple (as it would be with fixed decision thresholds); as the speed increases, the acceptable vibrations of good and medium quality roads increase. After the speed exceeds 15 km/h, the vibrations seem to drop; however, there are too few measurements to draw a definite conclusion. It can also be stated that the increase in thresholds is similar in both directions (forward/backward), at least up to a speed of 5 km/h, because beyond that there are not enough measurements on the negative side. Maximum vibrations are recorded for speeds of 7–9 km/h.
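The center-sorting trick for stabilizing KMeans labels can be sketched with scikit-learn on one-dimensional data (toy values, not the paper's measurements):

```python
import numpy as np
from sklearn.cluster import KMeans

# toy one-dimensional weighted-average-spectrum values for one speed
x = np.array([0.1, 0.2, 0.15, 1.0, 1.1, 0.95, 3.0, 3.2, 2.9]).reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(x)

# KMeans assigns arbitrary cluster ids on each run; remap them so that the
# cluster with the lowest center becomes 0 (good), the middle 1 (medium),
# and the highest 2 (bad)
order = np.argsort(km.cluster_centers_.ravel())
remap = np.empty(3, dtype=int)
remap[order] = np.arange(3)
labels = remap[km.labels_]
```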
Fig. 5. Road Classifier model with established values, white (lowest) - good, gray (medium) – medium, black (highest) – bad.
Road Quality Classification Adaptive to Vehicle Speed Based on Driving Data
4.5 Road Quality Classification
The classification is carried out using a similar methodology as for building the road quality classifier, with the difference that it is simplified. There is also additional validation (similar to that used for the classifier construction), which results in assigning labels related not to the quality of the road itself, but to the quality of the elements needed to estimate it. At the beginning, the classifier checks whether it is dealing with one of two special variants: idling (speed = 0) or empty measurements (speed = NaN). Accelerometer readings in these two cases do not carry any information about the quality of the path, so they are not processed in any way; the appropriate labels are simply assigned to these fragments. If the fragment does not belong to these variants, its average speed value is computed. The estimated speed of the fragment is then checked for compliance with the available models. If there is no model for that speed, a label stating this is assigned to the fragment. When the speed is supported, the amplitude spectrum is created from the fragment, the idling spectrum is subtracted from it, and the weighted average is calculated. Based on the average result and using the model for the given speed, a group (road type) label is obtained.
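The dispatching logic described above could be sketched as follows (the function and the status-label names are hypothetical; models is assumed to map supported speeds to per-speed classifiers):

```python
import math

def classify_fragment(mean_speed, weighted_avg, models):
    """Dispatch a signal fragment to a per-speed model, handling the
    special cases first; returned status strings are placeholders."""
    if mean_speed is None or (isinstance(mean_speed, float) and math.isnan(mean_speed)):
        return "empty"                 # no speed measurement at all
    if mean_speed == 0:
        return "idle"                  # idling: vibrations carry no road info
    if mean_speed not in models:
        return "unsupported-speed"     # no model trained for this speed
    return models[mean_speed](weighted_avg)   # road-quality label: 0/1/2
```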
5 Experimental Results The algorithm was learned on the data from the vehicles hauling the spoil in one of the mines owned by KGHM Polska Miedź SA. A total of 10 work shifts were used to learn the algorithm, each lasting about 6 h (4 days of experiments, each consists of 2 shifts). The Speed of the machine was recorded using internal measurement unit SYNAPSA, the vibration were recorded by the accelerometer obtained from the IMU (NGIMU) mounted on the machine. Then the algorithm was tested on one additional work shift, from which it was decided to show one fragment. The designated section of road against the background of the mine map with color-coded quality is shown in Fig. 6. The correctness of the method was confirmed by the passengers of the vehicle driving on this route.
Fig. 6. Designated road quality shown in color on the map.
At the moment, the described method lacks unequivocal confirmation of its correctness. Single fragments of routes and the results of the algorithm's operation have been confirmed by machine operators; however, this assessment is inaccurate and subjective, so further work in this direction is planned.
6 Summary

The article deals with the issue of assessing the condition of mining road infrastructure, which is crucial from the point of view of efficient, sustainable, and safe exploitation. The authors proposed an algorithm based mainly on data from an inertial measurement unit and the speed signal. A classification model based on spectral analysis and the KMeans algorithm was developed, which performs a three-state assessment of the road surface condition adaptively to the driving speed of a mining vehicle. The paper presents the integration of the result data from the classification model with the GIS map and the vehicle movement path estimated from inertial navigation. This approach provides a holistic view of the condition of the road infrastructure in an underground mine. In the future, it is suggested to integrate the proposed solution with an IoT platform based on thousands of low-cost sensors installed on each of the vehicles in a mine. In this way, on-line measurements can cover the entire road network of the mine. Besides that, the more data the machines gather, the more accurate the classifier can become. At this moment, a relatively small sample of data allowed for the creation of a classifier which performed the detection with satisfactory accuracy for a machine running at the most popular (average) speeds. Since machines rarely run at higher speeds, more data is needed to obtain similar results for those speeds. There is also a need to compare the classification results with a more accurate and proven method of road quality detection, because the validation method presented here is highly subjective.

Acknowledgements. This work is a part of a project which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 780883.
References 1. Bishop, R.: A survey of intelligent vehicle applications worldwide. In: Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No. 00TH8511), pp. 25–30. IEEE, October 2000 2. Eriksson, J., Girod, L., Hull, B., Newton, R., Madden, S., Balakrishnan, H.: The pothole patrol: using a mobile sensor network for road surface monitoring. In: Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pp. 29–39, June 2008 3. U.S. Department of Transportation, Traffic Safety Facts – Crash Stats, June 2015 4. Pothole (2002). http://www.pothole.info 5. Gillespie, T.D.: Everything you always wanted to know about the IRI, but were afraid to ask. In: Road Profile Users Group Meeting, Lincoln, Nebraska, pp. 22–24, September 1992
Road Quality Classification Adaptive to Vehicle Speed Based on Driving Data
Fabric Defect Detection System

Tanjim Mahmud1, Juel Sikder1, Rana Jyoti Chakma1, and Jannat Fardoush2

1 Department of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati, Bangladesh
{tanjim.cse,rchakma}@rmstu.edu.bd, [email protected]
2 Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh
[email protected]
Abstract. Fabric inspection is very significant in textile manufacturing: the quality of fabric depends on inspection to detect fabric defects. Fabric defects reduce manufacturers' profits and cause considerable losses. Traditional defect detection in many industries is conducted by professional human inspectors who manually identify defect patterns. However, such detection methods have shortcomings such as exhaustion, tediousness, negligence, inaccuracy, complication, and time consumption, which reduce the rate at which faults are found. In order to solve these issues, a framework based on image processing has been implemented to automatically and efficiently detect and identify fabric defects. The proposed system works in three steps. In the first step, image segmentation is applied to a number of fabric images using edge detection techniques, in order to enhance the images, extract the valuable information, and eliminate the unusable information. In the second step, morphological operations are applied to the fabric image. In the third step, feature extraction is performed with the FAST (Features from Accelerated Segment Test) extractor. After feature extraction, PCA (Principal Component Analysis) is applied, as it reduces the dimensions while preserving the useful information, and a neural network classifies the various fabric defects and measures the classification accuracy. The proposed system provides high accuracy compared to other systems. The investigation has been carried out in a MATLAB environment on real images from the TILDA database.

Keywords: Defect detection · FAST (Features from Accelerated Segment Test) · Neural network · PCA (Principal Component Analysis)
1 Introduction

The textile industry is a growing sector. Development and advancement of the sector normally require substantial investment. However, the textile industry, like any other sector, experiences various issues. These include the need for measures to reduce the impact of losses: financial losses, customer dissatisfaction, wasted time, and so on. Fabric defects are probably the greatest challenge
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 788–800, 2021. https://doi.org/10.1007/978-3-030-68154-8_68
facing the textile industry. Fabric is a commonly used material in daily life, produced from fibers. Most fabrics are produced after passing through a series of manufacturing stages, in which various machines and methods are used. Fabrics are thus subjected to pressures and stresses that cause defects. Defects take various names according to their structures and directions. The textile industry has identified more than 70 types of defects [1], such as laddering, end-out, hole, and oil spot, as shown in Fig. 1. Unexpected events during manufacturing may be the cause of various defects on the fabric surface [2]. The presence of defects can reduce the price of fabric by 50–60% [1]. Decreasing the impact of defects in the production process is therefore a priority for the industrialist.
Fig. 1. Different defects in a fabric
Thus, fabric manufacturing is one of the largest traditional businesses, and fabric inspection systems can play a vital role in increasing the manufacturing rate. Nowadays, from the industrialist's viewpoint, the significance of the inspection process nearly equals that of the manufacturing process. The idea of the inspection process is to recognize errors or defects, if any exist, and then to adjust the process or alert the inspector to check the manufacturing procedure [3]. In general, fabric defect recognition uses two kinds of inspection models [4]. The first is the human-based inspection system, as shown in Fig. 2. The second is the automated inspection system, as shown in Fig. 3. Human-based defect detection performed by specialists quickly becomes an overwhelming and fussy task [5, 6]. Therefore, having proficient automated frameworks in place is a significant necessity for improving reliability and accelerating quality control, which may increase productivity [7–10]. The subject of automated defect detection has been examined in several works in recent decades. Although there is no universal methodology for handling this issue, several strategies based on image processing procedures have been proposed in recent years [11–13]. These strategies were used to recognize defects at the whole-image level, so the precision rate is low and, additionally, it is hard to locate
the defects precisely. Consequently, they cannot be extended to different fabrics. Recently, some other techniques based on the local image level have been proposed, which use a base unit as the fundamental object of operation to extract image features. These methodologies can be classified into four main groups: statistical, signal-processing-based, structural, and model-based approaches.
Fig. 2. Human-based inspection system
Fig. 3. Machine automated inspection system
In the statistical approach, gray-level properties are used to describe the textural property of a texture image, either directly or as a measure of gray-level dependence; these are called first-order statistics and higher-order statistics, respectively [14]. First-order statistics, for example the mean and standard deviation [15, 16], rank function [17], and local integration, can gauge the variation of gray-level intensity between defective areas and the background. Higher-order statistics depend on the joint probability distribution of pixel sets, for example the gray-level co-occurrence matrix [18], the gray-level difference method [15], and the autocorrelation method. The drawback of this approach is that the defect size must be sufficiently large to enable an effective estimation of the texture property, so it is weak at handling small local defects. Additionally, the calculation of higher-order statistics is time-consuming [17]. In the model-based approach, the commonly used methods are the Markov random field and the Gaussian Markov random field [16]. These models capture the texture features of the studied texture and can represent more exactly the spatial interrelationships between the gray levels in the texture. However, like the approaches based on second-order statistics, model-based approaches also find it hard to detect small-sized defects, because they usually require a sufficiently large region of the texture to estimate the parameters of the models. The structural approach generally relies on the properties of the primitives of the defect-free fabric texture, and their associated placement rules, to detect flawed regions. Apparently, the practicability of this approach is restricted to textures with a regular macro-texture.
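As an illustration of the first-order statistical approach described above, the sketch below flags fabric windows whose local mean or standard deviation deviates strongly from robust background statistics. The window size, threshold, and synthetic image are assumptions, not values from the cited works:

```python
import numpy as np

def first_order_defect_map(img, win=16, z=4.0):
    """Flag windows whose first-order statistics (mean and standard deviation
    of gray levels) deviate strongly from robust background statistics."""
    h, w = img.shape
    means = np.array([[img[i*win:(i+1)*win, j*win:(j+1)*win].mean()
                       for j in range(w // win)] for i in range(h // win)])
    stds = np.array([[img[i*win:(i+1)*win, j*win:(j+1)*win].std()
                      for j in range(w // win)] for i in range(h // win)])

    def robust_z(a):
        # Robust z-score: deviation from the median, scaled by the MAD.
        med = np.median(a)
        mad = np.median(np.abs(a - med)) + 1e-9
        return np.abs(a - med) / (1.4826 * mad)

    return (robust_z(means) > z) | (robust_z(stds) > z)

# Synthetic fabric: uniform texture with a dark "oil spot" defect.
rng = np.random.default_rng(0)
fabric = rng.normal(128.0, 10.0, (64, 64))
fabric[20:36, 20:36] -= 60.0          # defect: a darker 16x16 region
flags = first_order_defect_map(fabric)
print(flags.astype(int))
```

This also illustrates the stated limitation: a defect much smaller than the window barely shifts the window statistics and goes undetected.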
Unlike the above approaches, which characterize defects in terms of the visual properties of the fabric texture, the signal-processing-based approach extracts features by applying various signal processing procedures to the fabric image. The expectation is that the distinction between defect and non-defect can be improved in the processed fabric image. This approach comprises the following techniques: spatial filtering, the Karhunen-Loeve transform, the Fourier transform, the Gabor transform, and the wavelet transform. A weakness of this approach is that its performance is easily influenced by noise in the fabric image. The transform coefficients optimally represent the defect-free fabric image, but not the optimal separation between defect and non-defect. These techniques are more proficient in the separation of fabric defects than other techniques that depend on texture analysis at a single scale [19]. Compared to the Gabor transform, the wavelet transform has the benefit of greater adaptability in the decomposition of the fabric image [20]. Consequently, the wavelet transform is seen as the most suitable approach to feature extraction for fabric defect detection.
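To make the wavelet route concrete, the sketch below performs a single level of 2D Haar decomposition in NumPy, a didactic stand-in for the wavelet transforms cited above (real systems use more decomposition levels and better wavelets). A sharp defect concentrates energy in the detail subbands:

```python
import numpy as np

def haar2d(img):
    """One level of 2D Haar wavelet decomposition.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal-difference detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical-difference detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

# Smooth texture with a sharp vertical scratch: the scratch shows up as
# high energy in the horizontal-difference (lh) detail subband.
img = np.ones((32, 32)) * 100.0
img[:, 17] = 20.0                     # vertical line defect
ll, lh, hl, hh = haar2d(img)
print(float(np.abs(lh).max()), float(np.abs(hh).max()))
```

Thresholding subband energy per block then yields a defect map at each scale, which is what gives multiscale wavelet methods their edge over single-scale texture analysis.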
Table 1. Taxonomy of some most recent related works

Article | Classifier                | Machine learning technique         | Accuracy rate
[21]    | Artificial Neural Network | Counterpropagation                 | 82.97%
[22]    | Artificial Neural Network | Backpropagation                    | 78.4%
[23]    | Artificial Neural Network | Resilient backpropagation          | 85.57%
[24]    | Support Vector Machine    | NA                                 | 77%
[25]    | Artificial Neural Network | Backpropagation                    | 84%
[25]    | Artificial Neural Network | Backpropagation                    | 81%
[26]    | Artificial Neural Network | Least mean square error (LMS)      | 87.21%
[27]    | Artificial Neural Network | Backpropagation                    | 85.9%
[28]    | Artificial Neural Network | Backpropagation                    | 76.5%
[29]    | Artificial Neural Network | Learning vector quantization (LVQ) | 67.11%
[30]    | Model-based clustering    | NA                                 | 65.2%
[31]    | Artificial Neural Network | Backpropagation                    | 71.34%
[32]    | Artificial Neural Network | Resilient backpropagation          | 69.1%
Table 1 illustrates the taxonomy of the most recent fabric defect detection methods, in terms of their classifier, machine learning technique, and accuracy rate. In this paper, we propose an innovative defect detection algorithm which has the capability to cope with different types of defects. Our algorithm is based on four phases. In the initial phase, image segmentation is applied to a number of fabric images, using different edge detection strategies, in order to enhance the fabric image, locate the important information, and remove the unusable information. After the initial phase, morphological operations are applied to the
fabric image. In the third step, feature extraction is performed with the FAST (Features from Accelerated Segment Test) extractor. After feature extraction, PCA is applied, as it reduces the dimensions while preserving the helpful information, and a neural network characterizes the different fabric defects; the classifier is also used to find the accuracy rate. The proposed framework gives high precision compared with other frameworks. The investigation has been carried out in a MATLAB environment on real images from the TILDA database [33]. The remainder of the paper is arranged as follows: Sect. 2 presents the various types of fabric defects. Sect. 3 explains our proposed approach for defect detection. Sect. 4 presents the application of our system and its analysis. Finally, Sect. 5 concludes the paper and presents our future research plans.
2 Defects in Fabric

Fabric materials are used to prepare various categories and forms of fabric items in the industry. Consequently, yarn quality and/or loom defects affect the fabric quality. It has been estimated [34] that the price of fabric is reduced by 45–65% due to the presence of defects such as dye mark/dye spot, slack warp, faulty pattern card, holes, spirality, grease oil/dirty stains, mispick, slub, wrong end, slack end, and so on [1]. In a fabric, defects can occur due to machine faults, color bleeding, yarn problems, excessive stretching, holes, dirt spots, scratches, poor finishing, crack points, material defects, processing defects, and so on [35, 36].
3 Proposed Methodology for Defect Detection
Fig. 4. Block diagram of the developed system
Figure 4 shows the steps of the methodology. To sum up, the steps are image segmentation, feature extraction, PCA (Principal Component Analysis), and image classification.
3.1 Image Segmentation
Image segmentation is a fundamental step in image analysis. Segmentation divides an image into its objects or component parts. Edge detection is a mechanism in image processing that makes the image segmentation procedure and pattern recognition more precise [37, 38]. It fundamentally reduces the amount of data and filters out pointless information, while preserving the useful properties of an image. The effectiveness of much image processing relies on the correctness of detecting significant edges. Edge detection is one of the procedures for detecting intensity discontinuities in a digital image; essentially, it is the process of locating sharp discontinuities in an image. There are many edge detection strategies available, each designed to be sensitive to particular types of edges. Factors involved in the choice of an operator for edge detection include edge direction, edge structure, and noise conditions. This paper applies the histogram equalization strategy to the fabric image, as shown in Fig. 6, followed by an edge detection strategy, also shown in Fig. 6. Among the numerous edge detection operators, such as Roberts, Sobel, and Prewitt [39, 40], the results show that Canny's edge detection method performs better than every other strategy.

3.2 Feature Extraction
The feature extractor is applied to the dataset of images, as shown in Fig. 6, and relies on local feature extraction. The point of local feature description is to represent the image based on some notable regions. The image is represented by its local structures through a set of local feature descriptors extracted from a set of image regions called interest regions [41]. FAST (Features from Accelerated Segment Test) has been applied to the fabric image to extract the features, as shown in Fig. 6. FAST, originally proposed by Rosten and Drummond, is a strategy for recognizing interest regions in an image [42]. An interest region in an image is a pixel that has a well-defined position and can be robustly identified. An interest region has high local information content, and interest regions should ideally be repeatable between different images [43].

3.3 PCA (Principal Component Analysis)
PCA is a straightforward method used in dimensionality reduction to discard features that are not helpful. It preserves the valuable features of the data or image [44, 45]. Applied after feature extraction with the FAST extractor, it yields good performance and accuracy in the classifier.
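The reduction step can be illustrated with a generic PCA via covariance eigendecomposition; this is a sketch, not the authors' MATLAB code, and the feature count and retained dimension are arbitrary:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components."""
    Xc = X - X.mean(axis=0)                    # center the features
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Xc @ top, top

# 100 FAST-style feature vectors of length 20, reduced to 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
X[:, :3] *= 10.0                               # three high-variance directions
Z, components = pca_reduce(X, n_components=5)
print(Z.shape, components.shape)
```

The retained components capture the directions of largest variance, so the three inflated feature directions above dominate the reduced representation.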
3.4 Image Classification
A neural network [26, 45] is a machine learning technique that has been applied to classify the images of fabric defects, as shown in Fig. 6, within a pattern recognition framework, obtaining good outcomes after training the framework on the dataset [46, 47]. The dataset is partitioned into a training stage and a testing stage to learn the hidden neurons in the pattern recognition framework, as shown in Fig. 5. The classifier computes the classification accuracy from the features produced by the extractor. The classifier has been applied to real images of fabric [33].
Fig. 5. Neural network
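As a minimal stand-in for the classification stage, the sketch below trains a one-hidden-layer network with backpropagation on synthetic feature vectors and evaluates it on a held-out split. The layer sizes, learning rate, and toy data are assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two defect classes, 8-dimensional (already reduced) feature vectors.
X = np.vstack([rng.normal(0.0, 1.0, (80, 8)), rng.normal(2.0, 1.0, (80, 8))])
y = np.repeat([0, 1], 80)
idx = rng.permutation(160)
train, test = idx[:120], idx[120:]              # train/test split
onehot = np.eye(2)[y]

# One hidden layer of sigmoid units, softmax output, cross-entropy loss.
W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

for epoch in range(300):
    h = 1.0 / (1.0 + np.exp(-(X[train] @ W1 + b1)))      # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                         # softmax probabilities
    g_logits = (p - onehot[train]) / len(train)          # dLoss/dlogits
    g_h = g_logits @ W2.T * h * (1 - h)                  # backprop to hidden layer
    W2 -= 0.5 * h.T @ g_logits;       b2 -= 0.5 * g_logits.sum(0)
    W1 -= 0.5 * X[train].T @ g_h;     b1 -= 0.5 * g_h.sum(0)

h = 1.0 / (1.0 + np.exp(-(X[test] @ W1 + b1)))
pred = np.argmax(h @ W2 + b2, axis=1)
acc = (pred == y[test]).mean()
print("test accuracy:", acc)
```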
4 Application of the System and Analysis

4.1 Proposed Methodology

Phase 1: Apply the edge detection method to enhance the fabric images.
Phase 2: Apply the FAST (Features from Accelerated Segment Test) extractor to extract the features and discover the interest regions.
Phase 3: Apply PCA (Principal Component Analysis) to reduce the dimensions and preserve the beneficial information.
Phase 4: Apply the neural network machine learning algorithm, as it provides better accuracy.
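Phase 2's interest-region detector can be sketched as a simplified segment test in the spirit of FAST. This didactic version checks the 16-pixel circle of radius 3 for a contiguous arc of at least n pixels brighter or darker than the center; it omits FAST's high-speed pre-test and machine-learned decision tree:

```python
import numpy as np

# Offsets (dx, dy) of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_corners(img, t=20.0, n=9):
    """Simplified segment test: pixel p is an interest point if at least n
    contiguous circle pixels are all brighter than p + t or darker than p - t."""
    h, w = img.shape
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            p = img[y, x]
            ring = np.array([img[y + dy, x + dx] for dx, dy in CIRCLE], float)
            for sign in (1.0, -1.0):
                on = sign * (ring - p) > t
                run = best = 0
                for v in np.concatenate([on, on]):   # doubled ring handles wraparound
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    corners.append((y, x))
                    break
    return corners

# Synthetic image: a bright square on a dark background; its corners fire,
# while edge and interior pixels do not pass the contiguous-arc test.
img = np.zeros((20, 20))
img[6:14, 6:14] = 100.0
corners = fast_corners(img)
print(corners)
```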
Fig. 6. Application of the system (panels: original image, gray image, histogram, binary image, detected region)

Fig. 7. MATLAB environment for defect detection
Table 2. Comparison of neural network-based classification models

Article          | Accuracy | Comment
[21]             | 82.97%   |
[22]             | 78.4%    |
[23]             | 85.57%   |
[24]             | 77%      |
[25]             | 84%      |
[26]             | 85.9%    |
[27]             | 76.5%    |
[28]             | 67.11%   |
[29]             | 65.2%    |
[30]             | 71.34%   |
[31]             | 69.1%    |
Developed system | 97.21%   | Greatest accuracy among all developed systems
Compared to the other techniques, our methodology provides better accuracy, as shown in Table 2. Experiments were carried out on the TILDA database [33] and give better results: a 100% detection rate for the training set and 97.21% accuracy for the test set.
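Accuracy figures such as those in Table 2 are derived from a confusion matrix; the sketch below shows the computation of precision, recall, and accuracy. The matrix counts are hypothetical, not the paper's results:

```python
import numpy as np

def metrics(cm):
    """Per-class precision/recall and overall accuracy from a confusion
    matrix cm, where cm[i, j] = samples of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)    # correct predictions per predicted class
    recall = tp / cm.sum(axis=1)       # correct predictions per true class
    accuracy = tp.sum() / cm.sum()
    return precision, recall, accuracy

# Hypothetical 3-class result: hole, oil spot, defect-free (30 samples each).
cm = np.array([[28, 1, 1],
               [2, 27, 1],
               [0, 1, 29]])
precision, recall, accuracy = metrics(cm)
print(np.round(precision, 3), np.round(recall, 3), round(accuracy, 4))
```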
Fig. 8. System snapshot
Fig. 9. Performance comparison of different studies (accuracy per article)
As shown in Fig. 9, the proposed feature extraction technique, combined with machine learning algorithms, gives better accuracy than the other feature extraction techniques discussed.
5 Conclusion and Future Work

The detection of faulty fabrics plays an important role in the success of any fabric industry. The fabric industry needs real-time quality control to find defects quickly and efficiently. Manual control is inefficient and time-consuming, which leads to heavy losses. On the other hand, automatic quality control is considerably more proficient, because it is real-time and autonomous compared to manual inspection. In the fabric defect detection systems suggested by researchers to date, the accuracy rate for detecting defective fabric has been very low. This paper analyzed the shortcomings of the traditional approach to fabric defect detection and proposed an innovative fabric defect detection technique, based on the FAST (Features from Accelerated Segment Test) extractor and PCA (Principal Component Analysis) combined with neural network classification, to enhance the recognition accuracy on textured fabrics that cannot be effectively detected by existing techniques. The paper concludes that the proposed fabric defect detection technique, after applying the machine learning algorithm and PCA, gave better accuracy than the other referenced approaches. Additionally, our method notably showed its efficiency in separating defect-free from defective areas of the fabric. Moreover, after a series of improvements, our method exhibited better recognition performance on the fabric images. Having successfully trained the neural network, 30 samples of each type of defect were used to
assess the accuracy of the network classification. The defective images were graded with a 97.21% overall accuracy score, and the dye spot defect was identified with a 100% accuracy score. The experimentation was applied to real images from the TILDA database. The implementation was done in MATLAB, as shown in Fig. 7 and Fig. 8; it automatically detects the fabric defect. In the future, we will focus on sensor-data-oriented systems and on developing an optimal system that matches a real fabric defect detection system more closely, as well as applying our machine learning algorithms to other feature extractors.
References
1. Stojanovic, R., Mitropulos, P., Koulamas, C., Karayiannis, Y., Koubias, S., Papadopoulos, G.: Real-time vision-based system for textile fabric inspection. Real-Time Imaging 7, 507–518 (2001)
2. Aasim, A.: A catalogue of visual textile defects. Ministry of Textiles (2004)
3. Newman, T.S., Jain, A.K.: A survey of automated visual inspection. Comput. Vis. Image Underst. 61(2), 231–262 (1995)
4. Kumar, A.: Computer-vision-based fabric defect detection: a survey. IEEE Trans. Ind. Electron. 55(1), 348–363 (2008)
5. Huart, J., Postaire, J.G.: Integration of computer vision onto weavers for quality control in the textile industry. In: Proceedings SPIE 2183, pp. 155–163, February 1994
6. Dorrity, J.L., Vachtsevanos, G.: On-line defect detection for weaving systems. In: Proceedings IEEE Annual Technical Conference Textile, Fiber, and Film Industry, pp. 1–6, May 1996
7. Rosandich, R.G.: Intelligent Visual Inspection. Chapman & Hall, London, UK (1997)
8. Batchelor, B.G.: Lighting and viewing techniques. In: Batchelor, B.G., Hill, D.A., Hodgson, D.C. (eds.) Automated Visual Inspection. IFS and North Holland (1985)
9. Roberts, J.W., Rose, S.D., Jullian, G., Nicholas, L., Jenkins, P.T., Chamberlin, S.G., Maroscher, G., Mantha, R., Litwiller, D.J.: A PC-based real time defect imaging system for high speed web inspection. In: Proceedings SPIE 1907, pp. 164–176 (1993)
10. Bayer, H.A.: Performance analysis of CCD-cameras for industrial inspection. In: Proceedings SPIE 1989, pp. 40–49 (1993)
11. Cho, C., Chung, B., Park, M.: Development of real-time vision-based fabric inspection system. IEEE Trans. Ind. Electron. 52(4), 1073–1079 (2005)
12. Kumar, A.: Computer-vision-based fabric defect detection: a survey. IEEE Trans. Ind. Electron. 55(1), 348–363 (2008)
13. Ngan, H., Pang, G., Yung, N.: Automated fabric defect detection: a review. Image Vis. Comput. 29(7), 442–458 (2011)
14. Smith, B.: Making war on defects. IEEE Spectr. 30(9), 43–47 (1993)
15. Fernandez, C., Fernandez, S., Campoy, P., Aracil, R.: On-line texture analysis for flat products inspection: neural nets implementation. In: Proceedings of 20th IEEE International Conference on Industrial Electronics, Control and Instrumentation, vol. 2, pp. 867–872 (1994)
16. Ozdemir, S., Ercil, A.: Markov random fields and Karhunen-Loeve transforms for defect inspection of textile products. In: IEEE Conference on Emerging Technologies and Factory Automation, vol. 2, pp. 697–703 (1996)
17. Bodnarova, A., Williams, J.A., Bennamoun, M., Kubik, K.: Optimal textural features for flaw detection in textile materials. In: Proceedings of the IEEE TENCON 1997 Conference, Brisbane, Australia, pp. 307–310 (1997)
18. Gong, Y.N.: Study on image analysis of fabric defects. Ph.D. dissertation, China Textile University, Shanghai, China (1999)
19. Zhang, Y.F., Bresee, R.R.: Fabric defect detection and classification using image analysis. Text. Res. J. 65(1), 1–9 (1995)
20. Nickolay, B., Schicktanz, K., Schmalfub, H.: Automatic fabric inspection: utopia or reality. Trans. Melliand Textilberichte 1, 33–37 (1993)
21. Habib, M.T., Rokonuzzaman, M.: A set of geometric features for neural network-based textile defect classification. ISRN Artif. Intell. 2012, Article ID 643473, 16 pp. (2012)
22. Saeidi, R.D., Latifi, M., Najar, S.S., Ghazi Saeidi, A.: Computer vision-aided fabric inspection system for on-circular knitting machine. Text. Res. J. 75(6), 492–497 (2005)
23. Islam, M.A., Akhter, S., Mursalin, T.E.: Automated textile defect recognition system using computer vision and artificial neural networks. In: Proceedings World Academy of Science, Engineering and Technology, vol. 13, pp. 1–7, May 2006
24. Murino, V., Bicego, M., Rossi, I.A.: Statistical classification of raw textile defects. In: 17th International Conference on Pattern Recognition (ICPR 2004), vol. 4, pp. 311–314 (2004)
25. Karayiannis, Y.A., Stojanovic, R., Mitropoulos, P., Koulamas, C., Stouraitis, T., Koubias, S., Papadopoulos, G.: Defect detection and classification on web textile fabric using multiresolution decomposition and neural networks. In: Proceedings of the 6th IEEE International Conference on Electronics, Circuits and Systems, Pafos, Cyprus, pp. 765–768, September 1999
26. Kumar, A.: Neural network based detection of local textile defects. Pattern Recogn. 36, 1645–1659 (2003)
27. Kuo, C.F.J., Lee, C.-J.: A back-propagation neural network for recognizing fabric defects. Text. Res. J. 73(2), 147–151 (2003)
28. Mitropoulos, P., Koulamas, C., Stojanovic, R., Koubias, S., Papadopoulos, G., Karayiannis, G.: Real-time vision system for defect detection and neural classification of web textile fabric. In: Proceedings SPIE, vol. 3652, San Jose, California, pp. 59–69, January 1999
29. Shady, E., Gowayed, Y., Abouiiana, M., Youssef, S., Pastore, C.: Detection and classification of defects in knitted fabric structures. Text. Res. J. 76(4), 295–300 (2006)
30. Campbell, J.G., Fraley, C., Stanford, D., Murtagh, F., Raftery, A.E.: Model-based methods for textile fault detection. Int. J. Imaging Syst. Technol. 10(4), 339–346, July 1999
31. Islam, M.A., Akhter, S., Mursalin, T.E., Amin, M.A.: A suitable neural network to detect textile defects. Neural Inf. Process. 4233, 430–438. Springer, October 2006
32. Habib, M.T., Rokonuzzaman, M.: Distinguishing feature selection for fabric defect classification using neural network. J. Multimedia 6(5), 416–424, October 2011
33. TILDA Textile texture database, texture analysis working group of DFG. http://lmb.informatik.unifreiburg.de
34. Srinivasan, K., Dastor, P.H., Radhakrishnaihan, P., Jayaraman, S.: FDAS: a knowledge-based framework for analysis of defects in woven textile structures. J. Text. Inst. 83(3), 431–447 (1992)
35. Rao Ananthavaram, R.K., Srinivasa Rao, O., Krishna, P.M.H.M.: Automatic defect detection of patterned fabric by using RB method and independent component analysis. Int. J. Comput. Appl. 39(18), 52–56 (2012)
36. Sengottuvelan, P., Wahi, A., Shanmugam, A.: Automatic fault analysis of textile fabric using imaging systems. Res. J. Appl. Sci. 3(1), 26–31 (2008)
37. Abdi, H., Williams, L.J.: Principal component analysis. Wiley Interdisciplinary Rev. Comput. Stat. 2(4), 433–459 (2010). https://doi.org/10.1002/wics.101
38. Kumar, T., Sahoo, G.: Novel method of edge detection using cellular automata. Int. J. Comput. Appl. 9(4), 38–44 (2010)
39. Zhu, Q.: Efficient evaluations of edge connectivity and width uniformity. Image Vis. Comput. 14, 21–34 (1996)
40. Senthilkumaran, N., Rajesh, R.: Edge detection techniques for image segmentation: a survey of soft computing approaches. Int. J. Recent Trends Eng. 1(2), 250–254 (2009)
41. Rizon, M., Hashim, M.F., Saad, P., Yaacob, S.: Face recognition using eigenfaces and neural networks. Am. J. Appl. Sci. 2(6), 1872–1875 (2006)
42. Rosten, E., Porter, R., Drummond, T.: FASTER and better: a machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 32, 105–119 (2010)
43. Wikipedia, Corner Detection. http://en.wikipedia.org/wiki/Corner_detection. Accessed 16 March 2011
44. Chang, J.Y., Chen, J.L.: Automated facial expression recognition system using neural networks. J. Chin. Inst. Eng. 24(3), 345–356 (2001)
45. Jianli, L., Baoqi, Z.: Identification of fabric defects based on discrete wavelet transform and back-propagation neural network. J. Text. Inst. 98(4), 355–362 (2007)
46. Tamnun, M.E., Fajrana, Z.E., Ahmed, R.I.: Fabric defect inspection system using neural network and microcontroller. J. Theor. Appl. Inf. Technol. 4(7) (2008)
47. Bhanumati, P., Nasira, G.M.: Fabric inspection system using artificial neural network. Int. J. Comput. Eng. 2(5), 20–27, May 2012
Alzheimer’s Disease Detection Using CNN Based on Effective Dimensionality Reduction Approach

Abu Saleh Musa Miah1, Md. Mamunur Rashid1, Md. Redwanur Rahman1, Md. Tofayel Hossain1, Md. Shahidujjaman Sujon1, Nafisa Nawal1, Mohammad Hasan1, and Jungpil Shin2

1 Department of CSE, Bangladesh Army University of Science and Technology (BAUST), Saidpur, Bangladesh
[email protected], [email protected]
2 School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Fukushima 965-8580, Japan
[email protected]
Abstract. In developed countries, Alzheimer’s disease (AD) is one of the major causes of death. Until now, no definitive clinical diagnostic method has been available, but from a research point of view, detection accuracy for this disease is produced by computational algorithms. Many researchers are working to determine Alzheimer’s disease properties, its stages, and ways of classifying it. Such research plays a vital role in clinical tests for medical researchers and in the overall medical sector. One of the major problems found by researchers in the field is the large data dimension. In this study, we propose an efficient dimensionality reduction method to improve Alzheimer’s disease (AD) detection accuracy. To implement the method, we first cleaned the dataset, removing null values and other unacceptable data through preprocessing tasks. We then split the preprocessed data into training and test sets, employed a dimension reduction method, and applied a machine learning algorithm to the reduced dataset to produce accuracy for detecting Alzheimer’s disease. To observe and calculate the accuracy, we computed the confusion matrix, precision, recall, and f1-score values, and finally the accuracy of the method as well. To reduce the dimension of the data, we applied in turn Principal Component Analysis (PCA), Random Projection (RP), and Feature Agglomeration (FA). On the reduced features, we applied the Random Forest (RF) and Convolutional Neural Network (CNN) machine learning algorithms on top of each dimensionality reduction method. To evaluate our proposed methodology, we used the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset.
We have experimented with (i) Random forest with Principal component analysis (RFPCA), (ii) Convolution neural network with PCA (CNNPCA), (iii) Random forest with Random projection (RFRP), and (iv) Random forest with Feature agglomeration (RFFA) to differentiate patients with AD from healthy patients. Our model, Random forest with Random projection (RFRP), produced 93% accuracy. We believe that our work will be recognized as a groundbreaking discovery in this domain. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 801–811, 2021. https://doi.org/10.1007/978-3-030-68154-8_69
802
A. S. M. Miah et al.
Keywords: Alzheimer's disease (AD) · Diagnosis · Dementia · Convolution neural network (CNN) · Principal component analysis (PCA)
1 Introduction
Alzheimer's disorder is a neurodegenerative disease that is the most common form of dementia [1]. In our modern society it is the most costly disorder, and it is characterized by cognitive, mental, and behavioral disruption. In other words, the most prominent cause of dementia is Alzheimer's disease, a general term for memory loss and the loss of other cognitive abilities that interfere with everyday life [2]. As of today, approximately 60 to 80% of instances of dementia are due to Alzheimer's in Bangladesh. It most often starts in individuals over the age of 65, although early-onset Alzheimer's accounts for 4–5% of instances. There are about 460,000 people suffering from dementia in our country, according to the Alzheimer's Disease International Asia Pacific Report 2015, a number expected to double by 2030 and triple by 2050 [3, 4]. So Bangladesh is not unfamiliar with this disease, and the influence of AD is not negligible. Bangladesh has a relatively young population; among one hundred sixty million people, 8% are older people, which amounts to approximately 12 million [5]. One can assume that among 12 million older people there would be at least a few thousand struggling with dementia. Older people who suffer from memory disturbances are often stigmatized or branded as "foolish". Some facts about the number of AD patients in Bangladesh are accessible, but there are no accurate epidemiological records of AD in this nation [6]. The affected patients and their household members constantly face a number of issues, and there is restricted funding for AD studies. It is high time, therefore, to think proactively about the sickness and its management and to take the needed action in this respect. Policy makers, health professionals, and allied organizations need to come forward to make AD a national priority in Bangladesh [7].
In this research field there are many open-source databases [8, 9]; ADNI is the most widely used (adni.loni.usc.edu) [10]. Moreover, OASIS (www.oasis-brains.org) and AIBL (aibl.csiro.au) are also usable open-source Alzheimer databases. Another clinical open-source database much used in recent years is the J-ADNI database [11, 12], which contains data from longitudinal studies in Japan. In the last decade, machine-learning approaches have been applied to detecting Alzheimer's disease with great success [13–16]. Alonso et al. and Ni et al. employed not only machine learning but also data mining tools to detect Alzheimer's disease, working to enhance the productivity and quality of health centers and medical research [17, 18]. Esmaeilzadeh et al. applied a 3D convolution neural network to detect Alzheimer's disease with a magnetic resonance imaging (MRI) dataset collected from 841 people [14]. Long et al. proposed a methodology based on the MRI data of 427 patients with a support vector machine, showing that mathematical and statistical techniques can be used with the network as a black-box concept [19]. David et al. employed an ensemble method to predict the conversion from mild cognitive impairment (MCI) to Alzheimer's disease (AD). They used data from 51 cognitively normal controls and 100 MCI patients, and then combined 5 types of scores calculated using
natural language processing and machine learning [21]. Liu et al. also employed an ensemble method to detect Alzheimer's disease [22]. One of the limitations of those works is the high dimensionality of the feature vector. In this study, we applied machine learning and data mining methodology to the Alzheimer's disease neuroimaging initiative (ADNI) dataset for classifying the various stages of Alzheimer's disease. In particular, we classified Alzheimer's disease (AD) and normal control (NC) subjects from the ADNI dataset using Random forest and Convolutional neural network (CNN) models. To reduce the dimensionality of the feature vector in this field, Davatzikos et al. and Yoon et al. employed the principal component analysis (PCA) algorithm [23, 24]; the technique requires parameters to be selected carefully. To overcome the high-dimensionality problem, we applied three dimensionality reduction methods to reduce the dimensionality of the feature vector: Principal component analysis (PCA), Random projection (RP), and Feature agglomeration (FA). We experimented with Random Forest and Convolutional Neural Network (CNN) models on each dimensionality reduction method. Finally, we found that the combination of Random projection (RP) and Random forest (RF) (RPRF) produced the best result.
2 Dataset
Our dataset contains data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (http://adni.loni.usc.edu/) 1.5 T database with both Healthy Controls (HC) and Alzheimer's disease (AD) patients. The dataset contains data on a total of 627 people: 363 of them were male, with an average age of 75.49 years and a range of 56.4 to 89 years, while 264 were female, with an average age of 74.72 years, ranging from 55.1 to 89.6 years. Information about the subjects of the ADNI dataset is shown in Table 1.
Table 1. Dataset

Gender   Min age   Max age   Average age
Male     56.4      89        75.49
Female   55.1      89.6      74.72
3 Proposed Methodology
We have proposed a combination of four methods, which we applied separately to observe the various accuracies. The flow chart in Fig. 1 shows the methodologies that have been implemented in general.
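The three Random-Forest-based pipelines can be sketched with scikit-learn as follows. This is an illustrative sketch on synthetic stand-in data, not the authors' code; the CNN variant (CNNPCA) would swap the classifier for a convolutional network.

```python
# Hypothetical sketch of the RFPCA, RFRP, and RFFA pipelines; X and y are
# synthetic stand-ins for the preprocessed ADNI features and AD/NC labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection
from sklearn.cluster import FeatureAgglomeration
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # stand-in feature matrix
y = rng.integers(0, 2, size=200)       # stand-in AD / NC labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipelines = {
    "RFPCA": make_pipeline(PCA(n_components=20),
                           RandomForestClassifier(random_state=0)),
    "RFRP": make_pipeline(GaussianRandomProjection(n_components=20, random_state=0),
                          RandomForestClassifier(random_state=0)),
    "RFFA": make_pipeline(FeatureAgglomeration(n_clusters=20),
                          RandomForestClassifier(random_state=0)),
}
for name, pipe in pipelines.items():
    pipe.fit(X_tr, y_tr)               # reduce dimension, then classify
    print(name, pipe.score(X_te, y_te))
```

Each pipeline first reduces the feature dimension and then fits the Random Forest on the reduced features, mirroring the flow chart in Fig. 1.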
Fig. 1. Flowchart of the proposed method.
3.1 Preprocessing

We loaded the ADNI dataset using Python as the language and PyCharm as our IDE, then proceeded to the next step. We checked for null values in the dataset using a pandas DataFrame: isnull() gives a Boolean mask of the missing entries, and .sum() on that mask counts the null values per column. Any null values found were removed, along with other irrelevant data, from our dataset. All the data were sorted according to their features and classes. All the columns containing true/false values, including APOES, were calculated and normalized using one-hot encoding.

3.2 Feature Reduction
We used the following methods for the feature reduction task:

Principal Component Analysis (PCA)
PCA is a dimensionality reduction strategy used to map features onto a lower-dimensional space. The transformation of the data can be linear or nonlinear. The algorithm is based on a transformation function of the form T = XW, which maps a data vector from one space to a new space. In the new space, it selects a specific number of components from all components based on the eigenvectors, which gives the truncated transformation of Eq. (1):

T_L = X W_L    (1)

Here T_L is the reduced-component eigenvector representation. In other words, PCA acts like a linear transformation, as in Eq. (2):

t = W^T x    (2)
where x ∈ R^p and t ∈ R^L, and the columns of the p × L matrix W_L form an orthogonal basis for the L feature components selected by construction; that is, only L columns are selected from all the columns of W [25]. The output matrix maximizes the variance of the original data that is preserved while minimizing the total squared reconstruction error:

‖TW^T − T_L W_L^T‖_2^2    (3)

or

‖X − X_L‖_2^2    (4)
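The truncated transformation of Eqs. (1)–(4) can be checked numerically with a short numpy sketch (the variable names are ours, not the paper's): the squared reconstruction error of the rank-L approximation equals the variance in the discarded components.

```python
# Numeric sketch of Eqs. (1)-(4): truncate PCA to L components and measure
# the squared reconstruction error of the rank-L approximation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)              # centre the data

# Columns of W are principal directions, obtained here via SVD.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt.T                            # p x p orthogonal basis
T = X @ W                           # full transformation, T = XW

L = 2
W_L = W[:, :L]
T_L = X @ W_L                       # Eq. (1): T_L = X W_L

X_L = T_L @ W_L.T                   # rank-L reconstruction
err = np.linalg.norm(X - X_L) ** 2  # Eq. (4): ||X - X_L||_2^2
# The error equals the variance carried by the discarded components:
assert np.isclose(err, (S[L:] ** 2).sum())
```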
Random Projection
Random projection is a method used to reduce the dimensionality of a set of points in Euclidean space. In contrast to other methods, random projection methods are known for their strength, simplicity, and low error rate. In random projection, the original d-dimensional data is projected to a k-dimensional (k ≪ d) subspace.

4: for each i, i ≤ m
     for each j, (i + 1) ≤ j ≤ m
       if q1[i] ≠ q2[j], q1, q2 ∈ Lk−1 then
         q = q1[i] ∪ q2[j]; include_in_ck = true;
         for each (k−1)-subset p, p ⊆ q
           if p ∉ Lk−1 then include_in_ck = false; break; end if
         end for
       end if
       if include_in_ck = true then Ck = Ck ∪ q end if
     end for
   end for
   return Ck
5: for each r, r ∈ Ck compute sup_itemset = Load1.r + Load2.r + ... + Loadt.r.
   if sup_itemset ≥ min_sup_count then add it to Lk; end if
   end for
6: Repeat steps 4 to 5 till Lk = ∅.
7: L = ∪i Li.
In this algorithm, m and n represent the total number of items and transactions in the dataset D, respectively. The min_sup_count is the predefined minimum support count threshold, and Q is a boolean matrix of size m × n. The items are stored in the rows, and the transactions of an item are stored in the columns. Lk is the frequent k-itemset generated from Ck, where Ck is the candidate itemset developed from the Lk−1 itemset, with k ≥ 2. After scanning dataset D, the Q matrix is constructed. In the
1030
S. Roy et al.
matrix Q, if the item is present in a specific transaction, then the decimal value one (i.e., 1) is stored, otherwise zero (i.e., 0); at the same time the support count is calculated and stored in the sup_count array. Here, sup_count is a two-dimensional array where the row contains the itemset and the column contains the support count value of the respective itemset. After scanning each tuple, it is compared with the min_sup_count value to determine whether it goes to the frequent k-itemset or not. Further, based on the number of nodes, the frequent-1 itemset is decomposed into multiple loads (i.e., Load1, Load2, ..., Loadt). To calculate the support count of the frequent k-itemset, we apply the bitwise AND operator (i.e., &) on the decomposed matrix. Algorithm 1 shows the pseudo-code of the improved boolean load-matrix-based FPM algorithm, and the improvement part of the algorithm is highlighted (step 2). Initially, dataset D is scanned, a boolean matrix (i.e., Q of size m × n) is constructed, and the support count of each row i is calculated, stored in sup_count[i], and compared with min_sup_count. If sup_count[i] ≥ min_sup_count, then the entire row is added to the frequent-1 matrix (i.e., F1) and listed in the frequent-1 itemset (i.e., L1). After that, the Q matrix is split vertically into multiple matrices (i.e., Load1, Load2, Load3, ..., Loadt), where the size of each matrix is m × x for a positive number x. Here the matrix is decomposed based on the available nodes for use in a distributed environment, and each load is assigned to an individual node to execute. Now, a join operation is performed between two itemsets q1 and q2 from the Lk−1 itemsets, joined in such a way that q1[i] ∩ q2[i] = ∅. Then the (k−1)-subsets p are generated from the joined itemset q, and if each p is present in Lk−1, then q is stored in Ck. Then, the support count of each itemset r ∈ Ck (i.e., sup_itemset) is calculated from the summation of the Load.r values, where each Load.r is computed by applying the bitwise AND operation (i.e., &) on the load matrices, which are stored as binary digits.
Finally, combining all the frequent itemsets (i.e., {L1 ∪ L2 ∪ L3 ∪ ... ∪ Lk−1 ∪ Lk}) generates the final frequent itemset L.
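The steps above can be condensed into a compact Python sketch. This is our illustrative reimplementation under stated assumptions, not the authors' code; the function and variable names are ours.

```python
# Minimal sketch of the improved boolean load-matrix FPM algorithm.
from itertools import combinations

def mine_frequent_itemsets(transactions_per_item, n_transactions,
                           min_sup_count, n_loads=2):
    """transactions_per_item: dict mapping item -> set of transaction ids."""
    items = sorted(transactions_per_item)
    # Steps 1-2: build the boolean matrix Q and the support counts in one scan.
    Q = {i: [1 if t in transactions_per_item[i] else 0
             for t in range(n_transactions)] for i in items}
    Lk = [frozenset([i]) for i in items if sum(Q[i]) >= min_sup_count]
    # Step 3: split Q vertically into load matrices (one slice per node).
    b = [round(k * n_transactions / n_loads) for k in range(n_loads + 1)]
    loads = list(zip(b, b[1:]))

    def support(itemset):
        # Step 5: sum, load by load, the AND of the item rows.
        return sum(all(Q[i][t] for i in itemset)
                   for lo, hi in loads for t in range(lo, hi))

    all_frequent = list(Lk)
    while Lk:
        k = len(next(iter(Lk))) + 1
        prev = set(Lk)
        # Step 4: join L(k-1) with itself; a candidate q is added only if
        # every (k-1)-subset of q is frequent (no separate pruning pass).
        Ck = {a | c for a in prev for c in prev if len(a | c) == k
              and all(frozenset(p) in prev
                      for p in combinations(a | c, k - 1))}
        Lk = [q for q in Ck if support(q) >= min_sup_count]
        all_frequent += Lk
    return all_frequent

# Table 1 dataset: item -> transactions in which it appears
data = {0: {0, 1, 3, 4, 6}, 1: {1, 2, 3, 4, 6, 8}, 2: {5, 6, 9},
        3: {0, 3, 5, 6, 7, 9}, 4: {1, 2, 5, 6, 8}, 5: {0, 1, 2, 4, 6, 8, 9},
        6: {4, 5, 7}, 7: {0, 7, 8, 9}, 8: {1, 4, 6, 7, 9}}
L = mine_frequent_itemsets(data, 10, 4)
print(len(L))  # 14 frequent itemsets, as derived in Sect. 4
```

Running it on the example dataset of Sect. 4 reproduces the 14 itemsets of the final set L, including {I1, I4, I5}.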
4 Discussion of the Improved Algorithm Operation Process
In this segment of the paper, we discuss the proposed algorithm. Let us assume the transactional dataset D presented in Table 1, which contains the set of items I = {I0, I1, I2, I3, I4, I5, I6, I7, I8} and the set of transactions T = {IT0, IT1, IT2, IT3, IT4, IT5, IT6, IT7, IT8, IT9}. The min_sup_count is a predefined threshold for the minimum support count. Ck is a candidate k-itemset generated from the (k−1)-itemset, where k ≥ 2. The main intention of this research is to find the large, frequent itemsets with less time consumption in order to draw the association rules. In this discussion, following our proposed algorithm, we present the transactional dataset in Table 1, which will be converted into a boolean matrix. In this matrix, the items are shown as rows and the transactions as columns.
An Improved Boolean Load Matrix-Based Frequent Pattern Mining
1031
Table 1. Transaction dataset

Item  Transactions
I0    IT0, IT1, IT3, IT4, IT6
I1    IT1, IT2, IT3, IT4, IT6, IT8
I2    IT5, IT6, IT9
I3    IT0, IT3, IT5, IT6, IT7, IT9
I4    IT1, IT2, IT5, IT6, IT8
I5    IT0, IT1, IT2, IT4, IT6, IT8, IT9
I6    IT4, IT5, IT7
I7    IT0, IT7, IT8, IT9
I8    IT1, IT4, IT6, IT7, IT9
We have constructed the boolean matrix Q by scanning each itemset Ii. If a specific transaction ITi is present, we insert 1 at position Q[Ii][ITi]; if it is not present, we insert 0. For example, IT1 ∈ I1, so we insert the value one (i.e., 1) at position [I1][IT1] of the boolean matrix Q (i.e., Q[I1][IT1] = 1). Again IT2 ∈ I1, so we insert one at position [I1][IT2] (i.e., Q[I1][IT2] = 1). But IT5 ∉ I1, so we insert zero (i.e., 0) at position [I1][IT5] (i.e., Q[I1][IT5] = 0). In the same way, we have prepared the entire boolean matrix represented in Table 2. At the same time, we have calculated the support count (i.e., sup_count) of each row and stored it in the sup_count array under the same row indicator. In this analysis, we have considered the value four (i.e., 4) as the minimum support count (i.e., min_sup_count = 4). After calculating the support count of each row, we compare it with the minimum support count (i.e., min_sup_count) to determine whether the row goes to the frequent-1 itemset or not.

Table 2. Matrix with all items; some of them will not be included in the frequent matrix but are shown to explain.

      IT0 IT1 IT2 IT3 IT4 IT5 IT6 IT7 IT8 IT9  sup_count
I0     1   1   0   1   1   0   1   0   0   0       5
I1     0   1   1   1   1   0   1   0   1   0       6
I2     0   0   0   0   0   1   1   0   0   1       3
I3     1   0   0   1   0   1   1   1   0   1       6
I4     0   1   1   0   0   1   1   0   1   0       5
I5     1   1   1   0   1   0   1   0   1   1       7
I6     0   0   0   0   1   1   0   1   0   0       3
I7     1   0   0   0   0   0   0   1   1   1       4
I8     0   1   0   0   1   0   1   1   0   1       5
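The single-scan construction of Q and sup_count can be sketched with numpy (our illustration; the item and transaction indices follow Table 1):

```python
# Build the boolean matrix Q for the Table 1 transactions and compute the
# support counts in the same pass; numpy stands in for the paper's matrix.
import numpy as np

transactions = {
    0: [0, 1, 3, 4, 6],   1: [1, 2, 3, 4, 6, 8],   2: [5, 6, 9],
    3: [0, 3, 5, 6, 7, 9], 4: [1, 2, 5, 6, 8],      5: [0, 1, 2, 4, 6, 8, 9],
    6: [4, 5, 7],          7: [0, 7, 8, 9],         8: [1, 4, 6, 7, 9],
}
m, n = 9, 10
Q = np.zeros((m, n), dtype=np.uint8)
for item, ts in transactions.items():
    Q[item, ts] = 1                 # mark the transactions containing the item

sup_count = Q.sum(axis=1)           # support of each item, one row sum each
print(sup_count.tolist())           # [5, 6, 3, 6, 5, 7, 3, 4, 5]
```

The row sums reproduce the sup_count column of Table 2.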
Table 3 represents all items with boolean values and the respective row sup_count. After comparing the minimum support count value with each support count value, we
1032
S. Roy et al.
figured out that rows 3 and 7 (items I2 and I6) will not be included in the frequent-1 matrix, which is F1. All items of the frequent-1 matrix are listed in the frequent itemset, which is L1.

Table 3. Frequent-1 Matrix

      IT0 IT1 IT2 IT3 IT4 IT5 IT6 IT7 IT8 IT9  sup_count
I0     1   1   0   1   1   0   1   0   0   0       5
I1     0   1   1   1   1   0   1   0   1   0       6
I3     1   0   0   1   0   1   1   1   0   1       6
I4     0   1   1   0   0   1   1   0   1   0       5
I5     1   1   1   0   1   0   1   0   1   1       7
I7     1   0   0   0   0   0   0   1   1   1       4
I8     0   1   0   0   1   0   1   1   0   1       5
So, L1 will be:

L1 = {{I0}, {I1}, {I3}, {I4}, {I5}, {I7}, {I8}}

After that, we split the dataset into multiple loads based on the number of available nodes. In this example, we consider 2 nodes to be available, so we divide the dataset into two loads, Load1 and Load2, represented in Table 4.
Table 4. Load-wise divided frequent matrix

       Load1                    Load2
Item  IT0 IT1 IT2 IT3 IT4    IT5 IT6 IT7 IT8 IT9
I0     1   1   0   1   1      0   1   0   0   0
I1     0   1   1   1   1      0   1   0   1   0
I3     1   0   0   1   0      1   1   1   0   1
I4     0   1   1   0   0      1   1   0   1   0
I5     1   1   1   0   1      0   1   0   1   1
I7     1   0   0   0   0      0   0   1   1   1
I8     0   1   0   0   1      0   1   1   0   1
Then we perform the join operation on the frequent-1 itemset (i.e., L1) to generate the candidate-2 itemset, which is C2. After completing the join operation, we check whether each subset of a joined itemset is present in the previous itemset (i.e., Lk−1) or not. If each subset of a joined itemset is present in the Lk−1 itemset, then the joined itemset is included in the candidate-2 itemsets; otherwise, the joined itemset is not included. After performing these steps, we get C2, which is shown below.
C2 = {{I0, I1}, {I0, I3}, {I0, I4}, {I0, I5}, {I0, I7}, {I0, I8}, {I1, I3}, {I1, I4}, {I1, I5}, {I1, I7}, {I1, I8}, {I3, I4}, {I3, I5}, {I3, I7}, {I3, I8}, {I4, I5}, {I4, I7}, {I4, I8}, {I5, I7}, {I5, I8}, {I7, I8}}
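The join-and-prune step that yields C2 can be sketched as follows (gen_candidates is our illustrative helper, not from the paper); all 21 pairs survive because every singleton subset belongs to L1.

```python
# Join L(k-1) with itself, keeping a candidate only if every (k-1)-subset
# of it is already frequent.
from itertools import combinations

def gen_candidates(prev_frequent):
    prev = set(prev_frequent)
    k = len(next(iter(prev))) + 1
    return {a | b for a in prev for b in prev if len(a | b) == k
            and all(frozenset(p) in prev
                    for p in combinations(a | b, k - 1))}

L1 = [frozenset({i}) for i in (0, 1, 3, 4, 5, 7, 8)]
print(len(gen_candidates(L1)))  # 21, the number of pairs in C2
```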
Every subset of each itemset in C2 is present in L1. Now we calculate the support count of every itemset to obtain the frequent-2 itemset (i.e., L2). The support count is calculated from the candidate-2 itemsets by performing the bitwise AND operation (i.e., &) on the row values of the individual load matrices. In this example, to calculate the support count of {I0, I1}, we perform the bitwise AND operation between the row values of I0 and I1 from the Load1 and Load2 matrices. For Load1 the calculated value is 3 (i.e., {1, 1, 0, 1, 1} & {0, 1, 1, 1, 1}) and for Load2 the value is 1 (i.e., {0, 1, 0, 0, 0} & {0, 1, 0, 1, 0}). To get the support count of {I0, I1}, we add the results of all load matrices, which is 4 (i.e., 3 + 1 = 4). In the same way, we calculate the support count of the remaining itemsets of the candidate-2 itemset (i.e., C2). The support count calculation of each itemset of C2 is exhibited below.

{I0, I1} = {1, 1, 0, 1, 1} & {0, 1, 1, 1, 1} + {0, 1, 0, 0, 0} & {0, 1, 0, 1, 0} = 4
{I0, I3} = {1, 1, 0, 1, 1} & {1, 0, 0, 1, 0} + {0, 1, 0, 0, 0} & {1, 1, 1, 0, 1} = 3
{I0, I4} = {1, 1, 0, 1, 1} & {0, 1, 1, 0, 0} + {0, 1, 0, 0, 0} & {1, 1, 0, 1, 0} = 2
{I0, I5} = {1, 1, 0, 1, 1} & {1, 1, 1, 0, 1} + {0, 1, 0, 0, 0} & {0, 1, 0, 1, 1} = 4
{I0, I7} = {1, 1, 0, 1, 1} & {1, 0, 0, 0, 0} + {0, 1, 0, 0, 0} & {0, 0, 1, 1, 1} = 1
{I0, I8} = {1, 1, 0, 1, 1} & {0, 1, 0, 0, 1} + {0, 1, 0, 0, 0} & {0, 1, 1, 0, 1} = 3
{I1, I3} = {0, 1, 1, 1, 1} & {1, 0, 0, 1, 0} + {0, 1, 0, 1, 0} & {1, 1, 1, 0, 1} = 2
{I1, I4} = {0, 1, 1, 1, 1} & {0, 1, 1, 0, 0} + {0, 1, 0, 1, 0} & {1, 1, 0, 1, 0} = 4
{I1, I5} = {0, 1, 1, 1, 1} & {1, 1, 1, 0, 1} + {0, 1, 0, 1, 0} & {0, 1, 0, 1, 1} = 5
{I1, I7} = {0, 1, 1, 1, 1} & {1, 0, 0, 0, 0} + {0, 1, 0, 1, 0} & {0, 0, 1, 1, 1} = 1
{I1, I8} = {0, 1, 1, 1, 1} & {0, 1, 0, 0, 1} + {0, 1, 0, 1, 0} & {0, 1, 1, 0, 1} = 3
{I3, I4} = {1, 0, 0, 1, 0} & {0, 1, 1, 0, 0} + {1, 1, 1, 0, 1} & {1, 1, 0, 1, 0} = 2
{I3, I5} = {1, 0, 0, 1, 0} & {1, 1, 1, 0, 1} + {1, 1, 1, 0, 1} & {0, 1, 0, 1, 1} = 3
{I3, I7} = {1, 0, 0, 1, 0} & {1, 0, 0, 0, 0} + {1, 1, 1, 0, 1} & {0, 0, 1, 1, 1} = 3
{I3, I8} = {1, 0, 0, 1, 0} & {0, 1, 0, 0, 1} + {1, 1, 1, 0, 1} & {0, 1, 1, 0, 1} = 3
{I4, I5} = {0, 1, 1, 0, 0} & {1, 1, 1, 0, 1} + {1, 1, 0, 1, 0} & {0, 1, 0, 1, 1} = 4
{I4, I7} = {0, 1, 1, 0, 0} & {1, 0, 0, 0, 0} + {1, 1, 0, 1, 0} & {0, 0, 1, 1, 1} = 1
{I4, I8} = {0, 1, 1, 0, 0} & {0, 1, 0, 0, 1} + {1, 1, 0, 1, 0} & {0, 1, 1, 0, 1} = 2
{I5, I7} = {1, 1, 1, 0, 1} & {1, 0, 0, 0, 0} + {0, 1, 0, 1, 1} & {0, 0, 1, 1, 1} = 3
{I5, I8} = {1, 1, 1, 0, 1} & {0, 1, 0, 0, 1} + {0, 1, 0, 1, 1} & {0, 1, 1, 0, 1} = 4
{I7, I8} = {1, 0, 0, 0, 0} & {0, 1, 0, 0, 1} + {0, 0, 1, 1, 1} & {0, 1, 1, 0, 1} = 2
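The same computation can be checked by packing each load row into the bits of an integer (an illustration in Python, independent of the paper's implementation):

```python
# Rows of I0 and I1 as 5-bit integers, read left to right as IT0..IT4 for
# Load1 and IT5..IT9 for Load2.
i0_load1, i0_load2 = 0b11011, 0b01000
i1_load1, i1_load2 = 0b01111, 0b01010

sup = (bin(i0_load1 & i1_load1).count("1")    # Load1 contributes 3
       + bin(i0_load2 & i1_load2).count("1"))  # Load2 contributes 1
print(sup)  # 4, the support count of {I0, I1}
```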
Now we compare the support count value of each itemset with the minimum support count value to determine whether it goes to the frequent-2 itemset or not. If the support count of an itemset is greater than or equal to the minimum support count value (i.e., min_sup_count), it is added to the frequent-2 itemset (i.e., L2), shown below.

L2 = {{I0, I1}, {I0, I5}, {I1, I4}, {I1, I5}, {I4, I5}, {I5, I8}}

Similarly, we generate the candidate-3 itemset with all possible itemsets (i.e., C3), which is generated from L2 by performing the join operation on the L2 itemset with itself.
We did not include the itemsets whose subsets are not all present in L2; an itemset is included only if each of its subsets is present in L2. For example, one possible itemset was {I0, I1, I4}. The possible subsets of {I0, I1, I4} with at most k−1 items (i.e., 2) are {I0, I1}, {I0, I4}, and {I1, I4}; as {I0, I4} ∉ L2, we did not include the {I0, I1, I4} itemset in C3. The candidate-3 itemset (i.e., C3) is demonstrated below.

C3 = {{I0, I1, I5}, {I1, I4, I5}}

After getting the final candidate-3 itemset, we calculate the support count of each itemset, as demonstrated below.

{I0, I1, I5} = {1, 1, 0, 1, 1} & {0, 1, 1, 1, 1} & {1, 1, 1, 0, 1} + {0, 1, 0, 0, 0} & {0, 1, 0, 1, 0} & {0, 1, 0, 1, 1} = 3
{I1, I4, I5} = {0, 1, 1, 1, 1} & {0, 1, 1, 0, 0} & {1, 1, 1, 0, 1} + {0, 1, 0, 1, 0} & {1, 1, 0, 1, 0} & {0, 1, 0, 1, 1} = 4

Only {I1, I4, I5} has a support count greater than or equal to the minimum support count (i.e., 4), so only this itemset is added to the frequent-3 itemset (i.e., L3), demonstrated below.

L3 = {{I1, I4, I5}}

Now, we have to find the candidate-4 itemset by performing the join operation on the L3 itemset with itself. But L3 contains only one itemset, so we cannot complete the join operation and will not get any further candidate sets. Finally, we generate the final frequent itemset by combining all the frequent itemsets, which is L = {L1 ∪ L2 ∪ L3 ∪ L4 ∪ ... ∪ Lk−1 ∪ Lk}. The final large, frequent itemset (i.e., L) obtained from the given dataset is presented below.

L = {{I0}, {I1}, {I3}, {I4}, {I5}, {I7}, {I8}, {I0, I1}, {I0, I5}, {I1, I4}, {I1, I5}, {I4, I5}, {I5, I8}, {I1, I4, I5}}
5 Performance Evaluation and Complexity Analysis
The performance evaluation is made by comparing the proposed method with the existing method [15]. The experimental data were generated on a machine with a 2.90 GHz Intel(R) Core(TM) i7-7820HQ CPU and 16 GB of main memory. The Python language was used to implement the proposed design. Both the existing method [15] and the proposed method were coded and executed on the same platform. In this performance evaluation, one load is considered. The Apriori and FP-growth Algorithm dataset from Kaggle [18] is used for the performance evaluation. The dataset contains 12 items and 12526 transactions. In this experiment the minimum support count (min_sup_count) is
set to 20 for mining the dataset. Note that the minimum support count value can be varied depending on needs. In this experiment, we found that our proposed algorithm gives at least two times better performance than the existing boolean load-matrix-based algorithm in terms of computational time on the generation of the boolean matrix (i.e., Q), the frequent-1 matrix (i.e., F1), and the frequent-1 itemset (i.e., L1). Equation (1) shows the time complexity of constructing the boolean matrix (i.e., Q), the frequent-1 matrix (i.e., F1), and the frequent-1 itemset (i.e., L1) for the existing algorithm, and Eq. (2) shows the same for the proposed algorithm; Eq. (2) corresponds to step 2 of the proposed algorithm. Here T is the time complexity.

Time complexity (T) for the existing boolean matrix
= (m × n) + c1 + (m × n) + c2 + m + c3
= 2(m × n) + m + c1 + c2 + c3
= 2(m × n) + m + c4    (1)

Time complexity (T) for the proposed boolean matrix
= (m × n) + c1 + c2
= (m × n) + c3    (2)

To generate the boolean matrix (i.e., Q), the existing algorithm needs (m × n) + c1 time; to generate the support counts it needs (m × n) + c2 time; and for the frequent-1 matrix (i.e., F1) and frequent-1 itemset (i.e., L1) it needs m + c3 time. The proposed algorithm generates the boolean matrix (i.e., Q), the support counts, the frequent-1 matrix (i.e., F1), and the frequent-1 itemset (i.e., L1) in (m × n) + c3 time in total. So, the proposed algorithm is faster than the existing algorithm. Though the Big-O notation of both algorithms is the same, O(m × n), the second (Eq. 2) is faster than the first (Eq. 1) because Eq. (1) needs 2(m × n) + m operations while Eq. (2) needs only (m × n). In step 4 of the proposed algorithm, we have also reduced the space needed for the candidate set, because we add an itemset only if it satisfies the condition, instead of adding it to the candidate set and then pruning. The comparison of the execution results of the proposed and existing programs is demonstrated in Fig. 1.
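The roughly two-fold gain predicted by Eqs. (1) and (2) can be illustrated with a toy operation count (the helper names and the constant-free counts are ours):

```python
# Toy operation counts: Eq. (1) makes two passes over the m x n matrix plus
# a pass over the m rows; Eq. (2) does everything in one combined scan.
def ops_existing(m, n):
    return m * n + m * n + m   # build Q, then count supports, then build F1/L1

def ops_proposed(m, n):
    return m * n               # build Q, supports, F1, and L1 in one scan

m, n = 12, 12526               # Kaggle dataset dimensions from the text
print(ops_existing(m, n) / ops_proposed(m, n))  # just above 2x
```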
Fig. 1. Comparison result between existing and proposed boolean-load matrix FPM algorithms.
6 Conclusion
In this research, we have presented an updated algorithm to generate large, frequent itemsets from a massive dataset. We have reduced three steps compared to the existing algorithm [15], and we add an itemset only if it satisfies the condition, instead of counting and then pruning from the created itemset. Our proposed algorithm performs at least two times faster than the existing algorithm on the generation of the boolean matrix (i.e., Q), frequent-1 matrix (i.e., F1), and frequent-1 itemset (i.e., L1) in terms of computational time; the comparison is shown in Eqs. (1) and (2). It also reduces the space needed for candidate set generation. The enhanced proposed algorithm can be used for Market Basket Analysis (MBA), Clickstream Analysis (CA), etc. In the future, this research can be extended to a distributed environment for the parallel computation of the load matrices.
References
1. Fang, X.: An improved apriori algorithm on the frequent itemset. In: International Conference on Education Technology and Information System (ICETIS 2013). IEEE (2013)
2. Ghafari, S.M., Tjortjis, C.: A survey on association rules mining using heuristics. In: Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 9, no. 4, p. e1307. Wiley (2019)
3. Nath, B., Bhattacharyya, D.K., Ghosh, A.: Incremental association rule mining: a survey. In: Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 3, no. 3, pp. 157–169. Wiley (2013)
4. Han, J., Pei, J., Yin, Y., Mao, R.: Mining frequent patterns without candidate generation: a frequent-pattern tree approach. In: Data Mining and Knowledge Discovery, vol. 8, no. 1, pp. 53–87. Springer, Netherlands (2004)
5. Chee, C.-H., Jaafar, J., Aziz, I.A., Hasan, M.H., Yeoh, W.: Algorithms for frequent itemset mining: a literature review. In: Springer's Artificial Intelligence Review, pp. 1–19 (2018)
6. Brin, S., Motwani, R., Ullman, J.D., Tsur, S.: Dynamic itemset counting and implication rules for market basket data. In: ACM SIGMOD Record, pp. 255–264 (1997)
7. Chun-sheng, Z., Yan, L.: Extension of local association rules mining algorithm based on apriori algorithm. In: International Conference on Software Engineering and Service Science, pp. 340–343. IEEE, China (2014)
8. Thakre, K.R., Shende, R.: Implementation on an approach for mining of datasets using APRIORI hybrid algorithm. In: International Conference on Trends in Electronics and Informatics, pp. 939–943. IEEE, India (2017)
9. Wu, B., Zhang, D., Lan, Q., Zheng, J.: An efficient frequent patterns mining algorithm based on apriori algorithm and the FP-tree structure. In: International Conference on Convergence and Hybrid Information Technology, vol. 1, pp. 1099–1102. IEEE, South Korea (2008)
10. Du, J., Zhang, X., Zhang, H., Chen, L.: Research and improvement of apriori algorithm. In: International Conference on Information Science and Technology, pp. 117–121. IEEE, China (2016)
11. Yu, N., Yu, X., Shen, L., Yao, C.: Using the improved apriori algorithm based on compressed matrix to analyze the characteristics of suspects. Int. Express Lett. Part B Appl. Int. J. Res. Surv. 6(9), 2469–2475 (2015)
12. Yuan, X.: An improved apriori algorithm for mining association rules. In: Conference Proceedings, vol. 1820, no. 1. AIP (2017)
13. Liao, B.: An improved algorithm of apriori. In: International Symposium on Intelligence Computation and Applications (ISICA), pp. 427–432. Springer, Berlin (2009)
14. Wu, H., Lu, Z., Pan, L., Xu, R., Jiang, W.: An improved apriori-based algorithm for association rules mining. In: International Conference on Fuzzy Systems and Knowledge Discovery, vol. 2, pp. 51–55. IEEE, China (2009)
15. Sahoo, A., Senapati, R.: A Boolean load-matrix based frequent pattern mining algorithm. In: International Conference on Artificial Intelligence and Signal Processing (AISP). IEEE, India (2020). ISSN: 2572-1259
16. Robu, V., Santos, V.D.: Mining frequent patterns in data using apriori and eclat: a comparison of the algorithm performance and association rule generation. In: 6th International Conference on Systems and Informatics (ICSAI). IEEE, China (2019). ISBN: 978-1-7281-5257-8
17. Jin, K.: A new algorithm for discovering association rules. In: International Conference on Logistics Systems and Intelligent Management (ICLSIM). IEEE, China (2010). ISBN: 978-1-4244-7331-1
18. Dataset for Apriori and FP growth Algorithm. https://www.kaggle.com/newshuntkannada/dataset-for-apriori-and-fp-growth-algorithm
Exploring CTC Based End-To-End Techniques for Myanmar Speech Recognition Khin Me Me Chit(&) and Laet Laet Lin University of Information Technology, Yangon, Myanmar {khinmemechit,laetlaetlin}@uit.edu.mm
Abstract. In this work, we explore a Connectionist Temporal Classification (CTC) based end-to-end Automatic Speech Recognition (ASR) model for the Myanmar language. A series of experiments is presented on the topology of the model, in which convolutional layers are added and dropped, different depths of bidirectional long short-term memory (BLSTM) layers are used, and different label encoding methods are investigated. The experiments are carried out in low-resource scenarios using our recorded Myanmar speech corpus of nearly 26 h. The best model achieves a character error rate (CER) of 4.72% and a syllable error rate (SER) of 12.38% on the test set. Keywords: End-to-end automatic speech recognition · Connectionist temporal classification · Low-resource scenarios · Myanmar speech corpus
1 Introduction ASR plays a vital role in human-computer interaction and information processing. It is the task of converting spoken language into text. Over the past few years, automatic speech recognition approached or exceeded human-level performance in languages like Mandarin and English in which large labeled training datasets are available [1]. However, the majority of languages in the world do not have a sufficient amount of training data and it is still challenging to build systems for those under-resourced languages. A traditional ASR system is composed of several components such as acoustic models, lexicons, and language models. Each component is trained separately with a different objective. Building and tuning these individual components make developing a new ASR system very hard, especially for a new language. By taking advantage of Deep Neural Network’s ability to solve complex problems, end-to-end approaches have gained popularity in the speech recognition community. End-to-end models replaced the sophisticated pipelines with a single neural network architecture. The most popular approaches to train an end-to-end ASR include Connectionist Temporal Classification [2], attention-based sequence-to-sequence models [3], and Recurrent Neural Network (RNN) Transducers [4]. CTC defines a distribution over all alignments with all output sequences. It uses Markov assumptions to achieve the label sequence probabilities and solves this efficiently by dynamic programming. It has simple training and decoding schemes and showed great results in many tasks [1, 5, 6]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1038–1046, 2021. https://doi.org/10.1007/978-3-030-68154-8_87
The sequence-to-sequence model contains an encoder and a decoder, and it usually uses an attention mechanism to align input features with output symbols. Such models have shown great results compared to CTC models and in some cases have surpassed them [7]. However, their computational complexity is high and they are hard to parallelize. The RNN-Transducer is an extension of CTC. It has an encoder, a prediction network, and a joint network, and it uses the outputs of the encoder and prediction networks to predict the labels. It is popular due to its capability to perform online speech recognition, which is the main challenge for attention encoder-decoder models. Due to its high memory requirement in training and the complexity of its implementation, there is less research on the RNN-Transducer, although it has obtained several impressive results [8]. The CTC-based approach is significantly easier to implement, computationally less expensive to train, and produces results that are close to the state of the art. Therefore it is a good starting point for exploring end-to-end ASR models with CTC. In this paper, several experiments are carried out with CTC-based speech models in low-resource scenarios using a Myanmar language dataset (~26 h). We compare different label encoding methods on our ASR model: character-level encoding, syllable-level encoding, and sub-word level encoding. We also vary the number of BLSTM layers and explore the effect of using a deep Convolutional Neural Network (CNN) encoder on top of the BLSTM layers.
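As a concrete illustration of the CTC objective mentioned above, the forward (alpha) recursion below computes the total log-probability of a label sequence by summing over all valid alignments via dynamic programming. This is a generic numpy sketch of the standard recursion, not the implementation used in this paper.

```python
# Standard CTC forward recursion over the blank-interleaved label sequence.
import numpy as np

def ctc_forward(log_probs, labels, blank=0):
    """log_probs: (T, C) per-frame log distribution; labels: non-empty list."""
    T, C = log_probs.shape
    ext = [blank]
    for lab in labels:
        ext += [lab, blank]              # interleave blanks: - a - b - ...
    S = len(ext)
    alpha = np.full((T, S), -np.inf)
    alpha[0, 0] = log_probs[0, blank]
    alpha[0, 1] = log_probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            cands = [alpha[t - 1, s]]
            if s > 0:
                cands.append(alpha[t - 1, s - 1])
            # a blank may be skipped when consecutive labels differ
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[t - 1, s - 2])
            alpha[t, s] = np.logaddexp.reduce(cands) + log_probs[t, ext[s]]
    # the sequence may end on the last label or the trailing blank
    return np.logaddexp(alpha[T - 1, S - 1], alpha[T - 1, S - 2])

# Two frames, two symbols (blank=0), uniform per-frame distribution:
lp = np.log(np.full((2, 2), 0.5))
print(np.exp(ctc_forward(lp, [1])))  # ~0.75: alignments (1,1), (-,1), (1,-)
```

With two uniform frames, the three valid alignments of the single label each have probability 0.25, so the total is 0.75.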
2 Related Work

Most of the early work in Myanmar speech recognition is based on Hidden Markov Models (HMMs). The HMM has been used together with the Gaussian Mixture Model (GMM) and the Subspace Gaussian Mixture Model (SGMM), and model performance increases with the amount of training data [9]. HMM-GMM models have also been used in spontaneous ASR systems, where tuning the acoustic features, the number of senones, and the Gaussian densities affects model performance [10]. Since Deep Neural Networks (DNNs) have gained success in many areas, they have also been used in automatic speech recognition, usually in combination with HMMs [11–14]. Research has shown that DNN, CNN, and Time Delay Neural Network models outperform HMM-GMM models [14]. However, many of these systems require several components, each of which is difficult to tune individually. End-to-end ASR models simplify these components into a single pipeline and have achieved state-of-the-art results in many scenarios. To the best of our knowledge, no research has been conducted on end-to-end training for Myanmar ASR. In this paper, we introduce an end-to-end Myanmar ASR model based on the CTC approach.
K. M. M. Chit and L. L. Lin
3 End-To-End CTC Model

This section introduces the CTC-based end-to-end model architecture. We explore architectures with the initial layers of VGGNet [15] and up to 6 layers of BLSTM. Batch Normalization is applied after every CNN and BLSTM layer to stabilize the learning process. A fully connected layer with the softmax activation function follows the BLSTM layers, and the CTC loss function is used to train the model. Figure 1 shows the model architecture, which includes a deep CNN encoder and a deep bidirectional LSTM network.
[Figure 1: Spectrogram → Deep CNN (VGGNet) → BLSTM → Fully Connected → CTC Loss → Label]

Fig. 1. The architecture of the CTC-based end-to-end ASR model. Different architectures are explored by varying the number of BLSTM layers from 3 to 6 and by removing or adding the deep convolutional layers.
3.1 Deep Convolutional LSTM Network
Since the input audio features are continuous, the time dimension is usually downsampled to reduce the running time and complexity of the model. One way to perform this time reduction is to skip or concatenate consecutive time frames. In this work, the time reduction is achieved by passing the audio features through CNN blocks containing two max-pooling layers. We observe that using convolutional layers speeds up training and also helps the model converge. As input to the CNN layers, we use the two dimensions of the spectrogram features, with a channel dimension of length one. After passing through the two max-pooling layers, the input features are downsampled to (1/4 × 1/4) along the time-frequency axes. The architecture of the CNN layers is shown in Fig. 2.
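The downsampling interacts with CTC: after the two max-pooling layers, the number of remaining time steps must still be at least the number of output tokens. A minimal sketch of this bookkeeping (the pooling parameters follow the paper; the helper names and floor-mode pooling are our assumptions):

```python
def downsampled_frames(n_frames: int, n_pools: int = 2, pool: int = 2) -> int:
    """Time steps remaining after the stride-2 max-pooling layers (floor mode assumed)."""
    for _ in range(n_pools):
        n_frames //= pool
    return n_frames

def ctc_feasible(n_frames: int, n_labels: int) -> bool:
    """CTC needs at least one time step per output token."""
    return downsampled_frames(n_frames) >= n_labels

# A 10 s clip at a 10 ms hop gives ~1000 frames -> 250 after two 2x pools,
# comfortably above the corpus mean of 121 characters per clip.
print(downsampled_frames(1000), ctc_feasible(1000, 121))
```

This is why the paper caps the downsampling at two max-pooling layers for the character-level models.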
Fig. 2. The architecture of convolutional layers (VGGNet). Each convolutional block consists of two convolutional layers with Batch Normalization and a max pooling layer. Each convolutional layer uses the ReLU activation function.
A stack of BLSTM layers follows the CNN layers. Models with 3 to 6 BLSTM layers are tested, using 512 hidden units in each layer and direction. Since Batch Normalization has been shown to improve the generalization error and accelerate training [16], it is added after every BLSTM layer.

3.2 Connectionist Temporal Classification (CTC)
In speech recognition, the alignment between the input audio and the output text tokens must be considered. CTC solves this problem by mapping the input sequence to an output sequence of shorter length. Given the input sequence of audio features $X = [x_1, x_2, \ldots, x_T]$ and the output sequence of labels $Y = [y_1, y_2, \ldots, y_U]$, the model tries to maximize $P(Y \mid X)$, the total probability of all possible alignments that map to the correct label sequence. An additional blank symbol is introduced in CTC to handle repeated output tokens and silent audio. The CTC objective [17] for an $(X, Y)$ pair is:

$$P(Y \mid X) = \sum_{A \in \mathcal{A}_{X,Y}} \prod_{t=1}^{T} P_t(a_t \mid X) \tag{1}$$

where $\mathcal{A}_{X,Y}$ is the set of valid alignments, $t$ denotes a time step, and $a_t$ is the alignment symbol at time $t$. Equation (1) involves an expensive sum over all possible alignments, but it can be computed efficiently with dynamic programming. During training, we minimize the loss over the training dataset $\mathcal{D}$, so the CTC loss is formulated as the negative sum of log-likelihoods:

$$\mathcal{L}_{\mathrm{CTC}} = -\sum_{(X, Y) \in \mathcal{D}} \log P(Y \mid X) \tag{2}$$
4 Experimental Setup

4.1 Description of the Myanmar Speech Corpus
Due to the scarcity of public speech corpora for the Myanmar language, we built a corpus of read Myanmar speech. The dataset contains 9908 short audio clips, each with a transcription. The content of the corpus is derived from the weather news of the Department of Meteorology and Hydrology (Myanmar) [18] and DVB TVnews [19]. The dataset contains three female speakers and one male speaker. The audio clips were recorded in a quiet place with a minimum of background noise. Clip lengths vary from 1 to 20 s, and the total length of all audio clips is nearly 26 h. Audio clips are in single-channel 16-bit WAV format, sampled at 22.05 kHz.

Most of the transcriptions are collected from the above-mentioned data sources and some are hand-transcribed. Since the transcriptions mix the Zawgyi and Unicode font encodings, all texts are first normalized into Unicode: Myanmar Tools [20] is used to detect Zawgyi strings and the ICU Transliterator [21] to convert Zawgyi to Unicode. Numbers are expanded into full words, and punctuation marks and text written in other languages are dropped. The audio clips and transcriptions are segmented and aligned manually. For the experiments, we randomly split 70% (~18 h) of the data into the training set, 10% (~2 h) into the development set, and 20% (~5 h) into the test set, making sure that each split contains audio clips from all four speakers (Table 1).

Table 1. Statistics of the Myanmar speech corpus.

| Statistic | Value |
|---|---|
| Total clips | 9908 |
| Total duration | 25 h 18 min 37 s |
| Mean clip duration | 9.19 s |
| Min clip duration | 0.72 s |
| Max clip duration | 19.92 s |
| Mean characters per clip | 121 |
| Mean syllables per clip | 38 |
| Distinct characters | 57 |
| Distinct syllables | 866 |

4.2 Training Setup
As input features, we use log spectrograms computed every 10 ms with a 20 ms window. For the CNN network, the initial layers of VGGNet with Batch Normalization are used as described in Sect. 3.1; the time resolution of the input features is reduced by a factor of 4 after passing through the convolutional layers. We use 3 to 6 BLSTM layers, each with 512 hidden units. Batch Normalization with momentum 0.997 and epsilon 1e-5 follows each CNN and BLSTM layer. The fully connected layer uses the softmax activation function. All models are trained with the CTC loss function, optimized by Adam with an initial learning rate of 1e-4.
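The input features can be reproduced with a plain STFT. A minimal sketch with NumPy; the window function and FFT size are our assumptions, since the paper only specifies the 20 ms window and 10 ms hop at 22.05 kHz:

```python
import numpy as np

def log_spectrogram(wave, sr=22050, win_ms=20, hop_ms=10, eps=1e-10):
    win = int(sr * win_ms / 1000)          # 441 samples per window
    hop = int(sr * hop_ms / 1000)          # 220 samples per hop
    n_frames = 1 + (len(wave) - win) // hop
    idx = np.arange(win) + hop * np.arange(n_frames)[:, None]
    frames = wave[idx] * np.hanning(win)   # windowed frames, shape (n_frames, win)
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + eps)

t = np.arange(22050) / 22050.0             # 1 s test tone at 440 Hz
feats = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(feats.shape)                          # (n_frames, win // 2 + 1)
```

Each row is one 10 ms step; the frequency axis has win // 2 + 1 = 221 bins, which is what the CNN front end then pools down by 4× in both directions.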
The learning rate is reduced by a factor of 0.2 if the validation loss stops improving for a certain number of epochs. A batch size of 8 is used in all experiments, and an early-stopping callback is used to prevent overfitting. We compare different output label encoding methods: character-level, syllable-level, and sub-word-level encodings. A set of regular-expression rules [22] segments the syllable-level tokens, and Byte Pair Encoding (BPE) creates the sub-word tokens. Since the experiments use different label encoding methods, the results are evaluated with both CER and SER metrics. All experiments are carried out on an NVIDIA Tesla P100 GPU with 16 GB GPU memory and 25 GB RAM (Google Colab). We do not use an external language model in this work; we will investigate combining the decoding process with an external language model in the future.
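CER and SER are both edit-distance rates, differing only in the token unit (characters vs. syllables). A minimal reference implementation (the helper names are ours):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences, via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            # substitution, insertion, deletion
            cur.append(min(prev[j - 1] + (r != h), cur[j - 1] + 1, prev[j] + 1))
        prev = cur
    return prev[-1]

def error_rate(ref_tokens, hyp_tokens):
    """CER when tokens are characters, SER when tokens are syllables."""
    return edit_distance(ref_tokens, hyp_tokens) / len(ref_tokens)

print(error_rate(list("kitten"), list("sitting")))  # 3 edits / 6 chars = 0.5
```

For SER, the same function would be fed syllable lists produced by the sylbreak-style segmentation rules [22] instead of character lists.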
5 Experimental Results

5.1 Effect of Using Different Label Encoding Methods
Little research has been done on character-level Myanmar ASR, so it is interesting to explore Myanmar ASR models with character-level output tokens. Since Myanmar is a syllable-timed language, syllable-level tokens are also considered. The dataset contains 57 unique characters and 866 unique syllables. We also conduct a number of experiments with BPE sub-word tokenization, which requires a predefined vocabulary size; because the dataset is relatively small, only small vocabulary sizes of 100, 300, and 500 BPE sub-word units are used.

Table 2 shows the results of varying the label encoding method. Using large label encoding units is not very beneficial for a small dataset: the syllable-level model (866 tokens) and the BPE model with 500 tokens show high error rates on both the development and test sets. The best results are obtained by the character-level encoder, with 4.69% CER and 11.86% SER on the development set and 4.72% CER and 12.38% SER on the test set.

5.2 Effect of Using Convolutional Layers
Using many convolutional blocks may over-compress the features and make the length of the input sequence smaller than the length of the output labels, which breaks the CTC computation. This typically occurs when training a character-level ASR model, whose output requires at least one time step per output character. For this reason, we limit the amount of downsampling to a maximum of two max-pooling layers. As can be seen in Table 2, convolutional layers tend to increase model performance: all models with convolutional layers outperform their counterparts without them. Moreover, the downsampling effect of the convolutional blocks significantly reduces the training time and speeds up the convergence of the model.
Table 2. Effect of using convolutional layers and different label encoding units on the development and test sets. Each model contains the same number of convolutional layers, 5 BLSTM layers, and a fully connected layer. The input features are not downsampled for the models without convolutional layers. Results are reported as character error rate (CER) and syllable error rate (SER).

| Unit | With CNN: Dev CER | Dev SER | Test CER | Test SER | Without CNN: Dev CER | Dev SER | Test CER | Test SER |
|---|---|---|---|---|---|---|---|---|
| Char | 4.69 | 11.86 | 4.72 | 12.38 | 5.29 | 12.12 | 5.67 | 12.61 |
| Syllable | 21.95 | 20.67 | 22.68 | 22.34 | 22.09 | 23.46 | 23.41 | 24.83 |
| BPE 100 | 16.44 | 18.80 | 13.72 | 16.58 | 19.51 | 26.69 | 18.72 | 25.27 |
| BPE 300 | 9.61 | 19.12 | 10.38 | 20.35 | 9.65 | 20.88 | 9.98 | 21.83 |
| BPE 500 | 10.77 | 24.73 | 11.58 | 27.34 | 22.08 | 34.61 | 22.67 | 36.03 |

5.3 Effect of Using Different Numbers of BLSTM Layers
Since the Myanmar speech corpus is relatively small, the effect of the model size is explored starting from a depth of 3. Only the number of BLSTM layers is varied; the depth of the CNN layers and the other hyperparameters are kept constant. Table 3 shows that test-set performance keeps improving with depth until 6 BLSTM layers, where it degrades. The model with 5 BLSTM layers is the best fit for the small dataset, achieving the overall best results of 4.72% CER and 12.38% SER on the test set.

Table 3. Character-level ASR models with different numbers of BLSTM layers. Convolutional layers are used in every experiment, a fully connected layer follows the BLSTM layers, and the hidden unit size of the BLSTM layers is 512.

| BLSTM layers | Dev CER | Dev SER | Test CER | Test SER |
|---|---|---|---|---|
| 3 | 4.65 | 11.85 | 5.03 | 13.24 |
| 4 | 4.58 | 12.54 | 4.93 | 13.75 |
| 5 | 4.69 | 11.86 | 4.72 | 12.38 |
| 6 | 5.26 | 14.45 | 5.45 | 15.46 |
6 Conclusion

In this paper, we explore CTC-based end-to-end architectures for the Myanmar language. We empirically compare various label encoding methods, different BLSTM depths, and the use of convolutional layers on a low-resource Myanmar speech corpus. This work shows that a well-tuned end-to-end system can achieve state-of-the-art results in closed-domain ASR even for low-resource
languages. As future work, we will investigate integrating a language model into our end-to-end ASR system and will explore other end-to-end multitask techniques.

Acknowledgments. The authors are grateful to the advisors from the University of Information Technology, who gave us helpful comments and suggestions throughout this project. The authors also thank Ye Yint Htoon and May Sabal Myo for help with dataset preparation and for technical assistance.
References 1. Amodei, D., Anubhai, R., Battenberg, E., Case, C., Casper, J., Catanzaro, B., Chen, J., Chrzanowski, M., Coates, A., Diamos, G., Elsen, E., Engel, J., Fan, L., Fougner, C., Han, T., Hannun, A., Jun, B., LeGresley, P., Lin, L., Narang, S., Ng, A., Ozair, S., Prenger, R., Raiman, J., Satheesh, S., Seetapun, D., Sengupta, S., Wang, Y., Wang, Z., Wang, C., Xiao, B., Yogatama, D., Zhan, J., Zhu, Z.: Deep Speech 2: end-to-end speech recognition in English and Mandarin. arXiv:1512.02595 [cs.CL] (2015) 2. Graves, A., Fernández, S., Gomez, F., Schmidhuber, J.: Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In: ICML 2006: Proceedings of the 23rd International Conference on Machine Learning. ACM Press, New York (2006) 3. Chan, W., Jaitly, N., Le, Q., Vinyals, O.: Listen, attend and spell: a neural network for large vocabulary conversational speech recognition. In: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4960–4964. IEEE (2016) 4. Graves, A.: Sequence transduction with recurrent neural networks, arXiv:1211.3711 [cs.NE] (2012) 5. Zweig, G., Yu, C., Droppo, J., Stolcke, A.: Advances in all-neural speech recognition. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4805–4809. IEEE (2017) 6. Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., Prenger, R., Satheesh, S., Sengupta, S., Coates, A., Ng, A.Y.: Deep speech: scaling up end-to-end speech recognition, arXiv:1412.5567 [cs.CL] (2014) 7. Shan, C., Zhang, J., Wang, Y., Xie, L.: Attention-based end-to-end speech recognition on voice search. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4764–4768. IEEE (2018) 8. Li, J., Zhao, R., Hu, H., Gong, Y.: Improving RNN transducer modeling for end-to-end speech recognition. 
In: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 114–121. IEEE (2019) 9. Mon, A.N., Pa, W.P., Thu, Y.K.: Building HMM-SGMM continuous automatic speech recognition on Myanmar Web news. In: International Conference on Computer Applications (ICCA2017), pp. 446–453 (2017) 10. Naing, H.M.S., Pa, W.P.: Automatic speech recognition on spontaneous interview speech. In: Sixteenth International Conferences on Computer Applications (ICCA 2018), Yangon, Myanmar, pp. 203–208 (2018) 11. Nwe, T., Myint, T.: Myanmar language speech recognition with hybrid artificial neural network and hidden Markov model. In: Proceedings of 2015 International Conference on Future Computational Technologies (ICFCT 2015), pp. 116–122 (2015)
12. Naing, H.M.S., Hlaing, A.M., Pa, W.P., Hu, X., Thu, Y.K., Hori, C., Kawai, H.: A Myanmar large vocabulary continuous speech recognition system. In: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pp. 320– 327. IEEE (2015) 13. Mon, A.N., Pa Pa, W., Thu, Y.K.: Improving Myanmar automatic speech recognition with optimization of convolutional neural network parameters. Int. J. Nat. Lang. Comput. (IJNLC) 7, 1–10 (2018) 14. Aung, M.A.A., Pa, W.P.: Time delay neural network for Myanmar automatic speech recognition. In: 2020 IEEE Conference on Computer Applications (ICCA), pp. 1–4. IEEE (2020) 15. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICRL) (2015) 16. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift, arXiv:1502.03167 [cs.LG] (2015) 17. Hannun: Sequence Modeling with CTC. Distill. 2 (2017) 18. Department of Meteorology and Hydrology. https://www.moezala.gov.mm/ 19. DVB TVnews. https://www.youtube.com/channel/UCuaRmKJLYaVMDHrnjhWUcHw 20. Google: Myanmar Tools. https://github.com/google/myanmar-tools 21. ICU - International Components for Unicode. https://site.icu-project.org/ 22. Thu, Y.K.: sylbreak. https://github.com/ye-kyaw-thu/sylbreak
IoT Based Bidirectional Speed Control and Monitoring of Single Phase Induction Motors

Ataur Rahman, Mohammad Rubaiyat Tanvir Hossain, and Md. Saifullah Siddiquee

Department of Electrical & Electronic Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh
[email protected]
Abstract. This paper presents the construction and laboratory investigation of an IoT (Internet of Things) based smart system to control, measure, and monitor the bidirectional speed of single-phase induction motors (SPIMs) remotely. The prototype, consisting of two single-phase induction motors, demonstrates multi-motor control. The motors are turned ON and OFF by specific relay operations. To achieve the desired motor speed, the stator voltage control method is applied using the Pulse Width Modulation (PWM) technique. To reverse the motor's direction of rotation, the stator magnetic field is reversed by swapping the contacts of the auxiliary winding through relay operation. Whenever a desired value for a specific operation is submitted from a specially designed website, the corresponding control signal is generated by a programmed microcontroller, which receives the user's command from the web server over GSM communication. Motor status data is measured with an IR sensor and observed remotely on a monitoring panel integrated with the web application; the results show only a small deviation from direct field measurements. The proposed IoT-based smart motor control system can be used to continuously track, control, and monitor machines, goods, plants, etc., for versatility in multi-purpose applications.
Bidirectional speed control Single-phase
1 Introduction In this modern age, induction motors (IM) are widely used in industry, automotive, aerospace, military, medical, and domestic equipment and appliances because of their low costs, robustness, high power, and low maintenance [1]. The bulk of the electric power that is being consumed globally, is by single-phase induction motors (SPIMs) driving either a fan, pump, or compressor type of load, including their applications in heating, ventilating, and air conditioning [2]. Traditionally, motor-driven systems run at a nearly constant speed and are usually designed to provide a load margin of 20–30% over the full load value for a long duration. A significant portion of the input energy is wasted for controlling their outputs to meet the variable load demands where the power © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1047–1058, 2021. https://doi.org/10.1007/978-3-030-68154-8_88
drawn from the utility remains essentially the same as at full load. This energy loss can be avoided by driving the motors at a speed that produces the desired regulated output: the input power decreases as the speed is decreased. This decrease can be estimated by recognizing that, in an induction motor,

$$\mathrm{Torque} = k_1 (\mathrm{speed})^2 \tag{1}$$

and therefore the power required by the motor-driven system is

$$\mathrm{Power} = k_2 (\mathrm{speed})^3 \tag{2}$$
where $k_1$ and $k_2$ are constants of proportionality. If the motor efficiency can be assumed constant as speed and loading change, then the input power required by the induction motor varies as the speed cubed. Thus, instead of constant-speed operation, a variable-speed drive can yield significant energy conservation. Several methods exist for variable-speed operation of a single-phase induction motor; considering simplicity and low cost, the most common is control of the voltage applied to the motor. Bidirectional rotation control is also important in various applications, such as conveyor belts and exhaust fans: by reversing the direction of rotation, the same motor can serve as a water feed pump and as a smoke exhauster in a boiler.

Present-day industries are increasingly shifting towards automation. As the internet has become widespread, remote accessing, monitoring, and controlling of systems are possible [3]. With rapidly advancing IoT (Internet of Things) technology, networks of physical objects interconnected with electronics can communicate with each other and be managed by computers [4–7]. IoT provides a promising way to build powerful industrial automation using wireless devices, sensors, and web-based intelligent operations [7–9]. Diverse research, design, and implementation work has addressed web-based solutions for controlling and monitoring single-phase induction motors [10, 11]. In [12, 13], the authors proposed schemes to observe IM parameters using the ZigBee protocol, but because of ZigBee's low data rate and high cost, those systems are not suitable for longer distances. P. S. Joshi et al. [14] proposed wireless speed control of IMs using GSM, Bluetooth, and Wi-Fi, which have a shorter communication range compared to IoT.
In [15], an IoT-based IM monitoring system is developed by analyzing sensor data collected from local and cloud servers, a Wi-Fi-enabled Raspberry Pi, and a web application. For predictive maintenance of motors, to avoid delayed interruptions in production, an IoT-based induction motor monitoring scheme using an ARM Cortex is proposed in [16]. The authors in [17] proposed IoT-based induction motor control and monitoring using PLC and SCADA, but that system is costly, complex, and requires trained manpower. The authors in [18] implemented wireless sensor networks based on IoT and Bluetooth Low Energy to monitor and control the speed, torque, and safety of SPIMs. In [19], three separate SPIMs, or one three-phase induction motor, are controlled (ON/OFF) simultaneously according to voltage and current data collected through the internet with an embedded Ethernet board. The majority of web-based implementations
have been for monitoring and speed control of induction motors, while bidirectional rotation of motors has not been widely investigated. This paper addresses this gap by providing a simple web-based communication solution for bidirectional speed control and monitoring of multiple single-phase induction motors. The speed control is performed by the stator voltage control method using the pulse width modulation (PWM) technique [20]. The direction of rotation is changed by reversing the stator magnetic field, swapping the connection of the auxiliary stator winding with simple, low-cost relay switching. The parameters of the prototype are observed on an LCD by a microcontroller and shared with a remote computer or mobile device using IoT through GPRS, which makes the system flexible and user friendly.
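The cube law of Eq. (2) quantifies the savings that motivate variable-speed operation. A quick check under the constant-efficiency assumption stated above (function names are ours):

```python
def torque_ratio(speed_ratio: float) -> float:
    """Torque scales as speed squared, Eq. (1)."""
    return speed_ratio ** 2

def power_ratio(speed_ratio: float) -> float:
    """Input power scales as speed cubed, Eq. (2)."""
    return speed_ratio ** 3

# Running the motor at half the rated speed needs only 1/8 of the rated input power.
print(power_ratio(0.5))  # 0.125
```

This is the same one-eighth figure the paper derives for half-speed operation via stator voltage control.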
2 Proposed System Design

Figure 1 shows the architecture of the proposed system in block diagram form. It mainly consists of a web application developed on a remote computer or smartphone, a GSM/GPRS module for interfacing with the server system, a microcontroller, an LCD, IGBTs with gate drive circuits, bridge rectifiers, IR sensors, a relay module, and two single-phase induction motors (SPIMs).
Fig. 1. Block diagram of IoT based bidirectional speed control and monitoring of SPIM.
The IoT-based bidirectional speed control and status observation of the motors is accomplished through the web application. The GSM/GPRS module acts as an interface between the web server and the microcontroller unit to send and receive command and output signals. Following a specific command input from the field operator, the programmed
microcontroller generates PWM signals. These signals are sent through the isolation circuit to the IGBT gate driver circuits to operate the IGBT switches. By the PWM AC chopping technique, input AC voltages to the stator windings of the induction motors are varied. As a result, the motor speeds are controlled. The IR sensors are used for measuring the RPM of the motors and the results are compared with fine-tuned digital tachometer readings. The relay module with four sets of relays is employed for the motor on/off and direction control. The overall motor status is observed in the LCD and the web application developed in the remote computer or smartphone.
3 Implementation of the Proposed System

The circuit diagram of the proposed scheme for controlling and monitoring the induction motors with IoT is depicted in Fig. 2. Functionally, the system can be divided into four major units, described as follows:
Fig. 2. Circuit diagram of the proposed control scheme
3.1 Speed Control Scheme for Induction Motor
The speed of a single-phase induction motor can be controlled by varying the RMS value of the stator voltage at a constant frequency. This technique is usually used in fan, pump, or compressor type motor-driven systems with high slip. The stator voltage can be controlled by three methods: integral cycle control, phase control, and PWM control. The PWM control method is simple and cost-effective, and has low input
harmonic content compared to the other methods. Here the supply voltage is chopped and modulated by a high-switching-frequency IGBT gate drive signal generated by the microcontroller unit. Changing the duty cycle of the IGBT switch changes the effective value of the load voltage and current. The chopped voltage can be expressed by multiplying the sinusoidal line voltage by the switching signal; a detailed analysis is given in [20]. The switching function can be written as the Fourier series expansion of the pulse over one period:

$$d(t) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos n\omega_s t + b_n \sin n\omega_s t \right) \tag{3}$$
where $a_0$ is the DC component, $a_n$ and $b_n$ are the Fourier coefficients, and $\omega_s$ is the switching frequency. These take the values

$$a_0 = \frac{t_{on}}{T} = \frac{t_{on}}{t_{on} + t_{off}} = D \tag{4}$$

$$a_n = \frac{1}{n\pi} \sin(2\pi n D) \tag{5}$$

$$b_n = \frac{1}{n\pi} \left[ 1 - \cos(2\pi n D) \right] \tag{6}$$
The load voltage is the product of the supply voltage $v_s(t) = V_m \sin \omega t$ and the switching function $d(t)$:

$$v_L(t) = v_s(t)\, d(t) = V_m \sin \omega t \cdot d(t) \tag{7}$$

$$v_L(t) = a_0 V_m \sin \omega t + \sum_{n=1}^{\infty} \left( a_n V_m \cos n\omega_s t \sin \omega t + b_n V_m \sin n\omega_s t \sin \omega t \right) \tag{8}$$
The high-frequency terms in Eq. (8) are filtered out by the motor's internal inductance, so the load voltage can be expressed as

$$v_L(t) = a_0 V_m \sin \omega t = D\, V_m \sin \omega t \tag{9}$$
The RMS value of the stator load voltage is then

$$V_{L,\mathrm{rms}} = \frac{D\, V_m}{\sqrt{2}} \tag{10}$$

Since the input power required by the induction motor varies as the speed cubed, decreasing the motor speed to half the rated speed by the voltage control method to meet a reduced load demand drops the input power drawn from the utility to one-eighth of the rated input. There is therefore a significant reduction in the power consumed.
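Equations (9)–(10) can be checked numerically: chop a mains sine with a high-frequency switching function, average over each switching period (mimicking the inductance filtering), and compare the RMS against $DV_m/\sqrt{2}$. A sketch with NumPy; the mains amplitude and switching frequency are illustrative values, not the paper's hardware parameters:

```python
import numpy as np

Vm, f_line, f_sw, D = 325.0, 50.0, 5000.0, 0.6
fs = 1_000_000                                   # simulation sample rate
t = np.arange(int(fs / f_line)) / fs             # one mains period (20 ms)
v_s = Vm * np.sin(2 * np.pi * f_line * t)
d = ((t % (1 / f_sw)) < D / f_sw).astype(float)  # PWM switching function d(t)
v_chop = v_s * d                                 # chopped stator voltage, Eq. (7)

# Averaging over each switching period approximates the low-pass filtering
# performed by the motor inductance, leaving Eq. (9): ~ D * Vm * sin(wt).
per = int(fs / f_sw)                             # samples per switching period
v_filt = v_chop.reshape(-1, per).mean(axis=1)
rms = np.sqrt((v_filt ** 2).mean())
print(rms, D * Vm / np.sqrt(2))                  # both ~ Eq. (10)
```

The two printed values agree to a fraction of a percent, confirming that the effective stator voltage scales linearly with the duty cycle.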
3.2 ON/OFF and Bidirectional Rotation Control
A simple and low-cost relay module consisting of four sets of relays is used for motor on/off switching and bidirectional rotation control, as shown in Fig. 2. Two relays are connected in series with the power line and each of the two motors. To turn on a motor, the microcontroller sends a low signal to the relay for that motor; the relay coil is energized and closes the connection between the common and 'NO' contacts, connecting the power line to the motor, which is disconnected by default. For bidirectional rotation control, the contacts of the motor's auxiliary winding are swapped by relay operation, so the auxiliary winding is electrically connected in reverse. As the direction of current flow in the auxiliary winding reverses, the stator magnetic field reverses, which reverses the direction of rotor rotation. The on/off control relay is energized only after the winding-swapping relays have switched, to protect the motor from being short-circuited.

3.3 Speed Measurement and Motor Status Monitoring
The speed of the induction motors is measured with an IR sensor, a disc, and the microcontroller, as shown in Fig. 3. The IR sensor emits infrared rays through the disc or blades of the motor, and the sensor's receiver registers high voltage peaks whenever an obstacle or blade cuts the rays. By counting the number of peaks, the programmed microcontroller determines the rotational speed in rpm. The rpm values are sent to the LCD unit and to the web application on a remote computer via the GSM/GPRS module, and are verified against a fine-tuned digital tachometer, as shown in Fig. 4.
Fig. 3. Measurement of the speed of SPIM using IR sensor
Fig. 4. Measurement of the speed of SPIM using a digital tachometer
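The pulse-count-to-RPM conversion is straightforward. A sketch (the counting-window length and the number of pulses per revolution depend on the disc used, which the paper does not specify):

```python
def rpm_from_pulses(pulse_count: int, window_s: float, pulses_per_rev: int = 1) -> float:
    """Convert IR-sensor pulse counts in a fixed time window to shaft speed in RPM."""
    revolutions = pulse_count / pulses_per_rev
    return revolutions * 60.0 / window_s

# 40 pulses in a 1 s window from a single-slot disc -> 2400 RPM.
print(rpm_from_pulses(40, 1.0))          # 2400.0
# A three-blade disc produces 3 pulses per revolution: 120 pulses -> still 2400 RPM.
print(rpm_from_pulses(120, 1.0, 3))      # 2400.0
```

A longer counting window improves resolution at the cost of update rate, which is one likely source of the small deviation from the tachometer readings reported below.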
3.4 IoT Based Communication Platform
As the user interface for the framework, a dynamic web application is built with HTML, CSS, and PHP and connected to the internet through a web hosting service (000webhost). It is essentially a two-page website: one page receives input commands from the user, as shown in Fig. 5, and the other monitors the machine control parameters. Through the input tab, users can send motor ON/OFF commands, the desired motor speed as a percentage of the rated speed, and the preferred direction of rotation. Communication with the server and the GPRS module uses the HTTP protocol, applying the POST method of PHP to send device data. The data parameters are passed to the GPRS module, which communicates with the HTTP server and the microcontroller using the AT command set, including connection setup commands (AT+CGATT, AT+SAPBR, AT+HTTPINIT, etc.) and GET/POST commands (AT+HTTPPARA, AT+HTTPDATA, AT+HTTPACTION, AT+HTTPREAD, and so on). The microcontroller sends motor status data such as ON/OFF state, direction, and speed (from the measuring unit) to the web server via GPRS through reverse communication, and the designed website receives the status data using the GET method of PHP. The motor status can be observed on the monitoring unit, e.g., on a remote computer or mobile device using the web application, and on the LCD, as shown in Figs. 6 and 7, respectively.
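The HTTP-over-GPRS exchange reduces to a fixed sequence of AT commands. A sketch that only builds the command strings for a SIM800-class module (the commands match those named above; the URL, payload, and APN are placeholders, and actual serial I/O and response parsing are omitted):

```python
def gprs_http_post(url: str, payload: str, apn: str = "internet"):
    """AT command sequence for an HTTP POST over GPRS (SIM800-class module)."""
    return [
        'AT+CGATT=1',                         # attach to the GPRS service
        'AT+SAPBR=3,1,"Contype","GPRS"',      # configure the bearer profile
        f'AT+SAPBR=3,1,"APN","{apn}"',
        'AT+SAPBR=1,1',                       # open the bearer
        'AT+HTTPINIT',
        'AT+HTTPPARA="CID",1',
        f'AT+HTTPPARA="URL","{url}"',
        f'AT+HTTPDATA={len(payload)},10000',  # declare payload size, then send bytes
        'AT+HTTPACTION=1',                    # 1 = POST (0 = GET)
        'AT+HTTPREAD',                        # read the server reply
        'AT+HTTPTERM',
    ]

cmds = gprs_http_post("http://example.000webhostapp.com/status.php",
                      "m1=on&dir=cw&speed=80")
print(len(cmds))
```

In the real firmware each command is written to the module's UART and the `OK`/`+HTTPACTION:` responses are checked before the next step.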
Fig. 5. Web application control panel for the user interface.
Fig. 6. Web application based panel for monitoring the motor status
Fig. 7. Observation of motor status in LCD
4 Results and Discussion

The laboratory implementation of multiple-SPIM speed and direction control and monitoring using IoT technology is shown in Fig. 8. As described earlier, an operator at a remote location can observe the system and provide command inputs for the desired operation of the motors.
IoT Based Bidirectional Speed Control and Monitoring of SPIMs
1055
Fig. 8. A laboratory prototype of IoT based bidirectional speed control of single-phase induction motors
The proposed system offers ON/OFF control, clockwise (CW) and counterclockwise (CCW) rotation control, and speed control as a percentage of rated speed through the specially designed website over the internet. The motor parameters, measured using the IR sensors, are processed by the microcontroller, shared with a remote computer/mobile device via GPRS, and also displayed on the LCD for field operators. As the PWM duty cycle is varied, the resulting SPIM speed variation has been observed on the web server and the LCD and compared with field measurement data obtained from a fine-tuned digital tachometer. The deviation between the field-measured tachometer data and the IR sensor-based remote data has been found to be negligible. This comparison is presented as a comparative bar diagram in Fig. 9.
Fig. 9. Comparative bar diagram of digital tachometer and IR sensor-based speed measurements.
Concerning the PWM duty cycle, the practically measured RPM values are plotted against the percent duty cycle variation, as shown in Fig. 10, and the second-order polynomial of Eq. (11) has been derived:

N = -0.641 D^2 + 113.5619 D - 2557.6    (11)
Here, N is the speed in RPM and D is the duty cycle in percent (%). This equation can be used to obtain the motor speed empirically from the corresponding percent duty ratio. For various desired speed values of the single-phase induction motor, the required PWM duty cycles obtained from the above equation are summarized in Table 1.
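Assuming the signs of Eq. (11) recovered above, the relation can be evaluated directly, and the duty cycle for a desired speed follows from the quadratic formula (taking the root on the rising branch of the curve). This is an illustrative sketch, not the authors' code:

```python
import math

A, B, C = -0.641, 113.5619, -2557.6  # coefficients of Eq. (11)

def rpm_from_duty(d: float) -> float:
    """Motor speed N (RPM) from PWM duty cycle D (%), Eq. (11)."""
    return A * d**2 + B * d + C

def duty_from_rpm(n: float) -> float:
    """Duty cycle D (%) for a desired speed N, inverting Eq. (11)."""
    disc = B * B - 4 * A * (C - n)
    # root on the rising branch of the concave parabola (D below the vertex)
    return (-B + math.sqrt(disc)) / (2 * A)
```

For example, `duty_from_rpm(2400)` gives roughly the 78% duty cycle listed in Table 1 for 2400 RPM.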
Fig. 10. Polynomial curve fitting using measured data.
Table 1. Percentage of the duty cycle for different motor speed values

Speed, N (RPM)   Duty cycle, D (%)
1700             55.87
1800             58.03
1900             60.36
2000             62.91
2100             65.75
2200             69.01
2300             72.96
2400             78.40
5 Conclusion

Speed and direction control and monitoring of multiple SPIMs using IoT technology have been presented in this paper. The speed control of the induction motors has been performed by the stator voltage control method using the PWM technique. The ON/OFF switching and the bidirectional rotation (CW/CCW) of the motors have been controlled by the cheap and simple operation of relays. The motor operational parameters have been monitored remotely on a computer/smartphone using IoT through GPRS. The SPIM speed variation with the PWM duty cycle has been observed and compared with field measurement data obtained from a fine-tuned digital tachometer; the deviation between the direct field measurement using the tachometer and the IR sensor-based remote measurement has been found to be small. The analysis has also shown that the desired motor RPM can be calculated beforehand from the percentage of the duty cycle using the derived second-order polynomial equation. In industrial, manufacturing, agricultural, and household applications, the proposed IoT based smart control system can be extended with more sensors to continuously track multiple control variables of machines, plants, etc. An Android based application could also be developed to make the system more user-friendly.
References
1. Sarb, D., Bogdan, R.: Wireless motor control in automotive industry. In: 2016 24th Telecommunications Forum (TELFOR), pp. 1–4. Belgrade (2016). https://doi.org/10.1109/TELFOR.2016.7818790
2. Mohan, N.: Power Electronics: A First Course, 1st edn. Wiley, USA (2012)
3. Internet World Stats. https://www.internetworldstats.com/stats.htm. Accessed 03 June 2019
4. Doshi, N.: Analysis of attribute-based secure data sharing with hidden policies in smart grid of IoT. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_54
5. Khan, A.A., Mouftah, H.T.: Energy optimization and energy management of home via web services in smart grid. In: IEEE Electrical Power and Energy Conference, pp. 14–19, London (2012). https://doi.org/10.1109/EPEC.2012.6474940
6. Cheah, P.H., Zhang, R., Gooi, H.B., Yu, H., Foo, M.K.: Consumer energy portal and home energy management system for smart grid applications. In: 10th International Power & Energy Conference (IPEC), pp. 407–411. Ho Chi Minh City (2012). https://doi.org/10.1109/ASSCC.2012.6523302
7. Kuzlu, M., Pipattanasomporn, M., Rahman, S.: Communication network requirements for major smart grid applications in HAN, NAN, and WAN. Comput. Netw. 67, 74–88 (2014). https://doi.org/10.1016/j.comnet.2014.03.029
8. Tsai, W., Shih, Y., Tsai, T.: IoT-type electric fan: remote-controlled by smart-phone. In: Third International Conference on Computing Measurement Control and Sensor Network (CMCSN), pp. 12–15. Matsue (2016). https://doi.org/10.1109/CMCSN.2016.17
9. Kuzlu, M., Rahman, M.M., Pipattanasomporn, M., Rahman, S.: Internet-based communication platform for residential DR programmes. IET Networks 6(2), 25–31 (2017). https://doi.org/10.1049/iet-net.2016.0040
10. Kamble, I., Patil, Y.M.: A review of parameters monitoring and controlling system for industrial motors using wireless communication. Int. J. Res. Appl. Sci. Eng. Technol. 7(1), 47–49 (2019). https://doi.org/10.22214/ijraset.2019.1010
11. Potturi, S., Mandi, R.P.: Critical survey on IoT based monitoring and control of induction motor. In: IEEE Student Conference on Research and Development (SCOReD), pp. 1–6. Selangor, Malaysia (2018). https://doi.org/10.1109/SCORED.2018.8711222
12. Patil, R.R., Date, T.N., Kushare, B.E.: ZigBee based parameters monitoring system for induction motor. In: IEEE Students' Conference on Electrical, Electronics and Computer Science, Bhopal, pp. 1–6 (2014). https://doi.org/10.1109/SCEECS.2014.6804469
13. Khairnar, V.C., Sandeep, K.: Induction motor parameter monitoring system using ZigBee protocol & MATLAB GUI: automated monitoring system. In: Fourth International Conference on Advances in Electrical, Electronics, Information, Communication, and Bio-Informatics (AEEICB), Chennai, pp. 1–6 (2018). https://doi.org/10.1109/AEEICB.2018.8480992
14. Joshi, P.S., Jain, A.M.: Wireless speed control of an induction motor using PWM technique with GSM. IOSR J. Electr. Electron. Eng. 6(2), 1–5 (2013). https://doi.org/10.9790/16760620105
15. Rekha, V.S.D., Ravi, K.S.: Induction motor condition monitoring and controlling based on IoT. Int. J. Electron. Electr. Comput. Syst. 6(9), 74–89 (2015)
16. Şen, M., Kul, B.: IoT-based wireless induction motor monitoring. In: XXVI International Scientific Conference Electronics (ET), pp. 1–5. Sozopol (2017). https://doi.org/10.1109/ET.2017.8124386
17. Venkatesan, L., Kanagavalli, S., Aarthi, P.R., Yamuna, K.S.: PLC SCADA based fault identification and protection for three-phase induction motor. TELKOMNIKA Indonesian J. Electr. Eng. 12(8), 5766–5773 (2014)
18. Kathiresan, S., Janarthanan, M.: Design and implementation of industrial automation using IoT and Bluetooth LE. Int. J. Adv. Res. Trends Eng. Technol. (IJARTET) 3(19), 335–338 (2016). https://doi.org/10.20247/IJARTET.2016.S19040062
19. Çakır, A., Çalış, H., Turan, G.: Remote controlling and monitoring of induction motors through internet. TELKOMNIKA Indonesian J. Electr. Eng. 12(12), 8051–8059 (2014). https://doi.org/10.11591/telkomnika.v12i12.6719
20. Yildirim, D., Bilgic, M.: PWM AC chopper control of single-phase induction motor for variable-speed fan application. In: 2008 34th Annual Conference of IEEE Industrial Electronics, Orlando, FL, pp. 1337–1342 (2008). https://doi.org/10.1109/IECON.2008.4758148
Missing Image Data Reconstruction Based on Least-Squares Approach with Randomized SVD

Siriwan Intawichai and Saifon Chaturantabut

Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathumthani 12120, Thailand
[email protected], [email protected]
Abstract. In this paper, we introduce an efficient algorithm for reconstructing incomplete images based on optimal least-squares (LS) approximation. Generally, the LS method requires a low-rank basis set that can represent the overall characteristics of an image, which can be obtained optimally via the singular value decomposition (SVD). This basis is called the proper orthogonal decomposition (POD) basis. To significantly decrease the computational cost of the SVD, this work employs a randomized singular value decomposition (rSVD) to compute the basis from the available image pixels. To preserve the 2-dimensional structure of the image, the test image is first subdivided into many small 2-dimensional patches. The complete patches are used to compute the POD basis for reconstructing corrupted patches. For each incomplete patch, the known pixels in the neighborhood around the missing components are used in the LS approximation together with the POD basis in the reconstruction process. The numerical tests compare the execution time used in computing this optimal low-rank basis by rSVD and SVD, and demonstrate the accuracy of the resulting image reconstructions.

Keywords: Missing data reconstruction · Singular value decomposition · Randomized SVD · Least-squares approximation
1 Introduction

Missing data problems are important issues in many research fields, such as biology, medicine, engineering, image processing, and remote sensing. In image processing, pixels in some images may be corrupted or simply missing. In some cases, pixels may be randomly deleted in order to save network bandwidth during data transmission. Missing data reconstruction methods have been extensively developed to restore those corrupted or missing pixels. In 2008, Emmanuel J. C. et al. [1] considered matrix completion for the recovery of a data matrix from a sampling of its entries, which led to the notion of missing data reconstruction algorithms. For example, Jin Z. et al. [2] and Feilong C. et al. [3] demonstrated reconstruction algorithms based on sparse matrix completion.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1059–1070, 2021. https://doi.org/10.1007/978-3-030-68154-8_89
1060
S. Intawichai and S. Chaturantabut
Singular value decomposition (SVD) is one of the fundamental techniques in matrix completion and can be used for reconstructing missing data. SVD is the most widely used method for matrix decomposition (see [4–8]). It is a stable and effective method to split a system into a set of linearly independent components. SVD is also the optimal matrix decomposition in a least-squares sense, in that it packs the maximum signal energy into as few coefficients as possible (see [7] for more detail). SVD is adapted in dimensionality reduction techniques that are applied to many problems, such as low-rank matrix approximation [6, 7], image processing (see [9–12]), image compression ([10, 11]), face recognition [11], and missing data image reconstruction ([12–15]). Because the traditional way of computing the full SVD of a high-dimensional matrix is often expensive as well as memory intensive, improved methods that decrease the computational work have been proposed. Sarlos T. [16], Liberty E. et al. [17], and Halko N. et al. [18] introduced the randomized singular value decomposition (rSVD), a more robust approach based on random projections. These methods decrease the computational work of extracting low-rank approximations while preserving accuracy. In [19] and [20], rSVD methods were applied to reconstruct missing data components of an image with the notion of least-squares approximation. The least-squares method is used for estimating missing data with a low-rank basis that is optimal in the Euclidean norm, which is also called the proper orthogonal decomposition (POD) basis. In general, the POD basis can be computed by using the SVD. However, computing the SVD can be time consuming and can dominate the execution time of the reconstruction process. The rSVD method is therefore used for improving the accuracy and decreasing the computation times. The image reconstruction method in [19] and [20] computed a POD basis by using all complete columns in the image.
Then, each incomplete column with missing pixels was approximated by using this POD basis with the coefficients computed from the remaining pixels in that column. This approach was shown to decrease the computation times with equivalent reconstruction errors when compared to the traditional approach. However, this approach may not be applicable or efficient when almost all columns of the image are incomplete, or when the incomplete columns contain missing pixels in different rows. Moreover, this approach may not preserve the 2-dimensional structure of the original image. This work resolves the above limitations by separating the test image into many small 2-dimensional patches and then using the complete patches to build the POD basis for reconstructing corrupted patches. For each incomplete patch, the known pixels in the neighborhood around the missing components are used in the LS approximation together with the POD basis to reconstruct the missing pixels. This approach provides the flexibility of controlling the size of the patches, and hence the number of complete patches used for the POD basis can be chosen arbitrarily. This reconstruction approach also applies the rSVD method to compute the POD basis in order to decrease the computation times. The remainder of this paper is organized as follows. In Sect. 2, we review the background on SVD and rSVD. The approach for reconstructing missing data based on rSVD via the least-squares method is discussed in Sect. 3. The numerical experiments in Sect. 4 use these approaches for constructing the projection basis and compare the CPU times together with the reconstruction errors. The rSVD
Missing Image Data Reconstruction
1061
approach is shown to use the least execution time, for sufficiently small dimensions of the POD basis, while providing the same level of accuracy as the other approach. Finally, some concluding remarks are given in Sect. 5.
2 Background Knowledge

2.1 Singular Value Decomposition
The singular value decomposition (SVD) of a matrix X = [x1, x2, ..., xn] ∈ R^(m×n) can be expressed as

X = U Σ V^T,    (1)

where U ∈ R^(m×m) and V ∈ R^(n×n) are matrices with orthonormal columns. The column vectors of U and V are the left and right singular vectors, denoted u_i and v_i, respectively, and Σ = diag(σ1, ..., σr) ∈ R^(m×n), r = min{m, n}, where σ1 ≥ σ2 ≥ ... ≥ σr > 0 are the singular values. The superscript T denotes the transpose of a matrix. The left singular vectors of X are eigenvectors of X X^T, and the right singular vectors of X are eigenvectors of X^T X. The truncated SVD is obtained by computing the full SVD and then truncating it, keeping the top k dominant singular values and their corresponding singular vectors:

X_k = U_k Σ_k V_k^T = ∑_{i=1}^{k} σ_i u_i v_i^T,    (2)

where k < r is the numerical rank, U_k ∈ R^(m×k) and V_k ∈ R^(n×k) are matrices with orthonormal columns, and Σ_k = diag(σ1, ..., σk) ∈ R^(k×k) with σ1 ≥ σ2 ≥ ... ≥ σk > 0. Here, X_k = [x̃1, x̃2, ..., x̃n] is the best rank-k approximation to the matrix X, with low-rank approximation error measured in the 2-norm or the Frobenius norm:

||X − X_k||_2^2 = σ_{k+1}^2,  or  ||X − X_k||_F^2 = ∑_{i=1}^{n} ||x_i − x̃_i||_2^2 = ∑_{l=k+1}^{r} σ_l^2.    (3)

The SVD method is summarized in Algorithm 1.

Algorithm 1: The SVD algorithm
INPUT: A data matrix X ∈ R^(m×n) with target rank k.
OUTPUT: The SVD of X: U, Σ, V
Step 1. Set B = X^T X
Step 2. Compute the eigendecomposition B = V D V^T
Step 3. Calculate Σ = D^(1/2)
Step 4. Compute U = X V Σ^(−1)
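As a concrete check of the truncated SVD of Eq. (2) and its error bound in Eq. (3), the following NumPy sketch (illustrative, not part of the paper) forms the rank-k truncation and verifies that its spectral-norm error equals the (k+1)-th singular value:

```python
import numpy as np

def truncated_svd(X: np.ndarray, k: int):
    """Rank-k truncated SVD, Eq. (2): keep the k dominant singular triplets."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
k = 5
Uk, sk, Vtk = truncated_svd(X, k)
Xk = Uk @ np.diag(sk) @ Vtk

# Eq. (3): ||X - X_k||_2 equals the (k+1)-th singular value of X
s = np.linalg.svd(X, compute_uv=False)
assert np.isclose(np.linalg.norm(X - Xk, 2), s[k])
```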
2.2 Randomized Singular Value Decomposition
We define the randomized singular value decomposition (rSVD) of X as

X ≈ X̂_k = Û_k Σ̂_k V̂_k^T,    (4)

where k < r is the numerical rank, Û_k ∈ R^(m×k) and V̂_k ∈ R^(n×k) are matrices with orthonormal columns, and Σ̂_k ∈ R^(k×k) is a diagonal matrix with singular values σ̂1 ≥ σ̂2 ≥ ... ≥ σ̂k > 0. Details are given as follows.

Define the random projection of a matrix as

Y = X Ω,    (5)

where Ω is a random matrix. The rSVD algorithm as considered in [18] explores approximate matrix factorizations using random projections, separating the process into two stages. In the first stage, random sampling is used to obtain a reduced matrix whose range approximates the range of the data matrix: for a given ε > 0, we wish to find a matrix Q with orthonormal columns such that

||X − Q Q^T X||_2^2 ≤ ε.    (6)

Without loss of generality, we assume Q ∈ R^(m×l), l ≤ n. The columns of Q form an orthogonal basis for the range of X Ω, which is an approximation to the range of X, where Ω is a matrix composed of random vectors. The second stage of the rSVD method computes the SVD of B := Q^T X. Suppose B = Ũ S V^T is the SVD of B, which can be obtained from the orthogonal projection of X onto the low-dimensional subspace spanned by the columns of Q. We finally obtain the approximated POD basis Û of X from the product Q Ũ.

We would like the basis matrix Q to contain as few columns as possible, but it is even more important to have an accurate approximation of the input matrix. There are several methods for constructing a matrix Q, such as the QR factorization, the eigenvalue decomposition, or the SVD. In this work, we compute this matrix by using the QR decomposition, as summarized in Algorithm 2.

Algorithm 2: The rSVD algorithm
INPUT: A data matrix X ∈ R^(m×n) with target rank k and an oversampling parameter p
OUTPUT: The rSVD of X: U, Σ, V
Step 1. Draw a random matrix Ω with dimension n × (k + p)
Step 2. Form the matrix product Y = X Ω
Step 3. Construct Q from the QR decomposition of Y
Step 4. Set B = Q^T X
Step 5. Compute an SVD of the small matrix: B = Ũ S V^T
Step 6. Set U = Q Ũ
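Algorithm 2 maps directly onto a few NumPy calls; the sketch below is an illustrative implementation, not the authors' code:

```python
import numpy as np

def rsvd(X: np.ndarray, k: int, p: int = 5, seed: int = 0):
    """Randomized SVD of X with target rank k and oversampling parameter p."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((X.shape[1], k + p))  # Step 1: random test matrix
    Y = X @ Omega                                     # Step 2: sample the range of X
    Q, _ = np.linalg.qr(Y)                            # Step 3: orthonormal basis for range(Y)
    B = Q.T @ X                                       # Step 4: small projected matrix
    U_tilde, s, Vt = np.linalg.svd(B, full_matrices=False)  # Step 5: SVD of B
    U = Q @ U_tilde                                   # Step 6: lift the left factor back
    return U[:, :k], s[:k], Vt[:k, :]
```

For a matrix of exact rank k, the k-term product U diag(s) V^T recovers X up to rounding error; for general matrices the quality depends on the singular value decay and on p.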
3 Missing Image Data Reconstruction Approach

Let X be a gray-scale image. We begin by extracting all the √m × √m patches from the image X to form a matrix S = [s1, s2, ..., sn] ∈ R^(m×n), where s_i (i = 1, 2, ..., n) are the vectorized image patches ordered as the columns of S, m is the number of pixels in each patch, and n is the number of patches. The patches split into known patches and corrupted patches, which form complete and incomplete data vectors, respectively. We need to reconstruct the input image X through the matrix S, as illustrated in Fig. 1, which gives a general framework for image reconstruction. We apply the approach in [19, 20] to approximate an incomplete data vector, using a projection onto a subspace spanned by a basis that represents the related complete data vectors. First, let {s1, s2, ..., s_ns} ⊂ R^n be a complete data set, and form the matrix of complete data Sc = [s1, s2, ..., s_ns] ∈ R^(n×ns). This matrix Sc will be used to compute the projection basis matrix for approximating incomplete patches. Let ŝ ∈ R^n be an incomplete vectorized patch and n = nc + ng, where nc and ng are the numbers of known and unknown components, respectively. Suppose that C = [e_c1, ..., e_cnc] ∈ R^(n×nc) and G = [e_g1, ..., e_gng] ∈ R^(n×ng), where e_ci, e_gi ∈ R^n are the ci-th and gi-th columns of the identity matrix I_n, and {c1, c2, ..., cnc}, {g1, g2, ..., gng} ⊂ {1, 2, ..., n} are the indices of the known and unknown components, respectively, of ŝ.
Fig. 1. The overview of missing image data reconstruction approach.
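The patch-extraction step just described can be sketched as follows (an illustrative NumPy version, since the paper does not give its implementation): each non-overlapping block is flattened into one column of S.

```python
import numpy as np

def image_to_patches(X: np.ndarray, ps: int) -> np.ndarray:
    """Split a grayscale image into non-overlapping ps x ps patches,
    vectorized as the columns of a (ps*ps) x n_patches matrix S."""
    h, w = X.shape
    cols = [X[i:i + ps, j:j + ps].reshape(-1)
            for i in range(0, h, ps)
            for j in range(0, w, ps)]
    return np.stack(cols, axis=1)

# A 512 x 512 image with 32 x 32 patches gives S of shape (1024, 256),
# matching Case I in Sect. 4.
```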
Let ŝ_c := C^T ŝ ∈ R^(nc) and ŝ_g := G^T ŝ ∈ R^(ng). Then, the known components and the unknown components are given in the vectors ŝ_c and ŝ_g, respectively. An overview of the steps described in this section for approximating an incomplete image is shown in Fig. 2.
Fig. 2. Stage-diagram of the proposed missing image data reconstruction approach
Note that pre-multiplying by C^T is equivalent to extracting the nc rows corresponding to the indices {c1, c2, ..., cnc}. Similarly, pre-multiplying by G^T is equivalent to extracting the ng rows corresponding to the indices {g1, g2, ..., gng}. The missing components contained in ŝ_g will be approximated by first projecting ŝ onto the column span of a basis matrix U of rank k:

ŝ ≈ U a, or ŝ_c ≈ U_c a and ŝ_g ≈ U_g a,

for some coefficient vector a ∈ R^k, where U_c := C^T U ∈ R^(nc×k) and U_g := G^T U ∈ R^(ng×k). The known components contained in ŝ_c are then used to determine the coefficient vector a through the approximation ŝ_c ≈ U_c a, from the following least-squares problem:

min_{a ∈ R^k} ||ŝ_c − U_c a||_2^2.    (7)

The solution of the above problem is given by a = U_c^† ŝ_c, where U_c^† = (U_c^T U_c)^(−1) U_c^T is the Moore–Penrose inverse. That is,

ŝ_g ≈ U_g a = U_g U_c^† ŝ_c.    (8)
The details of these steps are provided in Algorithm 3 below.

Algorithm 3: Standard POD Least-Squares approach
INPUT: Complete data set {s_j}_{j=1}^{ns} ⊂ R^n; incomplete data ŝ ∈ R^n with known entries in ŝ_c ∈ R^(nc) and unknown entries in ŝ_g ∈ R^(ng), where n = nc + ng
OUTPUT: Approximation of ŝ_g
Step 1. Create the snapshot matrix S = [s1, s2, ..., s_ns] ∈ R^(n×ns) and let r = rank(S)
Step 2. Construct a basis U of rank k ≤ r for S
Step 3. Find the coefficient vector a from ŝ_c by solving the least-squares problem: min_a ||ŝ_c − U_c a||_2^2
Step 4. Compute the approximation ŝ_g ≈ U_g a
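Once the basis is available, the least-squares gap filling of Steps 3–4 is a short computation. The sketch below (illustrative, with hypothetical variable names) solves Eq. (7) with `np.linalg.lstsq` and applies Eq. (8):

```python
import numpy as np

def reconstruct_missing(U, s_hat, known_idx, missing_idx):
    """Approximate the missing entries of a vectorized patch s_hat, given a
    POD basis U and the index sets of known and missing components."""
    Uc = U[known_idx, :]                  # rows of U at the known components
    a, *_ = np.linalg.lstsq(Uc, s_hat[known_idx], rcond=None)  # Eq. (7)
    return U[missing_idx, :] @ a          # Eq. (8): U_g a = U_g U_c^+ s_c

# Sanity check: a patch lying exactly in the span of a rank-3 POD basis is
# recovered exactly from its known entries.
rng = np.random.default_rng(2)
S = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank-3 snapshots
U = np.linalg.svd(S, full_matrices=False)[0][:, :3]              # POD basis
s = S[:, 0]
known, missing = list(range(15)), list(range(15, 20))
assert np.allclose(reconstruct_missing(U, s, known, missing), s[missing])
```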
Next, we consider the optimal basis obtained from the singular value decomposition (SVD), since it is optimal in the least-squares sense. The basis defined above can be obtained from the left singular vectors of the matrix S. Recall that S = [s1, s2, ..., s_ns] ∈ R^(n×ns) and k < r = rank(S). The SVD of S is S = U Σ V^T, where U = [u1, ..., ur] ∈ R^(n×r) and V = [v1, ..., vr] ∈ R^(ns×r) are matrices with orthonormal columns and Σ = diag(σ1, ..., σr) ∈ R^(r×r) with σ1 ≥ σ2 ≥ ... ≥ σr > 0. The optimal solution of the least-squares problem min_{Sk} ||S − Sk||_F^2 with rank(Sk) = k is Sk = U_k Σ_k V_k^T, with minimum error ||S − Sk||_2^2 = σ_{k+1}^2 or ||S − Sk||_F^2 = ∑_{l=k+1}^{r} σ_l^2. Then the optimal orthonormal basis of rank k (the POD basis) is the matrix formed by the first k left singular vectors, i.e. U_k = [u1, ..., uk] ∈ R^(n×k), k ≤ r. However, it can be computationally intensive to obtain the standard SVD. We will use the rSVD method of Sect. 2.2 to find the optimal orthonormal basis, which reduces the computational complexity of the standard approach for computing the SVD.
4 Numerical Experiments

In this section, we present numerical experiments with the rSVD and SVD methods for constructing the projection basis and compare the CPU times together with the reconstruction errors, measured by the relative error in the 2-norm. We use a standard test image, the Lena picture, considered in gray scale with size 512 × 512, shown in Fig. 3. We consider this image with 2.75% missing pixels, in blocks of missing pixels of size 15 × 20 pixels spread over the test image. We consider two cases with different patch sizes for image reconstruction:

Case I. Clipping the image into 256 patches of size 32 × 32 pixels, so that the dimension of the matrix representing this image is 1024 × 256.
Case II. Clipping the image into 1024 patches of size 16 × 16 pixels, so that the dimension of the matrix representing this image is 256 × 1024.
Fig. 3. The standard test image, the Lena picture: (a) the original gray-scale image, (b) the incomplete image with missing pixels.
Note that the number of known patches must be sufficient for computing the basis, and the available pixels in each corrupted patch must be sufficient to reconstruct the missing pixels. The performance in both cases is shown in Figs. 4 and 5.
Fig. 4. The reconstructed images (when clipped into 256 patches of size 32 × 32 pixels) using the rSVD method. The images in (a)–(e) show the reconstruction using a basis of rank k = 10, 15, 20, 25, 30, respectively.
Using the reconstruction approach explained in this work, we investigate the results when ranks k = 10, 15, 20, 25, and 30 are used. For the rSVD method, the reconstructed results are accurate and appear indistinguishable from the original image, as shown in Fig. 4. The comparisons between the rSVD and SVD methods for these two cases are shown in Fig. 5 for ranks k = 10, 20, 30.
The relative errors of the reconstruction results and the computational times for constructing the basis set are shown in Fig. 6 and Fig. 7, respectively. In addition, the efficacy of the proposed reconstruction strategy is measured by the peak signal-to-noise ratio (PSNR), as shown in Table 1 and Fig. 8.
Fig. 5. The reconstructed images of case I, (1a)–(3a) and (1b)–(3b) show the images when using the rSVD method and SVD method, respectively with rank k = 10, 20, 30. Similarly, the reconstructed images of case II, (1c)–(3c) and (1d)–(3d) show the images when using the rSVD method and SVD method, respectively with rank k = 10, 20, 30.
1068
S. Intawichai and S. Chaturantabut
Fig. 6. Relative errors of the reconstruction results of the Lena picture when 2.75% of pixels are missing: (a) Case I and (b) Case II.
Fig. 7. Computational time for computing the basis used in the reconstruction of the Lena picture when 2.75% of pixels are missing: (a) Case I and (b) Case II.

Table 1. Reconstruction performance (PSNR, dB) of images using the rSVD and SVD methods with different ranks k.

         Case I.            Case II.
         rSVD      SVD      rSVD      SVD
k = 10   32.3814   32.3476  32.2426   31.3460
k = 15   32.4651   32.3421  32.2796   31.0623
k = 20   32.0866   32.0097  31.7313   30.9716
k = 25   32.2250   32.0922  31.9056   30.6867
k = 30   32.1546   31.8980  31.7345   30.5484
From Table 1, both cases show that the rSVD method is slightly more accurate than the SVD method. Moreover, while the rSVD method gives slightly higher PSNR values in some test cases, it uses significantly less computation time.
Fig. 8. Reconstruction performance in terms of PSNR: (a) Case I and (b) Case II.
5 Conclusions

This work presented an image reconstruction approach. A given incomplete image was first divided into many patches; only the corrupted patches are reconstructed, while the known patches are used to compute the POD basis. The available image pixels in the patches were used to form a low-dimensional subspace and to approximate the missing pixels by applying the least-squares method. By using the available pixels around the missing pixels in the corrupted patches, the incomplete image was shown to be approximated efficiently and accurately. Instead of a traditional approach based on the standard SVD for computing this low-dimensional basis, this work used the rSVD, which was shown in the numerical tests to give substantially less reconstruction time with the same order of accuracy.

Acknowledgments. The authors gratefully acknowledge the financial support provided by the Thammasat University Research Fund, Contract No. TUGR 2/12/2562, and the Royal Thai Government Scholarship in the Area of Science and Technology (Ministry of Science and Technology).
References
1. Emmanuel, J.C., Benjamin, R.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2008)
2. Jin, Z., Chiman, K., Bulent, A.: A high performance missing pixel reconstruction algorithm for hyperspectral images. In: Proceedings of 2nd International Conference on Applied and Theoretical Information Systems Research (2012)
3. Feilong, C., Miaaomiao, C., Yuanpeng, T.: Image interpolation via low-rank matrix completion and recovery. IEEE Trans. Circuits Syst. Video Technol. 25(8), 1261–1270 (2015)
4. Pete, G.W.S.: On the early history of the singular value decomposition. SIAM Rev. 35, 551–566 (1993). http://www.jstor.org/stable/2132388
5. Dan, K.: A singularly valuable decomposition: the SVD of a matrix. The College Math. J. 27(1), 2–23 (1996)
6. Dimitris, A., Frank, M.: Fast computation of low-rank matrix approximations. J. Assoc. Comput. Mach. 54 (2007)
7. Gilbert, S.: Linear Algebra and its Applications, 3rd edn. Harcourt Brace Jovanovich, San Diego, USA (1988)
8. Gene, H.G., Charies, F.V.L.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore, Maryland (1996)
9. Harry, C.A., Claude, L.P.: Singular value decompositions and digital image processing. 24, 26–53 (1976)
10. Samruddh, K., Reena, R.: Image compression using singular value decomposition. Int. J. Adv. Res. Technol. 2(8), 244–248 (2013)
11. Lijie, C.: Singular Value Decomposition Applied to Digital Image Processing. Division of Computing Studies, Arizona State University Polytechnic Campus, Mesa, Arizona, 1–15
12. Sven, O.A., John, H.H., Patrick, W.: A critique of SVD-based image coding systems. In: IEEE International Symposium on Circuits and Systems (ISCAS), vol. 4, pp. 13–16 (1999)
13. Vitali, V.S., Roger, L.: Fast PET image reconstruction based on SVD decomposition of the system matrix. IEEE Trans. Nuclear Sci. 48(3), 761–767 (2001)
14. Rowayda, A.S.: SVD based image processing applications: state of the art, contributions and research challenges. Int. J. Adv. Comput. Sci. Appl. 3(7), 26–34 (2012)
15. Davi, M.L., João, P.C.L.C., João, L.A.C.: Improved MRI reconstruction and denoising using SVD-based low-rank approximation. In: Workshop on Engineering Applications, pp. 1–6 (2012)
16. Sarlos, T.: Improved approximation algorithms for large matrices via random projections. In: 47th Annual IEEE Symposium on Foundations of Computer Science, pp. 143–152 (2006)
17. Liberty, E., Woolfe, F., Martinsson, P.G., Rokhlin, V., Tygert, M.: Randomized algorithms for the low-rank approximation of matrices. Proc. Natl. Acad. Sci. 104(51), 20167–20172 (2007)
18. Halko, N., Martinsson, P.G., Tropp, J.A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53(2), 217–288 (2011)
19. Siriwan, I., Saifon, C.: An application of randomized singular value decomposition on image reconstruction using least-squares approach. Preprint, Thai Journal of Mathematics (2018)
20. Siriwan, I., Saifon, C.: A numerical study of efficient sampling strategies for randomized singular value decomposition. Thai Journal of Mathematics, pp. 351–370 (2019)
An Automated Candidate Selection System Using Bangla Language Processing

Md. Moinul Islam, Farzana Yasmin, Mohammad Shamsul Arefin, Zaber Al Hassan Ayon, and Rony Chowdhury Ripan

Computer Science and Engineering, Chittagong University of Engineering and Technology, Chattogram 4349, Bangladesh
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract. Recruiting or selecting the right candidates from a vast pool of applicants has always been a fundamental issue in Bangladesh as far as employers are concerned. In the case of candidate recruitment, different government organizations nowadays ask the applicants to submit their applications or resumes written in Bengali in the form of electronic documents. Matching the skills with the requirements and choosing the best candidates manually from all the resumes written in Bengali is very difficult and time-consuming. To make the recruitment process more comfortable, we have developed an automated candidate selection system. First, it takes the CVs (written in Bengali) of candidates and the employer's requirements as input. It extracts information from the candidate's CV using Bangla Language Processing (BLP) and Word2Vec embedding. Then, it generates an average cosine similarity score for each CV. Finally, it ranks the candidates according to the average cosine similarity scores and returns the dominant candidates' list.

Keywords: Automation · Bangla language processing · Candidate selection · Word2vec · Cosine similarity · Gensim

1 Introduction
Data mining is a logical process used to find relevant data in a large data set. It is the process of analyzing data from different aspects and summarizing them into useful information [11]. Data mining helps us extract this information from an extensive dataset by finding patterns in the given data set. Patterns that are certain enough according to the user's measures are called knowledge [18]. Data mining is a sub-process of knowledge discovery, in which the various available data sources are analyzed using different data mining algorithms. Like data mining, text mining is used in extracting essential pieces of information from text [3]. Since all data on the web and social media are available in a fuzzy and

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1071–1081, 2021. https://doi.org/10.1007/978-3-030-68154-8_90
1072
M. M. Islam et al.
random manner, it is sometimes hard for humans to understand and process the data effectively. In that type of case, text mining tools help to create a bridge and relationship among the texts. This process is also known as Knowledge Discovery from Texts (KDT) [15]. There are several text mining applications, such as speech recognition, social media data analysis, content enrichment, etc. [1,2]. Recruitment, or selecting the right candidates from a vast pool of candidates, has always been a fundamental issue as far as employers are concerned. Generally, the recruitment process follows like this for a company. Candidates send CV to the company. Recruiter selects some candidates from those CVs followed by conducting personality and other technical eligibility evaluation tests, interviews, and group discussions. The human Resource (HR) department is one of the most critical departments of a company. The HR department plays a vital role in selecting an expert candidate for a particular post. The first task of the HR department is to shortlist the CVs of various candidates who applied for the specific post. High level of uncertainty when the HR department checks all the CVs, ranks them and selects a candidate for a particular job position because of many applicants. This uncertainty occurs due to the various occupation domain expert’s different opinions and preferences in the decision-making process. This evaluation process involves excessive time consumption and monotonous work procedure. To reduce the load of the HR department in the recruiting process, an automation tool like CV ranker is very helpful. Recently, some studies have focused on automated candidate selection system in [9,13,16,17,20]. An elaborate discussion on these methodologies is given in Sect. 2. The common phenomenon of the most existing techniques is done for a CV that is written in Bengali. 
In Bangladesh, Bengali is our native language, and for many government posts a CV written in Bengali is required. However, no automated candidate selection system for Bengali CVs has been developed yet. In this study, we have developed an automated candidate selection system that takes the CVs (written in Bengali) of candidates and the employer's requirements as input. After that, our system extracts candidate information from the CVs using Bangla Language Processing (BLP) and generates an average cosine similarity score for each CV. Finally, our system ranks the candidates according to these average cosine similarity scores and returns the dominant candidates' list. The rest of this paper is organized as follows: we present the related works on automated candidate selection systems in Sect. 2. Section 3 provides the details of the proposed methodology of the automated candidate selection system. Our experimental results are shown in Sect. 4. Finally, we conclude the paper in Sect. 5 by providing future directions for this research.
2 Related Work
An Automated Candidate Selection System
1073

There are only a few contributions regarding automated candidate selection systems. Faliagka et al. [7] implement automated candidate ranking based on objective criteria that can be extracted from the applicant's LinkedIn profile. The e-recruitment system was deployed in a real-world recruitment scenario, and expert recruiters validated its output. Menon et al. [14] present an approach to evaluate and rank candidates in a recruitment process by estimating emotional intelligence through social media data. Candidates apply for a job opening by filling in an online resume and granting access to their Twitter handle; the system estimates each candidate's emotion by analyzing their tweets, and professional eligibility is verified through the entries given in the online resume. Faliagka et al. [6] present an approach for evaluating job applicants in online recruitment systems, using machine learning algorithms to solve the candidate-ranking problem and performing semantic matching techniques. The system needs access to the candidate's full profile and the recruiter's selection criteria, including weights assigned to a set of candidate selection criteria. The proposed scheme outputs candidate rankings consistent with those assigned by expert recruiters. De Meo et al. [5] propose an Extensible Markup Language (XML)-based multi-agent recommender system for supporting online recruitment services. XML is a standard language for representing and exchanging information; it embodies both the representation capabilities typical of the Hypertext Markup Language and the data management features characteristic of a database management system. Kessler et al. [12] present the E-Gen system, an automatic job-offer processing system for human resources that implements two tasks: analysis and categorization of job postings. Fazel-Zarandi et al. [8] present an approach to matchmaking between job seekers and job advertisements.
When searching for jobs or applicants, a job seeker or recruiter can ask for all the job advertisements or applications that match his/her own entry, in addition to expressing desired job-related descriptions. Getoor et al. [10] present a survey on link mining using the top-k query processing technique; link mining refers to data mining techniques that explicitly consider the links among data objects when building predictive or descriptive models of the linked data. Most CV-ranking research is done on CVs written in English. In this study, we have developed an automated candidate selection system that takes the CVs (written in Bengali) of candidates and the employer's requirements as input. Then, it extracts candidate information from the CVs using Bangla Language Processing (BLP) and generates an average cosine similarity score for each CV. Finally, it ranks the candidates according to these average cosine similarity scores and returns the dominant candidates' list.
3 Methodology
In this section, we describe the overall methodology of the candidate selection system.
Fig. 1. Data preparation module
3.1 Data Preparation Module
The data preparation module's function is to extract data, tokenize it, and extract keywords from it. Since the resumes are written in Bengali, Bangla Language Processing (BLP), a form of Natural Language Processing (NLP), is used to extract information from them. For a computer to store text and numbers written in Bengali, there needs to be a code that transforms characters into numbers; the Unicode standard defines such a coding system through character encoding. The Unicode Standard consists of a set of code charts for visual reference, an encoding method and standard character encodings, a collection of reference data files, and many related items. Different character encodings can implement Unicode; the most commonly used are UTF-8, UTF-16, etc. UTF-8 is a variable-width character encoding capable of encoding all 1,112,064 valid code points in Unicode using one to four 8-bit bytes [19]. The Bengali Unicode block contains characters for regional languages such as Bengali, Assamese, and Bishnupriya Manipuri; its range is U+0980 to U+09FF. Before going into the score generation module, the data needs to be in list form because of the word2vec model discussed at length in Sect. 3.2. Converting a sequence of characters into a sequence of tokens is known as tokenization or lexical analysis, and the program that performs lexical analysis is termed a tokenizer. Tokens are stored in lists for further access. Finally, these tokens, as several lists, go into the score generation module.
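The Unicode-based extraction and tokenization described above can be sketched as follows. This is a minimal illustration, not the authors' code: it simply keeps runs of characters that fall in the Bengali Unicode block (U+0980 to U+09FF) mentioned above, discarding whitespace and non-Bengali characters.

```python
import re

# The Bengali Unicode block runs from U+0980 to U+09FF, as noted above.
BENGALI_TOKEN = re.compile(r"[\u0980-\u09FF]+")

def tokenize_bengali(text):
    """Split a UTF-8 decoded string into a list of Bengali tokens."""
    return BENGALI_TOKEN.findall(text)

# "আমি বাংলা" -> two tokens (written with escapes for portability)
tokens = tokenize_bengali("\u0986\u09ae\u09bf \u09ac\u09be\u0982\u09b2\u09be")
print(tokens)
```

A real tokenizer would also need to handle punctuation and mixed-script CVs, but the resulting lists of tokens are exactly what the score generation module below consumes.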
3.2 Score Generation Module
Fig. 2. Score generation module

Word Embedding: Word embedding is the collective name for language modeling and feature learning techniques in natural language processing (NLP) in which words or phrases from the vocabulary are mapped to vectors of real numbers. Conceptually, it involves a mathematical embedding from a space with many dimensions per word to a continuous vector space of much lower dimension. Word embedding [21] is all about improving a network's ability to learn from text data by representing that data as lower-dimensional vectors (also known as embeddings). This technique reduces the dimensionality of text data and can also learn new traits about the words in a vocabulary. Besides, it can capture the context of a word in a document, semantic and syntactic similarity, relations with other words, etc. Word2Vec is a popular method to construct such word embeddings. This method takes a large corpus of text as input and produces a vector space as output, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space so that words sharing common contexts in the corpus are located close to one another in the space.

Consider the sentence "Have a good day" with the exhaustive vocabulary V = {Have, a, good, day}. One-hot encoding of this vocabulary is done by putting a one at the index representing the corresponding word and zeros elsewhere. The one-hot encoded vector representations of these words are: Have = [1,0,0,0]; a = [0,1,0,0]; good = [0,0,1,0]; day = [0,0,0,1]. After creating a one-hot encoded vector for each word in V, the length of each one-hot encoded vector equals the size of V (= 4). These encodings can be visualized in a 4-dimensional space, where each word occupies one dimension and has nothing to do with the rest (no projection along the other dimensions).

In this study, the Continuous Bag of Words (CBOW) method (based on a neural network) is used to obtain Word2Vec. The CBOW method takes the context of each word as the input and tries to predict the word corresponding to that context [20]. For the sentence "Have a great day," if the input to the neural network is the word "great," the model will try to predict the target word "day" using the single context input word "great." More specifically, the one-hot encoding of the input word is used to measure the output error compared to the one-hot encoding of the target word ("day").

Fig. 3. A simple CBOW model with only one word in the context

In Fig. 3, a one-hot encoded vector of size V goes as input into a hidden layer of N neurons, and the output is again a V-length vector whose elements are the softmax values. Wvn (a V×N matrix) is the weight matrix that maps the input x to the hidden layer, and W'nv (an N×V matrix) is
the weight matrix that maps the hidden-layer outputs to the final output layer. The hidden-layer neurons copy the weighted sum of their inputs to the next layer; the only non-linearity is the softmax calculation in the output layer. The CBOW model in Fig. 3 uses a single context word to predict the target. In general, the CBOW model takes C context words, and when Wvn is used to calculate the hidden-layer inputs, an average is taken over all C context-word inputs.

Cosine Similarity: Cosine similarity determines the similarity between two non-zero vectors of an inner product space by measuring the cosine of the angle between them [4]. The cosine of zero degrees is 1, and it is less than 1 for any angle in the interval (0, π] radians. The cosine of the angle between two non-zero vectors can be calculated using the Euclidean dot product formula:

A \cdot B = \|A\| \, \|B\| \cos\theta \quad (1)

Given two vectors of attributes, A and B, the cosine similarity \cos\theta is represented using the dot product and magnitudes as

\mathrm{similarity}(A, B) = \cos\theta = \frac{A \cdot B}{\|A\| \, \|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}} \quad (2)

where A_i and B_i are the components of vectors A and B, respectively.

Cosine similarity returns a list of scores, one for each skill. The final score for a CV is then calculated by taking the mean of all the scores. Thus, every CV goes through the model and generates a score, which is stored in a database for ranking. Finally, recruiters can see all the CVs in sorted order.
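Equations 1 and 2 translate directly into code. The following is an illustrative pure-Python sketch (the paper itself computes similarities over Word2Vec vectors produced with gensim); the toy vectors here are stand-ins for real embedding vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length, non-zero vectors (Eq. 2)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def average_score(skill_vectors, requirement_vectors):
    """Mean cosine similarity over matched (skill, requirement) vector pairs."""
    scores = [cosine_similarity(s, r) for s, r in zip(skill_vectors, requirement_vectors)]
    return sum(scores) / len(scores)

# Identical vectors give cos(0) = 1; orthogonal vectors give 0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

The `average_score` helper mirrors the averaging step described above: one cosine score per matched skill, reduced to a single score per CV.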
4 Implementation and Experimental Results

4.1 Experimental Setup
Using Bangla Language Processing (BLP), the automated candidate selection system has been developed on a machine running Windows 10 with a 2.50 GHz Core i5-3210 processor and 8 GB of RAM. The system has been developed in Python 3.7.3, using gensim, TensorFlow, and Keras.
4.2 Implementation and Performance Evaluation
For implementing our study, we have collected 50 CVs (written in Bengali), and all the experiments are done on these CVs. All the data are extracted from these CV documents using UTF-8 encoding, and the output of the extracted data is shown in Fig. 4.
Fig. 4. Console output of extracted data from CV
Not all the data shown in Fig. 4 is needed for calculating cosine similarity. So, important keywords such as "Skills" and "CGPA" are extracted to match the company requirements and calculate the average cosine similarity. The important keywords extracted from a CV are shown in Fig. 5a.
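The keyword-filtering step can be sketched as follows. The field names and values below are hypothetical (in the real system they are extracted from the Bengali CV text), but the logic of keeping only requirement-relevant fields is the same.

```python
# Hypothetical extracted keyword fields from one CV, and the employer's requirements.
cv_keywords = {"skills": ["Python", "SQL"], "cgpa": 3.5}
requirements = {"skills": ["Python", "Java"], "min_cgpa": 3.0}

# Keep only the skills that appear in the company requirements,
# and check the CGPA threshold before scoring the CV.
matched_skills = [s for s in cv_keywords["skills"] if s in requirements["skills"]]
meets_cgpa = cv_keywords["cgpa"] >= requirements["min_cgpa"]
print(matched_skills, meets_cgpa)  # ['Python'] True
```

Only the matched keywords then proceed to the Word2Vec and cosine-similarity steps described below.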
Fig. 5. Snapshots of the system showing the extracted important keywords (a) and the required company requirements (b)
Fig. 6. Bar plot of all the CVs' average cosine similarity scores

Table 1. Average cosine similarity scores for all the CVs

CV No   Avg. Cosine Similarity Score
1       0.54
2       0.67
3       0.65
4       0.60
5       0.58
6       0.62
7       0.61
8       0.52
9       0.56
10      0.58
For measuring cosine similarity, these keywords need to be in vector form, so all the keywords are transformed into vectors using Word2Vec embedding. The company requirements, shown in Fig. 5b, are transformed into vectors in the same way. Finally, cosine similarity is calculated using Eq. 2. Cosine similarity gives a list of scores for all the keywords that are matched, so an average cosine similarity score is calculated by taking the mean of all the scores. Thus, an average cosine similarity score is measured for each CV, as shown in Table 1. A bar chart of the similarity scores for 10 of the CVs is also shown in Fig. 6, from which we can see that CV No. 2 has the highest score. Using the average cosine similarity score, all the CVs are sorted; the top 5 sorted CVs are shown in Fig. 7.
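The ranking step can be illustrated with the scores from Table 1: sorting the CVs by average cosine similarity in descending order yields the dominant candidates' list.

```python
# Average cosine similarity scores from Table 1 (CV No -> score).
scores = {1: 0.54, 2: 0.67, 3: 0.65, 4: 0.60, 5: 0.58,
          6: 0.62, 7: 0.61, 8: 0.52, 9: 0.56, 10: 0.58}

# Sort CVs by score, highest first, and keep the top 5 for the recruiter.
ranked = sorted(scores, key=scores.get, reverse=True)
top5 = ranked[:5]
print(top5)  # [2, 3, 6, 7, 4]
```

As in Fig. 6, CV No. 2 (score 0.67) comes out on top; in practice the scores would be read back from the database rather than a literal dictionary.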
Fig. 7. Top 5 CVs among all the candidates
5 Conclusion
In this paper, we have described an automated candidate selection system that ranks all the CVs (written in Bengali) by extracting information and calculating an average cosine similarity score. Automating the complete candidate selection task may help HR agencies save the time, cost, and effort of searching for and screening the leading applicants among vast numbers of applications. There are many automated candidate ranking systems available online, but they all assume CVs written in English; with the help of Bangla Language Processing (BLP), we have developed an automated candidate selection system suitable for CVs written in Bengali. For the performance evaluation, we used 50 CVs with technical backgrounds to test the system and found that it works efficiently, returning the best candidates by matching the given requirements with the candidates' qualifications. Altogether, the system performs well in filtering CV documents written in Bengali and the candidates based on the information in those documents. However, our current method cannot capture temporal features, so it will fail to find sequential similarity between sentences. Suppose the recruiter wants to prioritize a candidate's recruitment qualities by giving quality tags sequentially in a sentence; in that case, the current method will fail to rank CVs according to those priorities. Our future work includes a system that can directly take a job description from the recruiters and then evaluate CVs more dynamically. We plan to start with a bi-directional LSTM encoder with a cosine-similarity computing layer to give our model the capability of understanding both the semantic and syntactic meaning of a sentence.
References

1. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent Computing & Optimization. Springer, Berlin (2018)
2. Intelligent Computing and Optimization: Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019). Springer International Publishing. ISBN 978-3-030-33585-4
3. Aggarwal, C.C., Zhai, C.: Mining Text Data. Springer Science & Business Media, Berlin (2012)
4. Croft, D., Coupland, S., Shell, J., Brown, S.: A fast and efficient semantic short text similarity metric. In: 2013 13th UK Workshop on Computational Intelligence (UKCI), pp. 221–227. IEEE (2013)
5. De Meo, P., Quattrone, G., Terracina, G., Ursino, D.: An XML-based multi-agent system for supporting online recruitment services. IEEE Trans. Syst. Man Cybern. Part A: Syst. Humans 37(4), 464–480 (2007)
6. Faliagka, E., Ramantas, K., Tsakalidis, A.K., Viennas, M., Kafeza, E., Tzimas, G.: An integrated e-recruitment system for CV ranking based on AHP. In: WEBIST, pp. 147–150 (2011)
7. Faliagka, E., Tsakalidis, A., Tzimas, G.: An integrated e-recruitment system for automated personality mining and applicant ranking. Internet Research (2012)
8. Fazel-Zarandi, M., Fox, M.S.: Semantic matchmaking for job recruitment: an ontology-based hybrid approach. In: Proceedings of the 8th International Semantic Web Conference, vol. 525 (2009)
9. Gedikli, F., Bagdat, F., Ge, M., Jannach, D.: RF-Rec: fast and accurate computation of recommendations based on rating frequencies. In: 2011 IEEE 13th Conference on Commerce and Enterprise Computing, pp. 50–57. IEEE (2011)
10. Getoor, L., Diehl, C.P.: Link mining: a survey. ACM SIGKDD Explorations Newsletter 7(2), 3–12 (2005)
11. Hand, D.J., Adams, N.M.: Data mining. Wiley StatsRef: Statistics Reference Online, pp. 1–7 (2014)
12. Kessler, R., Torres-Moreno, J.M., El-Bèze, M.: E-Gen: automatic job offer processing system for human resources. In: Mexican International Conference on Artificial Intelligence, pp. 985–995. Springer (2007)
13. Kumari, S., Giri, P., Choudhury, S., Patil, S.: Automated resume extraction and candidate selection system. Int. J. Res. Eng. Technol. (IJRET) 3, 206–208 (2014)
14. Menon, V.M., Rahulnath, H.: A novel approach to evaluate and rank candidates in a recruitment process by estimating emotional intelligence through social media data. In: 2016 International Conference on Next Generation Intelligent Systems (ICNGIS), pp. 1–6. IEEE (2016)
15. Mining, K.D.T.D.: What is knowledge discovery. Tandem Computers Inc. 253 (1996)
16. More, S., Priyanka, B., Puja, M., Kalyani, K.: Automated CV classification using clustering technique (2019)
17. Shabnam, A., Tabassum, T., Islam, M.S.: A faster approach to sort Unicode-represented Bengali words. Int. J. Comput. Appl. 975, 8887 (2015)
18. Suresh, R., Harshni, S.: Data mining and text mining—a survey. In: 2017 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC), pp. 412–420. IEEE (2017)
19. Wikipedia contributors: UTF-8 — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=UTF-8&oldid=974048414. Accessed 22 August 2020
20. Yasmin, F., Nur, M.I., Arefin, M.S.: Potential candidate selection using information extraction and skyline queries. In: International Conference on Computer Networks, Big Data and IoT, pp. 511–522. Springer (2019)
21. Zhang, Y., Jatowt, A., Tanaka, K.: Towards understanding word embeddings: automatically explaining similarity of terms. In: 2016 IEEE International Conference on Big Data (Big Data), pp. 823–832. IEEE (2016)
AutoMove: An End-to-End Deep Learning System for Self-driving Vehicles

Sriram Ramasamy and J. Joshua Thomas
Department of Computing, UOW Malaysia, KDU Penang University College, 10400 Penang, Malaysia
[email protected], [email protected]
Abstract. End-to-end learning is a deep learning approach that has been used to great effect to solve complex problems that would usually be carried out by humans. A deep structure was designed in this study to simulate humans' steering patterns in highway driving situations. The architecture of the network is based on an image processing algorithm integrated with a deep learning convolutional neural network (CNN). There are five aspects to this work, which enable the vehicle to detect the lanes, detect the speed of the vehicle, detect the angle of the road, recognize the objects on the road, and predict the steering angle of the vehicle. A self-derived mathematical model is used to calculate the road angles for the prediction of the vehicle's steering angles. The model is trained on 2937 video frame samples and validated on 1259 samples with 30 epochs. The video of the local road was set as the output, which shows the difference between the actual and predicted steering angles. The experiments have been carried out in a newly built industrial park with a suitable Industry 4.0 standard design of urban smart development.

Keywords: Autonomous driving · Steering angle · Deep neural network · Self-driving vehicles
1 Introduction

1.1 Background of the Work
AutoMove, an end-to-end deep learning system for self-driving vehicles, has five main features: detecting the lanes of the road, detecting the speed of the vehicle from the white dashed lines, detecting the angle of the road, detecting objects, and predicting the steering angle of the vehicle. This work involves five phases. Firstly, the dataset is captured using a smartphone on a Malaysian local road. Then, the dataset is analyzed and used as input to detect the lanes for the vehicle to navigate inside a lane (Thomas et al. 2019). After detecting the lanes, the white dashed lines in the middle of the road are used to calculate the speed of the vehicle; the speed is determined from the distance between the starting and ending points of the white dashed lines and the time taken to reach the end line. Next, the angle of the road is calculated. Then, the objects on the road are detected. Finally, the steering angle of the vehicle is predicted based on the curvature of the road: if the angle is around zero to five degrees, the steering angle will stay straight, whereas if the angle goes below zero degrees, the steering angle will turn left, and if the angle is more than five degrees, the steering angle will turn right.

The rest of the article is organized as follows: the literature review is in Sect. 2. Section 3 discusses the methodology and the navigation of the self-driving vehicle. The implementation of the five stages is described in Sect. 4. Section 5 covers the integration of the image processing algorithm with the convolutional neural network (CNN) for automatic navigation. Section 6 concludes the work.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1082–1096, 2021. https://doi.org/10.1007/978-3-030-68154-8_91

AutoMove: An End-to-End Deep Learning System
1083
2 Literature Review

2.1 Image Recognition
2.1.1 Traffic Sign Recognition
Rubén Laguna and his colleagues developed a traffic sign recognition system using image processing techniques (Laguna et al. 2014). There are four steps in this application. First, an image pre-processing step is performed. Next is the detection of regions of interest (ROIs), which involves transforming the image to grayscale and applying edge detection with the Laplacian of Gaussian (LoG) filter; potential traffic signs are then identified by comparing the ROIs with each shape pattern (Laguna et al. 2014). The third step is the recognition stage, which uses a cross-correlation algorithm: each validated potential traffic sign is classified based on the traffic sign database. Finally, the previous stages are managed and controlled through a graphical user interface. The results showed good accuracy and performance of the developed application when acceptable conditions of size and contrast of the input image were taken into consideration (Laguna et al. 2014).

2.2 Deep Learning Neural Network
2.2.1 Convolutional Neural Network (CNN)
In 2016, NVIDIA Corporation implemented end-to-end learning for self-driving cars using a convolutional neural network (CNN). Raw pixels from a single front-facing camera were mapped directly to steering commands (Bojarski et al. 2016, 3-6). According to the authors, this end-to-end approach proved powerful: the system learns to drive in traffic with little training data on local roads with or without lane markings, and it also worked in places with blurry visual guidance, such as on unpaved roads and in parking lots. The system automatically learns an internal model of the needed processing steps, such as detecting suitable road signs, with only the human steering angle as the training signal (Bojarski et al. 2016, 3-6). Images are fed into the CNN, which then computes a proposed steering command; after training, the steering commands are generated by the network from the video images. For data collection, training data were gathered by driving on different types of roads in a diverse set of weather and lighting conditions. Most of the road data were collected in central New Jersey, in cloudy, clear, snowy, foggy, and rainy weather, both day and night; 72 h of driving data had been collected as of March 28, 2016 (Bojarski et al. 2016, 3-6). The weights of the network were trained to reduce the mean squared error between the adjusted steering command and the steering command output by the network. The network has 9 layers: 5 convolutional layers, a normalization layer, and 3 fully connected layers. The input image is separated into YUV planes and sent to the network (Bojarski et al. 2016, 3-6). Next is the training. Selecting the frames to use is the first step in training the neural network: the collected data are labelled with weather condition, road type, and driver's activity (staying in lane, turning, and switching lanes); the data where the driver was staying in a lane were selected, and the rest were discarded. The video was then sampled at 10 FPS (Bojarski et al. 2016, 3-6). After that comes data augmentation: the final set of frames is augmented by adding artificial rotations and shifts to teach the network how to recover from a poor orientation or position. The final stage is simulation: the simulator takes pre-recorded videos from a forward-facing onboard camera and generates images approximating what would appear if the CNN were, instead, steering the vehicle. In conclusion, during training, the system learns to detect the outline of a road without explicit labels. Thomas et al. have referred to multiple deep learning works, which we have used for this literature review.

2.3 Object Detection
2.3.1 Faster R-CNN
The R-CNN technique trains a CNN end-to-end to classify proposal regions into object categories or background. R-CNN mainly acts as a classifier and does not itself predict object bounds; the accuracy is determined by the performance of the region proposal module. Pierre Sermanet and his team proposed a paper in 2013 under the title "OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks," which shows ways of using deep networks for predicting object bounding boxes (Sermanet et al. 2013). In the OverFeat method, a fully connected layer is trained to predict the box coordinates for the localization task, which assumes a single object; the fully connected layer is then turned into a convolutional layer for multi-class object detection (Sermanet et al. 2013). Figure 1 shows the architecture of Faster R-CNN, which is a single, unified network for object detection.
Fig. 1. Architecture of Faster R-CNN (Sermanet et al. 2013)
Fast R-CNN implements end-to-end detector training on shared convolutional features and displays decent accuracy and speed. On the other hand, R-CNN fails to do real-time detection because of its two-step architecture (Joshua Thomas & Pillai 2019).

2.3.2 YOLO
YOLO stands for "You Only Look Once." According to Redmon (2016), it is an object detection algorithm that runs quicker than R-CNN because of its simpler architecture; classification and bounding box regression are done at the same time. Figure 2 shows how the YOLO detection system works.
Fig. 2. YOLO Detection
A single convolutional network simultaneously predicts several bounding boxes and the class probabilities for those boxes. YOLO trains on full images and optimizes detection performance directly. Thomas and Tran have worked on graph neural network applications, which have been referred to in this literature review.
3 Methodology

3.1 System Architecture
Figure 3 shows an overview of the AutoMove system architecture. The entire process starts by collecting the road dataset using a smartphone: a smartphone camera was mounted in front of the steering wheel of a car, and a video of approximately 1.5 km of road was taken while driving. Then, the dataset was fed into the code. The first step in producing a self-driving vehicle is to detect the lanes. Then, the speed of the vehicle is identified using the white line markers on the road. Next, the objects on the road are detected using object detection. Besides, road angles are calculated in order to perform autonomous navigation.
4 Implementation

4.1 Lane Detection
Lane detection is implemented by colour thresholding. The first step is to convert the image to grayscale and identify the region of interest; the pixels with a higher brightness value are highlighted in this step. Figure 4 shows the code snippet for converting the image to grayscale, and Fig. 5 shows the resulting grayscale image.
Fig. 3. Overall stages in AutoMove: collect a road dataset with straight and curved roads; lane detection; speed detection; find the road angle; object detection on the road (specify the objects to be detected, such as vehicles); predict the steering angle; train the actual steering angle with a neural network; navigation for the self-driving vehicle
#read image
img = cv2.imread('road png.PNG')
#convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Fig. 4. Code snippet to convert image to grayscale
After applying the threshold, it is necessary to remove as much of the noise as possible in a frame (Lane detection for self-driving vehicles, 2018). A mask is applied to the region of interest to remove the unwanted noise near the lanes so that the lanes can be detected clearly. Figure 6 shows the image after removing the unwanted lines.
Fig. 5. Image converted to grayscale
Fig. 6. Image after removing noise
After removing all the noise, a higher-order polynomial needs to be incorporated to detect curved roads. For that, the algorithm must keep an awareness of previous detections and a confidence value. To fit a higher-order polynomial, the image is sliced into many horizontal strips; on each strip, the straight-line algorithm is applied, and the pixels corresponding to lane detections are identified. Then, the pixels from all the strips are combined, and a new polynomial function that fits best is created (Bong et al. 2019). Finally, a blue mask is attached to the lane to show the difference between the lanes and the other features of the road. Figures 7 and 8 show the lane detection on two different sets of roads.
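The strip-wise fitting described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: lane pixels detected in each horizontal strip are pooled, and a second-order polynomial x = a*y^2 + b*y + c is fitted over them with NumPy.

```python
import numpy as np

def fit_lane(strips):
    """Pool (y, x) lane pixels from all horizontal strips and fit x = a*y^2 + b*y + c."""
    ys = np.concatenate([s[:, 0] for s in strips])
    xs = np.concatenate([s[:, 1] for s in strips])
    return np.polyfit(ys, xs, 2)  # coefficients [a, b, c], highest power first

# Synthetic curved lane: x = 0.001*y^2 + 0.2*y + 100, sampled in three strips.
strips = []
for y0 in (0, 100, 200):
    y = np.arange(y0, y0 + 100, 10, dtype=float)
    x = 0.001 * y**2 + 0.2 * y + 100
    strips.append(np.stack([y, x], axis=1))

coeffs = fit_lane(strips)
print(coeffs)  # approximately [0.001, 0.2, 100.0]
```

In the real pipeline the (y, x) points would come from the per-strip straight-line detections rather than a synthetic curve, and the fitted polynomial would then be drawn as the blue lane mask.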
4.2 Speed Detection
Speed detection in autonomous vehicles is crucial to reducing the number of accidents on the road. According to Salvatore Trubia (2017), one of the countermeasures is the use of in-vehicle technology that helps drivers maintain the speed limit and prevents the vehicle from exceeding it (Thomas et al. 2020).
Fig. 7. Lane detection on curved road
Fig. 8. Lane Detection on straight road
In this work, white line markers are used to measure the speed of the vehicle. When a line marker is present, the number of frames is tracked until the next lane marker appears in the same window. The frame rate of the video is then extracted to determine the speed. Figure 9 shows the architecture used to calculate the linear speed of the vehicle.
Fig. 9. Speed calculation
Based on Fig. 9, on a straight road the far white lines are ignored because the distance between the dash cam and those lines is too great in the video. Therefore, the nearest white straight line that is fully visible from the dash cam is taken, and the start and end points of the white dashed line are used to measure the speed. As the video goes from one frame to the next, the position of the white dashed lines is measured (McCormick 2019). When the white dashed line reaches its end point, the number of frames taken to reach that point is counted. Equation 1 gives the speed:

Speed = distance / time, where time is the number of frames    (1)
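Equation 1 can be turned into a small helper. The marker length below is an assumed value for illustration; the paper's video runs at 30 fps:

```python
def estimate_speed_kmh(marker_length_m, frame_count, fps=30):
    """Speed from Eq. 1: the distance covered while the dashed marker
    passes, divided by the elapsed time, converted from m/s to km/h."""
    time_s = frame_count / fps          # "time is the number of frames"
    return marker_length_m / time_s * 3.6

# A 3 m marker passing in 30 frames at 30 fps: 3 m in 1 s, i.e. 10.8 km/h.
speed = estimate_speed_kmh(3.0, 30)
```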
To get the position of the white dashed lines, cv2.rectangle is used. A rectangle is drawn between two lines. Then, using matplotlib, the output is plotted to check the position of the rectangle, which is placed over the nearest white line. Figure 10 shows the output of the drawn rectangle.
Fig. 10. Output of the drawn rectangle
Based on the output, if the number of frame counts is 14, then the speed of the vehicle will be 10.7 km/h. This is calculated using the Eq. 1. Figure 11 shows the output using matplotlib and cv2 and Fig. 12 shows the speed of the vehicle on a straight road.
Fig. 11. Matplotlib speed detection
Fig. 12. Speed detection on an actual road
To measure the speed on the curve, the same method is used. However, the position of rectangle will be changed according to the position of the white lines. Figure 13 shows the speed of the vehicle on a curve road.
Fig. 13. Speed Detection output for curved road
4.3 Road Angle Detection
According to Cuenca (2019), vehicle speed and steering angle are required to develop autonomous driving. Supervised learning algorithms such as linear regression, polynomial regression and deep learning are used to develop the predictive models. In this work, due to a lack of hardware support, the road angle is used to predict the steering angle of the vehicle, and the video dataset is used to measure the real-time road angle. The equation of a polynomial regression (Curve Fitting and Interpolation, 2019) is:

y = a0 + a1*x + a2*x^2 + e    (2)

Then, the Rosenbrock function (Croucher, 2019) is used to optimize the algorithms. Its mathematical form is stated in Eq. 3:

f(x) = sum for i = 1 to n−1 of [ b*(x_{i+1} − x_i^2)^2 + (a − x_i)^2 ]    (3)
In this formula, the parameters a and b are constants, usually set to a = 1 and b = 100. Figure 14 shows the corresponding code snippet.
def curve(x, t, y):
    return x[0] * t * t + x[1] * t + x[2]

Fig. 14. Rosenbrock function
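For reference, a direct implementation of the Rosenbrock function of Eq. 3 (separate from the quadratic curve helper shown in Fig. 14) can be written as a short sketch:

```python
def rosenbrock(x, a=1.0, b=100.0):
    """Rosenbrock function of Eq. 3:
    sum over i of b*(x[i+1] - x[i]^2)^2 + (a - x[i])^2.
    With a = 1, its global minimum is 0 at x = (1, 1, ...)."""
    return sum(b * (x[i + 1] - x[i] ** 2) ** 2 + (a - x[i]) ** 2
               for i in range(len(x) - 1))

value_at_minimum = rosenbrock([1.0, 1.0])   # expected to be 0.0
```

Evaluating it at the known minimum is a quick sanity check before handing it to an optimizer.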
Fig. 15. Road angle calculation
Fig. 16. Curved road angle average
Based on Fig. 15, the difference angle for each frame is obtained by subtracting the bottom angle from the top angle of the rectangle placed on the white dashed lines. The average angle is then calculated using the exponential moving average (EMA) formula (Fig. 16):

EMA = DifferenceAngle(t) * k + EMA(y) * (1 − k)

where t = difference angle, y = average angle, and k = alpha, the smoothing factor. Figures 17 and 18 show the top and bottom angles of a straight road, which are used to determine the difference angle.
Fig. 17. Top angle of straight road
Fig. 18. Bottom angle of straight road
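The EMA formula above is a one-line recurrence; a minimal sketch, with variable names chosen to mirror the formula:

```python
def ema_update(difference_angle, prev_ema, k):
    """Exponential moving average: new = difference_angle*k + previous*(1 - k)."""
    return difference_angle * k + prev_ema * (1 - k)

# Smooth a short sequence of per-frame difference angles with alpha = 0.5.
ema = 0.0
for angle in [10.0, 10.0, 10.0]:
    ema = ema_update(angle, ema, 0.5)
# ema converges toward the steady per-frame angle of 10 degrees
```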
Figures 19 and 20 show the top and bottom angles of a curved road, which are used to determine the difference angle (Fig. 21). In this work, the YOLO algorithm is used for object detection, as it is faster than other classification algorithms. In real time, the YOLO algorithm processes 45 frames per second (Redmon and Farhadi 2016). Furthermore, even though it makes localization errors, it predicts fewer false positives in the background. To apply the YOLO algorithm, an image is divided into a 3 × 3 grid of cells.
Fig. 19. Top angle of curved road
Fig. 20. Bottom angle of curved road
Fig. 21. Bounding boxes
4.4 Dataset
To develop this work, a new dataset was created. A OnePlus 6 smartphone was placed on a car's dashboard, and a video of the drive was recorded at Jalan Barat Cassia 2, Batu Kawan, Penang, which includes straight and curved roads. The video resolution is 1920 × 1080 and the frame rate is 30 fps.

4.5 Autonomous Navigation
This section discusses the methods used to navigate the autonomous vehicle with the trained model. First, the trained model (Autopilot.h5) is loaded as a Keras model. According to the official TensorFlow documentation (tf.keras.Model | TensorFlow Core r2.0 2019), a model groups layers into an object with training and inference features. The model then predicts on the processed image, the same kind of image that was fed into the training model. After that, a steering-wheel image is read from the local disk to visualize the steering angle difference in the output window. Then, the video of the local road is used as the input to predict the steering angle. Figure 22 shows the overall end-to-end learning.
Fig. 22. Autonomous navigation of the novel end-to-end learning
To determine the accuracy of the steering angle, the difference between the actual and predicted steering angles was measured. At the last frame of the video, the accuracy for the whole model is 91.3784%. Table 1 shows the actual and predicted angles for the straight road.
Fig. 23. Straight road: AutoMove
Fig. 24. Curved road: AutoMove
Figures 23 and 24 show the output of autonomous navigation on a straight road and a curved road. Based on the output, the steering of the vehicle is essentially straight, as the steering wheel does not turn on a straight road; the steering angle stays between zero and five degrees, the range that was set to drive straight. However, when the road is curved, the accuracy drops to 85%, as can be seen in Table 2. Furthermore, the steering wheel is set according to a few parameters: if the steering angle is less than zero degrees, the steering wheel turns left, whereas if the steering angle is more than five degrees, the steering wheel shifts to the right. Table 2 shows the actual and predicted angles for the curved road.
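The steering rule described above (left below 0°, straight between 0° and 5°, right above 5°) can be sketched as a small decision function:

```python
def steering_command(angle_deg):
    """Map a predicted steering angle to a wheel command, following the
    thresholds described in the text."""
    if angle_deg < 0:
        return "left"
    if angle_deg <= 5:
        return "straight"
    return "right"
```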
Table 1. Actual and predicted angle for straight road
Actual steering angle (°) | Predicted steering angle (°) | Accuracy (%)
 0.4912 |  3.1006 | 91.3743
 0.5635 |  2.7528 | 91.3754
 0.3902 | −0.2838 | 91.3771
−0.3001 | −1.2578 | 91.3784
Table 2. Actual and predicted steering angle for curved road
Actual steering angle (°) | Predicted steering angle (°) | Accuracy (%)
−10.8001 |  −9.5169 | 85.8128
−10.8005 |  −9.8750 | 85.8192
−10.3392 |  −7.8328 | 85.8265
−10.7338 | −10.0125 | 85.8334
5 Conclusion

This work was developed mainly to reduce accidents in Malaysia. According to Othman O. Khalifa (2008), the United Nations has ranked Malaysia 30th among countries with the highest number of fatal accidents, with an average of 4.5 casualties per 10,000 vehicles. Furthermore, this system was also developed to help elderly people who are too old to drive safely, because as a driver's age increases, changes in flexibility, visual acuity, reaction time, memory and strength affect the ability to drive (Lutin et al. 2013).

Acknowledgement. The authors would like to thank UOW Malaysia KDU Penang University College for selecting this work for the launch of the new Batu Kawan campus, mainland Penang, Malaysia. There is an urgent need for low-cost but high-throughput road analysis for self-driving vehicles on the Batu Kawan UOW Malaysia campus, and this simulation may prove to be one of the ways forward.
References

Thomas, J.J., Karagoz, P., Ahamed, B.B., Vasant, P.: Deep Learning Techniques and Optimization Strategies in Big Data Analytics. IGI Global (2020). https://doi.org/10.4018/978-1-7998-1192-3
Thomas, J.J., Tran, H.N.T., Lechuga, G.P., Belaton, B.: Convolutional graph neural networks: a review and applications of graph autoencoder in chemoinformatics. In: Thomas, J.J., Karagoz, P., Ahamed, B.B., Vasant, P. (eds.) Deep Learning Techniques and Optimization Strategies in Big Data Analytics, pp. 107–123. IGI Global (2020). https://doi.org/10.4018/978-1-7998-1192-3.ch007
Assidiq, A., Khalifa, O., Islam, M., Khan, S.: Real time lane detection for autonomous vehicles. In: 2008 International Conference on Computer and Communication Engineering (2008)
Bojarski, M., Del Testa, D., Dworakowski, D.: End to End Learning for Self-Driving Cars (2016). https://arxiv.org/abs/1604.07316. Accessed 10 March 2019
Croucher, M.: Minimizing the Rosenbrock Function (2011). https://demonstrations.wolfram.com/MinimizingTheRosenbrockFunction/. Accessed 11 October 2019
Curve Fitting and Interpolation [ebook] (2019). http://www.engineering.uco.edu/~aaitmoussa/Courses/ENGR3703/Chapter5/ch5.pdf. Accessed 6 October 2019
García Cuenca, L., Sanchez-Soriano, J., Puertas, E., Fernandez Andrés, J., Aliane, N.: Machine learning techniques for undertaking roundabouts in autonomous driving. Sensors 19(10), 2386 (2019)
Geethapriya, S., Duraimurugan, N., Chokkalingam, S.: Real-time object detection with YOLO. Int. J. Eng. Adv. Technol. (IJEAT) 8(3S), 578–581 (2019). https://www.ijeat.org/wp-content/uploads/papers/v8i3S/C11240283S19.pdf. Accessed 13 Nov. 2019
Huang, R., Pedoeem, J., Chen, C.: YOLO-LITE: a real-time object detection algorithm optimized for non-GPU computers (2018). https://arxiv.org/pdf/1811.05588.pdf. Accessed 13 Nov. 2019
Jalled, F.: Face recognition machine vision system using Eigenfaces (2017). https://arxiv.org/abs/1705.02782. Accessed 3 March 2019
Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017)
Laguna, R., Barrientos, R., Blázquez, L., Miguel, L.: Traffic sign recognition application based on image processing techniques. IFAC Proc. Vol. 47(3), 104–109 (2014)
Lane detection for self driving vehicles (2018). https://mc.ai/lane-detection-for-self-driving-vehicles/. Accessed 8 Oct. 2019
Lutin, J., Kornhauser, L.A., Lerner-Lam, E.: The revolutionary development of self-driving vehicles and implications for the transportation engineering profession. ITE Journal (2013). https://www.researchgate.net/publication/292622907_The_Revolutionary_Development_of_SelfDriving_Vehicles_and_Implications_for_the_Transportation_Engineering_Profession. Accessed 23 Nov. 2019
Majaski, C.: Comparing simple moving average and exponential moving average (2019). https://www.investopedia.com/ask/answers/difference-between-simple-exponential-moving-average/. Accessed 9 Nov. 2019
McCormick, C.: CarND Advanced Lane Lines (2017). https://github.com/colinmccormick/CarND-Advanced-Lane-Lines. Accessed 12 Oct. 2019
Pant, A.: Introduction to linear regression and polynomial regression (2019). https://towardsdatascience.com/introduction-to-linear-regression-and-polynomial-regression-f8adc96f31cb. Accessed 1 Nov. 2019
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You Only Look Once: unified, real-time object detection, pp. 1–4 (2016)
Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger (2016). https://pjreddie.com/media/files/papers/YOLO9000.pdf. Accessed 11 Nov. 2019
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M.: OverFeat: integrated recognition, localization and detection using convolutional networks (2013). https://arxiv.org/abs/1312.6229. Accessed 7 Nov. 2019
tf.keras.Model | TensorFlow Core r2.0 (2019). https://www.tensorflow.org/api_docs/python/tf/keras/Model. Accessed 23 Nov. 2019
Trubia, S., Canale, A., Giuffrè, T., Severino, A.: Automated vehicle: a review of road safety implications as driver of change (2017)
Vasant, P., Zelinka, I., Weber, G.: Intelligent Computing & Optimization. Springer (2018)
Vasant, P., Zelinka, I., Weber, G.: Intelligent Computing and Optimization. Springer International Publishing (2019)
Venketas, W.: Exponential moving average (EMA) defined and explained (2019). https://www.dailyfx.com/forex/education/trading_tips/daily_trading_lesson/2019/07/29/exponential-moving-average.html. Accessed 12 Nov. 2019
Vitelli, M., Nayebi, A.: CARMA: a deep reinforcement learning approach to autonomous driving (2016). https://www.semanticscholar.org/paper/CARMA-%3A-A-DeepReinforcement-Learning-Approach-to-Vitelli-Nayebi/b694e83a07535a21c1ee0920d47950b4800b08bc. Accessed 16 Nov. 2019
Wagh, P., Thakare, R., Chaudhari, J., Patil, S.: Attendance system based on face recognition using eigen face and PCA algorithms. In: 2015 International Conference on Green Computing and Internet of Things (ICGCIoT) (2015)
Joshua Thomas, J., Pillai, N.: A deep learning framework on generation of image descriptions with bidirectional recurrent neural networks. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_22
Joshua Thomas, J., Belaton, B., Khader, A.T.: Visual analytics solution for scheduling processing phases. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_42
Thomas, J.J., Ali, A.M.: Dispositional learning analytics structure integrated with recurrent neural networks in predicting students performance. In: International Conference on Intelligent Computing & Optimization, pp. 446–456. Springer, Cham (2019)
Bong, C.W., Xian, P.Y., Thomas, J.: Face recognition and detection using Haar features with template matching algorithm. In: International Conference on Intelligent Computing & Optimization, pp. 457–468. Springer, Cham (2019)
Thomas, J.J., Fiore, U., Lechuga, G.P., Kharchenko, V., Vasant, P.: Handbook of Research on Smart Technology Models for Business and Industry. IGI Global (2020). https://doi.org/10.4018/978-1-7998-3645-2
Thomas, J.J., Wei, L.T., Jinila, Y.B., Subhashini, R.: Smart computerized essay scoring using deep neural networks for universities and institutions. In: Thomas, J.J., Fiore, U., Lechuga, G.P., Kharchenko, V., Vasant, P. (eds.) Handbook of Research on Smart Technology Models for Business and Industry, pp. 125–152. IGI Global (2020). https://doi.org/10.4018/978-1-7998-3645-2.ch006
An Efficient Machine Learning-Based Decision-Level Fusion Model to Predict Cardiovascular Disease

Hafsa Binte Kibria(B) and Abdul Matin

Department of Electrical and Computer Engineering, Rajshahi University of Engineering and Technology, Rajshahi 6204, Bangladesh
[email protected], [email protected]
Abstract. Cardiovascular disease is currently the world's primary cause of mortality. Identifying the risk early could reduce the rate of death. Sometimes it is difficult for a person to undergo an expensive test regularly, so there should be a system that can predict the presence of cardiovascular disease by analyzing the basic symptoms. Researchers have focused on building machine learning-based prediction systems to make the process simpler and more efficient and to reduce the burden on both doctors and patients. In this paper, a decision-level fusion model is designed to predict cardiovascular disease with the help of two machine learning algorithms, a multilayer neural network and K-Nearest Neighbor (KNN). The decision of each model was merged into the final decision to improve the accuracy. The Cleveland dataset, which contains the information of 303 patients with eight attributes, was used for the ANN and KNN. In this two-class classification, the ANN gave 92.10% accuracy and KNN gave 88.16%. After fusing their decisions, we obtained an accuracy of 93.42%, which is much better than either model alone. The result was obtained using 75% of the data for training.

Keywords: Cardiovascular disease · Machine learning · Artificial neural network · KNN · Decision level fusion

1 Introduction
Coronary heart failure is one of the primary causes of mortality globally. It is commonly regarded as a primary disease of old and middle age. Worldwide, coronary artery disease (CAD) in particular has the highest rate of mortality. In practice, the disease is treated by physicians, but there are very few cardiac experts relative to the number of patients. The traditional method of diagnosing the disease takes time and is expensive. Furthermore, at the initial stage the symptoms are mild, so people usually ignore them until the condition gets serious. False diagnoses and expensive tests are the main reasons people cannot depend so much on doctors, and money also plays a crucial role in this issue. Researchers are trying their best to develop a more effective intelligent system for the diagnosis

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1097–1110, 2021. https://doi.org/10.1007/978-3-030-68154-8_92
of heart disease, and a number of smart models have been developed over time. The main motivation for designing a combined model is that it would improve the health care sector and reduce patients' struggle. It would also save a lot of time for both physicians and patients, and spare patients extra expense. A machine learning-based medical diagnosis system to predict cardiovascular disease gives more accurate results than the traditional approach, and the treatment cost is reduced [5]. Medical diagnosis is a process in which a physician tries to identify the disease from the symptoms and the lab values obtained from tests. Various algorithms are used for this task [7]. In this paper, an automated medical diagnosis system is used to classify coronary disease based on a machine learning algorithm. The system is based on a backpropagation algorithm, the learning method of the Artificial Neural Network. The KNN algorithm is also used for disease classification, and the performance of the two algorithms is observed. Finally, the decisions of the two algorithms are fused by summation to make a single decision, which gave a better result than each individual model [8]. So, the problem we solve in this paper is to detect patients with or without heart disease by analyzing medical data. If a patient does not have heart disease, there is nothing to worry about, but if the result is positive (the patient has heart disease), he or she can undergo further treatment. As machine learning can process huge datasets, it can give results at very low cost and within a short time, improving the quality of health care and relieving physicians of a lot of pressure. This paper is organized as follows. Cardiovascular disease and the importance of early identification of the disease are introduced in the first section. The second section discusses previous studies of medical diagnosis based on machine learning.
The third section describes the method used to develop the model. Section four analyzes the experimental findings of our proposed decision-level fusion model. In section five, our fusion model's performance is compared with related work done so far. Finally, in section six, we conclude and discuss possible future development of the work.
2 Related Study
Much research has been done on heart disease classification, so researchers are now trying to improve current models with something new. Modified and fusion algorithms have been introduced recently, and they are giving better performance than others. Different techniques have also been tried in preprocessing to make the data more suitable for the algorithms. Here we discuss some works related to the classification of heart disease along with their potential improvements. In [7], cardiovascular disease was classified using the mean value: missing values were replaced by mean values in the preprocessing step. They used the SVM classifier and the Naïve Bayes algorithm and compared results with and without mean-value imputation; both algorithms improved their accuracy with it. They used
an SVM linear kernel and got an accuracy of 86.8% using 80% of the data for training. They experimented with different train-test ratios, but the accuracy was not satisfactory; 86.66% accuracy was achieved using 75% of the data for training. The best result among the algorithms was obtained by the SVM linear kernel. The main weakness of this work is its poor accuracy. In [5], researchers proposed a system designed with a multilayer perceptron neural network, trained with backpropagation. They calculated accuracy while adjusting the number of nodes in the hidden layer; with eight nodes in the hidden layer, a maximum accuracy of 95% was reached with PCA. Other performance parameters were also observed with respect to different sizes of the hidden layer. Another study [12] applied two supervised data mining algorithms to classify a patient's heart disease using two classification models, Naïve Bayes and a decision tree classifier. The decision tree classifier predicted better, with an accuracy of 91%, whereas Naïve Bayes showed 87% accuracy. In [1], researchers analyzed machine learning methods for different types of disease prediction. Logistic regression (LR), XGBoost (XGB), random forest (RF) and LSTM, a special kind of recurrent neural network, were used for prediction; XGBoost performed better than LSTM. In a recent study [13], researchers used four different algorithms for heart disease prediction. The highest accuracy, 90.789%, was achieved with KNN using a train-test split. Naïve Bayes and random forest gave 88.15% and 86.84% accuracy, respectively, using the Cleveland dataset. No complex or combined model for higher accuracy was introduced in that study, which is a drawback. There is scope for improving the accuracy of these works by implementing a fuzzy method or combining separate algorithms into a new one.
So we have preferred to use the fusion model to classify disease, which can provide greater accuracy.
3 Proposed Solution
This research aims to design a decision-level fused model that can improve accuracy and identify heart disease efficiently. This fusion model classifies patients with and without heart disease by merging two machine learning-based algorithms. The proposed solution is displayed in Fig. 1:
Fig. 1. The proposed architecture
In this proposed architecture, there are several steps. First, the data were preprocessed. Using the chi-square test, only the critical features that contribute most to the prediction were selected. For training with KNN, the top 10 important features were chosen; for the artificial neural network, all 13 features were taken. After training, we fused the individual models' decisions by summation to get a more accurate outcome. The phases are described in detail below.

3.1 UCI Heart Diseases Dataset
The UCI heart disease dataset was used for heart disease classification. The cardiovascular disease dataset was taken from the UCI machine learning repository [2]. It has 303 instances with 13 features and some missing values. This repository holds a huge and varied range of datasets that are of interest to various sectors; the machine learning community uses these data to contribute to developments in different domains. The repository was developed by David Aha [2]. The dataset is labeled into two classes. The target value contains two categories, 0 and 1: 0 means no heart disease and 1 represents having heart disease, making this a binary classification. The dataset is described in Table 1 [2].

Table 1. Descriptions of features
Feature: Description
Age: Lifetime in years (29-77)
Gendr: Instance of gender (0 = Female, 1 = Male)
ChstPainTyp: The type of chest pain (1 = typical angina, 2 = atypical angina, 3 = non-anginal pain, 4 = asymptomatic)
RestBlodPresure: Resting blood pressure in mm Hg [94, 200]
SermCholstrl: Serum cholesterol in mg/dl [126, 564]
FstingBlodSugr: Fasting blood sugar >120 mg/dl (0 = False, 1 = True)
ResElctrcardigrphic: Results of resting ECG (0 = normal, 1 = ST-T wave abnormality, 2 = LV hypertrophy)
MaxHartRte: Maximum heart rate achieved [71, 202]
ExrcseIndcd: Exercise induced angina (0 = No, 1 = Yes)
Oldpek: ST depression induced by exercise relative to rest [0.0, 6.2]
Slp: Slope of the peak exercise ST segment (1 = up-sloping, 2 = flat, 3 = down-sloping)
MajrVesels: Number of major vessels coloured by fluoroscopy (values 0-3)
Thl: Defect types (3 = normal, 6 = fixed defect, 7 = irreversible defect)
Fig. 2. Data distributions of attributes in terms of target class: (a) target class, (b) gender, (c) chest pain type, (d) slope, (e) fasting blood sugar, (f) major vessel number, (g) exercise induced angina, (h) thalach, (i) age, (j) resting blood pressure, (k) cholesterol, (l) maximum heart rate, (m) ST by exercise, (n) ECG.
Figure 2 shows the data distribution of all 13 attributes in terms of the target class, along with the distribution of the target class itself.

3.2 Data Pre-processing
Pre-processing represents an important step in the classification of data [6]. There are some missing values in the dataset: for example, the major vessels and thalassemia attributes are missing in some patient records. Such missing values are replaced by the most common values. Of the 14 attributes, eight are symbolic and six are numeric. The categorical values are converted into numeric data using a label encoder.
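The imputation and encoding steps can be sketched without any particular library; the sample values below are illustrative, not rows from the dataset:

```python
from collections import Counter

def impute_most_common(values, missing=None):
    """Replace missing entries with the most common observed value."""
    most_common = Counter(v for v in values if v is not missing).most_common(1)[0][0]
    return [most_common if v is missing else v for v in values]

def label_encode(values):
    """Map each distinct category to an integer, like a label encoder."""
    mapping = {v: i for i, v in enumerate(sorted(set(values)))}
    return [mapping[v] for v in values]

# A toy categorical column with one missing entry.
thal = impute_most_common(["normal", None, "fixed", "normal"])
codes = label_encode(thal)
```

In practice the same effect is obtained with pandas' `fillna` and scikit-learn's `LabelEncoder`.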
3.3 Feature Scaling
Feature scaling was applied to limit the range of variables. We have used min-max normalization for feature scaling:

Q'v = (Qv − min(Qv)) / (max(Qv) − min(Qv))    (1)
where Qv is an original value and Q'v is the normalized value. The dataset was then split into two parts, testing and training: 75% of the data were used for training and 25% for testing. 10% of the training data was taken as a validation set, which prevents the model from over-fitting the training data.
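Equation 1 applied to one column, as a minimal sketch (the sample values are illustrative):

```python
def min_max_scale(column):
    """Min-max normalization from Eq. 1: maps values into [0, 1]."""
    lo, hi = min(column), max(column)
    return [(q - lo) / (hi - lo) for q in column]

# e.g. three resting blood pressure readings spanning the [94, 200] range
scaled = min_max_scale([94, 147, 200])
```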
3.4 Feature Selection
There are 13 features in our dataset, and only ten of them were used for training KNN. The most significant features were selected using the chi-square test, which selects the features that have the strongest connection with the output. Sex, fasting blood sugar, and resting ECG were excluded with the help of the chi-square test, so ten features out of 13 were used for KNN. Figure 3 presents the ten most important features according to their score.
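A chi-square score for a single categorical feature against the binary target can be computed from the contingency table. This hand-rolled sketch illustrates the classical chi-square statistic of independence that underlies such feature selection; the toy feature and target vectors are illustrative:

```python
from collections import Counter

def chi2_score(feature, target):
    """Chi-square statistic: sum of (observed - expected)^2 / expected
    over the cells of the feature-vs-target contingency table."""
    n = len(feature)
    cells = Counter(zip(feature, target))
    f_tot, t_tot = Counter(feature), Counter(target)
    score = 0.0
    for f in f_tot:
        for t in t_tot:
            expected = f_tot[f] * t_tot[t] / n
            observed = cells.get((f, t), 0)
            score += (observed - expected) ** 2 / expected
    return score

dependent = chi2_score([1, 1, 0, 0], [1, 1, 0, 0])     # high score
independent = chi2_score([1, 0, 1, 0], [1, 1, 0, 0])   # score 0
```

A feature that perfectly tracks the target scores high; an unrelated feature scores near zero, which is why low-scoring features such as sex, fasting blood sugar and resting ECG could be dropped.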
Fig. 3. The ten most important features according to their chi-square score
3.5 Training Phase
One portion of the data was used for the training set. The training data were trained with ANN and KNN. For KNN, highly dependent features were selected from the 13 attributes using the chi-square test.

Artificial Neural Network. The human brain, with its webs of interconnected neurons and incredible processing ability, inspired scientists to build the Artificial Neural Network (ANN). The basic processing unit of an ANN is the perceptron. The input, hidden, and output layers are the three main layers: the input is given to the input layer, and the result is obtained at the output layer. Backpropagation is used to find the error and to adjust the weights between the layers. After backpropagation completes, the forward pass begins again, and the process continues until the error is minimized [15]. The input layer has thirteen nodes, the hidden layer has eight neurons, and the output layer has one neuron, which produces the output value. Figure 4 displays the architecture of our artificial neural network. We used SGD as the optimizer.

K-Nearest Neighbor. K-Nearest Neighbor (KNN) is a supervised machine learning algorithm that finds similarity by close proximity; Euclidean distance is usually used to compute the similarity. KNN starts by deciding the number k of neighbors to compare with. K is set to an odd number for the best outcome. After determining k, the distance from the object to every available object in the dataset is calculated, and the k objects with the least distances are chosen. The category that appears most among these k objects becomes the classification result [14]. Some of KNN's benefits are that it is easy to use and simple. Proper optimization is needed to get the best result from any algorithm [16].
Fig. 4. Structure of proposed artificial neural network
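The KNN classification step described above can be sketched in a few lines; k = 3 and the toy training points are illustrative assumptions:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours,
    using Euclidean distance."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], query))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters labelled 0 and 1.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
pred = knn_predict(X, y, (5.5, 5.5))
```

An odd k, as the text recommends, avoids tied votes in two-class problems.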
3.6 Decision Level Fusion
After preprocessing and dividing the data into test and training sets, the dataset was trained with ANN and KNN. The output was then predicted for the test data using the trained models. The trained models provide a decision probability for every test datum, and based on this decision score the final output is predicted. The decision probability ranges between 0 and 1; if the value exceeds 0.5, the model predicts that the patient with that particular data has heart disease. In decision-level merging, we simply added the decision scores obtained from the two algorithms, so that the scores from the two different algorithms form one decision for each test datum. Thus, we get a single decision from two classifiers. The equation for decision fusion is:

Df = (sum for s = 1 to n of Ds) / n    (2)
where n is the number of algorithms used in the fusion; since only ANN and KNN were used, n was 2 in our approach. Df represents the final decision of the fusion model, and Ds is the decision probability given by an individual algorithm, ranging between 0 and 1. The steps of our fusion are listed in Algorithm 1. The fusion occurs only at the decision stage, which is why the model is called a decision-level fusion model. As the decision scores from both models have the same range, no extra scaling is needed before fusion. Suppose one of the individual models gives a false negative (say, a decision probability of 0.45) for a test datum that actually belongs to a patient with heart disease, while the other algorithm gives an accurate result for that datum (0.7, a true positive). After fusion, (0.45 + 0.7)/2 = 0.575 is the fusion model's score. As the score is greater than 0.5, the result is a true positive for that specific
Algorithm 1: Algorithm for decision-level fusion
Input: float values Ds and one int n, where Ds is the individual algorithm's decision score and n is the number of individual algorithms in the fusion
Output: Df, the new decision score of the fusion model

1  Dsum = 0
2  for s ← 1 to n do
3      Dsum = Ds + Dsum
4  end
5  Df = Dsum / n
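Algorithm 1 amounts to simple score averaging followed by thresholding at 0.5. As an illustration, a minimal Python sketch (function names are ours, not from the paper):

```python
# Illustrative sketch of the decision-level fusion step (Eq. 2):
# average the decision scores of the individual classifiers and
# threshold the fused score at 0.5. Names here are hypothetical.

def fuse_decisions(scores):
    """Average the decision scores Ds of the n classifiers into Df."""
    return sum(scores) / len(scores)

def predict_label(fused_score, threshold=0.5):
    """Map the fused score to a class label (1 = heart disease)."""
    return 1 if fused_score > threshold else 0

# Example from the text: ANN gives a false negative (0.45) while
# KNN is correct (0.7); the fused score 0.575 recovers the positive.
d_f = fuse_decisions([0.45, 0.7])   # 0.575
label = predict_label(d_f)          # 1 (true positive after fusion)
```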
test data; thus, the fusion model's accuracy increases. Usually, both individual algorithms give an accurate decision for the same test sample; the situation where one of them is wrong occurs for only one or two test samples, and that is what makes the fusion model more reliable and efficient.

3.7 Evaluation Metrics
Various performance parameters were measured along with accuracy to assess the performance of our model. The model's performance can be clearly understood from the values of these parameters. Accuracy was calculated to observe the system performance:

Accuracy = (TRpos + TRneg) / (TRpos + TRneg + FApos + FAneg) ∗ 100    (3)
The following terms have been used in the equations of the evaluation metrics; the abbreviations are introduced here.

– True positive (TRpos): the output is positive, and the actual value is positive.
– True negative (TRneg): the output is negative, and the actual value is negative.
– False positive (FApos): the output is positive, but the actual value is negative.
– False negative (FAneg): the output is negative, but the actual value is positive.
Precision indicates, of all the samples our model predicts as positive, how many are actually positive:

Precision = TRpos / (TRpos + FApos)    (4)

Recall indicates how much of the actual positive data is retrieved; both parameters are significant for a model's evaluation:

Recall = TRpos / (TRpos + FAneg)    (5)

F1-score combines recall and precision in a single measure:

F1-score = 2 ∗ Precision ∗ Recall / (Precision + Recall)    (6)
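Equations (3)-(6) can be computed directly from the raw confusion-matrix counts; a hedged sketch (variable and function names are ours):

```python
# Sketch: evaluation metrics of Eqs. (3)-(6) from confusion counts.
# tr_pos/tr_neg/fa_pos/fa_neg follow the TRpos/TRneg/FApos/FAneg
# notation of the text; the function name is an assumption.

def evaluation_metrics(tr_pos, tr_neg, fa_pos, fa_neg):
    total = tr_pos + tr_neg + fa_pos + fa_neg
    accuracy = (tr_pos + tr_neg) / total * 100          # Eq. (3)
    precision = tr_pos / (tr_pos + fa_pos)              # Eq. (4)
    recall = tr_pos / (tr_pos + fa_neg)                 # Eq. (5)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (6)
    return accuracy, precision, recall, f1

# ANN counts from Table 3: Tp=40, Tn=30, Fp=2, Fn=4 -> accuracy ~92.1%
acc, p, r, f1 = evaluation_metrics(40, 30, 2, 4)
```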
We have also computed the confusion matrix (Table 2) to see the exact numbers of correctly and incorrectly classified positive and negative predictions.
H. B. Kibria and A. Matin

Table 2. Confusion matrix

                      Really positive          Really negative
Positive prediction   True positive (TRpos)    False positive (FApos)
Negative prediction   False negative (FAneg)   True negative (TRneg)
4 Performance Evaluation
A decision-level fusion model has been constructed using an artificial neural network and the k-nearest neighbor algorithm. The two algorithms were applied individually to the same dataset, and then the decision scores of the two algorithms were combined to improve the accuracy; the fused model produced a better result.
Fig. 5. Train and validation accuracy (a) and loss (b) of ANN in terms of training number
Fig. 6. Variation of the accuracy of train and test regarding the KNN neighbor number
Fig. 7. Confusion matrix and ROC curve of the decision-level fusion model: (a) confusion matrix of the fusion model, (b) ROC curve of ANN, KNN, and the fusion model
Figure 5 displays the training and validation accuracy and loss with respect to the epoch; fifty epochs have been used. In the graph, validation is labeled as test. The relationship between the test and training accuracy with respect to the number of KNN neighbours is shown in Fig. 6. From this graph, we selected the neighbour number k = 12 and achieved an accuracy of 88.16% for the test data. Table 3 shows the various performance parameters, which increase when the two algorithms' decisions are combined. Thus the decision-score-level fusion became more efficient for predicting the disease. Here the fusion model's accuracy is reported only on the testing data, as the fusion model was not trained.

Table 3. Experimental results

Approach   Tp  Fp  Fn  Tn  Accuracy (train)  Accuracy (test)  Precision  Recall  F1-score  ROC-AUC score
ANN        40   2   4  30  82.35             92.10            92         92      92        91.73
KNN        41   1   8  26  81.15             88.16            89         88      88        87.04
ANN+KNN    41   5   1  29  –                 93.4             94         93      93        93.94
In Table 3, the accuracy for the test and training data has been reported. For the fusion model, only the test accuracy is shown, as the decision-level fusion was applied after training the two algorithms. The fusion model's accuracy rose by about 1% compared to ANN and 5% compared to KNN. All the other
parameters also improved. The confusion matrix values are generated from the test predictions. From the values in Table 3, it can be said that ANN performed well compared to KNN, and the performance increased further after merging: the decision-level fusion model performed much better than either of the two individual algorithms. Here each model is used independently for prediction, and then the decision scores from the models are combined to improve accuracy.
Fig. 8. Comparison of the performance parameters of ANN, KNN and fusion model
Figure 7(a) shows the confusion matrix of our fusion model; the misclassification rate is 6.57%. In Fig. 7(b), the ROC curves are displayed. The comparison of the performance of the three approaches (ANN, KNN, and ANN+KNN) is displayed in Fig. 8 to show the improvement of the fusion model.
5 Performance Comparison with Previous Work
In our work, we used the Cleveland dataset to classify heart disease. Previous works on the same dataset are presented in this section; to illustrate the progress of our model, we compared our model with them. Table 4 lists the previous works, mentioning the methods the researchers used along with the year and accuracy. Researchers have used different algorithms in these works to diagnose heart disease. Some of the works in Table 4 measured their accuracy with an 80:20 train-test split, so we also calculated the accuracy of our fusion model with 80% of the data used for training in order to compare with them. It gave an outstanding accuracy of 96.72%, higher than any of these models. For other split ratios as well, the fusion model did well. In comparison, the efficiency of our decision-level fusion model is much higher than theirs. We obtained good accuracy with ANN and KNN individually, but the accuracy improved most after the fusion of the two algorithms. The comparison with the existing works in Table 4 indicates the predominance of the fusion model.
Table 4. Previous work with the same dataset

Author                      Year  Approach                   Train-test ratio  Accuracy           Fusion model's accuracy
D. Shah et al. [13]         2020  KNN / Naïve Bayes          Not mentioned     90.78% / 88.157%   –
P. Ramya et al. [11]        2020  SVM / Logistic regression  80:20             85% / 87%          96.72%
H. Karthikeyan et al. [3]   2020  SVM / Random forest        80:20             82% / 89.9%        96.72%
N. Louridi et al. [7]       2019  SVM linear-kernel          80:20             86.8%              96.72%
N.K. Jalil et al. [10]      2019  KNN                        80:20             91.00%             96.72%
M. Senthilkumar et al. [9]  2019  HRFLM (proposed)           83:17             88.40%             97%
K. Tülay et al. [5]         2017  ANN with PCA               85:15             95%                97%
T.T. Hasan et al. [4]       2017  MLP                        70:30             87%                91.20%

6 Conclusion
This work aims to develop a decision-score-level fusion model that can provide a better result than a single algorithm. The model creates a final decision using the decision scores from two other models. The artificial neural network and k-nearest neighbor have both given good individual results for the prediction of cardiovascular disease, but by merging the decision scores of the two algorithms, a significant improvement is noticed. If one algorithm gives the wrong result for a particular sample and the other predicts it correctly, there is a possibility that the correct result will be obtained from the fusion model; that is why fused models are gaining ground in medical diagnosis. Our model gave an accuracy of 93.42%, while the separate models' accuracies were 92.10% and 88.16% for ANN and KNN, respectively. In this paper, only a cardiovascular disease dataset has been used, but this decision-level fusion model can also be applied to other medical datasets. In the future, this fused model can be enhanced by using different algorithms. Furthermore, more than two classification algorithms can be used to build a new model to gain a more accurate result.
References

1. Christensen, T., Frandsen, A., Glazier, S., Humpherys, J., Kartchner, D.: Machine learning methods for disease prediction with claims data. In: 2018 IEEE International Conference on Healthcare Informatics (ICHI), pp. 467–4674. IEEE (2018)
2. Dua, D., Graff, C.: UCI machine learning repository (2017). http://archive.ics.uci.edu/ml
3. Harimoorthy, K., Thangavelu, M.: Multi-disease prediction model using improved SVM-radial bias technique in healthcare monitoring system. J. Ambient Intell. Hum. Comput. 1–9 (2020)
4. Hasan, T.T., Jasim, M.H., Hashim, I.A.: Heart disease diagnosis system based on multi-layer perceptron neural network and support vector machine. Int. J. Curr. Eng. Technol. 77(55), 2277–4106 (2017)
5. Karayılan, T., Kılıç, Ö.: Prediction of heart disease using neural network. In: 2017 International Conference on Computer Science and Engineering (UBMK), pp. 719–723. IEEE (2017)
6. Kotsiantis, S., Kanellopoulos, D., Pintelas, P.: Data preprocessing for supervised leaning. Int. J. Comput. Sci. 1(2), 111–117 (2006)
7. Louridi, N., Amar, M., El Ouahidi, B.: Identification of cardiovascular diseases using machine learning. In: 2019 7th Mediterranean Congress of Telecommunications (CMT), pp. 1–6. IEEE (2019)
8. Matin, A., Mahmud, F., Ahmed, T., Ejaz, M.S.: Weighted score level fusion of iris and face to identify an individual. In: 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1–4. IEEE (2017)
9. Mohan, S., Thirumalai, C., Srivastava, G.: Effective heart disease prediction using hybrid machine learning techniques. IEEE Access 7, 81542–81554 (2019)
10. Nourmohammadi-Khiarak, J., Feizi-Derakhshi, M.R., Behrouzi, K., Mazaheri, S., Zamani-Harghalani, Y., Tayebi, R.M.: New hybrid method for heart disease diagnosis utilizing optimization algorithm in feature selection. Health Tech. 1–12 (2019)
11. Ramya Perumal, K.A.: Early prediction of coronary heart disease from Cleveland dataset using machine learning techniques. Int. J. Adv. Sci. Technol. 29(06), 4225–4234, May 2020. http://sersc.org/journals/index.php/IJAST/article/view/16428
12. Krishnan, S., Geetha, J.S.: Prediction of heart disease using machine learning algorithms. In: 1st International Conference on Innovations in Information and Communication Technology (ICIICT). IEEE (2019)
13. Shah, D., Patel, S., Bharti, S.K.: Heart disease prediction using machine learning techniques. SN Comput. Sci. 1(6), 1–6 (2020)
14. Telnoni, P.A., Budiawan, R., Qana'a, M.: Comparison of machine learning classification method on text-based case in twitter. In: 2019 International Conference on ICT for Smart Society (ICISS), vol. 7, pp. 1–5. IEEE (2019)
15. Thomas, J., Princy, R.T.: Human heart disease prediction system using data mining techniques. In: 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), pp. 1–5. IEEE (2016)
16. Vasant, P., Zelinka, I., Weber, G.W.: Intelligent Computing & Optimization, vol. 866. Springer, Berlin (2018)
Towards POS Tagging Methods for Bengali Language: A Comparative Analysis Fatima Jahara, Adrita Barua, MD. Asif Iqbal, Avishek Das, Omar Sharif , Mohammed Moshiul Hoque(B) , and Iqbal H. Sarker Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Chittagong 4349, Bangladesh [email protected], [email protected], [email protected], [email protected], {omar.sharif,moshiul 240,iqbal}@cuet.ac.bd
Abstract. Part of Speech (POS) tagging is recognized as a significant research problem in the field of Natural Language Processing (NLP). It has considerable importance in several NLP technologies. However, developing an efficient POS tagger is a challenging task for resource-scarce languages like Bengali. This paper presents an empirical investigation of various POS tagging techniques concerning the Bengali language. An extensively annotated corpus of around 7390 sentences has been used with 16 POS tagging techniques, including eight stochastic-based methods and eight transformation-based methods. The stochastic methods are uni-gram, bi-gram, tri-gram, unigram+bigram, unigram+bigram+trigram, Hidden Markov Model (HMM), Conditional Random Field (CRF), and Trigrams 'n' Tags (TnT), whereas the transformation-based methods are Brill in combination with the previously mentioned stochastic techniques. A comparative analysis of the tagging methods is performed using two tagsets (30-tag and 11-tag) with accuracy measures. Brill combined with CRF shows the highest accuracy, 91.83% (for the 11 tagset) and 84.5% (for the 30 tagset), among all the tagging techniques.

Keywords: Natural language processing · Part-of-speech tagging · POS tagset · Training · Evaluation

1 Introduction
POS tagging has significant importance in many NLP applications such as parsing, information retrieval, speech analysis, and corpus development. Moreover, it is used as a pivotal component to build a knowledge base for a natural language analyzer. It makes the syntactic parser effective as it resolves the problem of input-sentence ambiguity. Tagging of words is significantly useful since tags are used as the input in various applications, where they provide the linguistic signal on how a word is being used within the scope of a phrase, sentence, or document. POS tagging directly affects the performance of any subsequent text processing steps, as it makes the processing easier when the grammatical information

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1111–1123, 2021. https://doi.org/10.1007/978-3-030-68154-8_93
about the word is known. Usually, supervised and unsupervised approaches are employed in POS tagging, which are further divided into rule-based, stochastic-based, and transformation-based methods. Rule-based POS tagging uses a dictionary or lexicon for obtaining the possible tags of a word. The stochastic method considers the highest frequency or probability value to assign a POS tag. A few stochastic tagging methods such as N-grams, CRFs, and HMMs have been implemented for Bengali, English, and other languages [1–3]. The transformation-based method combines rule-based and stochastic techniques, as in the Brill tagger. Designing a POS tagger is a very challenging task for a resource-poor language like Bengali. POS tagging of Bengali sentences is complicated due to the language's complex morphological structure, the dependency of the subject on verb inflections, person-verb-tense-aspect agreement, and the scarcity of pre-tagged resources [4,5]. Moreover, the ambiguity of a word with multiple POS tags and the lack of availability of language experts in the Bengali language pose other obstacles that need to be overcome. Most of the previous works on POS tagging in Bengali neither highlighted the tagging effectiveness nor investigated their appropriateness. Thus, to address this issue, this paper empirically investigates the performance of 16 POS tagging methods using a supervised approach on a corpus containing 7390 Bengali sentences. A comparative analysis in terms of execution time and accuracy is reported, which helps to decide the suitable POS tagging technique for various language processing tasks in Bengali.
2 Related Work
Different approaches have been explored for POS tagging in Bengali and other languages. Stochastic and transformation-based methods are the most widely used techniques, where a large dataset is a prerequisite to achieve good performance. Hasan et al. [6] showed a comparative analysis of n-gram, HMM, and Brill transformation-based POS tagging for South Asian languages. A tagset of 26 tags was used for the Bengali, Hindi, and Telugu languages, gaining 70% accuracy for Bengali with the Brill tagger. Another work implemented the trigram and HMM tagging methods [7] for the Marathi language. A comparison between the stochastic (HMM, unigram) and transformation-based (Brill) methods is presented by Hasan et al. [8]. This work used a small training set of 4048 tokens in Bengali and experimented with two different tagsets (12-tag and 41-tag). The results revealed that the Brill tagger performed better than the other stochastic methods. A stochastic approach proposed by Ekbal et al. [9] concluded that a maximum entropy-based method outperforms the HMM-based POS tagging method for Bengali. Ekbal et al. [10] developed a POS tagger for Bengali sentences using CRF in the named entity recognition task. PVS et al. [11] showed that CRF, along with transformation-based learning, achieved 76.08% accuracy for Bengali POS tagging. The supervised tagging methods demanded a large amount of tagged data to achieve high accuracy. Dandapat et al. [12] used a semi-supervised method of
POS tagging with HMM and Maximum Entropy (ME). Hossain et al. [13] developed a method that checks whether the construction of a Bengali sentence is valid syntactically, semantically, or pragmatically. They designed a rule-based algorithm using context-free grammars to identify all POS meticulously. Roy et al. [14] developed a POS tagger that identifies 8 POS tags in Bengali using grammar and suffix-based rules. However, they only considered word-level tag accuracy, which fails to identify the correct tag sequence in sentences. Sakiba et al. [15] discussed a POS tagging tool that used a predefined list of POS tags and applied rules to detect POS tags in Bengali texts. Their method used a very small dataset containing 2000 sentences, and it faced difficulties due to the limited rules. Chakrabarti et al. [16] proposed a POS tagging method using a layered approach. Rathod et al. [17] surveyed different POS tagging techniques such as rule-based, stochastic, and hybrid for Indian regional languages, where the hybrid method performed better. Most of the previous approaches to POS tagging in Bengali experimented on smaller datasets, which limits the investigation of their effectiveness across diverse tagsets. In this work, a somewhat larger dataset consisting of 7390 sentences is used.
3 Methodology
The key objective of our work is to investigate the performance of different types of POS tagging techniques under a supervised approach. To serve our purpose, we used a tagged corpus in Bengali developed by the Linguistic Data Consortium [18]. The overall process of the work consists of five significant steps: tokenization, training/testing corpus creation, POS tagger model generation, tagged sentence generation, and evaluation. Figure 1 illustrates the abstract representation of the overall process to evaluate POS taggers.

3.1 Tagged Corpus
The corpus consists of 7390 tagged sentences with 22,330 unique tokens and 115 K tokens overall. Two different levels of tagsets have been used for the annotation: 30 tags and 11 tags [19]. The corpus was originally tagged with the 30-tagset, which denotes the lexical categories (i.e., POS) and sub-categories. This 30-tagset is mapped into the 11-tagset using a mapping dictionary, which considers the lexical categories alone. An extra tag ('Unk') is used for handling unknown words: if a tagger encounters a word that is not available in the training set, it labels the word with the 'Unk' tag. Table 1 illustrates the tagsets with their tag names and counts.

3.2 Tokenization
Tokenization is the task of slicing a sequence of characters into pieces, called tokens [20]. A token is a string of contiguous characters grouped as a semantic unit and delimited by spaces and punctuation marks. These tokens are often
Fig. 1. Abstract view of POS tagging evaluation process.
loosely referred to as terms or words. In our corpus, 7390 tagged sentences were tokenized into a total of 115 K tokens with 22,330 unique tokens. A sample tagged sentence and its corresponding tokens are shown in the following.
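While the corpus sample itself is not reproduced here, the word/tag splitting step can be sketched in Python as follows. The romanized Bengali words and the backslash separator are illustrative assumptions; the tags PPR, NC, and VM are from the 30-tagset:

```python
# Hedged sketch: split a tagged sentence of the assumed form
# "word\TAG word\TAG ..." into (word, tag) tuples.

def tokenize_tagged(sentence, sep="\\"):
    tokens = []
    for piece in sentence.split():            # whitespace-delimited tokens
        word, _, tag = piece.partition(sep)   # split word from its tag
        tokens.append((word, tag))
    return tokens

sample = "ami\\PPR bhat\\NC khai\\VM"   # hypothetical romanized example
pairs = tokenize_tagged(sample)
# -> [('ami', 'PPR'), ('bhat', 'NC'), ('khai', 'VM')]
```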
3.3 Train/Test Corpus Creation
The tokenized dataset is divided into two sets: a train corpus and a test corpus. The training corpus consists of 98 K tagged tokens, while the test corpus contains 17 K tokens. The data in the test corpus are untagged for use in the testing phase for evaluation. A data sample in the training corpus (with the 11-tagset and 30-tagset) and the testing corpus is illustrated in the following.
Table 1. Summary of POS tagsets

11 Tagset             Count   30 Tagset                       Count
Noun (N)              44425   Common Noun (NC)                30819
                              Proper Noun (NP)                 7994
                              Verbal Noun (NV)                 2985
                              Spatio-Temporal Noun (NST)       2627
Verb (V)              14292   Main Verb (VM)                  12062
                              Auxiliary Verb (VAUX)            2230
Pronoun (P)            6409   Pronominals (PPR)                5137
                              Reflexive (PRF)                   362
                              Reciprocal (PRC)                   15
                              Relative (PRL)                    448
                              WH Pronoun (PWH)                  447
Nominal Modifier (J)  14332   Adjective (JJ)                   9377
                              Quantifier (JQ)                  4955
Demonstrative (D)      2876   Absolute Demonstrative (DAB)     2421
                              Relative Demonstrative (DRL)      400
                              WH Demonstrative (DWH)             55
Adverb (A)             3965   Adverb of Manner (AMN)           1995
                              Adverb of Location (ALC)         1970
Participle (L)          573   Verbal Participle (LV)             72
                              Conditional Participle (LC)       501
Post position (PP)     3989   Post Position (PP)               3989
Particle (C)           6704   Coordinating Particle (CCD)      2899
                              Subordinating Particle (CSB)     2051
                              Classifier Particle (CCL)         324
                              Interjection (CIN)                 59
                              Others (CX)                      1371
Punctuation (PU)      13519   Punctuation (PU)                13519
Residual (R)           4348   Foreign Word (RDF)               1873
                              Symbol (RDS)                     1968
                              Others (RDX)                      507
3.4 POS Tagging Model Generation
The training set is used to train the different POS tagging methods. Each tagging method is trained on the training corpus tagged with the 11- and 30-tagsets; each of the 16 POS tagging methods is used in a unique way to generate its corresponding training model. The N-gram, HMM, TnT, and CRF tagger models generate feature matrices which are used in calculating the probabilities of the tags. The Brill tagger generates rules used to estimate tags, and the HMM model builds a transition probability matrix called a Markov matrix. An N-gram tagger follows the bag-of-words approach, while the CRF tagger uses a statistical approach.

N-Gram Tagger. An N-gram is a sequence of N words in a sentence. An N-gram tagger considers the tags of the previous (n − 1) words (one word in a bigram, two words in a trigram) in the sentence to predict the POS tag for a given word [7]. The best tag is determined by the probability that the tag occurs with the (n − 1) previous tags. Thus, if τ1, τ2, ..., τn is a tag sequence and ω1, ω2, ..., ωn is the corresponding word sequence, then the probability can be computed using Eq. 1:

P(τi | ωi) = P(ωi | τi) · P(τi | τi−(n−1), ..., τi−1)    (1)

where P(ωi | τi) denotes the probability of the word ωi given the current tag τi, and P(τi | τi−(n−1), ..., τi−1) represents the probability of the current tag τi given the (n − 1) previous tags. This provides the transition between the tags and helps to capture the context of the sentence. The probability of a tag τi given the previous (n − 1) tags τi−(n−1), ..., τi−1 can be determined using Eq. 2:

P(τi | τi−(n−1), ..., τi−1) = C(τi−(n−1), ..., τi) / C(τi−(n−1), ..., τi−1)    (2)
Each tag transition probability is computed as the count of occurrences of the n tags divided by the count of occurrences of the previous (n − 1) tags. Different N-gram models can be combined to work as a combined tagger.

HMM Tagger. In HMM, the hidden states are the POS tags (τ1, τ2, ..., τn) and the observations are the words themselves (ω1, ω2, ..., ωn). Both transition and emission probabilities are calculated to determine the most appropriate tag for a given word in a sequence. The overall probability of a tag τi given a word ωi is

P(τi | ωi) = P(ωi | τi) · P(τi | τi−1) · P(τi+1 | τi)    (3)

Here, P(ωi | τi) is the probability of the word ωi given the current tag τi, P(τi | τi−1) is the probability of the current tag τi given the previous tag τi−1, and P(τi+1 | τi) is the probability of the future tag τi+1 given the current tag τi.
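As a concrete illustration, the count-ratio estimate of Eq. (2) can be sketched in Python for the bigram case (n = 2). The toy tag sequences below are invented, and the helper name is ours:

```python
from collections import Counter

# Sketch of the count-ratio estimate in Eq. (2) for a bigram tagger:
# P(tag_i | tag_{i-1}) = C(tag_{i-1}, tag_i) / C(tag_{i-1}).

def transition_probs(tag_sequences):
    unigrams, bigrams = Counter(), Counter()
    for tags in tag_sequences:
        unigrams.update(tags)                 # C(tag)
        bigrams.update(zip(tags, tags[1:]))   # C(tag_{i-1}, tag_i)
    return {(prev, cur): c / unigrams[prev]
            for (prev, cur), c in bigrams.items()}

# Toy tag sequences using tags from the 11-tagset (N, V, PU):
probs = transition_probs([["N", "V", "N", "PU"], ["N", "V", "PU"]])
# e.g. P(V | N) = C(N, V) / C(N) = 2 / 3
```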
TnT Tagger. In TnT, the most appropriate sequence is selected based on the probabilities of each possible tag. For a given sequence of words of length n, a sequence of tags is calculated by Eq. 4:

arg max over τ1, ..., τn of [ ∏_{i=1}^{n} P(τi | τi−1, τi−2) · P(ωi | τi) ] · P(τn | τn−1)    (4)

where P(τi | τi−1, τi−2) denotes the probability of the current tag τi given the two previous tags τi−1 and τi−2, P(ωi | τi) indicates the probability of the word ωi given the current tag τi, and P(τn | τn−1) denotes the probability of the tag τn given the previous tag τn−1.

CRF Tagger. CRF is a discriminative probabilistic classifier that calculates the conditional probability of the tags given an observable sequence of words. The conditional probability of a sequence of tags T = (τ1, τ2, ..., τn) given a word sequence W = (ω1, ω2, ..., ωn) of length n can be calculated using Eq. 5:

P(T | W) = (1 / Z(w)) · exp{ Σ_{i=1}^{n} Σ_k θk fk(τi, τi−1, ωi) }    (5)
Here, fk(τi, τi−1, ωi) represents a feature function whose weight is θk, and Z(w) is a normalization factor that sums over all possible tag sequences.

Brill Tagger. The Brill tagger is a transformation-based tagger, where a tag is assigned to each word using a set of predefined rules. For a sequence of tags τ1, τ2, ..., τn, a Brill rule can be represented as Eq. 6, where a condition tests the preceding words or their tags and the rule is executed if the condition is fulfilled:

τ1 → τ2    (6)
Stochastic taggers can be used as back-off taggers with the Brill tagger. In our work, to investigate the effect of the back-off tagger on tagging performance, we used the uni-gram, bi-gram, tri-gram, uni-gram+bi-gram, uni-gram+bi-gram+tri-gram, HMM, TnT, and CRF tagging methods with the Brill tagger.
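The back-off arrangement means each tagger defers to the next one when it cannot tag a word. A minimal Python sketch of the idea, mirroring (not reproducing) the taggers used in the paper; the toy romanized words are invented, and 'Unk' is the corpus's unknown-word tag:

```python
from collections import Counter, defaultdict

class DefaultTagger:
    """Last-resort back-off: tags every word with one fixed tag."""
    def __init__(self, tag):
        self.tag = tag
    def tag_word(self, word):
        return self.tag

class UnigramTagger:
    """Assigns each word its most frequent training tag, deferring
    unknown words to the back-off tagger."""
    def __init__(self, tagged_tokens, backoff):
        freq = defaultdict(Counter)
        for word, tag in tagged_tokens:
            freq[word][tag] += 1
        self.best = {w: c.most_common(1)[0][0] for w, c in freq.items()}
        self.backoff = backoff
    def tag_word(self, word):
        return self.best.get(word) or self.backoff.tag_word(word)

train = [("boi", "NC"), ("boi", "NC"), ("pore", "VM")]  # toy tagged tokens
tagger = UnigramTagger(train, backoff=DefaultTagger("Unk"))
tags = [tagger.tag_word(w) for w in ["boi", "pore", "kolom"]]
# -> ['NC', 'VM', 'Unk']: the unseen word falls through to 'Unk'
```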
3.5 Predicted Tagged Sentence Generation
The generated POS tagging model predicts the tag with the highest probability for each token and labels the token with that POS tag. This process reads the untagged tokens and calculates the probabilities of the different tags based on the trained tagger model. The stochastic tagger models (such as N-gram, HMM, TnT, and CRF) use the feature matrices to calculate the probabilities of the tags; the transformation-based (i.e., Brill tagger) model uses the generated rules to estimate the probabilities of the tags. After POS tagging of each token, individual
lists of tagged tokens are created for each sentence. Algorithm 1 describes the process of converting the tagged token lists into tagged sentences.

Algorithm 1: Tagged tokens to tagged sentence generation
T ← List of Tagged Tokens        // list initialization
tagged_sentence ← []
for t ∈ T do
    S ← ""                       // tagged sentence initialization
    for token ∈ t do
        S ← S + token[word] + "\" + token[tag] + " "
    end
    tagged_sentence.append(S)
end

Here T denotes a list of tagged-token lists, one per sentence, and S represents a tagged sentence. Every token is a tuple of a 'word' and its corresponding 'tag', as token{word, tag}. The list of tokens is concatenated as a sequence of 'word' and 'tag' pairs to generate a tagged sentence. As an example, for the untagged testing tokens (illustrated in Sect. 3.3), the prediction model generates the tagged tokens and the tagged sentence (for the 11-tagset), as shown in the following.
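Separately, Algorithm 1 can be rendered as runnable Python; the backslash separator and the sample (word, tag) pairs are assumptions based on the tagged-sentence format:

```python
# Direct Python rendering of Algorithm 1: each tagged token list is
# joined back into a "word\TAG word\TAG ..." string.

def to_tagged_sentences(token_lists, sep="\\"):
    tagged_sentences = []
    for tokens in token_lists:      # one list of (word, tag) per sentence
        s = " ".join(word + sep + tag for word, tag in tokens)
        tagged_sentences.append(s)
    return tagged_sentences

out = to_tagged_sentences([[("ami", "PPR"), ("jai", "VM")]])
# -> ['ami\\PPR jai\\VM']
```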
4 Evaluation Results and Analysis
To evaluate the performance of the POS tagging techniques, we use two parameters: accuracy (A) and execution time (E). The accuracy is defined as the ratio between the number of correctly tagged tokens and the total number of tokens. The execution time (E) of a tagger is computed as the sum of the time required during training and testing. Experiments were run on a general-purpose computer with an Intel Core i5-5200H processor running at 2.20 GHz, 8 GB of RAM, and Windows 10; an NVIDIA GeForce GTX 950M GPU with 4 GB of RAM was used. Sixteen POS tagging methods were implemented, and their performance was investigated. Two tagsets (11-tagset and 30-tagset) were used with 115 K tokens for evaluating the performance of each POS tagging method in terms of accuracy and execution time. Table 2 summarizes the accuracy of the POS tagging techniques. The analysis revealed that the Brill+CRF model achieved the highest accuracy: 84.5% (for the 30 tagset) and 91.83% (for the 11 tagset). The tri-gram method performed poorly with both sets of POS tags. Additionally, it is observed that in all cases the accuracy of the taggers increases as the number of tags in the tagset decreases.
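The two evaluation parameters can be sketched in Python; the placeholder training/tagging steps and function names are ours:

```python
import time

# Sketch: accuracy A as the share of correctly tagged tokens (x100),
# and execution time E as training time plus testing time.

def tagging_accuracy(predicted, gold):
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold) * 100

t0 = time.perf_counter()
# ... train the tagger here (placeholder) ...
train_time = time.perf_counter() - t0

t1 = time.perf_counter()
# ... tag the test tokens here (placeholder) ...
test_time = time.perf_counter() - t1

execution_time = train_time + test_time   # E = training + testing

# Toy comparison of predicted vs. gold tags: 2 of 3 correct
acc = tagging_accuracy(["NC", "VM", "PU"], ["NC", "JJ", "PU"])
```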
Table 2. Accuracy of 16 POS tagging methods.

POS Tagger                         Accuracy (%) for 30 tagset   Accuracy (%) for 11 tagset
Uni-gram                           71.46                        75.88
Bi-gram                             7.79                         9.67
Tri-gram                            5.33                         6.21
Uni-gram+bi-gram                   72.59                        76.35
Uni-gram+bi-gram+tri-gram          72.42                        76.12
HMM                                75.12                        79.22
TnT                                72.35                        76.39
CRF                                84.27                        90.84
Brill+uni-gram                     72.99                        76.49
Brill+bi-gram                      60.58                        70.37
Brill+tri-gram                     59.30                        69.57
Brill+uni-gram+bi-gram             72.75                        76.55
Brill+uni-gram+bi-gram+tri-gram    72.54                        76.23
Brill+HMM                          76.04                        79.98
Brill+TnT                          72.83                        76.45
Brill+CRF                          84.50                        91.83
To examine the effect of corpus size on the performance of the POS taggers, the taggers were trained with different amounts of tokens from the same corpus. The tagged corpus was partitioned into training sets of different sizes: 10 K, 20 K, 30 K, 40 K, 50 K, 60 K, 70 K, 80 K, 90 K, 100 K, and 115 K tokens. Each model was trained on each partition individually and tested on the 17 K-token untagged testing dataset. Figure 2 shows the performance of the different POS tagging methods for various sizes of training corpus using the 30-tagset. From the figure, it is observed that the Brill+CRF tagger has the highest accuracy even when the corpus size is small. Both the CRF and Brill+CRF taggers reached almost 75% (for the 30-tagset) and 85% (for the 11-tagset) accuracy with a 10 K tagged set. The accuracy of each method increased sharply with the growth of the dataset and became almost steady at 100 K. The performance of a tagger also depends on its execution time: the faster the execution, the better the performance. We have computed the execution time of the taggers for the 11-tagset and the 30-tagset. Table 3 shows the performance comparison with respect to execution time among the 16 tagging techniques. The amount of time required to train a model on the train set determines the training time of the tagger. From Table 3, it is observed that the HMM tagger requires the least training time (0.25 s), whereas Brill+TnT requires the highest training time (333.38 s) for the 30-tagset. For the 11-tagset, HMM consumed 0.25 s and Brill+TnT required 164.56 s of training time. In the case of testing time,
Fig. 2. The effect of data size on accuracy for 30-tagset

Table 3. Comparison of execution time among 16 POS tagging methods.

                                   30 Tagset                                   11 Tagset
POS tagger                         Training (s)  Testing (s)  Execution (s)    Training (s)  Testing (s)  Execution (s)
Unigram                            0.43          0.02         0.45             0.34          0.02         0.37
Bigram                             0.60          0.03         0.63             0.59          0.03         0.62
Trigram                            0.76          0.04         0.80             0.67          0.03         0.70
Uni-gram+bi-gram                   1.00          0.04         1.03             1.04          0.04         1.07
Uni-gram+bi-gram+tri-gram          1.79          0.05         1.84             1.66          0.05         1.71
HMM                                0.25          6.88         7.13             0.25          3.80         4.05
TnT                                0.53          44.87        45.39            0.50          19.01        19.51
CRF                                49.94         0.14         50.08            15.97         0.11         16.09
Brill+uni-gram                     21.54         0.16         21.70            16.30         0.20         16.51
Brill+bi-gram                      67.08         0.79         67.87            42.97         0.77         43.74
Brill+tri-gram                     74.24         0.94         75.17            55.75         0.85         56.60
Brill+uni-gram+bi-gram             12.36         0.20         12.56            12.79         0.21         13.00
Brill+uni-gram+bi-gram+tri-gram    9.64          71.67        81.31            10.62         75.45        86.07
Brill+HMM                          39.84         5.76         45.60            25.24         3.50         28.73
Brill+TnT                          333.38        44.81        378.19           164.56        22.56        187.12
Brill+CRF                          88.56         0.48         89.04            36.83         0.50         37.33
the uni-gram tagger utilized the lowest tagging time (0.02 s) in both tagsets, whereas Brill+uni-gram+bi-gram+tri-gram required the highest tagging time, about 71.67 s (for the 30-tagset) and 75.45 s (for the 11-tagset), respectively. The execution time determines the tagging speed of the POS tagging techniques. Figure 3 shows the execution time required by each tagging method on our dataset. The results indicate that Brill+TnT demands more execution time than the other POS tagging methods.
Fig. 3. Execution time of different POS Taggers
The results show that the Brill taggers, along with the other backoff taggers, achieved higher accuracy but lag in execution time. Brill+CRF obtained the highest accuracy of 91.83% (for the 11-tagset), but it requires a higher execution time (37.33 s). On the other hand, the CRF method achieved 90.84% accuracy and consumed 16.09 s for execution. Thus, there is a trade-off between accuracy and execution time. Taking both accuracy and execution time into consideration, the CRF method provided the best overall POS tagging performance among the compared techniques.
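The accuracy/time trade-off above can be made concrete by timing the training and tagging phases separately, as in Table 3. The sketch below is not the authors' implementation; it uses a toy unigram lookup tagger in pure Python, and the sample words and tags are invented for illustration.

```python
import time

def train_unigram(tagged_sents):
    """Learn the most frequent tag per word from a tagged corpus."""
    counts = {}
    for sent in tagged_sents:
        for word, tag in sent:
            counts.setdefault(word, {}).setdefault(tag, 0)
            counts[word][tag] += 1
    return {w: max(t, key=t.get) for w, t in counts.items()}

def tag(model, words, default="NN"):
    """Tag known words from the model, unknown words with a default tag."""
    return [(w, model.get(w, default)) for w in words]

# Invented toy training data (romanized Bengali words, hypothetical tags)
train_set = [[("ami", "PPR"), ("bhalo", "JJ"), ("achi", "VF")]] * 100

t0 = time.perf_counter()
model = train_unigram(train_set)
train_time = time.perf_counter() - t0       # "Training time" column

t1 = time.perf_counter()
out = tag(model, ["ami", "bhalo", "achi", "keno"])
test_time = time.perf_counter() - t1        # "Testing time" column

execution_time = train_time + test_time     # "Execution time" column
print(out)
```

The same pattern (wrap `train` and `tag` calls in `time.perf_counter()` pairs) applies to any tagger backend.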
5 Conclusion
In this work, we have illustrated and investigated different POS tagging techniques for the Bengali language. A comparative analysis of 16 POS tagging techniques (eight stochastic and eight transformation-based) on a tagged corpus of 115,000 tokens has been reported. The comparative analysis revealed that the Brill with CRF technique achieved the highest accuracy among
the other POS tagging techniques, but it requires more execution time. CRF offers a good trade-off between accuracy and execution time and can therefore be recommended as a practical POS tagging technique. Tagging methods that combine both statistical and linguistic knowledge may produce better performance. The performance of other tagging techniques, such as TAGGIT, CLAWS, Xerox, and hybrid approaches, can be investigated further on a larger tagged corpus with more unique words. These issues will be addressed in future work.
References

1. Dandapat, S., Sarkar, S.: Part of speech tagging for Bengali with Hidden Markov Model. In: Proceedings of NLPAI Machine Learning Competition (2006)
2. Diesner, J.: Part of speech tagging for English text data. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
3. Manju, K., Soumya, S., Idicula, S.M.: Development of a POS tagger for Malayalam - an experience. In: International Conference on ARTCom, pp. 709-713. IEEE (2009)
4. Haque, M., Hasan, M.: Preprocessing the Bengali text: an analysis of appropriate verbs (2018)
5. Bhattacharya, S., Choudhury, M., Sarkar, S., Basu, A.: Inflectional morphology synthesis for Bengali noun, pronoun and verb systems. Proc. NCCPB 8, 34-43 (2005)
6. Hasan, M.F., UzZaman, N., Khan, M.: Comparison of Unigram, Bigram, HMM and Brill's POS tagging approaches for some South Asian languages (2007)
7. Kumawat, D., Jain, V.: POS tagging approaches: a comparison. Int. J. Comput. Appl. 118(6) (2015)
8. Hasan, F.M., UzZaman, N., Khan, M.: Comparison of different POS tagging techniques (N-Gram, HMM and Brill's tagger) for Bangla. In: Advances and Innovations in Systems, Computing Sciences and Software Engineering, pp. 121-126. Springer, Berlin (2007)
9. Ekbal, A., Haque, R., Bandyopadhyay, S.: Maximum entropy based Bengali part of speech tagging. J. Res. Comput. Sci. 33, 67-78 (2008)
10. Ekbal, A., Haque, R., Bandyopadhyay, S.: Bengali part of speech tagging using conditional random field. In: International Conference on SNLP2007, pp. 131-136 (2007)
11. PVS, A., Karthik, G.: Part-of-speech tagging and chunking using conditional random fields and transformation based learning. Shallow Parsing South Asian Lang. 21, 21-24 (2007)
12. Dandapat, S., Sarkar, S., Basu, A.: Automatic part-of-speech tagging for Bengali: an approach for morphologically rich languages in a poor resource scenario. In: Proceedings of 45th Annual Meeting of ACL Companion, pp. 221-224 (2007)
13. Hossain, N., Huda, M.N.: A comprehensive parts of speech tagger for automatically checked valid Bengali sentences. In: International Conference ICCIT, pp. 1-5. IEEE (2018)
14. Roy, M.K., Paull, P.K., Noori, S.R.H., Mahmud, H.: Suffix based automated parts of speech tagging for Bangla language. In: International Conference on ECCE, pp. 1-5. IEEE (2019)
15. Sakiba, S.N., Shuvo, M.U.: A memory efficient tool for Bengali parts of speech tagging. In: Artificial Intelligence Techniques for Advanced Computer Application, pp. 67-78. Springer (2020)
16. Chakrabarti, D., CDAC, P.: Layered parts of speech tagging for Bangla. Language in India: Problems of Parsing in Indian Languages (2011)
17. Rathod, S., Govilkar, S.: Survey of various POS tagging techniques for Indian regional languages. Int. J. Comput. Sci. Inf. Technol. 6(3), 2525-2529 (2015)
18. Bali, K., Choudhury, M., Biswas, P.: Indian language part-of-speech tagset: Bengali. Linguistic Data Consortium, Philadelphia (2010)
19. Sankaran, B., Bali, K., Choudhury, M.: A common parts-of-speech tagset framework for Indian languages (2008)
20. Rai, A., Borah, S.: Study of various methods for tokenization. In: Application of IoT, pp. 193-200. Springer (2021)
BEmoD: Development of Bengali Emotion Dataset for Classifying Expressions of Emotion in Texts Avishek Das, MD. Asif Iqbal, Omar Sharif , and Mohammed Moshiul Hoque(B) Chittagong University of Engineering and Technology, Chittagong-4349, Bangladesh [email protected], [email protected], {omar.sharif,moshiul 240}@cuet.ac.bd
Abstract. Recently, emotion detection in language has received increased attention from NLP researchers due to the massive availability of people's expressions, opinions, and emotions in comments on Web 2.0 platforms. Developing an automatic sentiment analysis system in Bengali is very challenging due to the scarcity of resources and the unavailability of standard corpora. Therefore, the development of a standard dataset is a prerequisite for analyzing emotional expressions in Bengali texts. This paper presents an emotion dataset (hereafter called 'BEmoD') for the analysis of emotion in Bengali texts and describes its development process, including data crawling, pre-processing, labeling, and verification. BEmoD contains 5200 texts, each labeled with one of six basic emotional categories: anger, fear, surprise, sadness, joy, and disgust. Dataset evaluation with a Cohen's κ score of 0.920 shows the agreement among annotators. The evaluation analysis also shows that the distribution of emotion words follows Zipf's law. Keywords: Natural language processing · Bengali emotion corpus · Emotion classification · Emotion expressions · Evaluation · Corpus development
1 Introduction
Recognizing emotion in an expression involves attributing an emotion label to the expression, chosen from a set of pre-specified emotion labels. There are several application areas where there is a need to understand and interpret emotions in text, such as business, politics, education, sports, and entertainment. Recognizing emotion in text is one of the critical tasks in NLP, which demands an understanding of natural language. The difficulty begins at the sentence level, where an emotion is stated through the semantics of words and their connections; as the level of analysis rises, the problem's difficulty grows. Moreover, not all opinions are stated explicitly; there are metaphors, mockery, and irony. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1124-1136, 2021. https://doi.org/10.1007/978-3-030-68154-8_94
Sentiment classification from texts can be divided into two categories: opinion-based and emotion-based. Opinion classification is based on text polarity, classifying texts/sentences into positive, negative, or neutral sentiments [1]. Emotion classification deals with classifying sentences according to their emotions [2]. Bengali is the fifth most-spoken native language in the world. Approximately 228 million people all over the world speak Bengali as their first language, and around 37 million people speak it as a second language. In recent years, the amount of data stored on the web has increased exponentially due to the emergence of Web 2.0 applications and related services in the Bengali language. Most of these data are available in textual form, such as reviews, opinions, recommendations, ratings, and comments, which are mostly unstructured. Analyzing these enormous amounts of data to extract the underlying sentiment or emotions is a challenging research problem for a resource-constrained language such as Bengali. The complexity arises from various limitations, such as the lack of tools, the scarcity of benchmark corpora, and learning techniques. Because Bengali is a resource-scarce language, emotion-based classification with six emotion classes has not yet been performed, to the best of our knowledge. Therefore, in this work, we are motivated to develop a corpus (we call it BEmoD, the Bengali Emotion Dataset) for classifying emotions in Bengali texts. We consider six types of textual emotions, joy, sadness, anger, fear, surprise, and disgust, based on Ekman's basic emotion classification [3]. The critical contribution of our work is to develop an emotion dataset for classifying Bengali texts into one of the six emotions. Several expressions were collected from online/offline sources over three months and classified into emotion classes.
The findings help researchers explore several issues, such as the characteristics of emotion words in each class. Another contribution is that this work identifies the high-frequency emotion words in Bengali literature and analyzes the dataset in terms of several metrics, such as Cohen's kappa and Zipf's law.
2 Related Work
Different corpora have been developed to detect emotions in text for various languages. Alm et al. [4] developed a dataset consisting of 185 tales, in which labeling is done at the sentence level with sadness, happy, neutral, anger-disgust, fear, positive surprise, and negative surprise classes. A dataset of blog posts was built with eight emotion labels, including Ekman's six basic emotions [5]. ISEAR2 [6] is a corpus with joy, fear, anger, sadness, disgust, shame, and guilt classes. Semantic Evaluation (SemEval) introduced a complete dataset in English for emotion analysis [7]. For sentiment analysis, corpora for several languages, such as Arabic [8], Czech [9], and French [10], were created. Another corpus proposed by SemEval [11] consists of Arabic, English, and Spanish tweets labeled with eleven emotions. A recently developed corpus consists of textual dialogues where each discussion is labeled as anger, happy, sad, or others [12]. A Twitter corpus containing Hindi-English code-mixed
text was annotated with Ekman's six basic emotions by two language-proficient annotators, and its quality was validated using Cohen's kappa coefficient in [13]. Although Bengali is a resource-poor language, a few works have been conducted on sentiment analysis by classifying Bengali words or texts into positive, negative, and neutral classes. Das et al. [14] used semi-automated word-level annotation based on a Bangla emotion word list translated from the English WordNet Affect lists [15]. The authors focused heavily on words rather than the context of the sentences and built an emotion dataset of 1300 sentences with six classes. Prasad et al. [16] tagged sentiments of tweet data based on emoticons and hashtags, building a sentiment dataset of 999 Bengali tweets. Tripto et al. [17] developed a Bangla emotion corpus containing 2000 YouTube comments annotated with Ekman's six basic emotions. They used majority voting for final labeling and ended up with four emotion labels, as some comments were ambiguous. Rahman et al. [18] used a Bengali cricket dataset for sentiment analysis of Bengali text, in which 2900 online comments were labeled with positive, negative, and neutral sentiment. Two works, [19] and [20], developed datasets for sentiment analysis of Bengali text: the first considered 1000 restaurant reviews, whereas the second considered 1200 samples. Most previous Bengali datasets were developed to analyze sentiment as positive, negative, or neutral. In contrast, BEmoD is intended for classifying emotion in Bengali text into six basic emotion classes.
3 Properties of Emotion Expressions in Bengali
Several text expressions in the Bengali language have been investigated to find the distinctive properties of each category of Ekman's six emotions: happiness, sadness, anger, disgust, surprise, and fear. To identify the distinct properties of each category, the sentences were investigated based on the following characteristics.
• Emotion Seed Words: We identify words commonly used in the context of a particular emotion. For example, the words "happy", "enjoy", and "pleased" are considered seed words for the happiness category. Thus, specific seed words have been stored for each emotion in Bengali. For example, (Angry) or (Anger) is usually used for expressing the "anger" emotion. Likewise, (Happy) or (Good mood) is usually used for expressing the "joy" emotion.
• Intensity of Emotion Word: In Bengali, different seed words express different emotions in a particular context. In such cases, the seed words are compared in terms of intensity, and the highest-intensity seed word, including its emotion class, is assigned as the emotion of that context. Consider the following example,
(English translation: When the news of Alexander's death reached Athens, someone was surprised and asked, "Alexander is dead! Impossible! If he's dead, the smell of his dead body would waft from every corner of the earth.") In these texts, several seed words are found, such as (death news), (surprised), and (dead! Impossible!). Here, the words (surprised) and (dead! Impossible!) carry more weight than (death news). Thus, this type of text can be considered "surprise" because the intensity of this emotion is higher than the intensity of the sadness emotion.
• Sentence Semantics: Observing the semantic meaning of the text is one of the prominent characteristics for ascertaining the emotion class. In the previous example, though the sentence starts with the death news of Alexander, it turns into the astonishment of a regular person in Athens. So, sentence semantics is an important parameter in designating emotion expression.
• Emotion Engagement: It is imperative that the annotator engage actively while reading the text to understand the semantics and context of the emotion expression explicitly. For example, (English translation: Every moment spent in St. Martin was awesome. Here are some from those countless moments captured on camera). In this particular expression, annotators can feel some happiness, as it describes an original moment of someone's experience. This feeling keeps annotators engaged with happiness, and the expression is designated as "joy".
• Think Like The Person (TLTP): Usually, an emotion expression conveys someone's emotion in a particular context. With TLTP, an annotator imagines himself/herself in the same context where the emotion expression is displayed. By repeatedly uttering the expression, the annotator tries to imagine the situation and annotates the emotion class.
By considering the above characteristics, each emotion expression is labeled with one of the six emotion classes: joy, sadness, anger, disgust, surprise, and fear.
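The seed-word and intensity characteristics above can be sketched as a simple lexicon lookup: each class holds seed words with intensity weights, and the class of the highest-intensity seed word found in a text wins. This is an illustrative sketch only, not the authors' annotation procedure; the romanized seed words and weights below are invented.

```python
# Hypothetical seed lexicon: emotion class -> {seed word: intensity}
SEED_LEXICON = {
    "surprise": {"surprised": 0.9, "impossible": 0.8},
    "sadness":  {"death": 0.6, "grief": 0.7},
    "joy":      {"happy": 0.8, "enjoy": 0.7},
}

def label_by_intensity(tokens):
    """Return the class of the highest-intensity seed word, or None."""
    best = (None, 0.0)  # (emotion class, intensity so far)
    for emotion, seeds in SEED_LEXICON.items():
        for tok in tokens:
            w = seeds.get(tok, 0.0)
            if w > best[1]:
                best = (emotion, w)
    return best[0]

# "death news ... surprised ... impossible": surprise outweighs sadness,
# mirroring the Alexander example in the text
print(label_by_intensity(["death", "news", "surprised", "impossible"]))
```

A real system would of course combine this with sentence semantics rather than rely on seed words alone, as the section itself points out.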
4 Dataset Development Processes
The prime objective of our work is to develop an emotion dataset that can be used to classify emotion expressions, usually written in Bengali, into one of six basic emotion categories. Developing a dataset in Bengali is a critical challenge for any language processing task. One notable challenge is the scarcity of appropriate emotion text expressions. Some links, misspelled sentences, and "Benglish" sentences were obtained during data crawling. Moreover, detecting emotion from plain text is more challenging than detecting emotion from facial expressions, because people sometimes pretend to be alright in text messages while having many emotional problems in their day-to-day lives. Figure 1 shows
an overview of the development process of BEmoD, which consists of four major phases: data crawling, preprocessing, data annotation, and label verification. We adopted the method described by Dash et al. [21] to develop the dataset.
Fig. 1. Development processes of BEmoD
4.1 Data Crawling
Bengali text data were accumulated from several sources, such as Facebook comments/posts, YouTube comments, online blog posts, storybooks, Bengali novels, daily-life conversations, and newspapers. Five participants were assigned to collect data; they manually collected 5700 text expressions over three months. Although most of the data were collected from online sources, some data were created by observing people's conversations. In social media, many native Bengali speakers write their comments or posts in transliterated Bengali. For example, a transliterated sentence, "muvita dekhe amar khub valo legeche. ei rokom movi socharacor dekha hoy na." [English translation: I really enjoyed watching this movie. Such movies are not commonly seen]. This type of text needs to be converted to Bengali script by phonetic conversion. However, errors may occur during phonetic conversion. For instance, in the above text, the word "socharacor" (English: usually) could be converted phonetically into an incorrect Bengali form, whereas the accurate word is different. Therefore, such errors must be corrected, because the incorrectly converted form does not exist in the Bengali dictionary [22].

4.2 Preprocessing
Pre-processing was performed in two phases: manual and automatic. In the manual phase, "typo" errors were eliminated from the collected data. We took the Bangla Academy
supported accessible dictionary (AD) database [22] to find the appropriate form of a word. If a word existed in the input texts but not in the AD, this word was considered a typo. The appropriate word was searched in the AD, and the typo was replaced with the corrected word; in a given text, any typo words are corrected in this way. It has been observed that emojis and punctuation marks sometimes create perplexity about the emotional level of the data. That is why, in the automatic phase, these were eliminated from the manually processed data. We made an emoji-to-hex (E2H) dictionary from [23]. Further, all the elements of E2H were converted to Unicode to cross-check them against our corpus text elements. A dictionary containing punctuation marks and special symbols (PSD) was also introduced. Any text element that matched an element in E2H or PSD was substituted with blank space. All the automatic preprocessing was done with a Python script, reducing each example to its cleaned form.

4.3 Data Annotation
The whole corpus was labeled manually, followed by majority voting to assign a suitable label. The labeling tasks were performed by two separate groups (G1 and G2). G1 consists of 5 postgraduate students with a Computer Engineering background who work on NLP. The expert group (G2) consists of three academicians who have worked on NLP for several years; they performed label verification by selecting an appropriate label. The majority voting mechanism was used to decide the final emotion class of each expression. The final label of each expression was chosen by the process described in Algorithm 1.

Algorithm 1: Majority Voting & Final Label
1   T ← text corpus
2   label ← [0,1,2,3,4,5]
3   AL ← Annotator Label Matrix
4   FL ← Final Label Matrix
5   for t_i ∈ T do
6       count_label = [0,0,0,0,0,0]
7       for a_ij ∈ AL do
8           count_label[a_ij]++
9       end
10      FL_i = indexof[max(count_label)]
11  end
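Algorithm 1 can be rendered directly in Python as follows; the label ids 0..5 map to the six emotion classes, and the example annotation rows are invented.

```python
def final_labels(AL, n_classes=6):
    """Majority voting over the annotator label matrix AL.

    AL[i] holds the labels the annotators gave to text t_i;
    returns the final label FL_i for each text (ties break toward
    the lowest label id, as with indexof[max(...)] in Algorithm 1).
    """
    FL = []
    for annotations in AL:              # one row per text t_i
        count_label = [0] * n_classes
        for a in annotations:           # a_ij in AL
            count_label[a] += 1
        FL.append(count_label.index(max(count_label)))
    return FL

# Five annotators, two texts: labels 3 and 1 win the vote
print(final_labels([[3, 3, 2, 3, 5], [1, 1, 0, 1, 1]]))  # -> [3, 1]
```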
4.4 Label Verification
Majority voting by the G1 annotators decided the initial label of each datum. This label was considered final if it matched the expert label (G2). When the labels of G1 and G2 mismatched, the datum was sent to both groups for further discussion. Both groups accepted a total of 4950 data labels among the 5700 data. The remaining 750 data were sent for discussion. Both groups agreed on about 250 data labels after discussion, and these were added to BEmoD. About 500 data were excluded due to disagreement between the groups. Such exclusions may happen due to texts with neutral emotion, implicit emotion, or ill-formatted content. The verified 5200 data, including their labels, were saved in *.xlsx format.
5 Analysis of BEmoD
Dataset analysis was performed by determining the data distributions concerning source and emotion classes. Emotion expression data were collected from online sources, books, people's conversations, and artificially generated texts.

Table 1. Statistics of BEmoD

| Corpus attributes | Attribute value |
|---|---|
| Size on disk | 685 KB |
| Total number of expressions | 5200 |
| Total number of sentences | 9114 |
| Total number of words | 130476 |
| Unique words | 26080 |
| Maximum words in a single expression | 114 |
| Minimum words in a single expression | 7 |
| Average words in a single expression | 25 (approximately) |
– Statistics of BEmoD: Table 1 illustrates the distinguishing characteristics of the developed BEmoD, which consists of 130476 words in total, with 26080 unique words, across 9114 sentences.
– Categorical Distribution in BEmoD: A total of 5200 expressions were labeled with one of the six basic emotion categories after the verification process. Table 2 shows the categorical summary of each class in BEmoD. It is observed that the highest number of data belongs to the sadness category, whereas the lowest number belongs to the surprise category.
6 Evaluation of BEmoD
We investigate how well the annotators agreed in assigning emotion classes by using Cohen's kappa (κ) [24]. We also measure the density of emotion words, the high-frequency emotion words, and the distribution of emotion words under Zipf's law [25].

Table 2. Data statistics by categories

| Category | Total emotion data | Total sentences | Total words | Unique words |
|---|---|---|---|---|
| Anger | 723 | 1126 | 18977 | 7160 |
| Fear | 859 | 1494 | 19704 | 6659 |
| Surprise | 649 | 1213 | 16806 | 6963 |
| Sadness | 1049 | 1864 | 26986 | 8833 |
| Joy | 993 | 1766 | 25011 | 8798 |
| Disgust | 927 | 1651 | 22992 | 8090 |
The κ score was computed from the majority voting of G1 and G2 using the standard inter-annotator agreement. Table 3 shows the resulting kappa statistics. According to Landis et al. [26], a κ score of 0.920 is an almost perfect agreement.

Table 3. Kappa statistics

| Kappa metrics | G1 vs. G2 |
|---|---|
| Number of observed agreements (po) | 93.33% of the observations |
| Number of agreements expected by chance (pe) | 16.69% of the observations |
| Kappa (κ) | 0.920 |
| SE of kappa | 0.021 |
| 95% confidence interval | 0.879 - 0.960 |
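Cohen's kappa combines exactly the two quantities in Table 3, κ = (po − pe) / (1 − pe): the observed agreement po and the chance agreement pe. A minimal self-contained sketch (with toy ratings, not the BEmoD annotations):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n     # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)   # chance agreement
             for c in cats)
    return (po - pe) / (1 - pe)

# Toy ratings from two groups over six items
g1 = [0, 1, 1, 2, 2, 2]
g2 = [0, 1, 1, 2, 2, 1]
print(round(cohens_kappa(g1, g2), 3))
```

With po = 93.33% and pe = 16.69% as in Table 3, the same formula yields the reported κ ≈ 0.920.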
To measure the influence of emotion words in the various classes, we compute the density of emotion words. Density is the ratio of the number of emotion words (N) to the number of unique words (T) in each class. The density of emotion words per class is illustrated in Table 4. The overall density for the whole corpus is 0.2433. If the density for one class is higher than 0.2433, the writers communicate heightened emotion for this class, indicating that emotions are concentrated within it. Figure 2 shows the variance of each emotion class density from the average. The figure indicates that the densities of the sadness, joy, and disgust classes are higher than the average density of 0.2433, revealing that people are more expressive in these classes and use more emotion words.
Table 4. Density of emotion words in each class

| Emotion class | Unique words (T) | Emotion words (N) | Density (D) | D − 0.2433 |
|---|---|---|---|---|
| Anger | 7160 | 1140 | 0.1592 | −0.0841 |
| Fear | 6659 | 1174 | 0.1763 | −0.0670 |
| Surprise | 6963 | 1689 | 0.2425 | −0.0008 |
| Sadness | 8833 | 2607 | 0.2951 | 0.0518 |
| Joy | 8798 | 2394 | 0.2721 | 0.0288 |
| Disgust | 8090 | 2312 | 0.2857 | 0.0424 |
| Total/average | 46503 | 11316 | 0.2433 | 0.0000 |
Fig. 2. Emotion words density vs. average density
Emotion word frequency was counted over the whole BEmoD. This frequency of emotion words leads to the conclusion that some specific words are always meant to express specific human emotions. Table 5 lists the 10 most frequent emotion words in BEmoD. Zipf's law is an empirical observation stating that the frequency of a given word is inversely proportional to its rank in the corpus; if the Zipf curve is plotted on a log-log scale, a straight line with a slope of −1 should be obtained. Figure 3 shows the resulting graph for each class. It is observed that the curve obeys Zipf's law, as it follows a slope of −1. Considering all the evaluation measures, with a 93.33% agreement score, a κ score of 0.920, and adherence to Zipf's law, the developed corpus (i.e., BEmoD) can be used to classify basic emotions in Bengali text expressions. This work provides a primary foundation for detecting humans' six basic emotions from Bengali text expressions.
Table 5. Top 10 highest-frequency emotion words in BEmoD (the Bengali words were lost in extraction; English equivalents shown).

| Count | Bengali Word | English Equivalent | Frequency |
|---|---|---|---|
| 1 | | Fear | 259 |
| 2 | | Surprised | 168 |
| 3 | | Trouble | 148 |
| 4 | | Beautiful | 93 |
| 5 | | Bad | 59 |
| 6 | | Anger | 54 |
| 7 | | Suddenly | 50 |
| 8 | | Love | 48 |
| 9 | | Bad | 40 |
| 10 | | Damn it | 34 |
Fig. 3. Distribution of word frequencies: (a) Zipf curve, (b) Zipf curve on log-log scale
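The Zipf check behind Fig. 3 can be sketched by fitting the slope of log(frequency) versus log(rank) with least squares; a slope near −1 indicates conformance to Zipf's law. This is an illustrative sketch, not the authors' plotting code, and the frequencies below are synthetic (f = C / rank) rather than BEmoD counts.

```python
import math

def zipf_slope(freqs):
    """Least-squares slope of log(frequency) vs. log(rank)."""
    freqs = sorted(freqs, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ideal = [1000 / r for r in range(1, 101)]  # perfectly Zipfian frequencies
print(round(zipf_slope(ideal), 2))  # -> -1.0
```

Running the same fit on the per-class frequency counts of BEmoD would quantify how closely each curve in Fig. 3 approaches the ideal slope of −1.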
6.1 Comparison with Existing Bengali Emotion Datasets
We investigated the available datasets for Bengali text emotion and identified their characteristics in terms of the number of data, the number of emotion categories, and the types of emotions. Table 6 summarizes the properties of several available datasets along with the developed BEmoD. The summary indicates that the developed dataset is larger than the available datasets. Classification of implicit emotion is the most critical problem because of its unapparent nature within the expression; thus, its solution demands interpretation of the context. Emotions are complicated; humans often have difficulty expressing and understanding emotions. Classifying or detecting emotions in text raises the complexity of interpreting emotion because of the lack of apparent facial expressions, non-verbal gestures, and voice [25]. Recognizing emotion automatically is a complicated task; a machine must handle the difficulty of linguistic phenomena and the context of the written expression.
Table 6. Comparative illustration of Bengali emotion datasets.

| Dataset | No. of data | No. of classes | Types of emotions/sentiment |
|---|---|---|---|
| Rahman et al. [18] | 2900 | 3 | Positive, negative, neutral |
| Maria et al. [7] | 2800 | 3 | Positive, negative, neutral |
| Ruposh et al. [20] | 1200 | 6 | Happy, sad, anger, fear, surprise, disgust |
| BEmoD | 5200 | 6 | Joy, fear, anger, sadness, surprise, disgust |

7 Conclusion
Emotion recognition and classification are still developing areas of research, and the challenges for low-resource languages are daunting. The scarcity of benchmark datasets is one of the vital challenges for emotion classification in the Bengali language. Thus, in this work, we presented a new corpus (called BEmoD) for emotion classification in Bengali texts and explained its development process in detail. Although a few datasets are available for emotion classification in Bengali (mostly considering positive, negative, and neutral classes), this work adopted the six basic emotions: joy, fear, anger, sadness, surprise, and disgust. This work revealed several features of emotion texts, especially concerning each class, exploring what kinds of emotion words humans use to express a particular emotion. The evaluation of BEmoD shows that the developed dataset follows the distribution of Zipf's law and maintains agreement among annotators with an excellent κ score. However, the current version of BEmoD has 5200 emotion texts, which is not sufficient for deep learning algorithms. Therefore, more data samples should be considered, along with implicit and neutral emotion categories. BEmoD can also be used to annotate emotion expressions in various domains. These directions are left for future research.
References

1. Liu, B.: Sentiment analysis and subjectivity, 1-38 (2010)
2. Garg, K., Lobiyal, D.K.: Hindi EmotionNet: a scalable emotion lexicon for sentiment classification of Hindi text. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 19(4), 1-35 (2020)
3. Ekman, P.: Universal and cultural differences in facial expression of emotion. In: Nebraska Symposium on Motivation, vol. 19, pp. 207-284 (1972)
4. Alm, C.O., Roth, D., Sproat, R.: Emotions from text: machine learning for text-based emotion prediction. In: Proceedings of HLT-EMNLP, pp. 579-586. ACL, Vancouver, British Columbia, Canada (2005)
5. Aman, S., Szpakowicz, S.: Identifying expressions of emotion in text. In: International Conference on Text, Speech and Dialogue, pp. 196-205. Springer, Berlin (2007)
6. Scherer, K.R., Wallbott, H.G.: Evidence for universality and cultural variation of differential emotion response patterning. J. Pers. Soc. Psychol. 66(2), 310-328 (1994)
7. Pontiki, M., Galanis, D., Pavlopoulos, J., Papageorgiou, H., Androutsopoulos, I., Manandhar, S.: SemEval-2014 task 4: aspect based sentiment analysis. In: International Workshop on Semantic Evaluation, pp. 27-35. ACL, Dublin, Ireland (2014)
8. Al-Smadi, M., Qawasmeh, O., Talafha, B., Quwaider, M.: Human annotated Arabic dataset of book reviews for aspect based sentiment analysis. In: International Conference on Future Internet of Things and Cloud, pp. 726-730. IEEE, Rome, Italy (2015)
9. Ales, T., Ondrej, F., Katerina, V.: Czech aspect-based sentiment analysis: a new dataset and preliminary results. In: ITAT, pp. 95-99 (2015)
10. Apidianaki, M., Tannier, X., Richart, C.: Datasets for aspect-based sentiment analysis in French. In: International Conference on Language Resources and Evaluation, pp. 1122-1126. ELRA, Portorož, Slovenia (2016)
11. Mohammad, S., Bravo-Marquez, F., Salameh, M., Kiritchenko, S.: SemEval-2018 task 1: affect in tweets. In: International Workshop on Semantic Evaluation, pp. 1-17. ACL, New Orleans, Louisiana (2018)
12. Chatterjee, A., Narahari, K.N., Joshi, M., Agrawal, P.: SemEval-2019 task 3: EmoContext: contextual emotion detection in text. In: International Workshop on Semantic Evaluation, pp. 39-48. ACL, Minneapolis, Minnesota, USA (2019)
13. Vijay, D., Bohra, A., Singh, V., Akhtar, S.S., Shrivastava, M.: Corpus creation and emotion prediction for Hindi-English code-mixed social media text. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pp. 128-135 (2018)
14. Das, D., Bandyopadhyay, S.: Word to sentence level emotion tagging for Bengali blogs. In: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pp. 149-152 (2009)
15. Strapparava, C., Valitutti, A.: WordNet Affect: an affective extension of WordNet. In: LREC, vol. 4, p. 40 (2004)
16. Prasad, S.S., Kumar, J., Prabhakar, D.K., Tripathi, S.: Sentiment mining: an approach for Bengali and Tamil tweets. In: 2016 Ninth International Conference on Contemporary Computing (IC3), pp. 1-4. IEEE (2016)
17. Tripto, N.I., Ali, M.E.: Detecting multilabel sentiment and emotions from Bangla YouTube comments. In: 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pp. 1-6. IEEE (2018)
18. Rahman, A., Dey, E.K.: Datasets for aspect-based sentiment analysis in Bangla and its baseline evaluation. Data 3(2), 15 (2018)
19. Sharif, O., Hoque, M.M., Hossain, E.: Sentiment analysis of Bengali texts on online restaurant reviews using multinomial naïve Bayes. In: International Conference on Advances in Science, Engineering & Robotics Technology, pp. 1-6. IEEE, Dhaka, Bangladesh (2019)
20. Ruposh, H.A., Hoque, M.M.: A computational approach of recognizing emotion from Bengali texts. In: International Conference on Advances in Electrical Engineering (ICAEE), pp. 570-574. IEEE, Dhaka, Bangladesh (2019)
21. Dash, N.S., Ramamoorthy, L.: Utility and Application of Language Corpora. Springer (2019)
22. Accessible dictionary. https://accessibledictionary.gov.bd/. Accessed 2 Jan 2020
23. Full emoji list. https://unicode.org/emoji/charts/full-emoji-list.html. Accessed 7 Feb 2020
24. Cohen, J.: A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 20(1), 37-46 (1960)
25. Alswaidan, N., Menai, M.B.: A survey of state-of-the-art approaches for emotion recognition in text. Knowl. Inf. Syst. 62, 2937-2987 (2020)
26. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics, 159-174 (1977)
Advances in Engineering and Technology
Study of the Distribution Uniformity Coefficient of Microwave Field of 6 Sources in the Area of Microwave-Convective Impact

Dmitry Budnikov, Alexey N. Vasilyev, and Alexey A. Vasilyev

Federal State Budgetary Scientific Institution "Federal Scientific Agroengineering Center VIM" (FSAC VIM), 1-st Institutskij 5, 109428 Moscow, Russia
{dimm13,vasilev-viesh}@inbox.ru, [email protected]
Abstract. The development of processing modes using electrical technologies and electromagnetic fields can reduce the energy intensity and cost of grain heat treatment processes. During development, it is necessary to consider the technological requirements of the processed material, the type of equipment used, and the mode of operation of the mixing equipment (continuous, pulsed, etc.). In addition, it is necessary to ensure uniform processing of the grain layer. Thus, the purpose of this work is to experimentally evaluate the uniformity of the microwave field distribution in the zone of microwave-convective grain processing of a laboratory installation containing 6 microwave energy sources. The article presents the scheme of the microwave-convective processing zone in which the experimental studies were conducted, the factors considered in the experiment, and the levels of their variation. An experiment was performed to determine the uniformity coefficient of propagation of the microwave field in a layer of grain material. It was found that the results of the study of the microwave field strength in the grain mass can be used in the study of the dielectric properties of the processed grain, as well as in the construction of a control system for microwave-convective processing plants.

Keywords: Electrophysical effects · Post-harvest treatment · Microwave field · Uniform distribution
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1139–1145, 2021. https://doi.org/10.1007/978-3-030-68154-8_95

1 Introduction

Many agricultural processing operations, such as drying, are characterized by high energy intensity. In this connection, the development of energy-efficient equipment remains relevant. Nowadays the development of energy-efficient equipment is connected with accumulating and handling large amounts of information. Data obtained in this way can be used in optimized equipment-control algorithms and also in deep learning for building smart control systems. Improving the post-harvest processing of grain does not lose its relevance. At the same time, the improvement of these processes using electrophysical effects is becoming increasingly popular [3, 4, 8, 10, 11, 15]. Many researchers consider exposure to ultrasound, ozone-air mixtures, aeroions, infrared radiation, the microwave field, etc. At the same time, almost all of these
factors are associated with a high unevenness of impact on the volume of the processed material. Thus, there is a need to assess the uniformity of the studied factor over the layer of the processed material [1, 2, 6, 7, 12]. This article will consider an experimental study of the propagation of the microwave field in a dense layer of wheat subjected to microwave convective drying.
2 Main Part

2.1 Research Method
In order to take into account the influence of uneven distribution of the field over the volume, we introduce the coefficient of uniform distribution over the layer, K_un. This coefficient is the ratio of the average field strength in the chamber volume to the maximum one. Similarly, this coefficient can be calculated as the square root of the ratio of the average power absorbed by the grain material in the volume of the product pipeline to the maximum one:

K_un = E_mid / E_max = sqrt( (5.56·10⁻¹¹ · f · ε′ · tg δ · E_mid²) / (5.56·10⁻¹¹ · f · ε′ · tg δ · E_max²) ) = sqrt( Q_mid / Q_max ),   (1)
where E_mid is the average value of the amplitude of the electric field in the volume of the processed material, V/m; E_max is the maximum value of the amplitude of the electric field in the volume of the microwave-convective processing zone, V/m; Q_mid, Q_max are the average and maximum power dissipated in the dielectric material, W/m³. If we take into account the fact that the depth of penetration into the material is the distance at which the amplitude of the incident wave decreases by a factor of e, then the minimum uniformity coefficient K_un min that should be taken into account can be defined as K_un min = 1/e = 0.368. Further, the uniformity coefficient of propagation of the electromagnetic field can be used to assess the applicability of the design of the microwave-convective treatment zone together with the applied waveguides. To solve the problem of propagation of electromagnetic waves in a waveguide with a dielectric, it is necessary to solve two-dimensional Helmholtz equations in various sections of the waveguide [4, 5, 9, 12], in particular, in an empty waveguide (product line) and in one filled with dielectric, and to write the boundary conditions connecting these equations.

2.2 Modelling
This problem can be solved numerically. Currently, there is a large number of specialized software packages that use various numerical methods [5]. CST Microwave Studio was chosen as the framework for conducting the numerical experiment. The most flexible calculation method, implemented in Microwave Studio as a transient solver, can calculate the device over a wide frequency range after calculating a single transient characteristic (as opposed to the frequency method, which
requires analysis at many frequency points). The results obtained by this product are based on the Finite Integration Technique (FIT), which is a sequential scheme for discretizing Maxwell's equations in integral form. The desired coefficient can be calculated not only from the presented simulation, but also from experimental measurements at control points. Table 1 presents the calculated coefficient for the given dimensions of the product pipeline and three types of applied waveguides for the propagation of the microwave field in a dense layer of wheat with a humidity of 16%. It was found that the results of modeling the distribution of the electromagnetic field in the zone of microwave-convective influence of the installation containing sources of microwave power for processing the grain layer indicate a high level of its unevenness in the volume of the product pipeline.

Table 1. Calculated uniformity coefficient K_un of propagation of the electromagnetic field in wheat.

Product section, mm × mm   W = 14%   W = 16%   W = 20%   W = 24%   W = 26%
200 × 200                  0.4946    0.4332    0.3827    0.3502    0.3278
200 × 300                  0.3128    0.3022    0.2702    0.2546    0.2519
The obtained data on the dependence of the uniformity coefficient of the electromagnetic field propagation in the zone of microwave-convective exposure on the humidity of wheat can be approximated by a third-degree polynomial of the form:

K_un = b_0 + b_1·W + b_2·W² + b_3·W³,   (2)
where b_0, b_1, b_2, b_3 are proportionality coefficients. Table 1 shows the results of field propagation modeling in wheat, and Table 2 shows the corresponding values of the b_0, b_1, b_2, b_3 coefficients.

Table 2. Calculated values of the proportionality coefficients for calculating the uniformity coefficient.

Product section, mm × mm   b_0      b_1       b_2       b_3       R²
200 × 200                  2.2753   −0.2525   0.0114    −0.0002   0.9992
200 × 300                  0.1887   0.0336    −0.0023   4·10⁻⁵    0.9972
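As a cross-check of Eq. (2), the sketch below (an illustration, not the authors' code; it fits the Table 1 row for the 200 × 200 mm product section) recovers a third-degree polynomial with NumPy and reports the coefficient of determination, which should be at least as high as the R² = 0.9992 quoted in Table 2:

```python
import numpy as np

# Humidity levels W, % and simulated K_un for the 200 x 200 mm section (Table 1)
W = np.array([14.0, 16.0, 20.0, 24.0, 26.0])
K_un = np.array([0.4946, 0.4332, 0.3827, 0.3502, 0.3278])

# Fit K_un = b0 + b1*W + b2*W^2 + b3*W^3 (Eq. 2); polyfit returns highest degree first
b3, b2, b1, b0 = np.polyfit(W, K_un, deg=3)

# Coefficient of determination R^2 of the cubic approximation
pred = b0 + b1 * W + b2 * W**2 + b3 * W**3
ss_res = np.sum((K_un - pred) ** 2)
ss_tot = np.sum((K_un - K_un.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"b0={b0:.4f} b1={b1:.4f} b2={b2:.4f} b3={b3:.6f} R^2={r2:.4f}")
```

The fitted coefficients differ slightly from the rounded values printed in Table 2, which is expected given the coarse rounding of b_3.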
The next step is conducting an experimental check of the distribution uniformity coefficient in the laboratory installation.
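The identity in Eq. (1) — the field-strength ratio equals the square root of the dissipated-power ratio, since the material factor 5.56·10⁻¹¹·f·ε′·tg δ cancels — can be checked numerically on any set of sampled field values. The sketch below uses made-up sample data and illustrative material parameters, not measurements from the installation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical E-field amplitudes, V/m, sampled over the product pipeline volume
E = rng.uniform(500.0, 2000.0, size=325)  # 325 control points, as in the article

# K_un from field strengths (left-hand side of Eq. 1)
k_un_field = E.mean() / E.max()

# K_un from dissipated power Q = 5.56e-11 * f * eps' * tg(delta) * E^2, W/m^3
# (f, eps, tgd are illustrative values; they cancel in the ratio)
f, eps, tgd = 2.45e9, 2.6, 0.2
Q_mid = 5.56e-11 * f * eps * tgd * E.mean() ** 2
Q_max = 5.56e-11 * f * eps * tgd * E.max() ** 2
k_un_power = float(np.sqrt(Q_mid / Q_max))
print(k_un_field, k_un_power)
```

Both routes give the same coefficient up to floating-point rounding, which is why either field-strength or power measurements can be used in practice.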
2.3 Experimental Research
Measurement of the field strength at a given point in the product pipeline is carried out experimentally. The results of the study of the microwave field strength in the grain mass can be used in the study of the dielectric properties of the processed grain, as well as in the construction of a control system for microwave-convective processing plants. The experimental studies presented further were conducted for the installation containing six sources of microwave power. In this case, the response function was the strength of the acting electric field. The research was carried out on the volume of the product pipeline with dimensions of 200 × 200 × 600 mm relative to the central point (Fig. 1). The step along the coordinate axes was 50 mm. Thus, on the 0X and 0Y axes the values were taken from −100 to +100 mm; on the 0Z axis, from −300 to +300 mm.
Fig. 1. Product line model 200 × 200 × 600 mm in cross section: a) three-dimensional model; b) simulation results.
The experiment was conducted for wheat of three humidity levels: 14, 20, 26%. Taking into account the fact that even one experiment in this case includes 325 measurement points, and given the symmetry of the design, the number of control points can be reduced by a factor of 4. Moreover, in order to exclude the mutual influence of the sensors on each other, they can be distributed throughout the volume.

2.4 Results and Discussion
The regression models obtained in this case are of low quality, and it is better to use the Regression Learner tool of the Matlab application software package to describe them. Considering 19 models shows that the Rational Quadratic GPR model has the best
quality indicators, and it can be improved by accumulating additional statistical data. This model can be used for developing control algorithms as a data source on the behavior of the grain layer during processing. For a more detailed understanding of the distribution of the driving forces of the heat-moisture transfer process in the grain layer under exposure to the microwave field, a nonlinear regression model presented earlier [13, 14] can be used. Based on the obtained data, the uniformity coefficient of the electromagnetic field propagation can be calculated. Table 3 shows values of the uniformity coefficient of the electromagnetic field propagation calculated from experimental data.

Table 3. The uniformity coefficient of the electromagnetic field propagation calculated from experimental data.

W, %   K_un1      K_un2      K_un3      K_un4      K_un5      K_un mid
16     0.38128    0.390358   0.402989   0.426276   0.357204   0.391621
20     0.28125    0.340313   0.330938   0.300313   0.33       0.316563
24     0.252511   0.250549   0.271037   0.247713   0.256229   0.257601
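A quick consistency check of Table 3 (a sketch; it assumes, as the column naming suggests, that K_un mid is the arithmetic mean of the five repeated measurements) reproduces the last column to within rounding:

```python
import numpy as np

# Rows of Table 3: five repeated K_un measurements per humidity level
k_un = {
    16: [0.38128, 0.390358, 0.402989, 0.426276, 0.357204],
    20: [0.28125, 0.340313, 0.330938, 0.300313, 0.33],
    24: [0.252511, 0.250549, 0.271037, 0.247713, 0.256229],
}
k_un_mid_published = {16: 0.391621, 20: 0.316563, 24: 0.257601}

for w, vals in k_un.items():
    mean = float(np.mean(vals))
    print(f"W={w}%: mean K_un = {mean:.6f} (published {k_un_mid_published[w]})")
```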
Figure 2 shows the experimental dependence of the uniformity coefficient of the electromagnetic field propagation on the humidity of the processed wheat. In this case, line 1 is the dependence obtained from the experimental studies, approximated by K_un = 0.0005·W² − 0.0369·W + 0.8528; line 2 is a straight line corresponding to 0.368.

Fig. 2. Experimental values of the uniformity coefficient of the electromagnetic field propagation (uniformity coefficient, 0.2–0.45, versus moisture W, 15–25%).
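The quadratic trend quoted for Fig. 2 can be checked against the averaged values of Table 3 (a sketch; the 0.005 tolerance is an assumption about the rounding involved):

```python
# K_un mid values from Table 3 and the quadratic trend line from Fig. 2
table3_mid = {16: 0.391621, 20: 0.316563, 24: 0.257601}

def k_un_fit(w: float) -> float:
    """Quadratic approximation of the uniformity coefficient (Fig. 2)."""
    return 0.0005 * w**2 - 0.0369 * w + 0.8528

for w, k_mid in table3_mid.items():
    print(f"W={w}%: fit={k_un_fit(w):.4f}, measured mean={k_mid}")
```

The fit tracks the measured means within a few thousandths and decreases monotonically over the measured moisture range, consistent with line 1 of the figure.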
Differences between the experimental data and the calculated data obtained on the basis of electrodynamic modeling are 15–20% and may be due to inaccuracy of the available information about the dielectric properties of the material, its impurity content, measurement errors, etc.
3 Conclusions

Based on the foregoing, we can conclude the following:
1. The results of the study of the microwave field strength in the grain mass can be used in the study of the dielectric properties of the processed grain.
2. The results of the study of the microwave field strength in the grain mass can be used in the construction of a control system for microwave-convective processing plants.
3. The values and regularities of the uniformity coefficient of the electromagnetic field distribution in the grain layer indicate a difference between reference data on the dielectric properties of grain materials and their values for the layer of the processed material.
4. The values of the uniformity coefficient in the considered options for a dense layer of wheat are in the range 0.247713–0.426276.
References

1. Agrawal, S., Raigar, R.K., Mishra, H.N.: Effect of combined microwave, hot air, and vacuum treatments on cooking characteristics of rice. J. Food Process Eng. e13038 (2019). https://doi.org/10.1111/jfpe.13038
2. Ames, N., Storsley, J., Thandapilly, S.J.: Functionality of beta-glucan from oat and barley and its relation with human health. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 141–166. Royal Society of Chemistry, Cambridge (2019)
3. Bansal, N., Dhaliwal, A.S., Mann, K.S.: Dielectric characterization of rapeseed (Brassica napus L.) from 10 to 3000 MHz. Biosyst. Eng. 143, 1–8 (2016). https://doi.org/10.1016/j.biosystemseng.2015.12.014
4. Basak, T., Bhattacharya, M., Panda, S.: A generalized approach on microwave processing for the lateral and radial irradiations of various groups of food materials. Innov. Food Sci. Emerg. Technol. 33, 333–347 (2016)
5. Budnikov, D.A., Vasilev, A.N., Ospanov, A.B., Karmanov, D.K., Dautkanova, D.R.: Changing parameters of the microwave field in the grain layer. J. Eng. Appl. Sci. 11(Special Issue 1), 2915–2919 (2016)
6. Dueck, C., Cenkowski, S., Izydorczyk, M.S.: Effects of drying methods (hot air, microwave, and superheated steam) on physicochemical and nutritional properties of bulgur prepared from high-amylose and waxy hull-less barley. Cereal Chem. 00, 1–3 (2020). https://doi.org/10.1002/cche.10263
7. Izydorczyk, M.S.: Dietary arabinoxylans in grains and grain products. In: Beta, T., Camire, M.E. (eds.) Cereal Grain-Based Functional Foods, pp. 167–203. Royal Society of Chemistry, Cambridge (2019)
8. Nelson, S.O.: Dielectric Properties of Agricultural Materials and Their Applications. Academic Press, Cambridge (2015). 229 p.
9. Pallai-Varsányi, E., Neményi, M., Kovács, A.J., Szijjártó, E.: Selective heating of different grain parts of wheat by microwave energy. In: Advances in Microwave and Radio Frequency Processing, pp. 312–320 (2007)
10. Ranjbaran, M., Zare, D.: Simulation of energetic- and exergetic performance of microwave-assisted fluidized bed drying of soybeans. Energy 59, 484–493 (2013). https://doi.org/10.1016/j.energy.2013.06.057
11. Smith, D.L., Atungulu, G.G., Sadaka, S., Rogers, S.: Implications of microwave drying using 915 MHz frequency on rice physicochemical properties. Cereal Chem. 95, 211–225 (2018). https://doi.org/10.1002/cche.10012
12. Vasilev, A.N., Budnikov, D.A., Ospanov, A.B., Karmanov, D.K., Karmanova, G.K., Shalginbayev, D.B., Vasilev, A.A.: Controlling reactions of biological objects of agricultural production with the use of electrotechnology. Int. J. Pharm. Technol. (IJPT) 8(4), 26855–26869 (2016)
13. Vasiliev, A.N., Ospanov, A.B., Budnikov, D.A., et al.: Improvement of Grain Drying and Disinfection Process in the Microwave Field. Monograph. Nur-Print, Almaty (2017). 155 p. ISBN 978-601-7869-72-4
14. Vasiliev, A.N., Goryachkina, V.P., Budnikov, D.: Research methodology for microwave-convective processing of grain. Int. J. Energy Optim. Eng. (IJEOE) 9(2), 11 (2020). https://doi.org/10.4018/IJEOE.2020040101
15. Yang, L., Zhou, Y., Wu, Y., Meng, X., Jiang, Y., Zhang, H., Wang, H.: Preparation and physicochemical properties of three types of modified glutinous rice starches. Carbohyd. Polym. 137, 305–313 (2016). https://doi.org/10.1016/j.carbpol.2015.10.065
Floor-Mounted Heating of Piglets with the Use of Thermoelectricity

Dmitry Tikhomirov¹, Stanislav Trunov¹, Alexey Kuzmichev¹, Sergey Rastimeshin², and Victoria Ukhanova¹

¹ Federal Scientific Agroengineering Center VIM, 1-st Institutskij 5, 109456 Moscow, Russia
[email protected], [email protected], [email protected], [email protected]
² Russian State Agrarian University - Moscow Timiryazev Agricultural Academy, Timiryazevskaya st., 49, 127550 Moscow, Russia
[email protected]
Abstract. The article discusses the problem of energy saving when providing comfortable conditions for keeping young animals. The use of thermoelectric modules as a source of thermal energy in installations for local heating of piglets is considered. A functional technological scheme of floor-mounted heating of suckling piglets is proposed. In the scheme, the energy of the hot circuit of the thermoelectric modules is used to heat the panel. The energy of the cold circuit of the thermoelectric module is used to assimilate the heat from the removed ventilation air. The heat exchange of the heated panel with the environment is considered. Much attention is paid to the study of the prototype of the thermoelectric installation and its laboratory tests. The energy efficiency of using thermoelectric modules as energy converters in thermal technological processes of local heating of young animals has been substantiated.

Keywords: Thermoelectricity · Pigsty · Local heating · Thermoelectric module · Energy saving
1 Introduction

The development of a comfortable temperature regime in a hog house is an important and urgent task in animal husbandry. In the cold season it is necessary to create two different temperature regimes for sows and suckling piglets. The air temperature should be maintained at the level of 18…20 °C for sows, while in the zone of the piglets' location it should be about 30 °C, decreasing for weaned piglets (in 26 days) to 22 °C [1]. The cold floor causes intense cooling of animal bodies and, in addition, the air above the surface is saturated with moisture and ammonia. This air is poorly removed by ventilation, and the animals are forced to breathe it. Such conditions cause colds and animal deaths. A means of combating such phenomena is the use of heated floors and heat-insulating mats [2]. It is necessary to heat the separate areas on which the animals rest.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1146–1155, 2021. https://doi.org/10.1007/978-3-030-68154-8_96
Various types of equipment have been developed for heating young animals [3, 4]. However, the most widespread in practice are infrared electric heaters and heating panels [5]. The unevenness of the heat flow in the area where the animals are located and the incomplete correspondence of the radiation spectrum to the absorption spectrum of heat waves by animals are the main disadvantages of IR heaters. In addition, a common disadvantage of local heaters is their rather high energy consumption. Heat and cold supply by means of heat pumps, including thermoelectric ones, belongs to the field of energy-saving, environmentally friendly technologies and is becoming more widespread in the world [6]. The analysis of the use of Peltier thermoelectric modules (thermoelectric assemblies) shows that thermoelectric assemblies are compact heat pumps allowing for the creation of energy-efficient plants for use in various technological processes of agricultural production [7–10]. The purpose of the research is to substantiate the parameters and develop an energy-saving installation for local heating of piglets using Peltier thermoelectric elements.
2 Materials and Methods

A sow is located in the area of the stall; the piglets are mainly located in the area fenced off from the sow. In this area there is a heated panel where the piglets rest. Suckling piglets freely pass through the passage in the separation grate to the sow for feeding. Based on our analysis of various schemes of thermoelectric assemblies [11, 12], it was found that the "liquid-air" heat exchange scheme should be adopted, for reasons of energy and structural parameters, for creating a local heating system for piglets using a thermoelectric heat pump. The technological scheme of the installation of local floor-mounted heating of piglets is shown in Fig. 1. The installation consists of a thermoelectric assembly, which in turn includes the estimated number of Peltier elements 3, the air cooler of the cold circuit 1 with an exhaust fan 2 installed in the bypass of the pigsty ventilation system, and the water cooler of the hot junction 4 connected by direct and return pipes 8 through a circulation pump 9 with a heated panel 6. The thermal power of the thermoelectric assembly is regulated by changing the current flowing through the thermocouples. Power, control and management of the circuit are carried out by the block 5. To assess the performance of a thermoelectric water heater circulating through a heating panel, consider the heat balance equations for the hot and cold junctions of a thermoelectric element [13], which have the form:

Q_h^p + 0.5·Q_R = Q_H + Q_T,   (1)

Q_C + Q_T + 0.5·Q_R = Q_c^p,   (2)

Q_c^p = e·T_C·I,   (3)

Q_h^p = e·T_H·I,   (4)

where Q_h^p, Q_c^p are the Peltier heat of the hot and cold junctions, J; Q_H is the heat transferred by the hot junction to a heated object, J; Q_T is the heat transferred by thermal conductivity from the hot junction to the cold one, J; Q_C is the heat taken from the environment, J; Q_R is the Joule-Lenz heat, J; e is the Seebeck coefficient, μV/K; T_C and T_H are the temperatures of the cold and hot junctions, K; I is the current in the thermocouple circuit, A.
Fig. 1. Functional-technological scheme of the installation of local floor-mounted heating of piglets using thermoelectricity. 1 is air cooler of cold circuit; 2 is fan; 3 are Peltier elements; 4 is water cooler of hot junction; 5 is power and control unit; 6 is heat panel; 7 are temperature sensors; 8 is pipe; 9 is pump.
Since Q_H and Q_C represent the amount of heat per unit time, the work of electric forces (power consumption) can be determined by Eq. (5):

W = Q_H − Q_C.   (5)

Taking into account Eqs. (1) and (2) as well as relations (3) and (4), Eq. (5) can be rewritten in the following form:

W = e·I·(T_H − T_C) + I²·R,   (6)
where R is the resistance of the thermocouple branch, Ohm. From the analysis of the equation it is seen that the power W consumed by the thermocouple is spent on overcoming the thermo-EMF and the active resistance. In this case, the thermocouple works like a heat pump, transferring heat from the environment to the heated object. To analyze the energy efficiency of heat pumps, turn to Eq. (5), which can be rewritten in the following form:

1 = Q_H/W − Q_C/W = k_h − k_c,   (7)

where k_h is the heating factor and k_c is the cooling factor. Moreover, the heating coefficient k_h > 1. Given Eqs. (1), (6) and (7), the heating coefficient will be determined by Eq. (8):

k_h = (Q_h^p + 0.5·Q_R − Q_T) / (e·I·(T_H − T_C) + I²·R).   (8)
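Equations (6) and (8) can be explored numerically. The sketch below uses illustrative module-level parameters, not data from the article: the Seebeck coefficient e, branch resistance R and junction-to-junction thermal conductance K_T are assumptions. It shows that the heating coefficient exceeds 1 and falls as the junction temperature difference grows:

```python
def heating_coefficient(I, T_H, T_C, e=0.05, R=2.0, K_T=0.5):
    """k_h per Eq. (8): useful hot-junction heat over consumed electric power.

    I   - current, A; T_H, T_C - junction temperatures, K
    e   - Seebeck coefficient of the module, V/K (assumed)
    R   - electrical resistance of the thermocouple branch, Ohm (assumed)
    K_T - thermal conductance between junctions, W/K (assumed)
    """
    Q_hp = e * T_H * I              # Peltier heat of the hot junction, Eq. (4)
    Q_R = I**2 * R                  # Joule-Lenz heat
    Q_T = K_T * (T_H - T_C)         # conduction leak from hot to cold junction
    W = e * I * (T_H - T_C) + Q_R   # consumed electric power, Eq. (6)
    return (Q_hp + 0.5 * Q_R - Q_T) / W

k_small = heating_coefficient(3.0, 310.0, 290.0)   # dT = 20 K
k_large = heating_coefficient(3.0, 330.0, 290.0)   # dT = 40 K
print(k_small, k_large)
```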
The smaller the temperature difference between the junctions T_H − T_C is, the higher the efficiency of the thermocouple in heating mode is. From the point of view of electric energy consumption, the most economical mode of operation is the heat-pump mode, in which the heating coefficient tends to the maximum. When operating in heating mode, there is no extreme dependence of the heating coefficient on current. The thermoelectric assembly is a compact heat pump that absorbs thermal energy from the cold side of the module and dissipates it on the hot side [14]. The use of thermoelectric assemblies makes it possible to simultaneously heat and cool objects whose temperature is not adequate to the ambient temperature in different technological processes. The power of a heating device for a heated panel should compensate for the heat loss of the site heated to the required temperature on its surface [15]. Under the conditions of a quasi-stationary regime, the energy balance of a heated area free of animals has the form:

P = P_1 + P_2 + P_3,   (9)
where P_1 is the heat loss through the surface of the site to the room, W; P_2 is the heat transfer from the side surfaces of the site, W; P_3 is the heat loss through the base to the floor array, W. The magnitude of the convective component of heat transfer is determined by the well-known Eq. (10):

P_c = α_c·F·(t_b − t_a),   (10)
where t_a is the air temperature in the room, °C; t_b is the required temperature on the surface of the heated panel, °C; α_c is the heat transfer coefficient from the panel to the environment, W/m²K; F is the surface area of the heated panel, m². These air parameters should be taken in accordance with the norms of technological design of agricultural livestock facilities. The sizes of sites F vary depending on the purpose of a farm; for piglets it is from 0.5 to 1.5 m².
According to technological requirements, the value t_a should be maintained in the range of 18–22 °C (for rooms with weaned piglets and sows). The value t_b should be in the range of 32–35 °C [1]. The air mobility in the areas where the animals are located should not exceed 0.2 m/s (Fig. 2).
Fig. 2. Calculation of the power of a heating panel. 1 is base; 2 is heat carrier; 3 is thermal insulation.
The Reynolds number is determined by Eq. (11):

Re = v_a·l / ν,   (11)

where v_a is the relative air mobility in the room, m/s; l is the characteristic length of the heated panel, m; ν is the kinematic viscosity of air, m²/s. The heat transfer coefficient in a laminar boundary layer is determined by Eq. (12):

α_c = 0.33·(λ/l)·Re^0.5·Pr^(1/3),   (12)

where λ is the thermal conductivity coefficient of air, W/mK. The radiant component of heat transfer by the heating panel site is determined by Eq. (13):

P_r = C_0·ε_b·F·[(T_b/100)⁴ − (T_be/100)⁴],   (13)
where C_0 = 5.76 W/(m²K⁴) is the radiation coefficient of a completely black body; ε_b is the integral degree of blackness of the surface of the heated panel; F is the surface area of the heated panel, m²; T_b and T_be are the surface temperatures of the heated panel and of the building envelope indoors, K. General losses from the surface of the heated panel as a result of convective and radiant heat transfer are:
P_1 = P_c + P_r.   (14)

The coolant temperature is:

t_t = t_b + P_1·δ_b / (F·λ_b),   (15)
where t_b is the surface temperature of the heated panel (base), °C; δ_b is the thickness of the base, m; λ_b is the coefficient of thermal conductivity of the base material, W/mK. Heat losses from the side surface P_2 of the heated panel can be neglected due to its small area. A preliminary calculation of the heat loss of the heated panel through the floor P_3 is carried out taking into account a number of assumptions:
• the floor temperature in winter should stay above zero and above the dew point;
• the heated panel does not affect the thermal regime of the floor at its location;
• the floor temperature is taken 5–10 °C below the calculated values to ensure system stability.

P_3 = (λ_ins/δ_ins)·F·(t_t − t_f),   (16)
where λ_ins is the coefficient of thermal conductivity of the insulating material, W/mK; δ_ins is the thickness of the insulation layer, m; t_f is the floor temperature, °C. The total thermal power of the energy carrier for heating the site is:

P = P_1 + P_2 + P_3.   (17)
Table 1 presents the calculation of the power of the heated panel made for the maximum allowable temperature and air velocity in the room for suckling piglets.

Table 1. Calculation of the power of the heated panel.

Parameter                                        Designation   Unit      Value
Room temperature                                 t_a           °C        18
Temperature on heated panel surface (max)        t_b           °C        35
Relative room air mobility                       v_a           m/s       0.2
Panel length                                     l             m         1.5
Area of the upper surface of the heated panel    F             m²        1.05
Kinematic viscosity of air                       ν             m²/s      14.7·10⁻⁶
Thermal conductivity coefficient of air          λ             W/mK      2.49·10⁻²
Reynolds number                                  Re            –         20408
Convection heat transfer coefficient             α_c           W/m²K     0.694
Convective component of heat loss                P_c           W         12.5
Degree of blackness of heated panel surface      ε_b           –         0.92
Surface temperature of heated panel              T_b           K         308
Wall temperature                                 T_be          K         283
Radiant component of heat loss                   P_r           W         150.2
General losses from heated panel surface         P_1           W         162.7
Base thickness                                   δ_b           m         0.003
Thermal conductivity of base material            λ_b           W/mK      0.4
Coolant temperature                              t_t           °C        36.2
Conductivity coefficient of thermal insulation   λ_ins         W/mK      0.03
Thermal insulation layer thickness               δ_ins         m         0.04
Floor temperature                                t_f           °C        5
Heat loss through floor                          P_3           W         44.3
Power of heated panel (max)                      P             W         207.0
The estimated maximum power of the panel was about 207 W (Table 1). The minimum rated power of the panel for its surface temperature of 32 °C will be 174 W in accordance with the presented method.
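The chain of Eqs. (10)-(17) can be reproduced from the Table 1 inputs (a plausibility sketch, not the authors' code; the Prandtl number 0.7 for air is an assumed value, and small deviations from the published intermediate figures stem from rounding of constants):

```python
# Input data from Table 1
t_a, t_b = 18.0, 35.0          # room and panel surface temperature, degC
v_a, l, F = 0.2, 1.5, 1.05     # air mobility m/s, panel length m, panel area m^2
nu, lam = 14.7e-6, 2.49e-2     # kinematic viscosity m^2/s, air conductivity W/mK

Re = v_a * l / nu                                   # Eq. (11)
alpha_c = 0.33 * (lam / l) * Re**0.5 * 0.7**(1/3)   # Eq. (12), Pr = 0.7 assumed
P_c = alpha_c * F * (t_b - t_a)                     # Eq. (10)

# Published components (Table 1): radiant loss and loss through the floor
P_r, P_3 = 150.2, 44.3
P_1 = P_c + P_r                                     # Eq. (14)
t_t = t_b + P_1 * 0.003 / (F * 0.4)                 # Eq. (15): base 3 mm, 0.4 W/mK
P = P_1 + P_3                                       # Eq. (17), side losses neglected
print(f"Re={Re:.0f} alpha_c={alpha_c:.3f} P_c={P_c:.1f} t_t={t_t:.1f} P={P:.0f}")
```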
3 Results and Discussion

Based on the performed calculations, a prototype of the installation for local heating of piglets was made (Fig. 3) using a heated floor panel with thermoelectric modules (TEC1-12706). Water is used as the heat carrier and circulates in a closed loop.
Fig. 3. A prototype of a heated panel and thermoelectric assembly. 1 is heated panel; 2 is circulation pump; 3 is thermoelectric assembly; 4 are water heating tubes; 5 are thermoelectric elements; 6 is cold side radiator; 7 is fan; 8 is hot side radiator
The thermal energy and structural calculation of the thermoelectric assembly and the cold and hot circuit radiators was performed according to the methodology [16] and taking
into account the results of [17]. The main parameters of the local floor-mounted installation for heating piglets are given in Table 2.

Table 2. The main parameters of the installation of floor-mounted heating of piglets.

Parameter                                               Unit   Value
Mains voltage                                           V      220
Power consumption                                       W      120
Supply voltage of one Peltier element                   V      12
Number of thermoelectric elements in the assembly       pcs    2
Panel heat power                                        W      170
Temperature on the working surface of the heated
  panel at an ambient temperature of 18 °C              °C     32
Circulating fluid volume                                l      0.9
Panel size                                              m      0.7 × 1.5
Figure 4 shows the experimental dependences of the power P consumed from the electrical network and the power Q_H removed by the hot circuit of the thermoelectric module on the mains voltage. After analyzing the averaged experimental dependences, it is possible to conclude that the power removed by the hot circuit of the thermoelectric module is 30% higher than the power consumed from the network (U_nominal = 12 V). This occurs because the cold circuit of the thermoelectric assembly absorbs energy from the warm air removed from the room. Using the
Fig. 4. Energy characteristic of the thermoelectric module
waste heat of the removed ventilation air to supply the cold junction, it is possible to reduce the power consumed from the electrical network to heat the panel itself. This heat pump generates 1.3 kWh of thermal energy per 1 kWh of consumed electrical energy in the set mode. It is possible to use infrared heaters located above the piglets if combined heating of the piglets is necessary [18]. These might be irradiators [19] with uniform heat flux and high reliability.
4 Conclusions

Analysis of the experimental results of the prototype and theoretical studies suggests the possibility of using Peltier thermoelectric modules in local heating systems for young animals as an energy-saving source of thermal energy. The use of local heating systems based on Peltier elements as heat pumps can significantly reduce the energy consumption (up to 30%) for heating young animals compared to existing direct heating plants (electric irradiators, heated floor mats, brooders, etc.). The heat output of a panel with an area of 1.0–1.4 m² should be 170–200 W, and the total electric power of the thermoelements 120–140 W. The estimated payback period for local heaters of young animals based on Peltier thermoelectric elements is about 3.5 years.

Further research will be aimed at determining the conditions for the optimal operation of thermoelectric systems to obtain the maximum increase in the power taken by the hot circuit compared to the power consumed from the electrical network. The justification of the heat, power and structural parameters of local installations for heating piglets will also be carried out. The heat power of the panel is determined for static modes of heat exchange with the environment. The study of heat exchange dynamics, taking into account the degree of filling of the panel with piglets, will allow further development of an adaptive control system to maintain a set temperature on its surface.
References
1. RD-APK 1.10.02.01–13. Metodicheskie rekomendacii po tekhnologicheskomu proektirovaniyu svinovodcheskih ferm i kompleksov [Guidelines for the technological design of pig farms and complexes]. Ministry of Agriculture of the Russian Federation, Moscow, Russia (2012)
2. Trunov, S.S., Rastimeshin, S.A.: Trebovaniya k teplovomu rezhimu zhivotnovodcheskih pomeshchenij s molodnyakom i predposylki primeneniya lokal’nogo obogreva [Requirements for the thermal regime of livestock buildings with young animals and the prerequisites for the application of local heating]. Vestnik VIESKH 2(27), 76–82 (2017)
3. Valtorta, S.: Development of microclimate modification patterns in animal husbandry. In: Stigter, K. (ed.) Applied Agrometeorology. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-540-74698-0_92
Floor-Mounted Heating of Piglets with the Use of Thermoelectricity
1155
4. Tikhomirov, D., Vasilyev, A.N., Budnikov, D., Vasilyev, A.A.: Energy-saving automated system for microclimate in agricultural premises with utilization of ventilation air. Wireless Netw. 26(7), 4921–4928 (2020). https://doi.org/10.1007/s11276-019-01946-3
5. Samarin, G.N., Vasilyev, A.N., Zhukov, A.A., Soloviev, S.V.: Optimization of microclimate parameters inside livestock buildings. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00979-3_35
6. Shostakovskij, P.G.: Teplovoj kontrol’ ob”ektov na baze termoelektricheskih sborok [Thermal control of objects based on thermoelectric assemblies]. Komponenty i tekhnologii 9, 142–150 (2011)
7. Tihomirov, D.A., Trunov, S.S., Ershova, I.G., Kosolapova, E.V.: Vozdushno-teplovaya zavesa s ispol’zovaniem termoelektricheskih modulej v doil’nom bloke fermy KRS [Air-thermal curtain using thermoelectric modules in the milking unit of the cattle farm]. Vestnik NGIEI 1, 47–57 (2020)
8. Amirgaliyev, Y., Wojcik, W., Kunelbayev, M.: Theoretical prerequisites of electric water heating in solar collector-accumulator. News Nat. Acad. Sci. Repub. Kaz. Ser. Geol. Tech. Sci. 6, 54–63 (2019)
9. Trunov, S.S., Tikhomirov, D.A.: Termoelektricheskoe osushenie vozduha v sel’skohozyajstvennyh pomeshcheniyah [Thermoelectric air drainage in agricultural premises]. Nauka v central’noj Rossii 2(32), 51–59 (2018)
10. Kirsanov, V.V., Kravchenko, V.N., Filonov, R.F.: Primenenie termoelektricheskih modulej v pasterizacionno-ohladitel’nyh ustanovkah dlya obrabotki zhidkih pishchevyh produktov [The use of thermoelectric modules in pasteurization-cooling plants for processing liquid food products]. FGOU VPO MGAU, Moscow, Russia (2011)
11. Chen, J., Zhang, X.: Investigations of electrical and thermal properties in semiconductor device based on a thermoelectrical model. J. Mater. Sci. 54(3), 2392–2405 (2019)
12. Reddy, B.V.K., Barry, M., Li, J.: Mathematical modeling and numerical characterization of composite thermoelectric devices. Int. J. Therm. Sci. 67, 53–63 (2013)
13. Stary, Z.: Temperature thermal conditions and the geometry of Peltier elements. Energy Convers. Manage. 33(4), 251–256 (1992)
14. Ismailov, T.A., Mirzemagomedova, M.M.: Issledovanie stacionarnyh rezhimov raboty termoelektricheskih teploobmennyh ustrojstv [Study of stationary modes of operation of thermoelectric heat exchange devices]. Vestnik Dagestanskogo gosudarstvennogo tekhnicheskogo universiteta. Tekhnicheskie nauki 40(1), 23–30 (2016)
15. Wheeler, E.F., Vasdal, G., Flo, A., et al.: Static space requirements for piglet creep area as influenced by radiant temperature. Trans. ASABE 51(1), 271–278 (2008)
16. Pokornyj, E.G., Shcherbina, A.G.: Raschet poluprovodnikovyh ohlazhdayushchih ustrojstv [Calculation of semiconductor cooling devices]. Nauka, Lipetsk, Russia (1969)
17. Trunov, S.S., Tihomirov, D.A., Lamonov, N.G.: Metodika rascheta termoelektricheskoj ustanovki dlya osusheniya vozduha [Calculation method for thermoelectric installation for air drying]. Innovacii v sel’skom hozyajstve 3(32), 261–271 (2019)
18. Ziemelis, I., Iljins, U., Skele, A.: Combined local heating for piglets. In: International Conference on Trends in Agricultural Engineering, Prague, Czech Republic, pp. 441–445 (1999)
19. Kuz’michyov, A.V., Lyamcov, A.K., Tihomirov, D.A.: Teploenergeticheskie pokazateli IK obluchatelej dlya molodnyaka zhivotnyh [Thermal energy indicators of IR irradiators for young animals]. Svetotekhnika 3, 57–58 (2015)
The Rationale for Using Improved Flame Cultivator for Weed Control Mavludin Abdulgalimov1, Fakhretdin Magomedov2, Izzet Melikov2, Sergey Senkevich3(&), Hasan Dogeev1, Shamil Minatullaev2, Batyr Dzhaparov2, and Aleksandr Prilukov3 1
Federal Agrarian Scientific Center of the Republic of Dagestan, St. A.Shakhbanova, 18a, Makhachkala 367014, Russia [email protected], [email protected] 2 Dagestan State Agrarian University named after M.M. Dzhambulatov, St. M. Gadzhiev, 180, Makhachkala 367032, Russia [email protected], [email protected], [email protected], [email protected] 3 Federal Scientific Agroengineering Center VIM, 1 st Institute pas. 5, Moscow 109428, Russia [email protected], [email protected]
Abstract. The work substantiates the problem of weed control and the need to improve control methods in the production of agricultural products; the control methods currently applied are reviewed. The design of an improved flame cultivator for burning weeds in the near-stem strips of orchards and vineyards (rows and inter-row spaces of perennial plantings) by the thermal method is given, together with the technological process of its operation and conclusions.
Keywords: Technological process · Control methods · Flame cultivator · Stability · Flame correctability · Burning weeds
1 Introduction Approaches to environmental management in the agricultural sector are taking on a global scale at the present stage. Consumers focus their greatest interest on the quality of the purchased product and the presence of harmful elements in it; this explains the great attention to the environmental friendliness of products produced by the agro-industrial complex. Weeds are always present in crops regardless of the degree of development of agriculture, the agrotechnical methods used and the technical tools available [1]. Weed control is an obligatory and costly agrotechnical operation. The choice among different methods depends on the level of infestation, on climate and soil conditions, and also on the growth and development conditions of the cultivated crops [2, 3]. Weed infestation of farmland makes the implementation of agricultural activities very difficult, and weedy vegetation contributes to a decrease in the quality of agricultural products. The cost of weed destruction is about 30% of the total cost of preparing and caring for crops. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1156–1167, 2021. https://doi.org/10.1007/978-3-030-68154-8_97
The Rationale for Using Improved Flame Cultivator
1157
In order to analyze the composition of weeds and monitor their growth, it is necessary to understand the possible soil infestation with weed seeds; differences in weed infestation depend on the concentration of weed seeds in the upper layer of the soil [4]. It is established that under such infestation the productivity of cultivated crops decreases by 35% or more. Weed vegetation not only reduces soil fertility, it also helps to spread pests and diseases. Weed vegetation sharply reduces the harvest, its presence complicates the harvesting process, and the quality of the grown product is reduced. The temperature of soils full of weeds decreases by 2 to 4 °C, which reduces the vital activity of bacteria in them, thereby restraining the decay of organic matter and reducing the amount of useful components [5]. This explains the interest of agricultural producers in the development and integration of modern technical means for weed control that allow reducing the cost of growing crops. Under present conditions, the aim of agricultural production is not the absolute elimination of weeds, but keeping their amount at a level that does not negatively affect the yield of cultivated crops. The main factors of significant infestation are the high productivity, viability and resistance of weed vegetation to the protection measures used. Classic measures of protection from weeds are chemical and preventive. When used together, these measures make it possible to reduce the number of weeds to a cost-effective threshold value of harmfulness. They also have a number of other well-known disadvantages. A certain proportion of herbicide solution particles, carried away by uncontrolled wind movement, has a negative effect on the surrounding atmosphere. Weeds also adapt, and the effect of most chemical methods of weed destruction on many weed varieties is reduced [6].
For this reason, researchers from many countries are searching for other ways to destroy weed vegetation. Fire methods of weed control thus have general scientific and practical significance alongside classical methods [7].
2 Main Part The purpose of the research is the development of an advanced flame cultivator to increase the productivity of burning weeds. Tasks set to achieve the intended goal: – to analyze the existing methods, ways, technologies and technical tools for weed control; – to develop the design of an advanced flame cultivator for destroying weeds by burning. Materials and Methods The combination of agricultural operations with the use of advanced machines and technical tools is one of the fundamental conditions for the destruction of weed vegetation. The methods and means of destruction of any variety of weed vegetation are as diverse as the weeds themselves [8]. The agrotechnical method of weed destruction makes it possible to provoke their germination with further
1158
M. Abdulgalimov et al.
elimination of the formed sprouts. The practical experience of agricultural production demonstrates that the smallest amount of weed vegetation can be achieved only with the combined use of preventive and destructive (agrotechnical, mechanical, chemical, biological) measures [9, 10]. The study of various physical methods of weed control (electrothermal soil treatment, high-voltage electric current, ultrasound, laser beam, microwave energy, fire cultivation, and others) is actively carried out both abroad and in our country. Electrophysical methods of weed destruction are not widely used in the agricultural sector because of the need for contact with the weeds; despite their environmental cleanliness, such methods also do not affect weed seedlings located in a shallow soil cover [11, 12]. Fire cultivation is used to destroy weeds and their seeds on cultivated land, some arable land and irrigation channels at the end of harvesting. These works are carried out by flame cultivators equipped with burners that operate on oil fuel and natural gas. Control of early weeds from 3.0 to 5.0 cm in height consists in heating them to a temperature of more than 50 °C. This leads to dehydration, protoplasm clotting, drying of stems and leaves and, as a result, to their death. The death temperature for formed weeds is about 300 °C for annual weeds and from 1000 to 1200 °C for perennial ones. The effectiveness of destroying seeds with a flame cultivator is in the range of 90 to 95% when a temperature of 250 to 300 °C at the output of its burners affects the treated surface. The energy effect on weed seed germination is shown in Fig. 1.
Fig. 1. Weed seed germination dependence on the impact energy
The use of the flame cultivator in vegetable growing is also effective. Good results can be seen when using this method on areas sown with onions, beets, carrots and other crops,
which are characterized by delayed germination. Flame cultivators do not have a negative impact on the soil structure due to their lightweight design [13]. Besides that, the destruction of weed vegetation on the ground surface by burning is becoming more relevant; during this process a short-term high-temperature effect on the seedlings of weeds is produced. Heat exposure makes it possible to control weeds while leaving the microbiological, physical and chemical properties of the soil almost unaffected [14]. Using this method of weed destruction reduces the number of treatments and the negative impact of the machine's running gear on the soil [15]. This is due to a reduction in the number of passes over the cultivated area, slowing the development of soil erosion [16]. It should be noted that this contributes to the preservation of soil fertility [17]. The search for ways to reduce the number of passes over the treated area by using modern technical tools, including machines for burning weeds, is one of the conditions for solving the considered problem. The dependence of the damage level of weed vegetation on the impact energy is shown in Fig. 2.
Fig. 2. Weed vegetation damage level dependence on the impact energy
The amount of weed vegetation in the areas where crops are cultivated increases their harmful impact and is accompanied by a decrease in crop productivity. This circumstance motivated the establishment of a mathematical interpretation of the quantitative
relationship between the excess of weed vegetation and the yield of the particular crop, which is determined quite reliably by an exponential regression equation:

Y = a·e^(−bx) + c,   (1)
where Y is the crop productivity of cultivated crops on a particular area that has a weed infestation, %, g/m², ton/ha; x is the excess weed vegetation, %, g/m², ton/ha; e ≈ 2.718 is the base of natural logarithms; a is the value that determines the decrease in crop productivity at the maximum weed infestation of the cultivation area; b is the value that determines the rate of the decrease in crop productivity with weed infestation of the cultivation area; c is the value that determines the crop productivity retained at the maximum weed infestation of the cultivation area. The following levels of weed infestation harmfulness limits have been established subject to the impact of weed vegetation on cultivated crops: phytocenotic (an excess of weed vegetation does not have a significant impact on cultivated crops), critical (the excess of weed vegetation contributes to a decrease in the productivity of cultivated crops of 3.0 to 6.0%), economic (the increase in crop productivity reaches 5.0 to 7.0%, provided by reducing the amount of weed vegetation to a minimum). The economic limit of harmfulness is determined by the formula:

Ca = Ya·P,   (2)
where Ca is the additional expense for the destruction of weeds, RUB/ha; Ya is the additional harvest of cultivated crops, ton/ha; P is the tariff per unit of harvest of cultivated crops, RUB/ton. Numerical indicators of the economic limit of harmfulness of weed vegetation can be defined not only for any cultivated crop and its individual producer, but also for a particular cultivation area. The movement speed of the proposed flame cultivator is determined by the formula:

v = 3.6·(l/t), km/h,   (3)

where l is the length of the trip, m; t is the trip time, s.
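As an illustration of Eqs. (1) and (2), the sketch below evaluates the yield regression and the economic limit of harmfulness. The coefficients a, b, c, the extra yield and the tariff are hypothetical values chosen only to show the shape of the relationship, not data from the paper:

```python
import math

def crop_yield(x, a, b, c):
    """Eq. (1): yield as a function of excess weed infestation x.
    a, b, c are regression coefficients fitted per crop and per field."""
    return a * math.exp(-b * x) + c

def economic_threshold(extra_yield_t_per_ha, price_rub_per_t):
    """Eq. (2): additional weed-control expense still paid back by
    the additional harvest, RUB/ha."""
    return extra_yield_t_per_ha * price_rub_per_t

# Hypothetical coefficients: yield falls from 5.0 t/ha toward the
# residual level c = 2.0 t/ha as infestation grows.
a, b, c = 3.0, 0.05, 2.0
print(round(crop_yield(0, a, b, c), 2))    # 5.0 t/ha with no excess weeds
print(round(crop_yield(30, a, b, c), 2))   # 2.67 t/ha at 30 % excess infestation
print(economic_threshold(0.4, 12000.0))    # 4800.0 RUB/ha
```

The negative exponent reflects the definitions given for a and c: as infestation x grows large, the yield approaches the retained level c.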
Gas consumption is determined by the formula:

Vg = Vge − Vog, L,   (4)
where Vge is the initial gas volume in the gas tank, L; Vog is the gas remaining at the end of the trip, L. The processed area is determined by the formula:

S = B·l / 10000, ha,   (5)
where B is the machine's working width, m. The working gas flow rate is determined by the formula:

Q = Vg / S, L/ha.   (6)
The working gas flow rate per minute is determined by the formula:

Qmin = 60000·Vg / t, mL/min.   (7)
The destructive flame temperature (K) as a function of exposure time is determined by the formula:

T = 1700 − 950·ln(τ),   (8)

where τ is the exposure time, s. The heat used is determined by the formula:

K = Σ(t·τ)/G,   (9)

where t is the temperature, °C; τ is the exposure time, s; G is the gas mixture flow rate, kg/ha. The technology of weed vegetation destruction using open-flame burners provides:
– destruction of weed seeds by applying heat to them;
– destruction of weed sprouts;
– destruction of tall weeds.
The proposed scheme of technological decontamination of weed vegetation is illustrated in Fig. 3.
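The trip-based operating formulas (3)–(7) and the flame-temperature relation (8) can be evaluated together, as in the sketch below. The trial numbers (pass length, time, working width, gas used) are hypothetical, chosen only to illustrate the calculation; in Eq. (7) the gas volume Vg is assumed to be the same quantity as in Eq. (4):

```python
import math

def speed_kmh(trip_length_m, trip_time_s):       # Eq. (3)
    return 3.6 * trip_length_m / trip_time_s

def gas_used_L(initial_L, remaining_L):          # Eq. (4)
    return initial_L - remaining_L

def area_ha(width_m, trip_length_m):             # Eq. (5)
    return width_m * trip_length_m / 10000.0

def gas_rate_L_per_ha(gas_L, area):              # Eq. (6)
    return gas_L / area

def gas_rate_mL_per_min(gas_L, trip_time_s):     # Eq. (7)
    return 60000.0 * gas_L / trip_time_s

def flame_temperature_K(exposure_s):             # Eq. (8)
    return 1700.0 - 950.0 * math.log(exposure_s)

# Hypothetical trial: a 200 m pass in 120 s with a 1.4 m working width,
# consuming 3.5 L of gas.
l, t, B, Vg = 200.0, 120.0, 1.4, 3.5
print(round(speed_kmh(l, t), 1))                       # 6.0 km/h
print(round(area_ha(B, l), 4))                         # 0.028 ha
print(round(gas_rate_L_per_ha(Vg, area_ha(B, l)), 1))  # 125.0 L/ha
print(round(gas_rate_mL_per_min(Vg, t), 0))            # 1750.0 mL/min
print(round(flame_temperature_K(1.0), 0))              # 1700.0 K at 1 s exposure
```

Note that Eq. (8) gives 1700 K at 1 s of exposure and lower destructive temperatures for longer exposures, consistent with the trade-off between flame temperature and dwell time discussed above.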
Fig. 3. Technological scheme of weed vegetation decontamination
Results Based on the technological scheme of weed neutralization, the flame cultivator must comply with the following main conditions: – to provide a significant level of weed control; – to cause no harm to cultivated plants within the limits of the safe lane width; – to copy the terrain during movement in the transverse, longitudinal and vertical planes; – to have protective devices against damage to the working elements in contact with obstacles. Weed destruction in the ground between the rows does not pose significant problems for current agricultural tools. However, the issue of weed control in the rows and protection zones of cultivated crops is still not fully resolved, despite the large number of technical means and methods available in the current agricultural sector [18]. Agricultural machinery and equipment manufacturers are mainly focused on the creation of agricultural aggregates for the destruction of weed vegetation without the use of toxic chemicals; this is relevant due to the demand for ecological agricultural production at the present stage. Gas-powered units that use the flame energy of gas burners to control weeds are the result of this trend [19, 20]. They combine high productivity with low construction and maintenance costs and the ability to effectively perform joint work on crop care and weed control. The control of weeds and their seeds should be effective and carried out by high-performance technical means at minimal cost [21, 22]. The neutralization of weed vegetation on the soil surface can be implemented using a flame cultivator [23]. The design of the proposed advanced flame cultivator (Figs. 4, 5 and 6) includes gas sections-burners 1 consisting of removable paired pipe sections (burners) 5 with
blind flanges 20 at the ends and holes at the bottom for screwing in gas-air nozzles 6, connected by mixer pipes 19 having holes on top for connecting the air pipe 3 and the gas pipe 2 with the nozzle 4; reflector shields 7; bearing rails-skates 8 with vertical plate-knives 9 to maintain the straight movement of the unit; terminal gas-air nozzles 10; dampers 11 welded to the probes 12 and the sleeves 13; brackets 14 with stoppers 15; rods 16 connected to each other by return springs 17 [23].
Fig. 4. Front view of the flame cultivator
Fig. 5. Paired sections-burners construction
Fig. 6. Mechanisms for protecting trunk from flames
The flame cultivator works as follows. Gas is fed to the mixer pipe 19 through pipeline 2 and the nozzle 4; air enveloping the gas jet is fed through a «jacket pipe» 3, which can be connected to the tractor compressor. The gas-air mixture flows from the mixer pipe 19 to the paired pipe-burners 5, and the nozzles 6 ignite it at the exit. The flame of the burners and the hot air, concentrated under the reflector shield 7, burn out weeds and their seeds. The flame of the terminal nozzles 10 enters the planting rows directed downward at a slight angle. During the movement of the unit the probes 12, contacting a trunk 18, are sequentially withdrawn to the side opposite to the direction of movement; at the same time, the arc-shaped dampers 11 welded to them rotate alternately (one after the other) around their axes and block the flame from the terminal nozzles 10, preventing direct contact with the planting trunk 18. Upon completion of the contact of the probes 12 with the trunk 18, the dampers 11, under the action of the return springs 17, return alternately to their original position, unfolding to the stoppers 15 placed on the brackets 14, which together with the springs 17 hold the dampers 11 with the probes 12 in a stationary state until contact with the next obstacle. The knives 9 in the form of plates welded to the rails 8 form «rails-skates» that provide straightness of movement by wedging into the soil and prevent the unit from sliding sideways on sloping land. Production simplicity, low metal consumption and the ability to simultaneously process the soil between the rows and in the rows of perennial plantings are the advantages of the proposed advanced flame cultivator; they will ensure high efficiency and resource saving of the technological process.
Discussion The use of the proposed flame cultivator for weed control reduces the number of soil treatments and the cost of caring for cultivated crops, and the fire method of destroying weeds is environmentally safe. It is established that, in a significant part of the available technical means and aggregates for weed control, weeds are destroyed by cutting, at considerable cost and only at certain stages of their formation. The novelty of the device lies in the improved flame cultivator construction developed for the technological process of burning weeds [23].
3 Conclusion The shortcomings of the methods currently used for weed control indicate the need for their improvement. The improved technology and design of the proposed flame cultivator ensure the effectiveness of the process of destroying weed vegetation. Economic feasibility is substantiated by a decrease in the number of soil treatments while destroying weed vegetation relative to other technologies. It is necessary to continue the modernization of both existing and newly developed flame cultivator constructions for the destruction of weeds.
References
1. Perederiyeva, V.M., Vlasova, O.I., Shutko, A.P.: Allelopaticheskiye svoystva sornykh rasteniy i ikh rastitel’nykh ostatkov v protsesse mineralizatsii [Allelopathic properties of weeds and their plant residues in the process of mineralization]. KubGAU, Krasnodar 09(73), 111–121 (2011). https://ej.kubagro.ru/2011/09/pdf/11. (in Russian)
2. Hatcher, P.E., Melander, B.: Combining physical, cultural and biological methods: prospects for integrated non-chemical weed management strategies. Weed Res. 43(5), 303–322 (2003). https://doi.org/10.1046/j.1365-3180.2003.00352.x
3. Abouziena, H.F., Hagaag, W.M.: Weed control in clean agriculture: a review. Planta Daninha 34(2), 377–392 (2016). https://doi.org/10.1590/S0100-83582016340200019
4. Dorozhko, G.R., Vlasova, O.I., Perederiyeva, V.M.: Sposob obrabotki – faktor regulirovaniya fitosanitarnogo sostoyaniya pochvy i posevov ozimoy pshenitsy na chernozemakh vyshchelochennykh zony umerennogo uvlazhneniya Stavropol’skogo kraya [The tillage method is a factor in regulating the phytosanitary state of the soil and winter wheat crops on leached humus of the moderate humidification zone of the Stavropol Territory]. KubGAU, Krasnodar 04(68), 69–77 (2011). https://ej.kubagro.ru/2011/04/pdf/08. (in Russian)
5. Tseplyaev, V.A., Shaprov, M.N., Tseplyaev, A.N.: Optimizatsiya parametrov tekhnologicheskogo protsessa poverkhnostnoy obrabotki pochvy rotornym avtoprivodnym agregatom [Optimization of the technological process parameters of the surface tillage by the rotary automatic drive unit]. Izvestiya Nizhnevolzhskogo agrouniversitetskogo kompleksa: nauka i vyssheye professional’noye obrazovaniye 1(25), 160–164 (2012). (in Russian)
6. Schütte, G.: Herbicide resistance: promises and prospects of biodiversity for European agriculture. Agric. Hum. Values 20(3), 217–230 (2003). https://doi.org/10.1023/A:1026108900945
7. Bond, W., Grundy, A.C.: Non-chemical weed management in organic farming systems. Weed Res. 41(5), 383–405 (2001). https://doi.org/10.1046/j.1365-3180.2001.00246.x
8. Izmaylov, A.YU., Khort, D.O., Smirnov, I.G., Filippov, R.A., Kutyrëv, A.I.: Analiz parametrov raboty ustroystva dlya gidravlicheskogo udaleniya sornoy rastitel’nosti [Analysis of work parameters of the device for hydraulic removal of weed vegetation]. Inzhenernyye tekhnologii i sistemy. FGBOU VO «MGU im. N.P. Ogarëva», Saransk 29(4), 614–634 (2019). https://doi.org/10.15507/2658-4123.029.201904.614-634. (in Russian)
9. Agricultural and Forestry Machinery. Catalogue of Exporters Czech Republic. Copyright: A.ZeT, Brno (2005)
10. Bezhin, A.I.: Obosnovaniye parametrov i rezhimov raboty kul’tivatornogo agregata dlya sploshnoy obrabotki pochvy [Rationale for parameters and operating modes of the cultivator for complete tillage]. Ph.D. dissertation, Orenburg State Agrarian University, Orenburg (2004), 183 p. (in Russian)
11. Popay, I., Field, R.: Grazing animals as weed control agents. Weed Technol. 10(1), 217–231 (1996). https://doi.org/10.1017/S0890037X00045942
12. Astatkie, T., Rifai, M.N., Havard, P., Adsett, J., Lacko-Bartosova, M., Otepka, P.: Effectiveness of hot water, infrared and open flame thermal units for controlling weeds. Biol. Agric. Hortic. 25(1), 1–12 (2007). https://doi.org/10.1080/01448765.2007.10823205
13. Abdulgalimov, M.M.: Nestandartnyye resheniya i tekhnicheskiye sredstva dlya bor’by s sornyakami v sadakh i vinogradnikakh [Non-standard solutions and technical means for weed control in orchards and vineyards]. DagGAU im. M.M. Dzhambulatova, Makhachkala, 4–6 (2015). (in Russian)
14. Blackshaw, R.E., Anderson, R.L., Lemerle, D.: Cultural weed management. In: Upadhyaya, M.K., Blackshaw, R.E. (eds.) Non-Chemical Weed Management: Principles, Concepts and Technology, 1st edn., CABI, Wallingford, England (2007), pp. 35–47. https://researchoutput.csu.edu.au/en/publications/cultural-weed-management
15. Melikov, I., Kravchenko, V., Senkevich, S., Hasanova, E., Kravchenko, L.: Traction and energy efficiency tests of oligomeric tires for category 3 tractors. In: IOP Conference Series: Earth and Environmental Science, vol. 403, p. 012126 (2019). https://doi.org/10.1088/1755-1315/403/1/012126
16. Senkevich, S., Ivanov, P.A., Lavrukhin, P.V., Yuldashev, Z.: Theoretical prerequisites for subsurface broadcast seeding of grain crops in the conditions of pneumatic seed transportation to the coulters. In: Handbook of Advanced Agro-Engineering Technologies for Rural Business Development, pp. 28–64. IGI Global, Hershey (2019). https://doi.org/10.4018/978-1-5225-7573-3.ch002
17. Lavrukhin, P., Senkevich, S., Ivanov, P.: Placement plants on the field area by seeding machines: methodical aspects assessment rationality. In: Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering, pp. 240–261. IGI Global, Hershey (2020). https://doi.org/10.4018/978-1-7998-1216-6.ch010
18. Tokarev, N.A., Gar’yanova, E.D., Tokareva, N.D., Gulyayeva, G.V.: Sposob bor’by s sornyakami [Weed control method]. Zemledeliye 8, 37–38 (2012). (in Russian)
19. Baerveldt, S., Ascard, J.: Effect of soil cover on weeds. Biol. Agric. Hortic. 17(2), 101–111 (1999). https://doi.org/10.1080/01448765.1999.9754830
20. Latsch, R., Anken, T., Herzog, C., Sauter, J.: Controlling Rumex obtusifolius by means of hot water. Weed Res. 57(1), 16–24 (2017). https://doi.org/10.1111/wre.12233
21. Abdulgalimov, M.M.: Ognevoy kul’tivator [Flame cultivator]. Byul. 14, 2016. RU Patent 2584481 C2. A01M 15/00
22. Abdulgalimov, M.M., Magomedov, F.M., Senkevich, S.E., Umarov, R.D., Melikov, I.M.: Sovershenstvovaniye tekhnologii i sredstv mekhanizatsii dlya bor’by s sornoy rastitel’nost’yu [Improvement of technology and means of mechanization for weed control]. Sel’skokhozyaystvennyye mashiny i tekhnologii 5, 38–42 (2017). https://doi.org/10.22314/2073-7599-2017-5-38-42. (in Russian)
23. Abdulgalimov, M.M.: Ognevoy kul’tivator [Flame cultivator]. Byul. 7, 2019. RU Patent 187387 U1. A01M 15/00
The Lighting Plan: From a Sector-Specific Urbanistic Instrument to an Opportunity of Enhancement of the Urban Space for Improving Quality of Life Cinzia B. Bellone1(&) and Riccardo Ottavi2 1
Urban Planning, DIS, Università degli Studi Guglielmo Marconi, Rome, Italy [email protected] 2 Industrial Engineer, Electrical and Lighting Design, Perugia, Italy [email protected]
Abstract. The urban space and its lighting, environmental sustainability, energy efficiency, attention to public spending, innovation and optimization of public utility services, light pollution, the energy and lighting upgrading of public lighting systems and – most importantly – the growth potential of urban living thanks to a new strategy for the cities of the future, the creation of efficient and effective smart cities: these are the themes that run through the research whose results are presented in this article. These topics are very different from each other but share the urban space, with its features of complexity and modernity. And it is on the city, on the urban space, that this study focuses since, as De Seta writes, “nowadays over half the global population lives in a city, and it is estimated that this percentage will rise to three quarters in 2050” [1]. The present research represents the evolution of what was submitted by the authors for the AIDI conference “XIX Congresso Nazionale AIDI, La luce tra cultura e innovazione nell’era digitale” (the research work was submitted and accepted at the XIX National Conference of AIDI, 21−22 May 2020, Naples, Italy).
Keywords: Sustainable urban development · Smart city · Innovation · Social wellbeing
1 Introduction The complexity of the contemporary city has led to the creation of new analysis, design and planning tools in the discipline of urban technology. These tools relate both to widespread issues – including energy saving, water protection, waste management and the localization of broadcast media – and to local ones, such as traffic congestion, pollution in its different forms, subsoil monitoring, etc. These new tools fill a gap in the general planning method in its traditional meaning: they listen to and interpret specific territorial requirements and try to give them a
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1168–1175, 2021. https://doi.org/10.1007/978-3-030-68154-8_98
The Lighting Plan: From a Sector-Specific Urbanistic Instrument
1169
concrete answer, with the main aim of raising the performance, functional and quality standards of the city [2, 3]. These topics are fully covered by multiple types of plan in sector-specific city planning: the urban traffic plan and the acoustic renewal plan, the timetable/schedule plan and the municipal energy plan, the color plan and the electro-smog renewal plan and, last but not least, the lighting plan.
2 The Lighting Plan To better understand the meaning of the latter tool, here is a definition from an ENEA1 guideline: «The lighting master plan is an urbanistic instrument to regulate all types of lighting for the city; it is a veritable circuit of how the city is supposed to be planned in terms of lighting technique. It has key advantages, as it allows to respect urban fabrics in general, correlating them with a type of proper lighting. The final result is the obtaining and the optimizing of the municipal lighting network, depending on main requirements» [4]. The lighting of a city, therefore public, is a very peculiar component in the structure of the urban system, as it assumes two different identities depending on daytime and night-time. By day, its components are visible objects which have to be mitigated as much as possible in the environmental context, while by night those same elements play an essential role by lighting the urban space, ensuring that the whole city works as during the day and that everything proceeds in terms of quality, efficiency and reliability [5]. Not less important, considering the technologically advanced and innovative role that this engineerable and widespread system can play, this may be considered the first real step towards the formation of the smart city [6]. According to a report by the McKinsey Global Institute, smart cities might reduce crime by 10% and healthcare costs by 15%, save four days otherwise stuck in traffic and 80 L of water per day (Fig. 1) [7].

Fig. 1. Infographic of the report by the McKinsey Global Institute

The real usefulness and importance of a dedicated sector-specific urbanistic instrument – as the lighting plan is – becomes evident. Such an instrument, in fact, can act as a guide in people’s lives within the night urban spaces [8].
In fact, almost all Italian regional legislation – for about two decades now – has imposed on Municipalities the obligation to adopt this sector-specific urbanistic instrument (an obligation which, today, is unfortunately still disregarded by the municipalities themselves). An analysis conducted by ENEA on the municipalities involved in the Lumière Project Network found that less than 20% of them had adopted a lighting plan at the time of the surveys [9].
1 (IT) National agency for new technologies, energy and sustainable economic development.
1170
C. B. Bellone and R. Ottavi
While, on the one hand, there is no consistency of adoption, nor a unique model for covering the issues revolving around urban space lighting, on the other hand it is also true that sensitivity to urban lighting issues is increasing day by day: we are witnessing a real process of change, felt not only in our country but all over Europe, in response to citizens' increasing demand to see the quality of life in urban public spaces improved [10, 11].
3 Proposal for an Experimental Methodological Analysis
In order to scientifically analyze the different urban contexts and the degree to which the above principles, within the lighting plans, may or may not be implemented in public lighting, a scale of values is proposed as a valid instrument for objective research.
Fig. 2. Matrix chart no.1 “Light quality”
The two levers of analysis will be quality and innovation. We will start by investigating what it means to generate quality in the urban space through light, and then understand how it can represent a real innovation for the cities of the future.
Returning to the methodological approach explained above, we report a matrix chart on “light quality” (Fig. 2), highlighting the key points and the practical implementation connections which can be planned in order to determine a planning of light useful to men and their space, so that light truly becomes an instrument for city planning, an instrument to build the city. The matrix chart “innovating with light” (Fig. 3) then shows the objective key points which will act as a guide in the planning research of the next paragraph.
Fig. 3. Matrix chart no.2 “Innovating with light”
4 Two Case Studies in Comparison: Florence and Brussels
The answers provided in Europe are quite varied2 and, among them, the French-Belgian experience deserves special attention: the connection created there between light, urban space and aesthetics has made it an example of contemporary urban quality. The plan lumière of the Région de Bruxelles-Capitale3 is particularly significant. It is also referred to as a «second generation plan» for its choice to put the users – the people living the night urban space and the way they live it – at the center of the study, rather than making the city spectacular [12]. The merit of this plan is not confined to providing a global vision of lighting at the regional level; it also accompanies the actions carried out in the areas defined as priorities by the Brussels government in the 2014−2019 declaration4 with specific proposals for these areas of prime importance for regional development. In order to analyze the key points of this lighting plan, during the research project some technical sheets have been drawn up to synthesize and frame the peculiarities of this urbanistic instrument, reporting its general data, its technical and regulatory features and the purposes of the lighting plan itself.
2 In the European context, Brussels, Lyon, Copenhagen, Eindhoven, Rotterdam and Barcelona stand out.
3 Adopted by the municipality of Brussels in March 2016.
4 Accord de Majorité 2014−2019.
Italy, too, offers some virtuous examples of light planning5, among which the P.C.I.P. (Public Lighting Municipal Plan) of Florence stands out for the priority it gives to the need for, and concept of, light quality6. What emerges from the analysis of Florence's plan is indeed the pragmatization of the concept of quality, which can be found in the issues covered: the plant-engineering technological renewal of the municipal lighting system, the modification of the road network and of urbanization (pedestrian and cycling urban areas), energy saving, environmental protection and, not least, green procurement [13]. As for the previous case study, a specific technical sheet of the urbanistic instrument has also been drawn up for the city of Florence. Both plans, and both planning modes, innovate the subject by mutating the very concept of the lighting plan; despite implementing different approaches to the topic, they both move in the direction of giving special attention to the most functional themes of the cities of the future, with the aim of improving the living standards of urban space users. In this regard, some particularly significant elements effectively show these different approaches. In the P.C.I.P. of Florence, the project-based structure stands out immediately, involving lighting mimic diagrams, reports, risk analyses and graphic drawings: an important body of documentation that goes towards determining the structure of the subsequent project phases of technological upgrading and modernization. In the case of the plan lumière, even if a broader approach at the level of context analysis can be perceived, Brussels chose a more compact form and a very manual-like representation of its urban space.
The plan lumière of the Brussels-Capital Region is a real guideline for the lighting of the urban space; everything is based on a very innovative and efficient method, ready for use by lighting designers for direct implementation on the territory. Florence, in turn, chooses the user's visual perception of the night urban space [14] as the heart of its plan; all the planning follows this fil rouge, with particular attention to protecting the artistic and architectural context of the space to be illuminated: the city of Florence. While the lighting plan of Florence shows innovation in planning the quality of urban light in favor of its users – even imagining them as surrounded by a real open-air night museum itinerary – Brussels adopts innovative layered planning techniques, directly involving the citizens, with all their sensations and desires for comfort and well-being, in the plan itself7.
5 Among the most relevant Italian experiences, some of which have even tried to overcome the limitations imposed by the concept of a general development urbanistic instrument, are Florence, Udine, Turin, Bergamo, Rome and Milan.
6 Adopted by the municipality of Florence on 12 September 2016.
7 Organization's program of the marche exploratoire nocturne (14/04/2016) – a participatory study tool involving a sort of night walk through the night urban space. (Source: Lighting Plan of the Brussels-Capital Region, p. 125 – Partie Technique).
In the Plan Lumière of Brussels the structure of the discipline of Light Urbanism and the methodology of the Light Master Plan8 are clearly evident. It is also noteworthy that the municipality of Brussels aims for dynamic lighting through a chronobiological approach (even creating moments of dark therapy) and a sensitivity focused on people's circadian rhythms – on how citizens can perceive the lighting, discovering and understanding the functionality of the urban space around them. Finally, this special attention towards man also extends to the surrounding environment, with the implementation of lighting control measures in harmony with flora and fauna [15]. What clearly emerges from the analysis and comparison of these two plans is that both use techniques that are at once consolidated and innovative in planning urban lighting; yet, although they are two de facto capitals (one of the Italian Renaissance, the other of a united Europe), both still lack the magic words – smart city – around which to converge on innovation, on the opportunities of lighting as an “enabling technology” [16] capable of shaping the future of the smart city.
5 Results of the Research and Future Visions
Both planning experiences are examples of organic, careful and, at the same time, innovative planning modes. On that basis, it is desirable that the culture of good lighting be increasingly extended and implemented by local authorities, so that lighting can actually be considered an element of characterization capable of deeply affecting the urban space, and no longer a mere lighting plant [17] (an important signal has already come from the world of standardization, with the birth of the new CIE9 234:2019 – A Guide to Urban Lighting Masterplanning). In other words, it is proposed to imagine light as an instrument capable of improving society in its everyday life, by redesigning the urban space and making it more responsive to the needs of the community. The results of the research have led to the definition of 15 guidelines standing as good practices for the conception and implementation of innovative, high-quality lighting plans focused on men and their relationship with the urban space in all its forms, functions and conditions. Here is an overview of the above guidelines, indicating the urgency of intervening in this field with a renewed approach of innovation and high quality: (a) The choice of the model of plan and the implementation methodology (guidelines 1-2-3-15). A model of plan shall be identified that combines functionality, aesthetics and sociality, organic and proceduralized. The model shall begin from the pre-completion status and present a layered structure, so that the elements can be overlapped and, at the same time, analyzed separately.
8 The methodology of the Lighting Master Plan was created by Roger Narboni, the lighting designer who outlined the first plan of Brussels in 1997.
9 CIE: Commission Internationale de l'Éclairage.
(b) Urban spaces to be preserved (guidelines 4-5-6). The plan, attentive not only to substantial aspects but also to aesthetic-formal ones, shall aim at preserving urban spaces, enhancing their peculiarities and improving them by qualifying and redesigning the night landscape. (c) Ecological and social aspects (guidelines 7-8-10). The plan shall be focused on men and their natural biorhythm, guiding them in their space, but it shall also meet the needs of the night social life of its users, helping to guarantee their physical and psychological safety. Renewed attention shall also be given to respect for the local flora and fauna, by regulating the lighting, even providing for its absence or reduced intensity. (d) The team and the user's involvement (guidelines 9-11). Different forms of collaboration shall be planned for the plan's implementation: on the one hand, in view of its multidisciplinary nature, collaboration with a team of professionals specialized in the respective areas of intervention; on the other, collaboration with the users of the night urban space, whose demands must be considered. (e) A responsible light (guidelines 12-13). The plan, which shall be economically and financially sustainable, should ideally be extended to all visible urban lighting, so as to involve also private outdoor lighting and/or lighting not strictly for public use. (f) An innovative light (guideline 14). Urban lighting shall evolve into an “enabling technology” [16], putting the electrical infrastructure at the service of the smart city.
And if it is true, as it is, that «the city is the most typical expression of civilization on the territory; the city as a group of building facilities and infrastructures, but also as an aggregation of inhabitants in their roles as citizens taking part in the urban phenomenon» [18], then men are the starting point for such an operation: men within their spaces, men in their daily, multiple forms of expression, aggregation and sharing. The involvement and collaborative commitment requested of the sector-specific professionals (engineering, architectural, urbanistic, lighting, etc.), together with the public administrations, aim at promoting a real implementation of this new concept of urban space through the planning of light, reimagined in terms of innovation and quality [19, 20].
References 1. De Seta, C. (ed.): La città: da Babilonia alla smart city. Rizzoli, Milano (2017) 2. Talia, M. (ed.): La pianificazione del territorio. Il sole 24 ore, Milano (2003) 3. Rallo, D. (ed.): Divulgare l’urbanistica. Alinea, Firenze (2002)
4. Cellucci, L., Monti, L., Gugliermetti, F., Bisegna, F.: Proposta di una procedura schematizzata per semplificare la redazione dei Piani Regolatori di Illuminazione Comunale (PRIC) (2012). http://www.enea.it
5. Enel-Federelettrica: Guida per l'esecuzione degli impianti di illuminazione pubblica, pp. 25–26. Roma (1990)
6. Spaini, M.: Inquadramento normo-giuridico della pubblica illuminazione. Atto di convegno AIDI-ENEA, Milano (2014)
7. McKinsey Global Institute: Smart cities: Digital solutions for a more livable future, report (2018). http://www.mckinsey.com
8. Süss, M.: I piani della luce – Obiettivi, finalità e opportunità per le Pubbliche Amministrazioni. Atto di convegno AIDI-ENEA (2014)
9. Progetto Lumière ENEA – Report RdS/2010/250
10. Terzi, C. (ed.): I piani della luce. Domus «iGuzzini», Milano (2001)
11. Ratti, C., with Claudel, M.: La città di domani. Come le reti stanno cambiando il futuro urbano. Einaudi, Torino (2017)
12. Plan Lumière de la Région de Bruxelles-Capitale (www.mobilite-mobiliteit.brussels/www.radiance35.it/Bruxelles Mobilité)
13. Piano Comunale di Illuminazione Pubblica della Città di Firenze (www.silfi.it/P.O. Manufatti e Impianti Stradali, comune di Firenze)
14. Gehl, J.: Vita in città. Maggioli, Rimini (2012)
15. Narboni, R.: Luce e Paesaggio (Italian edition edited by Palladino, P.). Tecniche Nuove, Milano (2006)
16. Gianfrate, V., Longo, D. (eds.): Urban Micro-Design. FrancoAngeli, Milano (2017)
17. Frascarolo, M. (ed.): Manuale di progettazione illuminotecnica (section by Bordonaro, E.). Mancosu Edizioni, Architectural Book and Review «TecnoTipo», Roma (2011)
18. Dioguardi, G.: Ripensare la città, p. 45. Donzelli, Roma (2001)
19. Bellone, C., Ranucci, P., Geropanta, V.: The ‘Governance’ for smart city strategies and territorial planning. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. AISC, pp. 76–86. Springer, Berlin (2018)
20. Bellone, C., Geropanta, V.: The ‘Smart’ as a project for the city: smart technologies for territorial management planning strategies. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. AISC, vol. 866, pp. 66–75. Springer, Berlin (2019)
PID Controller Design for BLDC Motor Speed Control System by Lévy-Flight Intensified Current Search Prarot Leeart, Wattanawong Romsai, and Auttarat Nawikavatan(&) Department of Electrical Engineering, Faculty of Engineering, Southeast Asia University, 19/1 Petchkasem Road, Nongkhangphlu, Nonghkaem, Bangkok 10160, Thailand [email protected], [email protected], [email protected]
Abstract. This paper presents the optimal proportional-integral-derivative (PID) controller design for the brushless direct current (BLDC) motor speed control system by using the Lévy-flight intensified current search (LFICuS). Based on modern optimization, the LFICuS is one of the most powerful metaheuristic optimization techniques, developed from the behavior of electric current flowing through electric networks. The error between the reference speed and the actual speed is set as the objective function to be minimized by the LFICuS according to the constraint functions formed from the design specification. As a result, it was found that the PID controller can be optimally designed by the LFICuS. The speed responses of the controlled BLDC motor speed system are very satisfactory with respect to the given design specification. Keywords: PID controller · Lévy-flight intensified current search · BLDC motor speed control system · Modern optimization
1 Introduction
Since the 1970s, the brushless direct current (BLDC) motor has been widely used in different applications, for example, industrial automation, automotive, aerospace, instrumentation and appliances [1]. This is because the BLDC motor retains the characteristics of a brushed DC motor but eliminates the commutator and brushes. It can be driven by a DC voltage source, while the current commutation is done electronically by solid-state switches [2]. From literature reviews, the BLDC motor has many advantages over brushed DC motors; among them, a higher speed range, higher efficiency, better speed-versus-torque characteristics, longer operating life, noiseless operation and higher dynamic response can be found in BLDC usage [1–3]. The BLDC motor speed control can be efficiently operated under the feedback control loop with the proportional-integral-derivative (PID) controller [4–6]. Moving toward a new era of control synthesis, optimal controller design has shifted from the conventional paradigm to a new framework based on modern optimization, using powerful metaheuristics as optimizers [7, 8]. For example, in the case of © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1176–1185, 2021. https://doi.org/10.1007/978-3-030-68154-8_99
BLDC motor speed control, an optimal PID controller was successfully designed by the genetic algorithm (GA) [6], tabu search [6], cuckoo search (CS) [9] and the intensified current search (ICuS) [10]. The Lévy-flight intensified current search (LFICuS), one of the most efficient metaheuristics, is the latest modified version of the ICuS. It utilizes random numbers drawn from the Lévy-flight distribution and an adjustable search radius mechanism to enhance the search performance and speed up the search process. In this paper, the LFICuS is applied to design an optimal PID controller for the BLDC motor speed control system. This paper consists of five sections. After the introduction given in Sect. 1, the BLDC motor control loop with the PID controller is described in Sect. 2. The problem formulation of the LFICuS-based PID controller design is illustrated in Sect. 3. Results and discussions are given in Sect. 4. Finally, conclusions are drawn in Sect. 5.
2 BLDC Motor Control Loop The BLDC motor control loop with the PID controller can be represented by the block diagram as shown in Fig. 1, where R(s) is reference input signal, C(s) is output signal, E(s) is error signal between R(s) and C(s), U(s) is control signal and D(s) is disturbance signal.
Fig. 1. BLDC motor control loop.
2.1 BLDC Motor Model
Referring to Fig. 1, the BLDC motor can be represented by the schematic diagram shown in Fig. 2. The modeling of the BLDC motor is then formulated as follows. The phase voltages van, vbn and vcn are expressed in (1), where ia, ib and ic are the phase currents, Ra, Rb and Rc the resistances, La, Lb and Lc the inductances and ean, ebn and ecn the back emfs of phases a, b and c, respectively [1, 3]. The back emfs in (1) are stated in (2) as a function of rotor position, where θe is the electrical rotor angle, Ke is the back emf constant of each phase, ω is the motor angular velocity in rad/s and fa, fb and fc represent the function of rotor position of phases a, b and c, respectively.
Fig. 2. Schematic diagram of BLDC motor.
\begin{aligned} v_{an} &= R_a i_a + L_a \frac{di_a}{dt} + e_{an} \\ v_{bn} &= R_b i_b + L_b \frac{di_b}{dt} + e_{bn} \\ v_{cn} &= R_c i_c + L_c \frac{di_c}{dt} + e_{cn} \end{aligned} \quad (1)

\begin{aligned} e_{an} &= K_e \omega f_a(\theta_e) \\ e_{bn} &= K_e \omega f_b(\theta_e - 2\pi/3) \\ e_{cn} &= K_e \omega f_c(\theta_e - 4\pi/3) \end{aligned} \quad (2)
The electromagnetic torque Te, depending on the currents and back emf voltages, is expressed in (3), where Tea, Teb and Tec represent the per-phase electric torques. The relation between the speed and the torque is then stated in (4), where Tl is the load torque, J is the moment of inertia and B is the viscous friction. In general, the simple arrangement of a symmetrical (balanced) 3-phase wye (Y) connection shown in Fig. 2 allows the per-phase concept. With this symmetrical arrangement, the mechanical time constant τm and the electrical time constant τe of the BLDC motor can be formulated as expressed in (5) [5], where Kt is the torque constant.

T_e = T_{ea} + T_{eb} + T_{ec} = \frac{e_{an} i_a + e_{bn} i_b + e_{cn} i_c}{\omega} \quad (3)

T_e = J \frac{d\omega}{dt} + B\omega + T_l \quad (4)

\tau_m = \frac{J \sum R_a}{K_t K_e} = \frac{J (3R_a)}{K_t K_e}, \qquad \tau_e = \frac{L_a}{\sum R_a} = \frac{L_a}{3R_a} \quad (5)
Commonly, the BLDC motor is driven from a DC input voltage via a particular inverter (or power amplifier). Such an amplifier can be approximated by a first-order model. Therefore, the mathematical model of the BLDC motor can be formulated as the transfer function stated in (6), where KA is the amplifier constant and τA is the amplifier time constant. From (6), the BLDC motor model with power amplifier can be considered a third-order system.

G_p(s) = \frac{\Omega(s)}{V(s)} = \frac{K_A}{\tau_A s + 1} \cdot \frac{1/K_e}{\tau_m \tau_e s^2 + \tau_m s + 1} = \frac{b_0}{a_3 s^3 + a_2 s^2 + a_1 s + a_0} \quad (6)
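The chain from motor constants to transfer-function coefficients in Eqs. (5)-(6) can be sketched numerically. All motor and amplifier constants below are illustrative assumptions, not values from the paper; the coefficient expressions simply expand the product in Eq. (6).

```python
# Sketch of Eqs. (5)-(6): time constants of a Y-connected BLDC motor and the
# third-order transfer-function coefficients including a first-order amplifier.
# Every numeric constant here is an assumed placeholder, not from the paper.

Ra = 0.57      # per-phase resistance [ohm] (assumed)
La = 1.5e-3    # per-phase inductance [H] (assumed)
J  = 2.0e-4    # rotor inertia [kg*m^2] (assumed)
Kt = 0.11      # torque constant [N*m/A] (assumed)
Ke = 0.11      # back-emf constant [V*s/rad] (assumed)
KA = 1.0       # amplifier gain (assumed)
tauA = 5e-3    # amplifier time constant [s] (assumed)

# Eq. (5): sum of phase resistances for the Y connection is 3*Ra
tau_m = J * (3 * Ra) / (Kt * Ke)   # mechanical time constant
tau_e = La / (3 * Ra)              # electrical time constant

# Eq. (6) expanded: (tauA*s + 1)(tau_m*tau_e*s^2 + tau_m*s + 1)
b0 = KA / Ke
a3 = tauA * tau_m * tau_e
a2 = tau_m * tau_e + tauA * tau_m
a1 = tau_m + tauA
a0 = 1.0
```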
In this work, the BLDC motor model will be obtained by the MATLAB System Identification Toolbox [11]. The testing rig of the BLDC motor system, including a brushless DC motor of 350 W, 36 VDC, 450 rpm and a power amplifier (driver), is set up as shown in Fig. 3. The experimental speed data at 280, 320 and 360 rpm are tested and recorded by a digital storage oscilloscope and a PC. The speed data at 320 rpm, taken as the operating point, will be used for model identification, while those at 280 and 360 rpm will be used for model validation. Based on the third-order model in (6), the MATLAB System Identification Toolbox provides the BLDC motor model stated in (7). Results of model identification and validation are depicted in Fig. 4. It was found that the model in (7) shows very good agreement with the actual dynamics (sensory data) of the BLDC motor. The BLDC motor model in (7) will be used as the plant model Gp(s) shown in Fig. 1.
Fig. 3. BLDC motor testing rig. (Diagram: 36 VDC/10 A power supply → power amplifier/driver (350 W, 36 VDC, 10 A) → BLDC motor (350 W, 36 VDC, 450 rpm) → tachogenerator (Vout = 0–10 V) → DAQ NI USB-6008 → PC via USB.)
G_p(s) = \frac{0.5725}{s^3 + 3.795\, s^2 + 5.732\, s + 0.5719} \quad (7)
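The identified model in Eq. (7) can be checked with a simple open-loop step simulation. The sketch below integrates the third-order system in controllable canonical form with forward Euler; the step size, horizon and integration scheme are our assumptions (a proper study would use an ODE solver), while the coefficients come from Eq. (7).

```python
# Open-loop unit-step response of Gp(s) in Eq. (7) via forward-Euler
# integration of the phase-variable (controllable canonical) form.
b0 = 0.5725
a0, a1, a2 = 0.5719, 5.732, 3.795   # denominator coefficients (a3 = 1)

dt, T = 1e-3, 100.0                  # assumed step and horizon
x0 = x1 = x2 = 0.0                   # phase-variable states
u = 1.0                              # unit step input
for _ in range(int(T / dt)):
    dx2 = u - a0 * x0 - a1 * x1 - a2 * x2
    x0, x1, x2 = x0 + dt * x1, x1 + dt * x2, x2 + dt * dx2
y_final = b0 * x0
# DC gain check: Gp(0) = 0.5725 / 0.5719, i.e. about 1.001
```

The slow open-loop pole (roughly s ≈ −0.1) is consistent with the long rise time reported for the uncontrolled motor in Sect. 4.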
Fig. 4. Results of BLDC motor model identification.
2.2 PID Controller
With regard to Fig. 1, the PID controller model Gc(s) in the time domain and s-domain is stated in (8) and (9), where e(t) is the error signal, u(t) is the control signal, Kp is the proportional gain, Ki is the integral gain and Kd is the derivative gain. In the control loop, the PID controller Gc(s) receives E(s) and generates U(s) in order to control the plant (BLDC motor) Gp(s), producing C(s) that follows R(s) while simultaneously regulating against D(s).

u(t) = K_p e(t) + K_i \int e(t)\,dt + K_d \frac{de(t)}{dt} \quad (8)

G_c(s) = K_p + \frac{K_i}{s} + K_d s \quad (9)
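A minimal discrete-time realization of the PID law in Eq. (8) can look as follows. The rectangular integral, backward-difference derivative, and the placeholder gains and sample time are our assumptions, not the paper's implementation.

```python
# Discrete sketch of Eq. (8): u = Kp*e + Ki*integral(e) + Kd*de/dt,
# with a rectangular integral and a backward-difference derivative.
class PID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_e = None          # derivative suppressed on the first sample

    def update(self, e):
        self.integral += e * self.dt
        de = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * de

pid = PID(Kp=1.0, Ki=0.5, Kd=0.1, dt=0.01)   # placeholder gains (assumed)
u0 = pid.update(1.0)   # first call on a unit error: 1.0 + 0.5*0.01 = 1.005
```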
3 Problem Formulation
The LFICuS-based PID controller design for the BLDC motor speed control system can be formulated as represented in Fig. 5. Based on modern optimization, the objective function f(Kp, Ki, Kd) is set as the sum-squared error (SSE) between the reference input r(t) and the speed output c(t), as stated in (10). The objective function f(·) = SSE is fed to the LFICuS block to be minimized by searching for appropriate values of the PID controller's parameters, i.e., Kp, Ki and Kd, within the corresponding search spaces and according to the constraint functions set from the design specification, as expressed in (11), where [Kp_min, Kp_max], [Ki_min, Ki_max] and [Kd_min, Kd_max] are the lower and upper bounds of Kp, Ki and Kd, tr is the rise time and tr_max its maximum allowance, Mp is the maximum percent overshoot and Mp_max its maximum allowance, ts is the settling time and ts_max its maximum allowance, and ess is the steady-state error and ess_max its maximum allowance.
Fig. 5. LFICuS-based PID controller design for BLDC motor speed control system.
\min \; f(K_p, K_i, K_d) = \sum_{i=1}^{N} \left[ r_i(t) - c_i(t) \right]^2 \quad (10)

\text{subject to } t_r \le t_{r\_max},\; M_p \le M_{p\_max},\; t_s \le t_{s\_max},\; e_{ss} \le e_{ss\_max},
K_{p\_min} \le K_p \le K_{p\_max},\; K_{i\_min} \le K_i \le K_{i\_max},\; K_{d\_min} \le K_d \le K_{d\_max} \quad (11)
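The constrained objective of Eqs. (10)-(11) can be sketched in code. The paper does not state how the time-domain constraints are enforced inside the search; folding them into the objective as penalty terms is one common choice, so the penalty scheme and its multiplier below are assumptions.

```python
# Sketch of Eqs. (10)-(11): SSE objective with the time-domain specs
# handled as penalty terms (assumed scheme; the paper does not detail it).

def sse(r, c):
    # Eq. (10): f = sum_i (r_i - c_i)^2 over the simulated samples
    return sum((ri - ci) ** 2 for ri, ci in zip(r, c))

def objective(r, c, specs, limits):
    """specs: measured {'tr','Mp','ts','ess'} from a simulated response;
    limits: their maximum allowances from Eq. (11)."""
    penalty = sum(max(0.0, specs[k] - limits[k]) for k in limits)
    return sse(r, c) + 1e6 * penalty   # large multiplier: assumed value

# All specs satisfied -> the objective reduces to the plain SSE.
f0 = objective([1, 1, 1], [0.0, 0.5, 0.9],
               {'tr': 2.0, 'Mp': 3.0, 'ts': 3.0, 'ess': 0.0},
               {'tr': 2.5, 'Mp': 5.0, 'ts': 4.0, 'ess': 0.01})
```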
In this work, the LFICuS algorithm, the latest modified version of the ICuS algorithm [10], is applied to optimize the PID controller for the BLDC motor speed control system. The LFICuS utilizes random numbers drawn from the Lévy-flight distribution to generate the neighborhood members as feasible solutions in each search iteration. Such a Lévy-flight distribution L can be approximated by (12) [12],
where s is the step length, λ is an index and Γ(λ) is the Gamma function expressed in (13). In the LFICuS algorithm, the adjustable search radius (AR) and adjustable neighborhood member (AN) mechanisms are also conducted by setting the initial search radius R = Ω (the search space). The LFICuS algorithm for optimizing the PID controller for the BLDC motor speed control system can be described step by step as follows.

L \approx \frac{\lambda\, \Gamma(\lambda) \sin(\pi\lambda/2)}{\pi} \cdot \frac{1}{s^{1+\lambda}} \quad (12)

\Gamma(\lambda) = \int_0^{\infty} t^{\lambda-1} e^{-t}\, dt \quad (13)
Step-0 Initialize the objective function f(·) = SSE in (10) and the constraint functions in (11); the search space Ω = [Kp_min, Kp_max], [Ki_min, Ki_max] and [Kd_min, Kd_max]; the memory lists (ML) W, Γk and Ξ = ∅; the maximum allowance of solution cycling jmax; the number of initial solutions N; the number of neighborhood members n; the search radius R = Ω; and k = j = 1.
Step-1 Uniformly randomize the initial solutions Xi = {Kp, Ki, Kd} within Ω. Evaluate f(Xi) via (10) and (11), then rank and store the Xi in W.
Step-2 Let x0 = Xk be the selected initial solution. Set Xglobal = Xlocal = x0.
Step-3 Generate new solutions xi = {Kp, Ki, Kd} by Lévy-flight random numbers per (12) and (13) around x0 within R. Evaluate f(xi) via (10) and (11), and set the best one as x*.
Step-4 If f(x*) < f(x0), keep x0 in Γk, update x0 = x* and set j = 1. Otherwise, keep x* in Γk and update j = j + 1.
Step-5 Activate the AR mechanism by R = ρR, 0 < ρ < 1, and invoke the AN mechanism by n = αn, 0 < α < 1.
Step-6 If j < jmax, go back to Step-3.
Step-7 Update Xlocal = x0 and keep Xglobal in Ξ.
Step-8 If f(Xlocal) < f(Xglobal), update Xglobal = Xlocal.
Step-9 Update k = k + 1 and set j = 1. Let x0 = Xk be the selected initial solution.
Step-10 If k ≤ N, go back to Step-2. Otherwise, stop the search process and report the best solution Xglobal = {Kp, Ki, Kd} found.
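The procedure above can be sketched on a toy problem. Several choices below are ours, not the paper's: Lévy steps are drawn with Mantegna's algorithm (a standard numerical realization of a distribution like Eq. (12); the paper does not specify its sampler), the objective is a toy quadratic standing in for the SSE, the AR/AN schedules shrink every iteration rather than at staged checkpoints, the memory lists are omitted, and all parameter values are illustrative.

```python
import math
import random

def levy_step(lam=1.5):
    # Mantegna's algorithm: step = u / |v|^(1/lam),
    # with u ~ N(0, sigma_u^2) and v ~ N(0, 1).
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0.0, sigma_u) / abs(random.gauss(0.0, 1.0)) ** (1 / lam)

def f(x):
    # Toy quadratic objective standing in for the SSE of Eq. (10).
    return sum((xi - 1.0) ** 2 for xi in x)

def lficus(bounds, n_dirs=10, n_members=20, j_max=30, rho=0.5, alpha=0.5):
    random.seed(0)                                   # reproducible sketch
    lo, hi = zip(*bounds)
    best = None
    for _ in range(n_dirs):                          # Steps 1-2, 9-10: N directions
        x0 = [random.uniform(l, h) for l, h in bounds]
        R = [h - l for l, h in bounds]               # Step 0: radius = search space
        n, j = float(n_members), 1
        while j <= j_max:                            # Steps 3-6: inner search
            cand = [[min(max(x0[k] + 0.01 * R[k] * levy_step(), lo[k]), hi[k])
                     for k in range(len(x0))]
                    for _ in range(max(1, int(n)))]
            x_star = min(cand, key=f)                # Step 3: best neighbor
            if f(x_star) < f(x0):                    # Step 4: accept / count cycling
                x0, j = x_star, 1
            else:
                j += 1
            R = [rho * r for r in R]                 # Step 5: AR mechanism
            n = alpha * n                            # Step 5: AN mechanism
        if best is None or f(x0) < f(best):          # Steps 7-8: global update
            best = x0
    return best
```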
4 Results and Discussions
To optimize the PID controller of the BLDC motor speed control system, the LFICuS algorithm was coded in MATLAB version 2017b (License No.#40637337) and run on an Intel(R) Core(TM) i7-10510U [email protected] GHz, 2.30 GHz, 16.0 GB-RAM machine. The search parameters of the LFICuS were set from a preliminary study, i.e., initial search radius R = Ω = [Kp_min, Kp_max], [Ki_min, Ki_max] and [Kd_min, Kd_max], step length
s = 0.01, index λ = 0.3, number of initial neighborhood members n = 100 and number of search directions N = 50. Each search direction is terminated at the maximum iteration (Max_Iter) of 100. The number of states of the AR and AN mechanism activation is h = 2: state (i), at the 50th iteration, R = 50% of Ω and n = 50; state (ii), at the 75th iteration, R = 25% of Ω and n = 25. The constraint functions (11) are set as expressed in (14). 50 trials are run to search for the optimal parameters of the PID controller.

\text{subject to } t_r \le 2.50\ \text{s},\; M_p \le 5.00\%,\; t_s \le 4.00\ \text{s},\; e_{ss} \le 0.01\%,
0 \le K_p \le 10.00,\; 0 \le K_i \le 5.00,\; 0 \le K_d \le 5.00 \quad (14)
After the search process stopped, the LFICuS successfully provided the PID controller for the BLDC motor speed control system stated in (15). The convergent rates of the LFICuS for the PID design optimization over the 50 trial runs are depicted in Fig. 6. The step-input and step-disturbance responses of the BLDC motor speed control system without and with the PID controller designed by the LFICuS are depicted in Fig. 7 and Fig. 8, respectively.
Fig. 6. Convergent rates of the LFICuS for the PID design optimization.
G_c(s) = 8.5875 + \frac{0.8164}{s} + 1.8164\, s \quad (15)
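The tuned controller of Eq. (15) can be sanity-checked against the identified plant of Eq. (7) with a rough closed-loop simulation. The forward-Euler scheme, sample time, horizon and first-sample derivative suppression below are our assumptions, not the authors' simulation setup; the gains and plant coefficients are from Eqs. (15) and (7).

```python
# Closed-loop unit-step response: plant of Eq. (7) under the PID of Eq. (15),
# unity feedback, forward-Euler integration (rough sketch).
Kp, Ki, Kd = 8.5875, 0.8164, 1.8164      # Eq. (15)
b0 = 0.5725
a0, a1, a2 = 0.5719, 5.732, 3.795        # Eq. (7) denominator (a3 = 1)

dt, T = 1e-4, 20.0                        # assumed step and horizon
x0 = x1 = x2 = 0.0                        # plant phase-variable states
integ, prev_e = 0.0, None
for _ in range(int(T / dt)):
    y = b0 * x0
    e = 1.0 - y                           # unit-step reference
    integ += e * dt
    de = 0.0 if prev_e is None else (e - prev_e) / dt
    prev_e = e
    u = Kp * e + Ki * integ + Kd * de     # Eq. (8)
    dx2 = u - a0 * x0 - a1 * x1 - a2 * x2
    x0, x1, x2 = x0 + dt * x1, x1 + dt * x2, x2 + dt * dx2
y_final = b0 * x0
# Integral action should drive the output to the reference.
```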
Referring to Fig. 7, it was found that the BLDC motor without the PID controller gives tr = 20.39 s., Mp = 0.00%, ts = 36.74 s., and ess = 0.00%. The BLDC motor speed control system with the PID controller optimized by the LFICuS in (15) yields tr = 2.44 s., Mp = 2.47%, ts = 3.28 s., and ess = 0.00%. From Fig. 8, it was found that the BLDC motor without the PID controller cannot regulate the output response from
the external disturbance. However, the BLDC motor speed control system with the PID controller designed by the LFICuS in (15) can successfully regulate the output response from the external disturbance with the maximum percent overshoot from regulation Mp_reg = 10.43% and regulating time treg = 19.78 s.
Fig. 7. Step-input responses of the BLDC motor speed control system.
Fig. 8. Step-disturbance responses of the BLDC motor speed control system.
5 Conclusions
The application of the LFICuS to design an optimal PID controller for the BLDC motor speed control system has been presented in this paper. The LFICuS algorithm, developed from the ICuS, is one of the most efficient metaheuristic optimization techniques. Based on modern optimization, the sum-squared error between the reference input and the speed output of the BLDC motor speed control system has been set as the objective function for minimization. In addition, the desired specification and search bounds have been correspondingly set as the constraint functions. As a result, the optimal PID controller for the BLDC motor speed control system has been successfully obtained by the LFICuS algorithm. The step-input and step-disturbance responses of the controlled BLDC motor speed system are very satisfactory with respect to the design specification. The advantage of the proposed design method is that users can apply it to optimally design any controller for any plant of interest. However, its disadvantage is the boundary setting, which is problem-dependent. For future research, the LFICuS algorithm will be applied to design PIDA, FOPID and FOPIDA controllers for many real-world systems.
References
1. Yedamale, P.: Brushless DC (BLDC) motor fundamentals. AN885, Microchip Technol. Inc. 20, 3–15 (2003)
2. Vas, P.: Parameter Estimation, Condition Monitoring and Diagnosis of Electrical Machines. Oxford University Press, Oxford (1993)
3. Tashakori, A., Ektesabi, M., Hosseinzadeh, N.: Modeling of BLDC motor with ideal back-EMF for automotive applications. In: The World Congress on Engineering (2011)
4. Othman, A.S., Mashakbeh, A.: Proportional integral and derivative control of brushless DC motor. Eur. J. Sci. Res. 35(4), 198–203 (2009)
5. Patel, V.K.R.S., Pandey, A.K.: Modeling and performance analysis of PID controlled BLDC motor and different schemes of PWM controlled BLDC motor. Int. J. Sci. Res. Publ. 3, 1–14 (2013)
6. Boonpramuk, M., Tunyasirut, S., Puangdownreong, D.: Artificial intelligence-based optimal PID controller design for BLDC motor with phase advance. Indonesian J. Electr. Eng. Inform. 7(4), 720–733 (2019)
7. Zakian, V.: Control Systems Design: A New Framework. Springer-Verlag (2005)
8. Zakian, V., Al-Naib, U.: Design of dynamical and control systems by the method of inequalities. IEEE Int. Conf. 120, 1421–1427 (1973)
9. Puangdownreong, D., Kiree, C., Kumpanya, D., Tunyasrirut, S.: Application of cuckoo search to design optimal PID/PI controllers of BLDC motor speed control system. In: Global Engineering & Applied Science Conference, pp. 99–106 (2015)
10. Puangdownreong, D., Kumpanya, D., Kiree, C., Tunyasrirut, S.: Optimal tuning of 2DOF–PID controllers of BLDC motor speed control system by intensified current search. In: Global Engineering & Applied Science Conference, pp. 107–115 (2015)
11. Ljung, L.: System Identification Toolbox for Use with MATLAB. The MathWorks (2007)
12. Yang, X.S.: Flower pollination algorithm for global optimization. Unconventional Comput. Nat. Comput. Lect. Notes Comput. Sci. 7445, 240–249 (2012)
Intellectualized Control System of Technological Processes of an Experimental Biogas Plant with Improved System for Preliminary Preparation of Initial Waste Andrey Kovalev1, Dmitriy Kovalev1, Vladimir Panchenko1,2, Valeriy Kharchenko1, and Pandian Vasant3 1
Federal Scientific Agroengineering Center VIM, 1st Institutskij proezd 5, 109428 Moscow, Russia [email protected], [email protected] 2 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russia 3 Universiti Teknologi PETRONAS, Tronoh, 31750 Ipoh, Perak, Malaysia [email protected]
Abstract. Obtaining biogas is economically justified and preferable when processing a constant stream of waste. One of the most promising and energy-efficient methods of preparing a substrate for fermentation is processing it in a vortex layer apparatus (VLA). The aim of the work is to develop an intellectualized control system of technological processes of an experimental biogas plant with an improved system for preliminary preparation of initial waste, created to study the effectiveness of using the VLA for the pretreatment of return biomass during anaerobic processing of organic waste. The article describes both the experimental biogas plant itself and a schematic diagram of the intellectualized process control system under development. The use of the developed intellectualized process control system makes it possible to determine the main parameters of experimental research and maintain the specified operating modes of the equipment of the experimental biogas plant, which in turn will reduce the error in the subsequent mathematical processing of experimental data. Keywords: Anaerobic treatment · Preliminary preparation of initial waste · Vortex layer apparatus · Bioconversion of organic waste · Recirculation of biomass
1 Introduction Obtaining biogas is economically justified and preferable when processing a constant stream of waste, and it is especially effective in agricultural complexes, where a complete ecological cycle is possible [1]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1186–1194, 2021. https://doi.org/10.1007/978-3-030-68154-8_100 Despite the long-term use of biogas plants and an even longer period of studies of the processes occurring in them, our ideas about their basic laws and the mechanisms of
individual stages remain insufficient, which in some cases results in the low efficiency of biogas plants, does not allow them to be controlled to the necessary extent, and leads to unjustified overstatement of construction volumes, increased operating costs and, accordingly, a higher cost of 1 m3 of biogas produced. This puts forward the tasks of developing the most effective technological schemes for biogas plants and the composition of their equipment, creating new designs and calculating their parameters, improving the reliability of their work, and reducing cost and construction time, which is one of the urgent problems in the energy supply of agricultural production facilities [1]. One of the most promising and energy-efficient methods of preparing a substrate for fermentation is processing it in a vortex layer of ferromagnetic particles (vortex layer apparatus, VLA), which is created by the action of a rotating magnetic field [2]. Previously, the positive effect of processing various organic substrates in the VLA on the characteristics of methanogenic fermentation, in particular on the kinetics of methanogenesis, the completeness of decomposition of organic matter, the methane content of biogas, and waste disinfection, has been shown [3–6]. The purpose of this work is to develop an intellectualized control system of technological processes of an experimental biogas plant with an improved system for preliminary preparation of initial waste, created to study the effectiveness of using the VLA for the pretreatment of return biomass during anaerobic processing of organic waste.
2 Background The experimental biogas plant consists of the following blocks:
– block for preliminary preparation of the initial substrate (pre-treatment block);
– block for anaerobic bioconversion of organic matter of the prepared substrate;
– block for dividing the anaerobically treated substrate into fractions;
– block for recirculation of the thickened fraction of the anaerobically treated substrate;
– process control block.
The pre-treatment block includes a preheating tank, a vortex layer apparatus (VLA) and a peristaltic pump for circulating the substrate through the VLA. The preheating tank is a steel cylindrical tank equipped with devices for loading and unloading the substrate, a device for mixing the substrate, branch pipes for discharging and supplying the substrate, and a substrate heating device equipped with a temperature sensor. The diameter of the preheating tank is 200 mm, its height is 300 mm, and the filling factor is 0.9 (the volume of the apparatus is 9.4 L; the volume of the fermented substrate is 8.5 L). Temperature and mass transfer are maintained in the preheating tank by means of a heating system and mechanical stirring.
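The stated tank figures can be cross-checked with a short calculation (a sketch: the 200 mm diameter, 300 mm height and 0.9 filling factor are from the text; the cylinder volume formula is the only assumption):

```python
import math

def cylinder_volume_litres(diameter_m: float, height_m: float) -> float:
    """Geometric volume of a cylindrical tank, in litres."""
    return math.pi * (diameter_m / 2) ** 2 * height_m * 1000.0

# Preheating tank: diameter 200 mm, height 300 mm, filling factor 0.9
apparatus_volume = cylinder_volume_litres(0.200, 0.300)  # ~9.4 L, as stated
working_volume = 0.9 * apparatus_volume                  # ~8.5 L of substrate
```

Both values round to the 9.4 L and 8.5 L quoted in the text, which confirms the dimensions are internally consistent.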
The temperature in the preheating tank is maintained by a heating system consisting of an electric clamp heater, located in the lower part of the outer wall of the preheating tank, and a temperature sensor. To intensify the preheating process, stirring is carried out in the preheating tank by a mechanical stirrer controlled by a time switch. The diameter of the stirrer blades is 100 mm, the blade height is 25 mm, and the rotation speed is 300 rpm. The unloading device is connected by a pipeline with the block for anaerobic bioconversion of organic matter of the prepared substrate. The VLA is a tube with a diameter of 50 mm, made of stainless steel and placed instead of the rotor in the stator of an induction motor. In the tube, the initial mixture of components is affected by the electromagnetic field created by the stator windings and by intensively and chaotically moving ferromagnetic bodies, which change their direction of movement with a frequency equal to the frequency of the current. In those areas of the tube where electromagnetic fields arise, a vortex layer is formed, which is how the devices under consideration got their name. In this layer, all possible mechanical effects on the processed material are realized [3]. The suction branch pipe of the peristaltic pump for circulation of the substrate through the VLA is connected to the substrate outlet branch pipe of the preheating tank, and its supply branch pipe is connected to the hydraulic inlet of the VLA. The hydraulic outlet of the VLA is connected to the substrate supply pipe of the preheating tank. Thus, during the preheating process, the substrate circulates through the VLA, where it is subjected to additional pretreatment.
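The heating circuit described above (clamp heater plus temperature sensor driven by relays) is a classic on/off loop; a minimal hysteresis-thermostat sketch follows. The set point and dead band are illustrative assumptions — the paper gives no numeric values:

```python
def heater_command(temp_c: float, heater_on: bool,
                   setpoint_c: float = 38.0, deadband_c: float = 1.0) -> bool:
    """On/off (hysteresis) control of the clamp heater around a set point.

    setpoint_c and deadband_c are illustrative; the paper does not state them.
    """
    if temp_c <= setpoint_c - deadband_c:
        return True          # too cold: switch heater on
    if temp_c >= setpoint_c + deadband_c:
        return False         # warm enough: switch heater off
    return heater_on         # inside the dead band: keep previous state

# Walk a temperature trajectory through the controller
state = False
for t in (35.0, 36.5, 38.0, 39.5, 38.5):
    state = heater_command(t, state)
```

The dead band prevents the relay from chattering when the measured temperature hovers near the set point, which matters for the mechanical relays listed in the process control block.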
The block for anaerobic bioconversion of organic matter of the prepared substrate includes a laboratory anaerobic bioreactor and the equipment required for its functioning. The laboratory anaerobic bioreactor is a steel cylindrical tank equipped with a gas hood, devices for loading and unloading the substrate, a biogas outlet pipe, a substrate stirring device, and a substrate heating device equipped with a temperature sensor. The diameter of the laboratory anaerobic bioreactor is 400 mm, its height is 500 mm, and the filling factor is 0.9 (the volume of the apparatus is 56 L, the volume of the fermented substrate is 50 L). The pretreated substrate is loaded into the laboratory anaerobic bioreactor through a substrate loading device connected to the unloading device of the preheating tank. In the laboratory anaerobic bioreactor, the optimal conditions for the vital activity of anaerobic microorganisms are maintained: temperature and mass transfer, by means of a heating system and mechanical stirring. The temperature in the laboratory anaerobic bioreactor is maintained by a heating system consisting of an electric clamp heater, located in the lower part of the outer wall of the bioreactor, and a temperature sensor. To intensify the fermentation process, stirring is carried out using a mechanical stirrer controlled by a time switch. The diameter of the stirrer blades is 300 mm, the height of the main blade is 25 mm, the height of the blade for breaking the crust is 25 mm, and the rotational speed is 60 rpm.
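The two stirrers (100 mm at 300 rpm in the preheating tank, 300 mm at 60 rpm in the bioreactor) can be compared by blade tip speed, v = π·D·n/60 — a standard mixing metric; all numbers below are taken from the text:

```python
import math

def tip_speed_m_s(blade_diameter_m: float, rpm: float) -> float:
    """Peripheral (tip) speed of a stirrer blade: v = pi * D * n / 60."""
    return math.pi * blade_diameter_m * rpm / 60.0

preheat_tip = tip_speed_m_s(0.100, 300)    # preheating tank stirrer, ~1.57 m/s
bioreactor_tip = tip_speed_m_s(0.300, 60)  # anaerobic bioreactor stirrer, ~0.94 m/s
```

The slower tip speed in the bioreactor is consistent with its purpose: gentle mixing that breaks the crust without disrupting the methanogenic flocs.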
The unloading of the anaerobically treated substrate occurs automatically by gravity (overflow) through the substrate unloading device when the next portion of prepared substrate is added. The substrate unloading device is connected to the block for separating the anaerobically treated substrate into fractions. The biogas formed in the laboratory anaerobic bioreactor, the main constituent of which is methane, is collected in the gas hood and removed through the biogas outlet pipe and the hydraulic seal to the biogas quality and quantity metering unit, which is part of the process control block. The clamp electric heater ensures the maintenance of the temperature regime of anaerobic processing of the substrate in the laboratory anaerobic bioreactor. The block for separating the anaerobically treated substrate into fractions includes an effluent sump equipped with a supernatant discharge collector, a thickened fraction outlet pipe and an anaerobically treated substrate inlet pipe. The effluent sump is a rectangular steel vessel with a total volume of 45 L. The supernatant discharge collector makes it possible to control the volume of the resulting thickened fraction, depending on the rheological properties of the initial and fermented substrate and on the sedimentation time (hydraulic retention time in the effluent sump). The block for recirculation of the thickened fraction of the anaerobically treated substrate includes a peristaltic pump, the suction branch pipe of which is connected to the thickened fraction outlet pipe of the effluent sump, and the supply branch pipe is connected to the loading device of the preheating tank. The process control block includes temperature relays, time relays, a biogas quantity and quality metering unit, as well as actuators (starters, intermediate relays, etc.), sensors and light-signaling fittings.
The described experimental plant is designed to study the effectiveness of using the VLA for the treatment of recycled biomass in the technology of methane digestion of organic waste. It allows experimental research in the following modes:
– anaerobic treatment of liquid organic waste without biomass recirculation and without treatment in the VLA, in mesophilic or thermophilic conditions, with different hydraulic retention times;
– anaerobic treatment of liquid organic waste with recirculation of biomass and without treatment in the VLA, in mesophilic or thermophilic conditions, with different hydraulic retention times;
– anaerobic treatment of liquid organic waste with preliminary treatment in the VLA and without recirculation of biomass, in mesophilic or thermophilic conditions, with different hydraulic retention times;
– anaerobic treatment of liquid organic waste with recirculation of biomass and pretreatment in the VLA, in mesophilic or thermophilic conditions, with different hydraulic retention times.
In the modes with biomass recirculation, operation is provided with different recirculation coefficients (up to 2). However, in order to carry out experimental studies in a continuous mode and obtain experimental data of increased accuracy, it is necessary to develop an
intellectualized system for monitoring technological processes and measuring the operation parameters of each of the blocks of biogas plant with improved system for preliminary preparation of initial waste.
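The four operating modes described above are combinations of two binary factors (biomass recirculation and VLA pretreatment) crossed with the temperature regime; a sketch that enumerates the resulting experimental matrix (the field names are illustrative, not from the paper):

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Mode:
    recirculation: bool         # return of the thickened fraction to the inlet
    vla_pretreatment: bool      # processing in the vortex layer apparatus
    regime: str                 # "mesophilic" or "thermophilic"
    recirculation_coeff: float  # up to 2 when recirculation is used

# Enumerate every factor combination the plant supports
modes = [
    Mode(r, v, regime, 2.0 if r else 0.0)
    for r, v, regime in product((False, True), (False, True),
                                ("mesophilic", "thermophilic"))
]
# 4 treatment combinations x 2 temperature regimes = 8 configurations,
# each additionally run at several hydraulic retention times
```

Laying the modes out this way makes it explicit that the experiment is a full factorial design, which simplifies the subsequent comparison of biogas yields between modes.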
3 Results of Investigations Figure 1 shows a block diagram of the proposed method for increasing the efficiency of anaerobic bioconversion of organic waste, for the study of which the experimental setup was created and the intellectualized process control system was developed.
Fig. 1. A block diagram of a method for increasing the efficiency of anaerobic bioconversion of organic waste to produce gaseous energy and organic fertilizers based on biomass recycling using a vortex layer apparatus
When developing the intellectualized control system of technological processes of the experimental biogas plant with an improved system for preliminary preparation of initial waste, both previously developed schemes [7, 8] and the results of studies by other authors [9, 10] were taken into account. The main control element of the intellectualized process control system is a programmable logic controller (PLC) (“UDIRCABiBo” in the diagram), which emulates the functions of regulators of various purposes. A functional diagram of the integration of the PLC into the intellectualized process control system is shown in Fig. 2.
Fig. 2. Functional diagram of PLC integration into an intellectualized process control system
In Fig. 2, arrows show the directions of electrical signals between the emulated controllers (temperature and level controllers, time relays) and the contacts of sensors and actuators (the contact numbers in Fig. 2 correspond to those in Fig. 3). The designations of regulators and sensors comply with GOST 21.208-2013. The locations of the sensors and actuators required for the implementation of the processes of pretreatment of the mixture of substrates, anaerobic bioconversion of organic matter, and recirculation of the thickened fraction of the anaerobically treated substrate are shown in the schematic diagram of the intellectualized process control system in Fig. 3. The main parameters to be determined in the course of experimental research are:
– hydraulic retention time in the anaerobic bioreactor;
– hydraulic retention time of the vortex layer apparatus;
– hydraulic retention time in the preparation reactor;
– the quantity and quality of the produced biogas;
– the rate of recirculation of the thickened fraction of the anaerobically treated substrate;
– frequency of the electromagnetic field in the vortex layer apparatus;
– substrate temperature during anaerobic processing;
– initial substrate temperature.
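Several of the listed parameters are derived rather than measured directly; hydraulic retention time, for example, follows from working volume and feed rate. A sketch (the 50 L bioreactor working volume is from the text; the 2.5 L/day feed rate is an illustrative assumption):

```python
def hydraulic_retention_time_days(working_volume_l: float,
                                  feed_rate_l_per_day: float) -> float:
    """HRT = working volume / volumetric feed rate."""
    return working_volume_l / feed_rate_l_per_day

# 50 L working volume (from the text), 2.5 L/day feed (assumed for illustration)
hrt = hydraulic_retention_time_days(50.0, 2.5)  # 20 days
```

In the control system, the PLC would log the daily loaded volume, so HRT for each block can be computed continuously rather than set once per experiment.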
Fig. 3. Schematic diagram of an intellectualized process control system: 1 – laboratory RP heater; 2 – laboratory reactor for preparation (RP) of the substrate for anaerobic digestion; 3 – mixing device of laboratory RP; 4 – pump for circulation of the substrate being prepared; 5 – valve for loading the prepared substrate; 6 – LAB heater; 7 – laboratory anaerobic bioreactor (LAB); 8 – LAB mixing device; 9 – device for separating the fermented substrate into fractions; 10 – valve for removing the supernatant; 11 – pump for recirculation of the thickened fraction of the anaerobically treated substrate; 12 – vortex layer apparatus (VLA).
4 Conclusion The application of the proposed method of intensification provides the following effects: 1. The return of the thickened effluent fraction increases the residence time of the hardly decomposable components of organic waste in the reactor, which in turn leads to an increase in the degree of decomposition of organic matter and, accordingly, an increase in the biogas yield. 2. Biomass recycling increases the concentration of methanogenic microorganisms in the bioreactor, which raises the productivity of the bioreactor and ensures the stability of its operation when the characteristics of the incoming raw materials change.
3. Fine grinding improves the rheological properties of the substrate, enables partial hydrolysis of complex organic compounds, improves the availability of nutrients for microorganisms, and ensures heating of the substrate. 4. The introduction into the substrate of an abrasive working body of ferromagnetic particles (steel needles) shortens the start-up time of the bioreactor, increases the rate of formation and the final yield of methane, provides more complete decomposition of the substrate, reduces the required volume of the bioreactor, and increases the adaptive ability of the microbial community to adverse conditions (for example, excessive accumulation of volatile fatty acids (VFA) or H2, or lowered pH). Thus, the use of the vortex layer apparatus for additional processing of the recirculated biomass of the thickened fraction of fermented organic waste, mixed with prepared source organic waste, in a system for increasing the efficiency of anaerobic bioconversion of organic waste to obtain a gaseous energy carrier based on biomass recirculation will provide a synergistic effect of a set of methods for intensifying the process of anaerobic bioconversion of agricultural organic waste in anaerobic bioreactors, namely:
– mechanical impact on the initial organic waste (grinding of the initial mass in the VLA before loading into the bioreactor);
– biochemical methods of influencing the fermentable substrate (introducing ferromagnetic particles into the substrate in the VLA);
– electromagnetic processing of the fermentable substrate;
– microbiological methods (biomass retention due to its recirculation).
The use of the developed intellectualized process control system makes it possible to determine the main parameters of experimental research and maintain the specified operating modes of the equipment of the experimental biogas plant, which in turn will reduce the error in the subsequent mathematical processing of experimental data.
5 Acknowledgment This work was supported by the Federal State Budgetary Institution “Russian Foundation for Basic Research” as part of scientific project No. 18-29-25042.
References 1. Kovalev A.A.: Tekhnologii i tekhniko-energeticheskoye obosnovaniye proizvodstva biogaza v sistemakh utilizatsii navoza zhivotnovodcheskikh ferm (Technologies and feasibility study of biogas production in manure utilization systems of livestock farms). Dissertatsiya … doktora tekhnicheskikh nauk (Thesis … doctor of technical sciences), p. 242, All-Russian Research Institute of Electrification of Agriculture, Moscow (1998)
1194
A. Kovalev et al.
2. Kovalev, D., Kovalev, A., Litti, Y., Nozhevnikova, A., Katrayeva, I.: Vliyaniye nagruzki po organicheskomu veshchestvu na protsess biokonversii predvaritel'no obrabotannykh substratov anaerobnykh bioreaktorov (The effect of the load on organic matter on methanogenesis in the continuous process of bioconversion of anaerobic bioreactor substrates pretreated in the vortex layer apparatus). Ekologiya i promyshlennost' Rossii (Ecology and Industry of Russia) 23(12), 9–13 (2019). https://doi.org/10.18412/1816-0395-2019-12-9-13 3. Litti, Yu., Kovalev, D., Kovalev, A., Katraeva, I., Russkova, J., Nozhevnikova, A.: Increasing the efficiency of organic waste conversion into biogas by mechanical pretreatment in an electromagnetic mill. In: Journal of Physics: Conference Series, vol. 1111, no. 1, p. 012013 (2018). https://doi.org/10.1088/1742-6596/1111/1/012013 4. Kovalev, D.A., Kovalev, A.A., Katraeva, I.V., Litti, Y.V., Nozhevnikova, A.N.: Effekt obezzarazhivaniya substratov anaerobnykh bioreaktorov v apparate vikhrevogo sloya (The effect of disinfection of substrates of anaerobic bioreactors in the vortex layer apparatus). Khimicheskaya bezopasnost' (Chemical Safety Science) 3(1), 56–64 (2019). https://doi.org/10.25514/CHS.2019.1.15004 5. Kovalev, A.A., Kovalev, D.A., Grigor'yev, V.S.: Energeticheskaya effektivnost' predvaritel'noy obrabotki sinteticheskogo substrata metantenka v apparate vikhrevogo sloya (Energy efficiency of pretreatment of digester synthetic substrate in a vortex layer apparatus). Inzhenernyye tekhnologii i sistemy (Engineering Technologies and Systems) 30(1), 92–110 (2020). https://doi.org/10.15507/2658-4123.030.202001.092-110 6.
Litti, Y.V., Kovalev, D.A., Kovalev, A.A., Katrayeva, I.V., Mikheyeva, E.R., Nozhevnikova, A.N.: Ispol'zovaniye apparata vikhrevogo sloya dlya povysheniya effektivnosti metanovogo sbrazhivaniya osadkov stochnykh vod (Use of a vortex layer apparatus for improving the efficiency of methane digestion of wastewater sludge). Vodosnabzheniye i sanitarnaya tekhnika (Water Supply and Sanitary Technique) 11, 32–40 (2019). https://doi.org/10.35776/MNP.2019.11.05 7. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of the process of anaerobic bioconversion of liquid organic wastes. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 170–176 (2020). https://doi.org/10.1007/978-3-030-33585-4_17 8. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: System of optimization of the combustion process of biogas for the biogas plant heat supply. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 361–368 (2020). https://doi.org/10.1007/978-3-030-33585-4_36 9. Kalyuzhnyi, S., Sklyar, V., Fedorovich, V., Kovalev, A., Nozhevnikova, A.: The development of biotechnological methods for utilisation and treatment of diluted manure streams. In: Proceedings of the IV International Conference IBMER, Warszawa (1998) 10. Mulata, D.G., Jacobi, H.F., Feilberg, A., Adamsen, A.P.S., Richnow, H.-H., Nikolausz, M.: Changing feeding regimes to demonstrate flexible biogas production: effects on process performance, microbial community structure and methanogenesis pathways. Appl. Environ. Microbiol. 82(2), 438–449 (2016). https://doi.org/10.1128/AEM.02320-15
Way for Intensifying the Process of Anaerobic Bioconversion by Preliminary Hydrolysis and Increasing Solid Retention Time Andrey Kovalev1, Dmitriy Kovalev1, Vladimir Panchenko1,2, Valeriy Kharchenko1, and Pandian Vasant3 1
Federal Scientific Agroengineering Center VIM, 1st Institutskij proezd 5, 109428 Moscow, Russia [email protected]
2 Russian University of Transport, Obraztsova street 9, 127994 Moscow, Russia [email protected]
3 Universiti Teknologi PETRONAS, 31750 Tronoh, Ipoh, Perak, Malaysia [email protected]
Abstract. The negative impact of agricultural activities on the environment is associated with the formation of liquid and solid waste from agricultural and processing industries. Obtaining biogas is economically justified and preferable when processing a constant stream of waste. The purpose of the work is to develop the foundations of a method for intensifying the process of anaerobic bioconversion of organic matter to obtain a gaseous energy carrier through the combined use of recirculation of the thickened fraction of fermented waste (to increase solid retention time and retain biomass) and preliminary hydrolysis of a mixture of the initial substrate and the thickened fraction of fermented waste. The article describes the foundations of the proposed way for intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increased solid retention time, including both the technological scheme of the proposed way and the material balance of a biogas plant implementing it. Application of the proposed method for intensifying the process of anaerobic bioconversion of organic matter will provide a synergistic effect of a combination of intensification methods. Keywords: Anaerobic treatment · Preliminary preparation of initial waste · Hydrolysis · Bioconversion of organic waste · Recirculation of biomass
1 Introduction The negative impact of agricultural activities on the environment is associated not only with the increasing consumption of natural resources, but also, to a greater extent, with the formation of liquid and solid waste from agricultural and processing industries. In particular, raising animals, processing meat and dairy products, and producing beer, sugar, starch, etc. are accompanied by the formation of a large amount of wastewater [1, 2]. Obtaining biogas is economically justified and preferable when processing a constant stream of waste, and it is especially effective in agricultural complexes, where a complete ecological cycle is possible [3]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1195–1203, 2021. https://doi.org/10.1007/978-3-030-68154-8_101
The transition of animal husbandry to an industrial basis and the associated concentration of animals on large farms and complexes lead to a sharp increase in the volume of organic waste that must be disposed of without polluting the environment. One of the ways of processing organic waste is its anaerobic digestion in biogas plants through the vital activity of microorganisms (methanogenesis), in which biogas is obtained by microbiological processing of biomass and used, in turn, as a raw material for obtaining thermal and electrical energy [3]. The development, design and construction of new reactors, and the modification of existing ones, for biogas production from various agricultural wastes are intended to solve a number of significant energy and environmental problems. This makes it possible to reduce the anthropogenic load on ecosystems by cutting harmful greenhouse gas emissions and by fully utilizing and recycling organic waste. In addition, the use of biogas plants can provide facilities with an uninterrupted supply of electricity and heat for their own needs, as well as yield, through various technological solutions, high-quality products from methanogenesis waste, which can be used as fertilizer in greenhouses, feed additives or bedding for livestock [3]. Despite the long-term use of biogas plants and an even longer period of studies of the processes occurring in them, our ideas about their basic laws and the mechanisms of individual stages are insufficient, which in some cases determines the low efficiency of biogas plants. This does not allow them to be controlled to the necessary extent, which leads to unjustified overstatement of construction volumes, increased operating costs and, accordingly, a higher cost of 1 m3 of biogas produced.
This puts forward the tasks of developing the most effective technological schemes for biogas plants and the composition of their equipment, creating new designs and calculating their parameters, improving the reliability of their work, and reducing cost and construction time, which is one of the urgent problems in the energy supply of agricultural production facilities [3]. In this regard, the main goal of the research was to modernize the technology using one of the methods for intensifying processes in biogas plants: preliminary hydrolysis of the initial organic waste together with recirculation of the thickened fraction of fermented waste. Thus, the purpose of the work is to develop the foundations of a way for intensifying the process of anaerobic bioconversion of organic matter to obtain a gaseous energy carrier. It is planned to intensify the process through the combined use of recirculation of the thickened fraction of fermented waste and preliminary hydrolysis. Recirculation of the thickened fraction of fermented waste makes it possible to increase solid retention time and to retain biomass in the bioreactor. At the same time, it is planned to subject the mixture of the initial substrate and the thickened fraction of fermented waste to preliminary hydrolysis.
2 Background Biomethanogenesis is a complex multistage decomposition of various organic substances under anaerobic conditions under the influence of bacterial flora, the end result of which is the formation of methane and carbon dioxide [4].
According to modern views, the anaerobic conversion of almost any complex organic matter into biogas goes through four successive stages:
– stage of hydrolysis;
– fermentation stage;
– acetogenic stage;
– methanogenic stage [4].
The limiting stage of methane digestion of urban wastewater sludge (WWS) is the hydrolysis of the solid phase, in particular of activated sludge, which consists mainly of cells of microorganisms and a small amount of dissolved organic matter (OM). The sediment of the primary settling tanks contains more dissolved OM and fewer active microorganisms compared to activated sludge. Soluble organic compounds, which can be further converted into biogas, are formed in WWS in the course of hydrolysis. Therefore, the biogas yield during WWS fermentation is in direct proportion to the biodegradability of the sludge and, accordingly, the rate of hydrolysis. One of the technological methods of increasing the bioavailability of sediments is their pretreatment before fermentation in digesters. Pretreatment of sludge makes it possible to:
• lyse/disintegrate microbial cells of activated sludge;
• solubilize sediment solids;
• partially decompose the formed organic polymers to monomers and dimers [5].
Hydrolysis of macromolecules (polysaccharides, proteins, lipids) included in the organic matter is carried out by exogenous enzymes excreted into the intercellular medium by various hydrolytic microorganisms. The action of these enzymes yields relatively simple products, which are efficiently utilized by the hydrolytics themselves and by other groups of bacteria at the subsequent stages of methanogenesis. The hydrolysis phase in methane fermentation is closely related to the fermentation (acidogenic) phase, with hydrolytic bacteria performing both phases; they are sometimes grouped with the fermentative bacteria [4]. Another way to increase the efficiency of the process of anaerobic bioconversion of organic waste to obtain a gaseous energy carrier and organic fertilizers is to increase solid retention time in the reactor [6].
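The statement that biogas yield tracks the rate of hydrolysis is commonly modeled with first-order kinetics, dS/dt = −k_h·S; a sketch under that standard assumption (the rate constant value is illustrative, not taken from the paper):

```python
import math

def remaining_substrate(s0: float, k_h: float, t_days: float) -> float:
    """First-order hydrolysis: S(t) = S0 * exp(-k_h * t)."""
    return s0 * math.exp(-k_h * t_days)

def methane_yield(b0: float, k_h: float, t_days: float) -> float:
    """Cumulative yield approaches the potential B0 as hydrolysis completes:
    B(t) = B0 * (1 - exp(-k_h * t))."""
    return b0 * (1.0 - math.exp(-k_h * t_days))

# Illustrative values: k_h = 0.1 1/day, 20-day fermentation
y = methane_yield(1.0, 0.1, 20.0)   # fraction of the potential yield recovered
```

Under this model, raising k_h by pretreatment directly shortens the time needed to reach a given fraction of the methane potential, which is the rationale for hydrolytic pretreatment given in the text.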
For the effective operation of the reactor, it is important to correctly correlate two main parameters: the retention time of raw materials in the reactor and the degree of digestion of the raw materials. Retention time is the time it takes to completely replace the raw materials in the reactor. The degree of digestion of raw materials is the percentage of decomposed organic matter converted into biogas over a certain period of time. In the process of anaerobic fermentation, the organic matter of the waste passes into biogas, i.e. the amount of total solids in the reactor is constantly decreasing. The formation of biogas is usually at its maximum at the beginning of the process, after which the biogas yield decreases. Often, the longer the raw material is held in the reactor, the more methane is recovered, due to the increased contact time between the microorganisms and the substrate. Typically, in periodic systems, the degree of digestion of raw materials is higher than in continuous ones. Theoretically, in periodic systems, the degree of digestion of raw materials can reach 100%. In practice, however, complete
A. Kovalev et al.
(100%) decomposition of raw materials and complete extraction of biogas are impossible. The degree of digestion also depends on the type of raw material. Rapidly decaying waste, such as squeezed sugar beets, can have a degree of decomposition of more than 90%, whereas forage crops with a high fiber content decompose by about 60% over the same period (see Table 1) [7].

Table 1. Degree of digestion of various types of raw materials [7]

Raw material                | Degree of digestion (% of ashless substance)
Cattle manure               | 35
Pig manure                  | 46
Forage crops                | 64
Squeezed sugar beets (cake) | 93
Fruit and vegetable waste   | 91
Thus, based on the degree of digestion of raw materials, the retention time of raw materials in the reactor is selected experimentally so as to ensure the most efficient operation of the reactor (i.e., the maximum biogas yield at a relatively high degree of decomposition of the substrate). A constant volume of raw materials is maintained in the reactor by supplying new raw material and removing the fermented mass at regular intervals. Often the volume of added raw materials exceeds the volume of the removed fermented mass, because part of the total solids of the raw material passes into biogas during the fermentation process. The volume of raw materials in the reactor can be adjusted by adding a certain amount of liquid. The fermented mass (substrate removed from the reactor after fermentation) consists of water, including dissolved salts, inert materials and undecomposed OM. The fermented mass also contains the biomass of microorganisms accumulated during the retention of raw materials in the reactor. Hydraulic retention time (HRT) is the average length of time that liquids and soluble compounds remain in the reactor. An increase in HRT promotes longer contact between microorganisms and the substrate, but requires a slower supply of raw materials (reactor loading) and/or a larger reactor volume. If the retention time is too short, there is a great risk that the growth rate of the microorganisms will be lower than the rate of their removal from the reactor. The doubling time of methanogens in reactors is often more than 12 days. Therefore, the HRT should be longer than this time; otherwise, microorganisms will be washed out of the reactor during the unloading of the fermented mass and the population will not be dense enough to decompose the raw materials efficiently. For single-stage reactors, HRT ranges from 9 to 30 days; the HRT of thermophilic reactors averages 66% of that of mesophilic reactors (10–16 versus 15–25 days).
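The washout reasoning above can be illustrated with a simple chemostat-style check. This is a sketch based on standard completely-mixed-reactor kinetics (the relation mu_max = ln 2 / t_d is textbook material, not from this paper): the dilution rate is 1/HRT, and a population with doubling time t_d is washed out when 1/HRT exceeds its maximum specific growth rate.

```python
import math

def min_hrt_days(doubling_time_days):
    """Minimum HRT in a completely mixed reactor without biomass retention:
    washout occurs when the dilution rate 1/HRT exceeds the maximum
    specific growth rate mu_max = ln(2) / t_d."""
    mu_max = math.log(2.0) / doubling_time_days
    return 1.0 / mu_max

# For methanogens with a doubling time of 12 days, the HRT must exceed
# roughly 17 days (12 / ln 2) to keep the population in the reactor.
hrt_min = min_hrt_days(12.0)
```

This simple bound is consistent with the single-stage HRT range of 9–30 days quoted above, and it explains why biomass retention (the recirculation discussed below) relaxes the HRT requirement.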
One of the ways to increase the efficiency of the process of anaerobic bioconversion of organic waste to obtain a gaseous energy carrier and organic fertilizers is to increase the solid retention time (SRT) in the reactor.
Way for Intensifying the Process of Anaerobic Bioconversion
By separating the liquid fraction (supernatant) from the solid one using one of the known methods (sedimentation, centrifugation, etc.), it is possible to increase the solid retention time of the substrate without increasing its hydraulic retention time. The return flow consists not only of the biomass of methane-forming bacteria, but also of the incompletely decomposed organic matter of the sediment. With this approach, the dissolved organic matter decomposes first, while the particles of organic matter remaining in the solid phase, which require a longer decomposition time, are returned for re-fermentation. An important advantage is the ability to increase the SRT of the sediment without increasing the HRT, as well as the retention of biomass. As a result, the required degree of digestion is achieved in smaller reactors [6].
3 Results of Investigations

The foundations of a way for intensifying the process of anaerobic bioconversion of organic matter with the production of a gaseous energy carrier include, among other things, the technological scheme and the material balance. Figure 1 shows the technological scheme for using the proposed way for intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increasing solid retention time.
Fig. 1. Technological scheme for using the proposed way for intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increasing solid retention time: 1 - hydrolyser; 2 - anaerobic bioreactor; 3 - effluent settler; 4 - pump for recirculation of the thickened fraction; 5 - waterseal.
The anaerobic bioreactor (2) is equipped with both a stirring system and a heating system with temperature control of the fermentation process. Also, the anaerobic bioreactor (2) is equipped with systems for loading and unloading the substrate, as well as a branch pipe for removing biogas. The design of the hydrolyser (1) repeats the design of the anaerobic bioreactor (2) in a reduced form. As a consequence, the hydrolyser (1) has the same systems as the anaerobic bioreactor (2). However, the heat supply system of the hydrolyser (1) controls the temperature of the hydrolysis process,
and acid biogas is discharged through the biogas removal pipe and sent to the anaerobic bioreactor. The effluent settler (3) is equipped with an overflow pipe, as well as systems for loading the fermented substrate, removing the supernatant liquid and draining the thickened fraction. The pump for recirculation of the thickened fraction (4) delivers the thickened fraction of the fermented substrate to the hydrolyser (1). The waterseal (5) is used to prevent air from entering the anaerobic bioreactor when biogas is removed, as well as to maintain the biogas pressure. When developing the technological scheme for using the proposed way of intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increased solid retention time, both schemes with continuous supply of the initial substrate [8] and schemes with discrete supply [9] were taken into account. In addition, it is proposed to design the heat supply scheme of the biogas plant in accordance with the scheme described in [10].

Material Balance of Biogas Plant

The general view of the material balance of the hydrolyser is as follows:

Ginit + k·Gthd = Ginf + Gabg    (1)
where Ginit is the specific feed of the initial substrate to the hydrolyser for pretreatment before anaerobic digestion, kg/kginit (kginit being the amount of organic matter in the initial substrate); k is the proportion of the recirculated thickened fraction of fermented waste; Gthd is the specific yield of the thickened fraction of fermented waste from the effluent settler, kg/kginit; Ginf is the specific yield of the initial substrate prepared for anaerobic treatment, kg/kginit; Gabg is the specific loss of organic matter with acid biogas during the preparation of the initial substrate for anaerobic treatment, kg/kginit.

The specific loss of organic matter with acid biogas during the preparation of the initial substrate for anaerobic treatment is:

Gabg = (Ginit + k·Gthd)·uh    (2)
where uh is the degree of digestion of organic matter in the hydrolyser during the preparation of the initial substrate for anaerobic treatment.

The general view of the material balance of the anaerobic bioreactor is as follows:

Ginf + Gabg = Geff + Gbg    (3)
where Geff – specific yield of fermented waste from anaerobic bioreactor, kg/kginit; Gbg – specific biogas yield from anaerobic bioreactor, kg/kginit.
The specific biogas yield from the anaerobic bioreactor is:

Gbg = (Ginf + Gabg)·um    (4)
where um is the degree of digestion of organic matter during anaerobic treatment of the prepared substrate in the anaerobic bioreactor.

The general view of the material balance of the effluent settler is as follows:

Geff = Gthd + Gsup    (5)
where Gsup is the specific yield of the supernatant liquid from the effluent settler, kg/kginit.

The general view of the material balance of the biogas plant as a whole is as follows:

Ginit = Gsup + Gbg + (1 − k)·Gthd    (6)
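At steady state the recirculation loop in balances (1)–(6) can be closed numerically. The sketch below introduces a settler split fraction s (the share of Geff drained as thickened fraction), a parameter the text does not specify and which is assumed here purely for illustration:

```python
def biogas_plant_balance(g_init=1.0, k=0.8, s=0.3, u_h=0.1, u_m=0.5):
    """Steady-state solution of material balances (1)-(6).

    k        - proportion of the recirculated thickened fraction
    s        - settler split: share of G_eff leaving as thickened
               fraction (an assumed parameter, not given in the text)
    u_h, u_m - degrees of digestion in the hydrolyser and bioreactor
    """
    # Hydrolyser input H = G_init + k*G_thd; since G_thd = s*(1 - u_m)*H,
    # the recirculation loop closes as H = G_init / (1 - k*s*(1 - u_m)).
    h = g_init / (1.0 - k * s * (1.0 - u_m))
    g_abg = h * u_h                 # acid biogas, Eq. (2)
    g_inf = h - g_abg               # prepared substrate, Eq. (1)
    g_bg = (g_inf + g_abg) * u_m    # biogas yield, Eq. (4); the acid
                                    # biogas is fed to the bioreactor
    g_eff = (g_inf + g_abg) - g_bg  # fermented waste, Eq. (3)
    g_thd = s * g_eff               # thickened fraction (assumed split)
    g_sup = g_eff - g_thd           # supernatant, Eq. (5)
    return {"Gbg": g_bg, "Gsup": g_sup, "Gthd": g_thd}

flows = biogas_plant_balance()
# Overall balance, Eq. (6): G_init = G_sup + G_bg + (1 - k)*G_thd
assert abs(1.0 - (flows["Gsup"] + flows["Gbg"] + 0.2 * flows["Gthd"])) < 1e-9
```

Whatever split fraction is chosen, the overall plant balance (6) closes exactly, which is a useful consistency check on the scheme.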
Figure 2 shows a block diagram of the material balance of a biogas plant with the proposed way for intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increasing solid retention time.
Fig. 2. Block diagram of the material balance of a biogas plant with the proposed way for intensifying the process of anaerobic bioconversion by preliminary hydrolysis and increasing solid retention time. Flows: Ginit and the recirculated k·Gthd enter the hydrolyser; Ginf and the acid biogas Gabg pass to the anaerobic bioreactor; Gbg leaves the bioreactor as biogas; Geff enters the effluent settler, which yields the supernatant Gsup, the recirculated fraction k·Gthd and the removed fraction (1 − k)·Gthd.
4 Conclusion

Application of the proposed method for intensifying the process of anaerobic bioconversion of organic matter will provide a synergistic effect from the combination of intensification methods. The use of recirculation of the thickened fraction of fermented waste makes it possible:

– to increase the retention time in the reactor of hardly decomposable components of organic waste, which in turn leads to an increase in the degree of digestion of organic matter and, accordingly, to an increase in the yield of biogas;
– to increase the concentration of microorganisms in the hydrolysis reactor, which makes it possible to intensify the hydrolysis process, thereby reducing the time for pretreatment of the substrate before anaerobic fermentation;
– to increase the concentration of methanogenic microorganisms in the anaerobic bioreactor, which in turn increases the productivity of the bioreactor and ensures the stability of its operation when the characteristics of the incoming raw material change.

Pre-treatment of the substrate in a hydrolyser makes it possible:

– to transfer a significant part of the organic matter of the substrate into a dissolved state, thus improving the availability of nutrients to methanogenic microorganisms;
– to heat the initial substrate to the fermentation temperature of the anaerobic bioreactor, which in turn makes it possible to maintain the optimal temperature mode in the anaerobic bioreactor, avoiding temperature fluctuations that are harmful to methanogenic microorganisms.

The supply of the acid biogas formed during hydrolysis to the anaerobic bioreactor makes it possible:

– to avoid the loss of organic matter of the original substrate with the acid biogas formed during hydrolysis;
– to intensify mixing in the anaerobic bioreactor, which in turn improves mass transfer and, as a consequence, the availability of nutrients of the fermented mass to methanogenic microorganisms;
– to provide methanogenic microorganisms with additional nutrients in the gas phase.

Acknowledgment. This work was supported by the Federal State Budgetary Institution “Russian Foundation for Basic Research” as part of scientific project No. 18-29-25042.
References

1. Izmaylov, A.Y., Lobachevskiy, Y.P., Fedotov, A.V., Grigoryev, V.S., Tsench, Y.S.: Adsorption-oxidation technology of wastewater recycling in agroindustrial complex enterprises. Vestnik Mordovskogo Universiteta (Mordovia Univ. Bull.) 28(2), 207–221 (2018). https://doi.org/10.15507/0236-2910.028.201802.207-221
2. Artamonov, A.V., Izmailov, A.Y., Kozhevnikov, Y.A., Kostyakova, Y.Y., Lobachevsky, Y.P., Pashkin, S.V., Marchenko, O.S.: Effective purification of concentrated organic wastewater from agro-industrial enterprises, problems and methods of solution. AMA Agric. Mech. Asia Afr. Latin Am. 49, 49–53 (2018)
3. Kovalev, A.A.: Tekhnologii i tekhniko-energeticheskoye obosnovaniye proizvodstva biogaza v sistemakh utilizatsii navoza zhivotnovodcheskikh ferm (Technologies and feasibility study of biogas production in manure utilization systems of livestock farms). Thesis ... doctor of technical sciences, 242 p. All-Russian Research Institute of Electrification of Agriculture, Moscow (1998)
4. Kalyuzhny, S.V., Danilovich, D.A., Nozhevnikova, A.N.: Results of Science and Technology, ser. Biotechnology, vol. 29. VINITI, Moscow (1991)
5. Nozhevnikova, A.N., Kallistova, A., Litty, Y., Kevbrina, M.V.: Biotechnology and Microbiology of Anaerobic Processing of Organic Municipal Waste: A Collective Monograph. University Book, Moscow (2016)
6. Kevbrina, M.V., Nikolaev, Y.A., Dorofeev, A.G., Vanyushina, A.Y., Agarev, A.M.: Vysokoeffektivnaya tekhnologiya metanovogo sbrazhivaniya osadka stochnykh vod s retsiklom biomassy (Highly efficient technology for methane digestion of sewage sludge with biomass recycling). Vodosnabzheniye i Sanitarnaya Tekhnika (Water Supply Sanitary Tech.) 10, 61 (2012)
7. Schnurer, A., Jarvis, A.: Microbiological Handbook for Biogas Plants. Swedish Gas Centre Rep. 207, 13–8 (2010)
8. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: Optimization of the process of anaerobic bioconversion of liquid organic wastes. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 170–176 (2020). https://doi.org/10.1007/978-3-030-33585-4_17
9. Kovalev, D., Kovalev, A., Litti, Y., Nozhevnikova, A., Katraeva, I.: Vliyaniye nagruzki po organicheskomu veshchestvu na protsess biokonversii predvaritel'no obrabotannykh substratov anaerobnykh bioreaktorov (The effect of the load on organic matter on methanogenesis in the continuous process of bioconversion of anaerobic bioreactor substrates pretreated in the vortex layer apparatus). Ekologiya i Promyshlennost' Rossii (Ecol. Ind. Russ.) 23(12), 9–13 (2019). https://doi.org/10.18412/1816-0395-2019-12-9-13
10. Kovalev, A., Kovalev, D., Panchenko, V., Kharchenko, V., Vasant, P.: System of optimization of the combustion process of biogas for the biogas plant heat supply. In: Intelligent Computing & Optimization. Advances in Intelligent Systems and Computing, vol. 1072, pp. 361–368 (2020). https://doi.org/10.1007/978-3-030-33585-4_36
Evaluation of Technical Damage Caused by Failures of Electric Motors

Anton Nekrasov1, Alexey Nekrasov1, and Vladimir Panchenko2,1

1 Federal State Budgetary Scientific Institution “Federal Scientific Agroengineering Center VIM” (FSAC VIM), 1-st Institutskij 5, 109428 Moscow, Russia, [email protected]
2 Russian University of Transport, Obraztsova st. 9, 127994 Moscow, Russian Federation, [email protected]
Abstract. Emergent failures of electric motors entail economic damage comprising technological and technical components. The first is associated with underproduction of livestock goods, while the second includes the expenditures for replacement of the electric motor. An evaluation has been made of the technical component of the economic damage caused by failures of electric motors of various capacities installed in driving assemblies of technological equipment in livestock production. Based on the results of these estimations, dependences of technical damage on the price and lifespan of the failed electric motor have been obtained. Failures of electric motors in the equipment of livestock farms cause interruptions in technological processes leading to material damage. The extent of the damage is particularly high at remote agricultural enterprises characterized by substantial dispersion of dairy and fattening farms, where prompt elimination of an electric motor failure is not an easy task; time-outs may therefore be unacceptably long. The major factors affecting the effectiveness indicators of electric motor use in agricultural production that ensure the reduction of failure rate and time to recover have been specified. The results of this work will make it possible to estimate the technical component of economic damage associated with failures of electric motors, to define the extent of economic responsibility of the applied electric equipment, to make and correct more specific schedules of preventive maintenance and repair, and to find room for improvement of the operational effectiveness of energy services at agricultural enterprises.

Keywords: Technical maintenance of electric equipment · Failures of electric motors · Technical damage
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1204–1212, 2021. https://doi.org/10.1007/978-3-030-68154-8_102

1 Introduction

Improving the system of operation and maintenance of electric equipment will make it possible to extend its lifespan and to save material expenditures on purchasing new electric components and repairing failed ones, thus reducing the associated technological damage at agricultural enterprises [1–4]. Failure of an electric motor or motor shutdown
due to a power outage leads to economic damage comprising technological and technical components, arising, respectively, from underproduction of livestock goods and the need to replace failed electric motors [5]. The organizational concept, structure and economic base of agricultural production have changed in recent years. Modern peasant households, farms and agricultural enterprises are provided with electrified machinery and equipment, and new types of electric equipment are produced today, including electric motors and automatic tripping contactors [6, 7]. Evaluation of damage caused by failures of electric equipment in livestock raising is based on earlier research [8–10] dedicated to methods of calculating damage to agricultural production in case of power outages and electric equipment failures.
2 Technical Damages

Estimating damages shall be based on a complex approach to improving the operational reliability of rural electric installations, using relevant data on the damage caused by failures of electric drives in technological equipment. A convenient method for calculating damages, one that omits insignificant components, is needed for application in the practice of the electrical-technical services of agricultural enterprises. It has to be supported with up-to-date reference materials on electric equipment and agricultural products that are easy for the service personnel of agricultural enterprises to use. In case of failure of an electric motor, the plant or machine in which this motor is installed stops. The extent of the damage is mainly defined by the period of time-out of the working mechanism [11, 12]. In the majority of cases, failed electric motors are not repaired directly on site. They are transported to a specialized service center for complete repair, in which case a spare electric motor has to be installed on the working mechanism. Repair shops are normally located at a great distance from the agricultural production unit. That is why failed motors are stored on site and sent to the repair shops in batches of 10 to 15 pieces [12]. Many repair shops have dedicated exchange fleets of electric motors in order to reduce the time of returning repaired motors to the production sites. It means that repair shops receive failed electric motors in exchange for repaired ones, in the same quantities, to maintain a bank of spare components at the production sites [13]. Damage from replacement of a failed electric motor comprises the expenditures for dismantling-assembling and the price of either a new electric motor or one that has passed complete repair.
Besides, it depends on the capacity of the electric motor, its size and weight, its design and the methods of assembly for different types of machines characterized by different operating conditions, including the convenience of dismantling-assembling. That is why a uniform method of estimating the volumes of work and the working conditions of electrical installers cannot be reliably defined, in case of failure, for a particular type and capacity of electric motor. It is assumed that, in today's operation and maintenance conditions of electric motors, an approximate value of the average share of the technical component of economic damage associated with either ahead-of-time repair or decommissioning of an electric motor may amount to half of its price.
More specific values of technical damage due to electric motor failure can be calculated in various ways depending on whether a failed electric motor is to be replaced by either a new one or one that has passed complete repair with the use of fault-free components, or whether it is intended to be repaired either on site or in a repair shop of the agricultural enterprise. The unrecovered cost of the electric motor and all other expenditures and charges have to be taken into account, which makes such calculations relatively complicated. As evidenced by the available information on the nomenclature of electric motors supplied to agricultural consumers, electric equipment factories produce mainly motors of general-purpose industrial versions. It means that they are operated and applied in conditions they are not intended for. Owing to the low quality of complete repair, the lifespan of repaired motors does not normally exceed 1 to 1.5 years [8]. In Tables 1, 2, 3 and 4 the input data required for estimating the technical component of economic damage are given, including price, weight and complete repair expenditures, for various types of asynchronous electric motors of the unified series (in Russia) most commonly applied in agricultural production [11, 12].

Table 1. Price of electric motors of 5a series, for various capacities

Rotation frequency (min⁻¹) | Price of electric motor (euros) for capacity (kW):
                           | 5.5    | 7.5    | 11     | 15     | 18.5   | 22     | 30     | 37     | 45     | 55
3000                       | –      | –      | 231.76 | 346.98 | 366.73 | 414.89 | 497.6  | 759.39 | 858.18 | 1038.81
1500                       | –      | 205.78 | 247.92 | 330.92 | 372.90 | 428.47 | 558.1  | 784.08 | 891.51 | 970.54
1000                       | 229.83 | 251.52 | 329.69 | 372.90 | 472.93 | 698.88 | 807.6  | 985.35 | 1248.1 | 1360.64
750                        | 281.53 | 334.63 | 390.19 | 497.61 | 772.98 | 838.41 | 1075.5 | 1296.5 | 1478.7 | 1704.84
Table 2. Weight of electric motors of 5a series, for various capacities

Rotation frequency (min⁻¹) | Weight of electric motor (kg) for capacity (kW):
                           | 5.5 | 7.5 | 11  | 15  | 18.5 | 22  | 30  | 37  | 45  | 55
3000                       | –   | –   | 70  | 106 | 112  | 140 | 155 | 235 | 255 | 340
1500                       | –   | 64  | 76  | 111 | 120  | 145 | 165 | 245 | 270 | 345
1000                       | 63  | 74  | 108 | 129 | 160  | 245 | 280 | 330 | 430 | 450
750                        | 74  | 108 | 124 | 160 | 240  | 260 | 340 | 430 | 460 | 705
These data are intended for use in estimating economic damages from failures of electric equipment and in planning technical maintenance activities aimed at improving its operational reliability, so as to ensure high operational effectiveness at livestock-breeding farms.
Table 3. Average price of electric motors of AIR series

Rotation frequency (min⁻¹) | Price of electric motor (euros) for capacity (kW):
                           | 0.25 | 0.55 | 1.1  | 2.2  | 3.0   | 4.0    | 5.5   | 7.5    | 11     | 18.5   | 22    | 30
3000                       | 25.9 | 32.2 | 43.8 | 59.8 | 74.95 | 92.01  | 106.2 | 134.46 | 194.34 | 302.59 | 398.6 | 462.2
1500                       | 30.7 | 41   | 52.9 | 72.4 | 92.56 | 103.55 | 139.3 | 175.48 | 211.16 | 328.50 | 406.2 | 495.3
1000                       | 36.2 | 45.4 | 59.0 | 91.1 | 130.1 | 141.44 | 183.6 | 200.97 | 297.86 | 435.42 | 538   | 616.2
750                        | 47.1 | 70.5 | 87.9 | 133  | 146.6 | 192.50 | 205.2 | 297.55 | 333.96 | 559.52 | 596.7 | 775.9

Table 4. Weight of electric motors of AIR series

Rotation frequency (min⁻¹) | Weight of electric motor (kg) for capacity (kW):
                           | 0.25 | 0.55 | 1.1 | 2.2  | 3.0 | 4.0 | 5.5 | 7.5 | 11  | 18.5 | 22  | 30
3000                       | 3.8  | 6.1  | 9.2 | 15   | 30  | 26  | 32  | 48  | 78  | 130  | 150 | 170
1500                       | 4.2  | 8.1  | 9.4 | 18.1 | 35  | 29  | 45  | 70  | 84  | 140  | 160 | 180
1000                       | 5.6  | 9.9  | 16  | 27   | 43  | 48  | 69  | 82  | 125 | 160  | 195 | 255
750                        | –    | 15.9 | 22  | 43   | 48  | 68  | 82  | 125 | 150 | 210  | 225 | 360
3 Results and Discussion

Technical damage from failures of electric motors depends on multiple factors [14]. That is why its correct values are difficult to define in the practical conditions of an agricultural enterprise. Technical damage DT associated with replacement of a failed electric motor comprises the expenditures for dismantling-assembling and the price of either a new electric motor P0 or one that has passed complete repair PR, with account of its amortization. In the general case, the technical component of damage from ahead-of-time failure of an electric motor (one that has not served its rated lifespan) can be calculated as follows [5]:

DT = PR + KM·(1 − tF/TN) − PD    (1)
where PR are expenditures associated with replacement of the failed electric motor with a new one (dismantling and assembling), KM is price of the failed electric motor, PD is price of failed electric motor that has been decommissioned (metal scrap), TN and tF are, respectively, rated and factual lifespan of electric motor. In calculations, data on the factual lifespan shall be used that comes from agricultural enterprises operating corresponding types of electric equipment in accordance with the requirements of System of Scheduled-Preventive Repair and Technical Maintenance of Electric Equipment in Agriculture. The rated lifespan of 7 years is
established [9] for electric motors operating in the hard conditions of livestock breeding. The price of metal scrap for electric motors is assumed to be 20 rubles, or 0.35 $, per kg (for the Russian market on 21.05.2020), while expenditures for dismantling-assembling are estimated as 10% of a new electric motor's price. Results of calculations of the technical damage caused by failure of electric motors operating in animal-breeding premises, obtained with expression (1), are presented in Table 5.

Table 5. Calculation results for technical damage caused by failures of electric motors, in livestock production

Electric motor model | Technical damage (euros) for lifespan tF (years):
                     | 1   | 2   | 3  | 4  | 5  | 6  | 7
AIR 0.55 kW          | 37  | 31  | 25 | 20 | 14 | 8  | –
AIR 3.0 kW           | 80  | 66  | 53 | 40 | 27 | 14 | –
AIR 5.5 kW           | 122 | 102 | 82 | 62 | 42 | 22 | –
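Expression (1) can be sketched in a few lines of Python. The inputs below are an assumed reading of the paper's data for an AIR 0.55 kW, 1500 min⁻¹ motor (price and weight from Tables 3 and 4); treating the 0.35 $/kg scrap rate as euro-equivalent is a simplification, not something the source states:

```python
def technical_damage(p_repl, k_m, p_scrap, t_f, t_n=7.0):
    """Expression (1): D_T = P_R + K_M * (1 - t_F / T_N) - P_D."""
    return p_repl + k_m * (1.0 - t_f / t_n) - p_scrap

# Assumed inputs for an AIR 0.55 kW, 1500 min^-1 motor:
# price K_M = 41 euros and weight 8.1 kg (Tables 3 and 4),
# dismantling-assembling P_R at 10% of the motor price,
# scrap value P_D at 0.35 per kg (quoted in dollars in the text;
# treated here as euro-equivalent for simplicity).
k_m = 41.0
p_repl = 0.10 * k_m
p_scrap = 8.1 * 0.35
damages = [technical_damage(p_repl, k_m, p_scrap, t) for t in range(1, 8)]
```

With these assumptions the computed values track the first row of Table 5 closely: the damage falls roughly linearly as the motor approaches its rated 7-year lifespan.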
The dependence of technical damage on lifespan in case of electric motor failure in livestock production is shown in Fig. 1, for 0.55 kW (1), 3.0 kW (2) and 5.5 kW (3).

Fig. 1. Dependence of technical damage caused by failure of electric motors on their lifespan, for 0.55 kW, 3.0 kW and 5.5 kW.

In works [13, 14] it was found that electric motors applied in agricultural production have an average lifespan (before the first complete repair) of 3 to 3.5 years, 4 years and 5 years for livestock breeding, plant cultivation and private subsidiary farming, respectively. The period of the repair cycle between capital repairs of electric motors is, correspondingly, 1.5 years, 2 years and 2.5 years in livestock farming, plant cultivation and subsidiary farming. Practice shows that electric motors should not be repaired more frequently than once within their whole lifespan. The level of maintenance and the quality of repairs have to be improved so that each motor retains its operability for no less than 5 to 6 years before the first complete repair and at least 3 to 4 years after it. Electric motors shall be decommissioned upon termination of their rated lifespan determined by their depreciation rates. The values of these rates are used when estimating the actual number of complete repairs and the average values of technical damage.

In order to reduce material damage caused by failures of electrified equipment at agricultural enterprises, the failure rate of electric motors has to be decreased by performing preventive technical maintenance and repair accompanied by diagnostic checks of the technical status of electric motors. Besides, the required sets of spare components and materials have to be kept for repair-maintenance purposes, and appropriate protection devices have to be applied to protect electric motors in alarm conditions. It is also important to improve the operational efficiency of repair teams and their provision with technical means for trouble-shooting and fault handling.

A more reliable evaluation of the technical component of damage requires collecting large volumes of data that can only be obtained in conditions of practical operation and maintenance of electric equipment at agricultural enterprises. The enterprise has to have a fully staffed energy service team capable of following schedules of preventive activities and keeping accurate records related to the operation and maintenance of electrified equipment. It is also important to know the actual lifespans of spare electric motors, from the date of their manufacture by the factory till the moment of installation on the working machine, as well as those of the failed electric motors, from the moment of manufacture till the date of failure on the corresponding working machine. Their rated lifespans have to be specified as well.
In the process of operation of electric motor, amortization charges related to electric motor’s initial price have to be carried out, on the annual basis. These charges can be later involved for either repairing the motor or its replacement by a new one. In case that a failed electric motor has not operated during its whole rated lifespan before the complete repair or decommissioning, the enterprise will run up losses associated with repair or replacement that are a part of technical damage DT. In order to make practical evaluations of technical damage more convenient it is advisable to apply summarizing coefficient of damage kD depending on actual lifespan of electric motor. In the case of failure of electric motor followed by its decommissioning or replacement by its analog, damage DT arises that is defined by the following expression [12]: DT ¼ PR kD ðtF Þ
ð2Þ
where PR is the price of a new electric motor including installation expenditures (euros), kD is the coefficient of damage (a.u.), and tF is the actual lifespan of the electric motor (years).

The obtained dependence of the coefficient of damage on the actual lifespan of an electric motor in livestock production is shown in Fig. 2.

Fig. 2. Dependence of coefficient of damage on actual lifespan of electric motor, in livestock production.

In case of decommissioning of electric motors installed in the technological equipment of a livestock farm, the value of technical damage DT as a function of lifespan is calculated by multiplying the coefficient of damage by the price of the electric motor including installation expenditures, for the rated lifespan of 7 years. As is clear from Fig. 2, the coefficient of damage equals 0.5 to 0.6 for the average electric motor lifespan of 3 to 3.5 years in livestock production. Calculation results for the average value of technical damage DT caused by failure of electric motors of type AIR depending on their capacity, calculated with expression (2) for kD = 0.6, are presented in Table 6.
Table 6. Calculation results for technical damage caused by failure of electric motors of type AIR, in livestock production, for kD = 0.6.

Capacity of electric motor (P), kW      | 1.1   | 2.2   | 3.0    | 4.0    | 5.5    | 7.5
Price of new electric motor (P0), euros | 52.92 | 72.43 | 92.57  | 103.55 | 139.30 | 177.79
Price of installation (PI), euros       | 5.30  | 7.25  | 9.26   | 10.35  | 13.93  | 17.78
PR = P0 + PI, euros                     | 58.21 | 79.61 | 101.83 | 113.90 | 153.23 | 195.56
Damage (DT), euros                      | 34.92 | 47.76 | 61.10  | 68.33  | 91.86  | 117.34
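A minimal sketch of expression (2) with the averaged coefficient kD = 0.6 reproduces the damage row of Table 6 to within rounding:

```python
K_D = 0.6  # averaged coefficient of damage for a 3-3.5 year average lifespan

# Prices of new AIR motors (P0) and installation costs (PI) in euros,
# as listed in Table 6.
motors = {  # capacity (kW): (P0, PI)
    1.1: (52.92, 5.30),
    2.2: (72.43, 7.25),
    3.0: (92.57, 9.26),
    4.0: (103.55, 10.35),
    5.5: (139.30, 13.93),
    7.5: (177.79, 17.78),
}

# Expression (2): D_T = P_R * k_D(t_F), with P_R = P0 + PI.
damage = {cap: round(K_D * (p0 + pi), 2) for cap, (p0, pi) in motors.items()}
```

Using a single averaged coefficient in place of the full factor-by-factor calculation is exactly the simplification the method proposes: it trades some precision for a calculation the energy service of a farm can perform immediately.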
When necessary, this evaluation method enables prompt calculation of the economic damage due to early replacement or complete repair of electric motors and other electric equipment. It is also applicable to approximate calculations in solving economic problems when clearly specified initial information on electric motor failures is absent.
4 Conclusions

Evaluation of the technical component of economic damage from failures of electric motors installed in the technological equipment of livestock-breeding farms has been made. It has been found that technical damage caused by failures of electric motors depends on multiple factors. Taking all of these factors into account makes calculations rather complicated, while their results may be of little practical significance in the conditions of an agricultural enterprise. The main components of technical
damage arise from the reduction of actual lifespan and from the unexpended price of the electric motor. For practical application on a livestock farm, the average calculated value of technical damage due to an emergency failure of a single electric motor is defined using an averaged coefficient of technical damage related to the price of a new electric motor including installation expenditures. Calculations have been performed, and dependences of technical damage on actual lifespan have been defined for failures of electric motors of various capacities used in the livestock sector of agriculture. The results of this work will make it possible to estimate the technical damage associated with failures of electric motors, to define the extent of economic responsibility of particular electric equipment, to refine and correct schedules of preventive maintenance and repair, and to find room for improving the operational effectiveness of energy services at agricultural enterprises.
Development of a Prototype Dry Heat Sterilizer for Pharmaceuticals Industry Md. Raju Ahmed(&), Md. Niaz Marshed, and Ashish Kumar Karmaker
Abstract. The use of sterilizers in pharmaceutical industries is very important owing to their capability of reducing microbial contamination of medicinal preparations. A sterilizer eliminates all forms of microorganisms and other biological agents, which helps to maintain a sterile environment. Although several researchers have studied the design and implementation of sterilizers, due to lack of accuracy, safety issues, operational limitations and high cost, these sterilizers are not suitable for pharmaceutical industries in Bangladesh. In this project, a low-cost and user-friendly Dry Heat Sterilizer (DHS) is designed and implemented for pharmaceutical industries. To obtain a fully automated control scheme, a Programmable Logic Controller (PLC) and a Human Machine Interface (HMI) are used in this experimental work, which also ensures the necessary safety measures. The performance of the implemented DHS is analyzed for two different experimental setups. It is found that the DHS made from locally available materials performs satisfactorily and can be used in pharmaceutical industries as well as for oven applications.

Keywords: Sterilizer · Heating · Cooling · Microorganism · PLC · HMI
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1213–1221, 2021. https://doi.org/10.1007/978-3-030-68154-8_103

1 Introduction

The use of sterilizers in the healthcare sector is gaining popularity due to their capability of killing microorganisms. A DHS is a necessary piece of equipment for pharmaceutical plants as well as hospitals to facilitate the sterilization process. Different types of cycles are used in the sterilization process, such as drying, exhaust, pre-sterilization (heating), stabilization, sterilization and cooling. Heat destroys bacterial endotoxins (pyrogens), which are difficult to eliminate by other means, and this property makes it applicable for sterilizing glass bottles and pharmaceutical equipment. In this paper, the design and implementation of a DHS using locally available materials is presented. The performance analysis of the implemented DHS is also conducted and presented.

Many researchers have suggested and prescribed various sterilizer designs and sterilization processes. Pradeep and Rani [1] described the function of the sterilizer and the sterilization process; their process is used in the implementation of different types of sterilizers. They also carried out a detailed study of machine performance to show how the efficiency of microorganism control is evaluated. Kalkotwar et al. [2] investigated the sterilization process for killing microorganisms by heat, which is a combination of time and temperature, and published papers on methods and validation of the sterilization process. Purohit and Gupta [3] analyzed the temperature mapping of a DHS. Sultana [4] discussed sterilization methods and principles and explained different types of sterilizers and sterilization methods, employing high temperatures in the range of 160–180 °C. Obayes [5] studied the construction and working principle of a DHS. He applied different combinations of time and temperature to perform the sterilization, such as 170 °C for 30 min, 160 °C for 60 min, and 150 °C for 150 min. He used a conventional control system based on switches and relays, with an electric heater for temperature rise and a thermostat for measuring temperature. Satarkar and Mankar [6] fabricated an autoclave sterilization machine for reducing the microbial contamination of packaged products, using moist heat with a low temperature range. Oyawale and Olaoye [7] designed and fabricated a low-cost conventional steam sterilizer and tested their design at a temperature of 120 °C for 12 min.

As discussed above, many researchers have designed and implemented DHS units and conducted performance tests. In Bangladesh, some switch-relay controlled conventional machines are available that can be used only as ovens; they cannot be used for pharmaceutical applications due to lack of accuracy, safety and operational limitations. DHS units imported for industrial purposes are large and costly, and all control remains with the manufacturers: local engineers can only operate them, so if any problem occurs, the whole system becomes useless. In this project, a prototype DHS is designed and implemented using locally available materials. A PLC is used to automatically control the operation of the DHS.
HMI is used for giving instructions and for displaying all operation parameters. The specific objectives of this work can be summarized as follows: a. To design and implement a PLC-controlled DHS using locally available materials for pharmaceutical and experimental use. b. To control the various parameters of the operation cycle and display them on the HMI. c. To analyze the performance of the implemented DHS, compare it with the set values, and discuss the suitability of the implemented DHS for industrial use.
2 Proposed DHS System Figure 1 shows the operational block diagram of the proposed DHS comprising PLC, HMI and other accessories. PLC is interfaced with the HMI for bidirectional control and display.
Fig. 1. Operational block diagram of the proposed DHS.
The PLC continuously monitors the inside temperature of the DHS. A door sensor and an emergency stop switch are used for safety. Before starting operation, the door of the sterilizer must be closed; the door sensor ensures this. If the door is not closed at the beginning of the operation cycle, or any unwanted situation arises, a buzzer sounds an alarm. The desired cycle parameters, such as temperature, heating time, sterilizing time and cooling time, are loaded into the PLC through the HMI. All parameters are continuously monitored by the PLC and displayed on the HMI. Heaters and blowers are used for heating and cooling the chamber of the DHS. A magnetic contactor is used to turn the blowers on and off, and a solid-state relay is used to turn the heaters on and off.
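The supervision logic described above (door interlock, buzzer alarm, set-point-driven heater and blower switching) can be sketched in ordinary code. This is only an illustrative model of one PLC scan, not the actual FBs-20MAR2 ladder program; the phase names and set points are assumptions:

```python
# Illustrative sketch of the DHS supervision described in the text:
# the door must be closed before a cycle runs, the SSR drives the heater
# during heating/sterilization, and the contactor runs the blowers during
# cooling until the cooling set point is reached.
from dataclasses import dataclass

@dataclass
class CycleParams:            # loaded into the PLC through the HMI
    steril_temp_c: float      # sterilization temperature set point
    cooling_temp_c: float     # cooling temperature set point

def supervise(phase: str, door_closed: bool, temp_c: float, p: CycleParams) -> dict:
    """One PLC scan: decide the output coils from the sensor inputs."""
    if not door_closed:                       # door interlock -> alarm
        return {"heater": False, "blower": False, "buzzer": True}
    heater = phase in ("heating", "sterilizing") and temp_c < p.steril_temp_c
    blower = phase == "cooling" and temp_c > p.cooling_temp_c
    return {"heater": heater, "blower": blower, "buzzer": False}

p = CycleParams(steril_temp_c=40.0, cooling_temp_c=32.0)  # case-1 set points
print(supervise("heating", True, 25.0, p))   # heater on, no alarm
```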
3 Implementation of DHS

The proposed DHS is implemented practically in the laboratory. Figure 2 shows the control panel of the implemented DHS. Table 1 lists the materials used for implementing the prototype DHS. The operational flowchart of the sterilization process is shown in Fig. 3.
Fig. 2. Control panel of implemented DHS.

Table 1. List of materials used for implementing the DHS.

Sl no. | Name of material | Specification | Quantity
01 | Machine frame | 1 package | 01 unit
02 | PLC | FBs-20MAR2-AC, 240 VAC | 01 pcs
03 | HMI | P5043S, 24 VDC | 01 pcs
04 | Expansion module | FBs-6RTD | 01 pcs
05 | Communication cable | RS232 | 01 pcs
06 | Power supply unit | IP 100/220 VAC, OP 24 VDC | 01 pcs
07 | Blower-1 | 24 VDC | 01 pcs
08 | Blower-2 | 24 VDC | 01 pcs
09 | Heater element | Cartridge, 220 VAC, 500 W | 02 pcs
10 | Temperature sensor | Pt100 | 02 pcs
11 | SSR | 250 V, 25 A | 01 pcs
12 | Fuse holder | 220 V | 01 pcs
13 | Fuse | 220 V, 6 A | 01 pcs
14 | Electromechanical relay | 24 V, 14 pin | 04 pcs
15 | Indicator lamp | Red, 220 VAC | 02 pcs
16 | Indicator lamp | Green, 220 VAC | 02 pcs
17 | Limit switch | 220 V, 5 A | 02 pcs
Fig. 3. Flowchart of operation of the proposed sterilization process.
4 Performance Analysis of Implemented DHS

The recommended operating conditions for the DHS are a cycle of 40 °C to 50 °C for 5 min and a cycle of 55 °C to 65 °C for 3 min. The tests were carried out with an empty chamber; the set and measured values for the two cases are given in Table 2 and Table 3, respectively. The variation of chamber inside temperature with time for the two cases is shown in Fig. 4 and Fig. 5. The ambient temperature was 28.0 °C for case-1 and 30.5 °C for case-2. It is seen that the implemented DHS performs the sterilization cycle properly as per the instructions.

Table 2. Performance test of the implemented DHS for case-1.

Sl. no | Parameter | Set | Actual | Remarks
1 | Air circulation for balance | 45 s | 45 s | Internal setting
2 | Exhaust | 60 s | 60 s | Internal setting
3 | Heating | – | 950 s @ 40.0 °C | At actual
4 | Stabilization | 120 s | 120 s @ 40.3 °C | Internal setting
5 | Sterilization temperature | 40.0 °C | Max: 40.3 °C, Min: 40.0 °C | Operational data control
6 | Sterilization time | 300 s | 300 s | Operational data control
7 | Cooling temperature | 32.0 °C | 31.9 °C | Operational data control
8 | Cooling time | – | 1055 s | At actual
9 | Cooling extinction time | 300 s | 300 s | Internal setting
10 | Total cycle time | – | 2770 s | –
Fig. 4. Variation of chamber temperature with time for case-1.
Table 3. Performance test of the implemented DHS for case-2.

Sl. no | Parameter | Set | Actual | Remarks
1 | Air circulation for balance | 45 s | 45 s | Internal setting
2 | Exhaust | 60 s | 60 s | Internal setting
3 | Heating | – | 1637 s @ 55.0 °C | At actual
4 | Stabilization | 120 s | 120 s @ 55.3 °C | Internal setting
5 | Sterilization temperature | 55.0 °C | Max: 55.3 °C, Min: 54.8 °C | Operational data control
6 | Sterilization time | 180 s | 180 s | Operational data control
7 | Cooling temperature | 33.0 °C | 32.9 °C | Operational data control
8 | Cooling time | – | 2147 s | At actual
9 | Cooling extinction time | 300 s | 300 s | Internal setting
10 | Total cycle time | – | 4427 s | –
Fig. 5. Variation of chamber temperature with time for case-2.
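The temperature deviation quoted later in the conclusion can be checked directly from the sterilization rows of Tables 2 and 3: the worst excursion is 40.3 °C against a 40.0 °C set point. A minimal sketch of that check, with the set/actual extremes copied from the tables:

```python
# Maximum relative temperature deviation of the sterilization phase,
# from the set point and the measured Max/Min values in Tables 2 and 3.
cases = {
    "case-1": {"set_c": 40.0, "max_c": 40.3, "min_c": 40.0},
    "case-2": {"set_c": 55.0, "max_c": 55.3, "min_c": 54.8},
}

def max_deviation_pct(set_c: float, max_c: float, min_c: float) -> float:
    """Largest excursion from the set point, as a percentage."""
    worst = max(abs(max_c - set_c), abs(min_c - set_c))
    return round(100.0 * worst / set_c, 2)

for name, c in cases.items():
    print(name, max_deviation_pct(**c), "%")
# case-1 gives 0.75 %, the figure quoted in the conclusion
```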
4.1 Alarm Check of the DHS

To ensure the proper operation cycle as per the instructions and to operate the DHS in a safe mode, a proper protection system is incorporated into the implemented DHS. If any violation of the setup values or any dangerous situation occurs, the system raises an alarm. If the violation is severe, the protection system automatically shuts off the supply. All violations and dangerous situations are displayed on the HMI. The alarm messages and the corresponding remedial actions are given in Table 4. A breakdown list and remedial actions are also given in Table 5.
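Table 4 is essentially a lookup from alarm message to remedial action. In a monitoring script it could be represented as a plain mapping; the entries are transcribed from Table 4, while the dispatch function and its fallback text are illustrative additions:

```python
# Alarm -> remedial action lookup, transcribed from Table 4.
REMEDIES = {
    "Emergency stop": "Reset emergency switch",
    "Blower 1 overload": "Check safety device",
    "Blower 2 overload": "Check safety device",
    "Sterile door open": "Close the door or check the sensor function",
    "Non sterile door open": "Close the door or check the sensor function",
    "Low air pressure": "Make sure the air is available; increase the air pressure",
    "High temperature": "Reset",
    "Product temperature sensor broken": "Check sensor and PLC input",
    "Safety temperature sensor broken": "Check sensor and PLC input",
    "Sensor function off": "Replace the sensor or reconnect the wire",
}

def remedy(message: str) -> str:
    """Remedial action to show on the HMI for a given alarm message."""
    return REMEDIES.get(message, "Unknown alarm: consult the manual")

print(remedy("High temperature"))
```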
Table 4. Alarm list and remedy of the implemented DHS.

Sl. no | Message | Cause | Remedial action
1 | Emergency stop | Emergency switch is active | Reset emergency switch
2 | Blower 1 overload | Blower 1 tripped | Check safety device
3 | Blower 2 overload | Blower 2 tripped | Check safety device
4 | Sterile door open | Door is open or sensor missing | Close the door or check the sensor function
5 | Non sterile door open | Door is open or sensor missing | Close the door or check the sensor function
6 | Low air pressure | Compressed air not available or air pressure too low | Make sure the air is available; increase the air pressure
7 | High temperature | Temperature limit exceeded | Reset
8 | Product temperature sensor broken | Sensor broken or wire disconnected | Check sensor and PLC input
9 | Safety temperature sensor broken | Sensor broken or wire disconnected | Check sensor and PLC input
10 | Sensor function off | Sensor broken or wire disconnected | Replace the sensor or reconnect the wire
Table 5. Breakdown list and remedy of the implemented DHS.

Sl. no | Observations | Cause | Remedial action
1 | Machine does not start | Door is open; a component is tripped or damaged | Close the doors and check the door sensors; check all components
2 | Heater not ON | Inputs not available | Check alarms and take steps
3 | High temperature | Temperature sensor may be damaged; safety temperature sensor failure | Replace the sensor or reconnect
4 | Compressed air low | Compressed air not available or air pressure too low | Check air availability; increase the air pressure
5 | Long heating time | Heater not ON; blower not ON; sensor may be damaged | Check SSR and heater; check blower; check temperature sensor
5 Conclusion

A high temperature for a certain time is required to control or kill microorganisms. A DHS was implemented in this project to perform the sterilization. Locally available materials were used to implement the DHS; the cost of the machine is very low, around six hundred fifty US dollars. Two setups were chosen to test the performance of the implemented DHS. It is seen from the experimental results (Tables 2 and 3) that the implemented DHS performed well as per the setup operation cycle. The maximum deviation of temperature is found to be around 0.75 percent, and of time around 0.31 percent. The results of the completed cycles show that the implemented DHS works properly. In this project, wooden material was used to construct the body of the DHS due to constructional limitations; therefore, the test cycles were run at comparatively low temperatures. All parameters of the operation cycle of the DHS can be set through the HMI, so the implemented DHS is user-friendly. It is expected that the DHS implemented in this project can be used for oven purposes in pharmaceutical and other industries and also for experimental purposes. This project will be helpful for producing low-cost, custom-designed and user-friendly DHS units locally, which can be used in industry.
6 Recommendation for Future Work

Further research can be done using a high-temperature-resistant material, i.e. stainless steel (SS), to perform high-temperature sterilization. The air velocity of the blowers was not calculated due to the use of small blowers and the limitations of the measurement instruments. The rates of rise and fall of temperature were not measured in this project; further research can be done in this direction.
References
1. Pradeep, D., Rani, L.: Sterilization protocols in dentistry – a review. J. Pharm. Sci. Res. 8(6), 558 (2016)
2. Kalkotwar, R.S., Ahire, T.K., Jadhav, P.B., Salve, M.B.: Path finder process validation of dry heat sterilizer in parenteral manufacturing unit. Int. J. Pharm. Qual. Assur. 6(4), 100–108 (2015)
3. Purohit, I.K., Gupta, N.V.: Temperature mapping of hot air oven (dry heat sterilizer). J. Pharm. Res. 11(2), 120–123 (2017)
4. Sultana, D.Y.: Sterilization Methods and Principles. Faculty of Pharmacy, Jamia Hamdard, Hamdard Nagar, New Delhi-110062, pp. 1–4, 11 July 2007
5. Obayes, S.A.S.: Hot air oven for sterilization, definition and working principle. SSRN Electron. J. (2018)
6. Satarkar, S.S., Mankar, A.R.: Fabrication and analysis of autoclave sterilization machine. IOSR J. Eng. 05–08. ISSN (e): 2250-3021, ISSN (p): 2278-8719
7. Oyawale, F.A., Olaoye, A.E.: Design and construction of an autoclave. Pac. J. Sci. Technol. 8(2), 224–230 (2007)
Optimization of Parameters of Pre-sowing Seed Treatment in Magnetic Field

Volodymyr Kozyrsky(&), Vitaliy Savchenko, Oleksandr Sinyavsky, Andriy Nesvidomin, and Vasyl Bunko

National University of Life and Environmental Sciences of Ukraine, Street Heroiv Oborony, 15, Kiev 03041, Ukraine {epafort1,sinyavsky2008}@ukr.net, [email protected], [email protected], [email protected]

Abstract. The results of theoretical and experimental research on the change of seed biopotential during pre-sowing treatment in a magnetic field are presented. It is established that under the action of a magnetic field the speed of chemical and biochemical reactions in a plant cell increases, which causes a change in the biopotential. A method of determining the efficiency of pre-sowing seed treatment by the change in biopotential is substantiated. It is established that the main acting factors in magnetic seed treatment are magnetic induction, its gradient and the speed of seed movement in the magnetic field. The effect of magnetic treatment takes place at low energy doses (2.0–2.5 J·s/kg). The optimal mode of magnetic seed treatment is determined: magnetic induction 0.065 T with fourfold re-magnetization, magnetic field gradient 0.57 T/m and seed velocity in the magnetic field of 0.4 m/s.

Keywords: Magnetic field · Biopotential · Pre-sowing seed treatment · Magnetic induction · Speed of seed movement · Activation energy · Energy dose of treatment

1 Introduction

It is possible to increase production and improve the quality of crop products by stimulating seeds, using the biological potential of seed material and reducing crop losses from diseases and various types of pests. In the practice of agricultural production, the main method of increasing crop yields is the application of mineral fertilizers, and plants are protected from diseases and pests by chemical pesticides. But long-term use of mineral fertilizers and plant protection products leads to irreparable environmental damage. Therefore, there is a need to increase yields without the use of chemicals. From the point of view of obtaining environmentally friendly products, the greatest interest is presented by electrophysical factors affecting plants [1], among which a promising method is the use of a magnetic field for pre-sowing seed treatment. Unlike traditional pre-sowing seed treatment with chemicals and other electrophysical methods, it is a technological, energy-efficient method that causes no negative side effects on plants or staff and is an environmentally friendly type of treatment [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1222–1231, 2021. https://doi.org/10.1007/978-3-030-68154-8_104
Seed treatment in a magnetic field affects the physicochemical processes directly in the seed, which leads to biological stimulation: activation of metabolic processes, enhanced enzymatic activity and accelerated plant growth, which further increases productivity. The magnetic field also destroys fungi and microorganisms on the surface of the seeds, which reduces the incidence of plant disease.
2 Background

Many researchers have found a positive effect of the magnetic field on crop seeds, which is manifested in improved seed sowing qualities [3], plant biometrics and yields [4, 5], crop storage [6], reduced plant morbidity [7], and better biochemical indicators [8] and quality of plant products [9]. However, all studies were performed at different values of magnetic induction and treatment time (treatment dose), although it was found that crop yields and biometrics depend on the dose of magnetic treatment. As a result, many different processing modes have been proposed, which sometimes differ significantly from each other. Since the mechanisms of action of the magnetic field on seeds did not have a clear explanation, not all the acting factors in magnetic seed treatment, nor their optimal values, were established. Numerous studies suggest that seed treatment in a magnetic field may be an alternative to chemical methods of pre-sowing treatment [10]. For successful introduction of pre-sowing seed treatment in production, it is necessary to establish the mode parameters of processing and their optimal values. A common disadvantage of all existing methods of electromagnetic stimulation is the lack of instrumental determination of the treatment dose. Its optimal value is usually determined by yield, which largely depends on agro-climatic factors, soil fertility, the cultivation technology used and so on. Therefore, to determine the optimal modes of pre-sowing seed treatment in a magnetic field, it is necessary to develop a method of indicating its effect. The purpose of the study is to establish the influence of the magnetic field on the change of the biopotential of seeds of agricultural crops and to determine the optimal parameters of magnetic seed treatment.
3 Results of the Research

Various chemical and biochemical reactions take place in plant seeds, mainly redox reactions. Stimulation of seeds is associated with an increase in their rate, resulting in an increase in the concentration of reaction products:

dC_i = ω dt,   (1)

where C_i is the concentration of the substance, mol/l; ω is the rate of the chemical reaction, mol/(l·s); t is the time, s.
Under the action of a magnetic field, the rate of chemical reactions changes [11]:

ω_m = ω · exp[m(K²B² + 2KBv)N_a / (2RT)],   (2)

where ω is the rate of the chemical reaction without the action of a magnetic field, mol/(l·s); m is the mass of the ions, kg; B is the magnetic induction, T; v is the velocity of the ions, m/s; K is a coefficient that depends on the concentration and type of ions, as well as on the number of re-magnetizations, m/(s·T); N_a is Avogadro's number, molecules/mol; R is the universal gas constant, J/(mol·K); T is the temperature, K.

To study biological objects, Albert Szent-Györgyi introduced the concept of biopotential, which is related to the redox potential (ORP) by [8]:

BP = 820 − ORP,   (3)

where 820 mV is the energy potential of water.

The change in the redox potential of seeds during treatment in a magnetic field can be determined by the Nernst equation [8]:

ΔORP = 2.3 (RT/zF)(lg C₂ − lg C₁),   (4)

where z is the valence of the ion; F is the Faraday number, C/mol; C₁ is the concentration of ions before magnetic treatment, mol/l; C₂ is the concentration of ions after magnetic treatment, mol/l.

Taking into account (1),

ΔORP = 2.3 (RT/zF)(lg ω₂ − lg ω₁).   (5)

Substituting the expression for the rate of the chemical reaction (2) into Eq. (5), we obtain:

ΔORP = (m N_a K / zF)(KB²/2 + vB),   (6)

whence

ΔBP = (m N_a K / zF)(KB²/2 + vB).   (7)

Expression (7) can be written as

ΔBP = A₁B² + A₂Bv,   (8)
where A1 and A2 are the coefficients. The coefficients included in Eq. (8) cannot be determined analytically. They were determined on the basis of experimental data.
Experimental studies of the effect of the magnetic field on the seed biopotential were performed with pea seed “Adagumsky”, beans seed “Hrybovsky”, rye seed “Kharkivsky 98”, oats seed “Desnyansky”, barley seed “Solntsedar”, cucumber seed “Skvyrsky”, sunflower seed “Luxe”. The seeds were moved on a conveyor through a magnetic field created by a multipolar magnetic system based on an induction linear machine (Fig. 1) [12].
Fig. 1. Installation for pre-sowing treatment of seeds in a magnetic field: a – general view; b – functional diagram: 1 – load device; 2 – conveyor; 3 – textolite inserts; 4 – permanent magnets; 5 – plate made of electrical steel; 6 – object of processing; 7 – container
Magnetic induction was adjusted by changing the distance between the magnets and measured with a 43205/1 teslameter. The velocity of the seeds moving through the magnetic field was regulated by changing the rotation speed of the conveyor drive motor by means of a frequency converter. Seeds treated in the magnetic field were germinated and their ORP value was measured. A measuring electrode in the form of a platinum plate with a pointed end was developed to measure the ORP; the platinum electrode was inserted into the germinated seed, and a standard silver chloride electrode was used as an auxiliary electrode. The ORP of germinated seeds was determined using an И-160 M ionomer. The studies were performed using the experiment planning method; an orthogonal central-compositional plan was used [13]. The values of the upper, main and lower levels were taken for magnetic induction as 0, 0.065 and 0.13 T, respectively, and for seed velocity as 0.4, 0.6 and 0.8 m/s; the response was the biopotential of germinated seeds. As a result of the conducted research, it is established that in the range of magnetic induction from 0 to 0.065 T the seed biopotential increases, while at higher values of magnetic induction the biopotential decreases (Fig. 2). At magnetic induction exceeding 0.13 T, the seed biopotential no longer changes, but still exceeds its value for seed untreated in a magnetic field.
The seed biopotential during pre-sowing treatment in a magnetic field is also affected by the velocity of seed movement, but in the velocity range of 0.4–0.8 m/s it is a less significant factor than magnetic induction.
Fig. 2. Dependence of change of biopotential of oat seeds on magnetic induction and speed of movement of seeds
According to the results of the multifactorial experiment, a regression equation was obtained which relates the seed biopotential to the mode parameters of seed treatment in a magnetic field:

ΔBP = a₀ + a₁B + a₂v + a₁₂Bv + a₁₁B²,   (9)

where a₀, a₁, a₂, a₁₂, a₁₁ are coefficients whose values for different crops are shown in Table 1.
Table 1. Values of coefficients in the regression equation for seed biopotential.

Agricultural culture | a₀ | a₁ | a₂ | a₁₂ | a₁₁
Pea       | 0.47 | 906.9 | −0.42 | −121.8  | −5404
Bean      | 0.84 | 1006  | −1.53 | −147.44 | −5812
Rye       | −0.9 | 1621  | −0.83 | −282.1  | −9007
Oat       | 0.9  | 841.5 | −1.81 | −96.15  | −5536
Barley    | 5.5  | 1218  | −8.47 | −185.9  | −6746
Sunflower | 4.14 | 878.2 | −7.08 | −134.62 | −5641
Cucumber  | 4.12 | 1428  | −6.31 | −147.44 | −8218
The conducted research made it possible to determine the optimal treatment parameters by the method of steepest ascent. It is established that the optimal value of magnetic induction for seeds of agricultural crops is 0.065 T, and the optimal velocity is 0.4 m/s.
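Equation (9) with the Table 1 coefficients is straightforward to evaluate numerically. A sketch using the oat row; the ~29 mV it returns at the optimal mode (B = 0.065 T, v = 0.4 m/s) is of the same order as the ~30 mV biopotential change reported for oats later in Table 2:

```python
# Evaluate the regression (9): dBP = a0 + a1*B + a2*v + a12*B*v + a11*B^2,
# with the coefficients for oat taken from Table 1.
OAT = dict(a0=0.9, a1=841.5, a2=-1.81, a12=-96.15, a11=-5536.0)

def delta_bp(b_tesla: float, v_mps: float, c: dict) -> float:
    """Change in biopotential, mV, predicted by Eq. (9)."""
    return (c["a0"] + c["a1"] * b_tesla + c["a2"] * v_mps
            + c["a12"] * b_tesla * v_mps + c["a11"] * b_tesla ** 2)

print(round(delta_bp(0.065, 0.4, OAT), 1), "mV")  # ~29 mV at the optimal mode
```

The negative quadratic coefficient a₁₁ is what produces the maximum near 0.065 T: the predicted change drops again at higher induction, consistent with Fig. 2.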
The same optimal treatment regimes were established for magnetic treatment of water and salt solutions [11]. This confirms the hypothesis of a significant role of water in the mechanism of influence of the magnetic field on seeds.

The seed biopotential can determine not only the efficiency of pre-sowing treatment in a magnetic field, but also the change in activation energy. The expression for the rate of a chemical reaction in a magnetic field can be written as [14]:

ω_m = ω · exp[−(E + ΔE*)/kT],   (10)

where ΔE* is the change in activation energy, J/mol. Using dependence (5), we obtain:

ΔORP = ΔE*/(zF).   (11)

The change in biopotential is determined by the equation:

ΔBP = ΔE*/(zF).   (12)

Then

ΔE* = zF·ΔBP.   (13)
Thus, by formula (13) it is possible to determine the change in activation energy during seed treatment in a magnetic field from experimentally determined values of the change in biopotential. The experimental dependences of the change in activation energy on magnetic induction during seed treatment in a magnetic field are similar to those for the change in seed biopotential. The activation energy changes the most at a magnetic induction of 0.065 T and a velocity of 0.4 m/s; in this treatment mode, the activation energy changes by 2.4–5.7 kJ/g-eq (Table 2).
Table 2. Change in activation energy during pre-sowing seed treatment in a magnetic field.

Agricultural culture | Change in biopotential, mV | Change in activation energy, kJ/g-eq
Rye       | 59 | 5.69
Barley    | 50 | 4.82
Sunflower | 34 | 3.28
Oat       | 30 | 2.89
Pea       | 33 | 3.18
Bean      | 38 | 3.67
Cucumber  | 58 | 5.60
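Equation (13) can be checked against Table 2 directly: with z = 1 and the Faraday constant F = 96 485 C/mol, ΔE* = zF·ΔBP reproduces the kJ/g-eq column from the mV column. A minimal sketch of that check (the choice z = 1 is an assumption consistent with the tabulated values):

```python
# Verify Table 2: change in activation energy dE* = z*F*dBP (Eq. 13),
# with z = 1 and the Faraday constant F = 96485 C/mol.
F = 96485.0   # C/mol
Z = 1

table2 = {   # culture: change in biopotential, mV
    "Rye": 59, "Barley": 50, "Sunflower": 34, "Oat": 30,
    "Pea": 33, "Bean": 38, "Cucumber": 58,
}

def activation_energy_kj(delta_bp_mv: float) -> float:
    """dE* in kJ/g-eq from the biopotential change in mV."""
    return round(Z * F * delta_bp_mv * 1e-3 / 1e3, 2)  # mV -> V, J -> kJ

for culture, mv in table2.items():
    print(culture, activation_energy_kj(mv), "kJ/g-eq")
# Rye -> 5.69, Barley -> 4.82, ..., matching the Table 2 column
```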
As follows from Table 2, the change in activation energy during seed treatment in a magnetic field is less than the Van der Waals forces (10–20 kJ/mol), which characterize the intermolecular interaction and the interaction between dipoles. Thus, the magnetic field acts on the ions in the aqueous solution of the cell.

It is now established that the yield and biometric features of crops depend on the dose of magnetic treatment, regardless of the method of creating the magnetic field [15]. Experimental studies of the change in biopotential during seed treatment in a magnetic field made it possible to determine the energy dose of treatment. The energy dose of treatment is determined by the formula [15]:

D = ∫ (W/m) dt,   (14)

where W is the energy of the magnetic field, J; m is the seed mass, kg; t is the treatment time, s; or

D = ∫ B²/(2μμ₀ρ) dt,   (15)

where μ is the relative magnetic permeability; μ₀ is the magnetic constant, H/m; ρ is the seed density, kg/m³. Replacing dt with dl/v, we obtain:

D = ∫ B² dl/(2μμ₀ρv),   (16)

where l is the path length, m. When the seeds move in a gradient magnetic field, the magnetic induction changes along the conveyor belt (Fig. 3).
Fig. 3. Change of magnetic induction in the air gap along the conveyor belt
Optimization of Parameters of Pre-sowing Seed Treatment in Magnetic Field
1229
Using the dependence (Fig. 3), the integral in (16) is evaluated by the trapezoid method:

$$
\begin{aligned}
\int_0^L B^2\,dl
&= \int_0^{L/8}\Bigl(\tfrac{8B_m}{L}l\Bigr)^2 dl
 + \int_{L/8}^{3L/8}\Bigl(2B_m-\tfrac{8B_m}{L}l\Bigr)^2 dl
 + \int_{3L/8}^{5L/8}\Bigl(-4B_m+\tfrac{8B_m}{L}l\Bigr)^2 dl \\
&\quad + \int_{5L/8}^{7L/8}\Bigl(6B_m-\tfrac{8B_m}{L}l\Bigr)^2 dl
 + \int_{7L/8}^{L}\Bigl(-8B_m+\tfrac{8B_m}{L}l\Bigr)^2 dl \\
&= \frac{B_m^2 L}{24}+\frac{B_m^2 L}{12}+\frac{B_m^2 L}{12}+\frac{B_m^2 L}{12}+\frac{B_m^2 L}{24}
 = \frac{B_m^2 L}{3},
\end{aligned}
\qquad (17)
$$

where $B_m$ is the maximum value of the magnetic induction, which occurs in the plane of installation of the magnets, T; L is the path that the seed passes in the magnetic field, m. Then the energy dose of treatment is

$$D = \frac{B_m^2 L}{6\mu\mu_0\rho v}, \qquad (18)$$

or

$$D = \frac{B_m^2 n\tau}{6\mu\mu_0\rho v}, \qquad (19)$$
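The closed form $B_m^2 L/3$ in (17) can be verified numerically by integrating $B^2(l)$ over the piecewise-linear induction profile of Fig. 3. The sketch below uses illustrative values for $B_m$ and $L$ (these are not plant parameters from the paper):

```python
# Numerical check of (17): integrate B^2 along the piecewise-linear
# induction profile of Fig. 3 and compare with the closed form Bm^2*L/3.
# Bm and L below are illustrative values only.

def induction(l, Bm, L):
    """Triangular B(l): 0 -> +Bm -> -Bm -> +Bm -> -Bm -> 0 over [0, L]."""
    x = l / L
    if x <= 1/8:
        return 8 * Bm * x
    if x <= 3/8:
        return 2 * Bm - 8 * Bm * x
    if x <= 5/8:
        return -4 * Bm + 8 * Bm * x
    if x <= 7/8:
        return 6 * Bm - 8 * Bm * x
    return -8 * Bm + 8 * Bm * x

Bm, L, n = 0.065, 0.92, 80_000   # T, m, integration steps (n divisible by 8)
h = L / n
# Trapezoidal rule over B^2
total = sum(0.5 * h * (induction(i*h, Bm, L)**2 + induction((i+1)*h, Bm, L)**2)
            for i in range(n))
closed_form = Bm**2 * L / 3
print(total, closed_form)   # the two values agree to numerical precision
```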
where n is the number of re-magnetizations and $\tau$ is the pole division, m. The formula for the energy dose of treatment (19) contains all the mode parameters of seed treatment in a magnetic field (magnetic induction, seed velocity, number of re-magnetizations, pole division). Studies of the changes in the biopotential of germinated seeds during magnetic treatment made it possible to determine the energy dose of treatment from the corresponding values of magnetic induction and seed velocity according to formula (18). The interrelation between the energy dose of treatment and the biopotential of seeds has been established. The dependence of the change in the biopotential of germinated seeds on the energy dose of treatment is shown in Fig. 4. As follows from this dependence, the optimal value of the energy dose of treatment is 3.8 J·s/kg for sunflower seeds, 1.86 J·s/kg for rye, 2.8 J·s/kg for oats, 1.9 J·s/kg for peas, 2.22 J·s/kg for beans, 2.02 J·s/kg for cucumbers, and 2.22 J·s/kg for barley. From the condition of providing the optimal energy dose of treatment, the value of the pole division is determined as 0.23 m, at which the magnetic field gradient is 0.57 T/m.
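As a numerical illustration of formula (19): with the optimal mode parameters quoted above ($B_m$ = 0.065 T, n = 4, $\tau$ = 0.23 m, v = 0.4 m/s), $\mu \approx 1$, and an assumed bulk seed density of about 700 kg/m³ (the density is not stated in this excerpt; it is our assumption), the dose falls inside the reported optimal window of 1.7–3.8 J·s/kg:

```python
import math

# Energy dose of treatment, formula (19): D = Bm^2 * n * tau / (6*mu*mu0*rho*v)
mu0 = 4 * math.pi * 1e-7   # magnetic constant, H/m
mu = 1.0                   # relative permeability of the seed layer (assumed ~1)
Bm = 0.065                 # maximum magnetic induction, T
n = 4                      # number of re-magnetizations
tau = 0.23                 # pole division, m
v = 0.4                    # seed velocity, m/s
rho = 700.0                # bulk seed density, kg/m^3 (assumed value)

D = Bm**2 * n * tau / (6 * mu * mu0 * rho * v)
print(f"D = {D:.2f} J*s/kg")   # within the reported 1.7-3.8 J*s/kg window
```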
Fig. 4. The dependence of the change in the biopotential of cucumber (1) and barley (2) seeds on the energy dose of treatment in a magnetic field
4 Conclusion

The change in the seed biopotential during pre-sowing treatment in a magnetic field depends on the square of the magnetic induction and on the velocity of the seeds moving in the magnetic field. By measuring the biopotential, it is possible to determine the optimal treatment mode, which occurs at a magnetic induction of 0.065 T, fourfold re-magnetization, a pole division of 0.23 m and a seed velocity in the magnetic field of 0.4 m/s. The change in seed biopotential depends on the energy dose of treatment in the magnetic field. The greatest seed biopotential during pre-sowing treatment in a magnetic field was observed at an energy dose of 1.7–3.8 J·s/kg. The change in activation energy during seed treatment in a magnetic field is directly proportional to the change in biopotential and, at the optimal treatment mode, is 2.4–5.7 kJ/g-eq.
References
1. Vasilyev, A., Vasilyev, A., Dzhanibekov, A., Samarin, G., Normov, D.: Theoretical and experimental research on pre-sowing seed treatment. In: IOP Conference Series: Materials Science and Engineering, vol. 791, 012078 (2020). https://doi.org/10.1088/1757-899x/791/1/012078
2. Kutis, S.D., Kutis, T.L.: Elektromagnitnyye tekhnologii v rasteniyevodstve. 1. Elektromagnitnaya obrabotka semyan i posadochnogo materiala [Electromagnetic technologies in crop production. Part 1. Electromagnetic treatment of seeds and planting material]. Ridero, Moscow, p. 49 (2017)
3. Ülgen, C., Birinci Yildirim, A., Uçar Türker, A.: Effect of magnetic field treatments on seed germination of Melissa officinalis L. Int. J. Sec. Metabolite 4(3), 43–49 (2017)
4. Kataria, S., Baghel, L., Guruprasad, K.N.: Pre-treatment of seeds with static magnetic field improves germination and early growth characteristics under salt stress in maize and soybean. Biocatal. Agr. Biotechnol. 10, 83–90 (2017)
5. Maffei, M.E.: Magnetic field effects on plant growth, development, and evolution. Front. Plant Sci. 5, 445 (2014)
6. Lysakov, A.A., Ivanov, R.V.: Vliyaniye magnitnogo polya na sokhrannost' kartofelya [Influence of the magnetic field on the preservation of potatoes]. Adv. Modern Natural Sci. 8, 103–106 (2014)
7. De Souza, A., Sueiro, L., Garcia, D., Porras, E.: Extremely low frequency non-uniform magnetic fields improve tomato seed germination and early seedling growth. Seed Sci. Technol. 38, 61–72 (2010)
8. Iqbal, M., ul Haq, Z., Jamil, Y., Nisar, J.: Pre-sowing seed magnetic field treatment influence on germination, seedling growth and enzymatic activities of melon (Cucumis melo L.). Biocatal. Agr. Biotechnol. 6, 176–183 (2016)
9. Ramalingam, R.: Seed pretreatment with magnetic field alters the storage proteins and lipid profiles in harvested soybean seeds. Physiol. Mol. Biol. Plants 24(2), 343–347 (2018)
10. Stange, B.C., Rowlans, R.E., Rapley, B.I., Podd, J.V.: ELF magnetic fields increase amino acid uptake into Vicia faba L. roots and alter ion movement across the plasma membrane. Bioelectromagnetics 23, 347–354 (2002)
11. Zablodskiy, M., Kozyrskyi, V., Zhyltsov, A., Savchenko, V., Sinyavsky, O., Spodoba, M., Klendiy, P., Klendiy, G.: Electrochemical characteristics of the substrate based on animal excrement during methanogenesis with the influence of a magnetic field. In: Proceedings of the 40th International Conference on Electronics and Nanotechnology, ELNANO, pp. 530–535 (2020)
12. Sinyavsky, O., Savchenko, V., Dudnyk, A.: Development and analysis methods of transporter electric drive for electrotechnological complex of crop seed presowing by electromagnetic field.
In: 2019 IEEE 20th International Conference on Computational Problems of Electrical Engineering (CPEE), pp. 1–6 (2019)
13. Adler, Yu.P., Markova, E.V., Granovskiy, Yu.V.: Planirovaniye eksperimenta pri poiske optimal'nykh usloviy [Planning an experiment when searching for optimal conditions]. Science, Moscow, p. 278 (1976)
14. Kozyrskyi, V., Savchenko, V., Sinyavsky, O.: Presowing processing of seeds in magnetic field. In: Handbook of Research on Renewable Energy and Electric Resources for Sustainable Rural Development, IGI Global, USA, pp. 576–620 (2018)
15. Pietruszewski, S., Martínez, E.: Magnetic field as a method of improving the quality of sowing material: a review. Int. Agrophys. 29, 377–389 (2015)
Development of a Fast Response Combustion Performance Monitoring, Prediction, and Optimization Tool for Power Plants

Mohammad Nurizat Rahman, Noor Akma Watie Binti Mohd Noor, Ahmad Zulazlan Shah b. Zulkifli, and Mohd Shiraz Aris

TNB Research Sdn Bhd, 43000 Kajang, Malaysia
{nurizat.rahman,akma.noor}@tnb.com.my
Abstract. Combustion performance monitoring is a challenging task due to insufficient post-combustion data and insights at critical furnace areas. Post-combustion insights are valuable because they reflect the plant's efficiency and reliability and are used in boiler tuning programmes. Boiler tuning, which is scheduled after all the preventive maintenance suggested by the boiler manufacturer, cannot address the operational issues faced by plant operators. A system-level digital twin incorporating both computational fluid dynamics (CFD) and machine learning modules is proposed in the current study. The proposed tool could act as a combustion monitoring system that diagnoses and pinpoints boiler problems and supports troubleshooting to reduce maintenance time and optimize operations. The system recommends operating parameters for different coal types and furnace conditions. The tool can be used as a guideline in daily operation monitoring/optimization and in risk assessments of new coals. The current paper discusses the general architecture of the proposed tool and some preliminary results based on the plant's historical data.

Keywords: Coal-fired boiler · Combustion tuning · Optimization
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1232–1241, 2021. https://doi.org/10.1007/978-3-030-68154-8_105

1 Introduction

The coal-fired furnace is designed so that a sufficient amount of heat is transferred from the flames to the heat exchanger tubes [1]. This must be done to ensure that the steam entering the turbine is at the specified temperature. At the same time, the plant operators need to monitor the gas temperature so that it does not exceed the design limit at the furnace exit. A higher gas temperature after the furnace zone will not only damage the heat exchanger tubes but also lead to ash deposition problems and environmental issues [2]. At the moment, it is challenging for plant operators to maintain plant availability and efficiency due to the wide variety of coal properties [3]. The properties of the coal received today will not be the same as those of the coal received next month, which results in different combustion behavior. Furthermore, operators do not have sufficient information about the combustion
inside the boiler; they can only evaluate the combustion behavior by looking at certain parameters, e.g. carbon monoxide (CO) emission and gas temperature [4]. Insufficient post-combustion data and insights from critical furnace areas pose a challenge for combustion performance monitoring [4]. The inability to predict boiler performance will increase the operating cost due to higher coal consumption and poor reliability that may jeopardize plant operation [5]. There has been an increasing trend in power plant operational disruption across the globe, driven by issues such as high furnace exit gas temperature (FEGT), increased emission levels, and ash deposition [5]. In recent years, technologies for monitoring the quality of combustion in utility boilers have been developed through online spatial measurements of oxygen and fuel [4]. The measurements are used as indications to balance the flow of air and fuel and, subsequently, the combustion performance. While experienced performance engineers can relate these measurements to the combustion's characteristics and adjust the boiler to improve combustion, combustion tuning is still primarily a trial-and-error procedure, and even a highly experienced performance engineer could take a number of steps to optimize the boiler performance. Computational Fluid Dynamics (CFD) modelling has often been used for combustion and flow field characterization [6, 7]. One cannot deny the capability of CFD modelling to facilitate the combustion monitoring process by providing a visualization of the combustion characteristics. Rousseau et al. (2020) [8] developed a CFD model to predict the combustion behaviour of coals in coal-fired utility boilers. They also demonstrated the viability of the computational approach as an effective tool for coal burning optimization in full-scale utility boilers.
General Electric (GE) Power Services has also suggested efficiency improvements for coal-fired boilers guided by CFD approaches, via flow characterization and combustion optimization [4]. Via CFD, the spatial visualization of combustibles in the flue gas and of temperature, as well as the potential areas for slagging and fouling, can be obtained, which assists the combustion tuning process in achieving the required combustion quality. Nonetheless, coal power plants, which are designed to operate within a certain range of coal properties, would require CFD modelling to be performed multiple times depending on the properties of the coal. To avoid excessive calculation times, a CFD-based technique, namely Reduced Order Modelling (ROM) [9], can be used to enable continuous monitoring through dynamic data simulation, allowing multiple simulations to be conducted at one time. Moreover, a system-level digital twin [4] can be generated by integrating both CFD ROM and machine learning modules to assist not only in combustion monitoring but also in predicting upcoming combustion anomalies and providing guidance that allows proactive troubleshooting to eliminate potential issues. With the recent surge in the popularity of machine learning, several publications have tested the capability of machine learning for optimization purposes, and the outcomes were found to be beneficial for digital twin use [10, 11]. The current paper discusses the general workflow of the proposed tool to provide a guideline for optimizing combustion performance in thermal power plants, specifically the coal-fired power plant where the preliminary study was carried out based on the plant's historical data and boiler geometry.
2 Methodology

2.1 Overview of the Proposed Application Architecture
The tool is based on a system-level digital twin which incorporates both CFD and machine learning modules to provide a virtual model of combustion performance in a boiler. It allows analysis of data and monitoring of combustion behavior to head off anomalies (out-of-range temperatures and pollutants) before they occur. Subsequently, operators can check and apply corrective actions on the twin model platform and monitor the predicted outputs prior to applying the corrective action on the actual boiler. Figure 1 shows the general application architecture of the tool.
Fig. 1. General application architecture.
Referring to Fig. 1, smart components in an actual boiler system which use sensors to gather real-time status of the boiler operation will be integrated with the commercial industrial connectivity platform in order to bridge the communication gap between the
sensors in the boiler system and the twin model platform. Once the real-time data has passed through the platform, it is sent to the twin model, which acts as the monitoring platform to oversee the real-time parameter status and the predicted output status and to provide anomaly alert notifications. The twin model also acts as a bridge between the CFD and machine learning modules. The inputs are the parameters which the operators have the flexibility to control in order to optimize the combustion performance in the boiler. The outputs are the outcomes which the operators need to keep within acceptable limits. There was a prior engagement with the power plant where the study was held in order to get insights from the plant personnel on the important combustion parameters which could massively affect the plant's reliability. Afterwards, the list of output parameters was determined, which includes the FEGT, the flue gas concentration, and the rear pass temperature. These three parameters are common criteria for measuring combustion performance, as negligence in monitoring and tuning them increases the likelihood of unplanned outages and additional maintenance cost due to the formation of slagging and fouling, along with emissions surpassing the acceptable limit. The machine learning module acts as a "brain" where the machine learning happens based on the incoming real-time data. Prior to the prediction stage, the raw historical data (from January 2020 to August 2020) from the plant's sensors was cleaned and underwent dimensionality reduction along with parameter extraction. The data was cleaned to remove outliers, since there were periods when the plant experienced several outages. The raw historical data has a large number of parameters, which may not all be effective, since some parameters might be redundant and would cause longer processing times.
Moreover, several parameters do not contribute much to the specified outputs. The raw historical data contains a total of 80 parameters. To determine the optimum number of parameters, the data was trained using several parameter counts: 70, 30, 25, 20, and 10. Based on the Root Mean Square Error (RMSE) and the Pearson correlation, the chosen number of parameters is 30, as it gave the lowest RMSE and the highest correlation compared with the other tested parameter counts. Once the validation of the analytics model was done, the output prediction was executed, where the twin model platform, as mentioned before, acts as a bridge between the predicted output from the analytics model and the CFD ROM database. The CFD ROM database receives the real-time data from the plant's sensors along with the predicted outputs from the analytics model to visualize the real-time combustion behavior (5-min interval) and the predicted combustion behavior ahead of time. From the prediction of combustion visualization in the CFD ROM interface, the operators have the flexibility to check and apply corrective actions to the inputs of the boiler twin model prior to applying them on the actual boiler.

2.2 Computational Fluid Dynamics (CFD) Setup
CFD simulations for a coal-fired boiler need a complete dataset to determine the input conditions. However, since the current work is still in the preliminary stage, the number
of controlled inputs is reduced to four: the flowrates of primary air (PA), secondary air (SA), over-fire air (OFA), and fuel. The boiler under study is a 1000-MW boiler with an opposed wall-firing configuration, see Fig. 2(a). The baseline case used the flowrates given in the operator pocket book. The resulting FEGT from the CFD simulation was compared with the design value of FEGT from the pocket book; see Fig. 2(b) for the visualization of FEGT.
Fig. 2. A 1000 MW opposed-firing boiler; (a) Meshing model and (b) predicted FEGT.
Several assumptions were made on the operating conditions applied to the CFD model to balance the efficiency and the accuracy of the model. The air and fuel flow rates for the individual burners and OFA ports are assumed to be identical. The swirl effect from the burners is also neglected. On the numerical side, several sub-models are implemented to characterize the coal combustion behavior, radiation, gaseous reaction, and turbulence, all of which are used to dictate the source terms for the mass, momentum, energy, and species governing equations. The incoming coal particles were tracked based on the Lagrangian scheme, considering the turbulent dispersion of particles [12]. The distribution of particle sizes was based on the Rosin–Rammler distribution function, calculated from the fineness test given by the plant's operators. For the devolatilisation of the coal, FG-DVC, an advanced coal network model, was used to get the volatiles composition along with the respective rate constants [13]. The rate of devolatilisation was computed based on a single Arrhenius equation model with an activation energy of 33.1 MJ/kmol and a pre-exponential factor of 5799 s−1 [12].
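As an illustration of the single-rate model: the devolatilisation rate constant follows k(T) = A·exp(−E/(R_u·T)) with the quoted A = 5799 s⁻¹ and E = 33.1 MJ/kmol. The particle temperatures used below are arbitrary illustrative values, not conditions taken from the paper's boiler.

```python
import math

# Single-rate Arrhenius devolatilisation constant, k(T) = A * exp(-E / (Ru*T)),
# using the pre-exponential factor and activation energy quoted in the text.
A = 5799.0       # pre-exponential factor, 1/s
E = 33.1e6       # activation energy, J/kmol
Ru = 8314.0      # universal gas constant, J/(kmol*K)

def devol_rate(T):
    """Devolatilisation rate constant at particle temperature T (K)."""
    return A * math.exp(-E / (Ru * T))

# Illustrative particle temperatures (assumed, not plant data)
for T in (800.0, 1200.0, 1600.0):
    print(f"T = {T:6.0f} K  ->  k = {devol_rate(T):8.2f} 1/s")
```

The rate rises steeply with particle temperature, which is why devolatilisation is effectively completed close to the burners.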
The volatiles reactions were solved by applying the kinetic rate/eddy-dissipation model. The reaction steps are from the Jones and Lindstedt mechanism for hydrocarbon gases, and the rate constants for tar are based on Smooth and Smith [12]. The turbulence was solved using the SST k–ω model, which was found to achieve better convergence due to its capability to effectively blend the robust and accurate formulation of the k–ω model in the near-wall region [14]. The radiative heat transfer from the coal combustion was resolved via the discrete ordinates (DO) model. The DO model is widely used for reacting flow modelling due to its compatibility with CFD approaches, both being based on a finite volume approach [15]. For the simulations, ANSYS Fluent (version R19.1) was used, with user-defined functions (UDFs) built for the devolatilization. The mesh for the boiler's domain was constructed using 1.2 million hexahedral cells. To reduce the mesh count, the reheater and superheater panels were simplified as a number of thin walls. The boiler water wall membrane, which has both convective and conductive heat transfer, was assigned an overall heat transfer coefficient of 500 W/m²·K and an emissivity of 0.7 [12]. The superheater and reheater panels were assumed to have an overall heat transfer coefficient of 3000 W/m²·K.

2.3 Reduced Order Modelling (ROM)
While CFD is a powerful platform that can generate a huge amount of information on the combustion behavior in coal-fired boilers, simulation run times can be long due to the vast computational resources required. Hence, the implementation of a full CFD model in the digital twin platform is highly impractical, as it could not visualize and predict the real-time data from the plant's operation. As a countermeasure, a ROM approach can be used to supplement the CFD simulations by quickly estimating and visualizing the aforementioned outputs based on the inputs from the power plant sensors and the machine learning model [9]. The ROM for the current study was created by advanced mathematical methods which combine three-dimensional solver result snapshots from a set of design inputs [9]. The location of the result snapshot is at the furnace exit, as shown in Fig. 2(b). The ROM was produced from several CFD simulations within the specified range of design inputs. Even though the ROM production for the current study was computationally expensive, the final ROM database can be utilized at negligible computational cost, with the capability for near real-time analysis [9].

2.4 Machine Learning
For the data pre-processing, feature selection was done to reduce the dimensionality of the data by selecting only a subset of measured features (predictor variables) to create a model. Feature selection algorithms search for a subset of predictors that optimally models the measured responses, subject to constraints such as required or excluded features and the size of the subset. The major benefits of feature selection include improved prediction performance, faster and more cost-effective predictors, and a better understanding of the data generation process [16].
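A minimal sketch of correlation-based feature filtering of the kind described is shown below. The tag names, data values, and the 0.3 cutoff are all invented for illustration; the plant's actual 80 tags and the threshold used in the study are not listed in the paper.

```python
import math
import random

random.seed(0)
n = 500

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy stand-in for the plant historian export; tag names/values invented.
coal_flow = [random.gauss(50, 5) for _ in range(n)]
ofa_flow = [random.gauss(20, 2) for _ in range(n)]
noise_tag = [random.gauss(0, 1) for _ in range(n)]   # irrelevant tag
fegt = [12 * c - 15 * o + random.gauss(0, 10) for c, o in zip(coal_flow, ofa_flow)]

features = {"coal_flow": coal_flow, "ofa_flow": ofa_flow, "noise_tag": noise_tag}
corr = {name: abs(pearson(vals, fegt)) for name, vals in features.items()}

# Keep predictors whose |correlation| with the response exceeds a threshold
# (the 0.3 cutoff is an assumption for illustration).
selected = [name for name, c in corr.items() if c > 0.3]
print(sorted(selected))   # only the physically relevant tags survive
```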
Using too many features could degrade the prediction performance even when all features are relevant and contain information about the response variable. In the current scenario, a total of 80 parameters were collected and analyzed by observing the correlation matrix, and the parameters were reduced to 30 relevant parameters. The MATLAB Regression Learner app was used to train the machine learning models. The software trains several regression models, including linear regression models, regression trees, Gaussian process regression models, support vector machines, and ensembles of regression trees. It can automatically train one or more regression models, compare validation results, and choose the best model for the regression problem.
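The paper uses MATLAB's Regression Learner app for this compare-by-RMSE workflow. The same idea can be sketched in plain Python: train several candidate models on a held-out split, compute each one's test RMSE, and keep the best. The two models and the toy data below are stand-ins, not the plant's data or the app's model set.

```python
import math
import random

random.seed(1)

# Toy data: one input tag and a response (values invented for illustration).
xs = [random.uniform(0, 10) for _ in range(400)]
ys = [3.0 * x + 5.0 + random.gauss(0, 2) for x in xs]
x_train, y_train = xs[:300], ys[:300]
x_test, y_test = xs[300:], ys[300:]

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

# Model 1: predict the training mean (baseline).
mean_y = sum(y_train) / len(y_train)
rmse_mean = rmse([mean_y] * len(y_test), y_test)

# Model 2: one-variable ordinary least squares.
mx = sum(x_train) / len(x_train)
my = sum(y_train) / len(y_train)
slope = (sum((x - mx) * (y - my) for x, y in zip(x_train, y_train))
         / sum((x - mx) ** 2 for x in x_train))
intercept = my - slope * mx
rmse_ols = rmse([slope * x + intercept for x in x_test], y_test)

# Pick the model with the lowest test RMSE, as in the paper's workflow.
results = {"mean baseline": rmse_mean, "linear regression": rmse_ols}
best = min(results, key=results.get)
print(results, "->", best)
```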
3 Results and Discussion

3.1 CFD ROM Capabilities
A number of simulation cases were executed using the ROM, and the FEGT result from one of the simulations was then compared with the design value of FEGT from the operator pocket book for a similar combustion scenario. An error of less than 7 percent was detected, demonstrating the capability of the current CFD ROM model to reasonably predict the combustion behavior in the boiler. The real-time capabilities of the CFD ROM model were also tested, and the model managed to display the output almost instantaneously. Figure 3 shows two examples of FEGT results from different operating conditions of the boiler.
Fig. 3. FEGT results from (a) the design and (b) the reduced over-fire air (OFA) conditions.
The reduced OFA scenario in Fig. 3 predicts an imbalance of the temperature distribution along with excessive temperature at the furnace exit area. A higher gas temperature after the furnace zone would not only affect the heat exchanger tubes but also cause ash deposition issues and environmental problems [2].

3.2 Machine Learning Capabilities
Table 1 shows the RMSE for each algorithm. The Fine Tree algorithm shows the lowest RMSE, i.e. the highest prediction accuracy. The linear regression model shows the highest RMSE, as the method has too simple a functional form to model data of medium complexity. The Support Vector Machine (SVM) algorithms are in the middle range of error; SVMs are more often used for classification.

Table 1. RMSE values for each algorithm.

Machine learning algorithm    RMSE
Linear Regression             103.99
Stepwise Linear Regression    80.15
Fine Tree                     4.53
Coarse Tree                   8.65
Linear SVM                    40.23
Fine Gaussian SVM             36.78
Ensemble Bagged Trees         12.53
Gaussian Process Regression   25.74
Figure 4 shows the predicted and the actual FEGT data from the historical data test set. With predictions available ahead of the provided time-frame, plant
Fig. 4. Prediction vs actual outputs of FEGT over January–August 2020 (an unplanned outage period is marked in the figure).
operators will have the capability to manage their work plan to avoid unwanted conditions.
4 Conclusion

A system-level digital twin of combustion performance in a coal-fired boiler, integrating both CFD and machine learning models, is proposed in the current study. The validation of both the CFD ROM and machine learning models was done based on operating data from the coal power plant under study, and acceptable errors were found in both models. As the current study was mainly focused on the feasibility of the proposed tool, a well-integrated digital twin system tested in a power plant is the next step. The architecture of the proposed tool has shown major potential for a learning-based model to be integrated into boiler operation, to assist not only in the boiler tuning process but also in maintaining the reliability of the boiler system in the long run.
References
1. Speight, J.G.: Coal-Fired Power Generation Handbook, 1st edn. Scrivener Publishing LLC, Massachusetts (2013)
2. Beckmann, A.M., Mancini, M., Weber, R., Seebold, S., Muller, M.: Measurements and CFD modelling of a pulverized coal flame with emphasis on ash deposition. Fuel 167, 168–179 (2016)
3. Mat Zaid, M.Z.S., Wahid, M.A., Mailah, M., Mazlan, M.A.: Coal combustion analysis tool in coal fired power plant for slagging and fouling guidelines. In: The 10th International Meeting of Advances in Thermofluids 2018, vol. 2062, AIP Conference Proceedings (2019)
4. Zhou, W.: Coal fired boiler flow characterization, combustion optimization and efficiency improvement guided by computational fluid dynamics (CFD) modeling. ResearchGate (2017)
5. Achieving Better Coal Plant Efficiency and Emissions Control with Digital. GE (2017)
6. Laubscher, R., Rousseau, P.: Coupled simulation and validation of a utility-scale pulverized coal-fired boiler radiant final-stage superheater. Thermal Sci. Eng. Progress 18, 100512 (2020)
7. Belosevic, S., Tomanovic, I., Crnomarkovic, N., Milicevic, A.: Full-scale CFD investigation of gas-particle flow, interactions and combustion in tangentially fired pulverized coal furnace. Energy 179, 1036–1053 (2019)
8. Rousseau, P., Laubscher, R.: Analysis of the impact of coal quality on the heat transfer distribution in a high-ash pulverized coal boiler using co-simulation. Energy 198, 117343 (2020)
9. Rowan, S.L., Celik, I., Gutierrez, A.D., Vargas, J.E.: A reduced order model for the design of oxy-coal combustion systems. J. Combustion 2015, 1–9 (2015)
10. Zhao, Y.: Optimization of thermal efficiency and unburned carbon in fly ash of coal-fired utility boiler via grey wolf optimizer algorithm. IEEE Access 7, 114414–114425 (2019)
11. Sangram, B.S., Jagannath, L.M.: Modeling and optimizing boiler design using neural network and firefly algorithm. J. Intell. Syst. 27, 393–412 (2018)
12. Yang, J.-H., Kim, J.-E.A., Hong, J., Kim, M., Ryu, C., Kim, Y.J., Park, H.Y., Baek, S.H.: Effects of detailed operating parameters on combustion in two 500-MWe coal-fired boilers of an identical design. Fuel 144, 145–156 (2015)
13. Czajka, K.M., Modlinski, N., Kisiela-Czajka, A.M., Naidoo, R., Peta, S., Nyangwa, B.: Volatile matter release from coal at different heating rates – experimental study and kinetic modelling. J. Anal. Appl. Pyrol. 139, 282–290 (2019)
14. Yeoh, G.H., Yuen, K.K.: Computational Fluid Dynamics in Fire Engineering: Theory, Modelling and Practice, 1st edn. Butterworth-Heinemann, USA (2009)
15. Joseph, D., Benedicte, C.: Discrete ordinates and Monte Carlo methods for radiative transfer simulation applied to CFD combustion modelling. ResearchGate (2009)
16. Liu, H.: Encyclopedia of Machine Learning. Springer, Boston (2010)
Industry 4.0 Approaches for Supply Chains Facing COVID-19: A Brief Literature Review

Samuel Reong¹, Hui-Ming Wee¹, Yu-Lin Hsiao¹, and Chin Yee Whah²

¹ Industrial and Systems Engineering Department, Chung Yuan Christian University, Taoyuan City 320, Taiwan
[email protected]
² School of Social Sciences, Universiti Sains Malaysia, 11800 Gelugor, Penang, Malaysia
Abstract. The widespread disruptions of the COVID-19 pandemic to the performance and planning of global supply chains have become a matter of international concern. While some key supply chains are tasked with the prevention and eventual treatment of the virus, other commercial supply chains must also adapt to issues of shortages, uncertain demand and supplier reselection. This paper provides a brief literature survey of current Industry 4.0 solutions pertinent to COVID-19, and also identifies the characteristics of successful supply chain responses to the pandemic. In this investigation, it is found that differing technology-enabled supply chain strategies are required for the pre-disruption, disruption, and post-disruption phases. Furthermore, a comparison of supply chain success in several nations suggests a need for data transparency, public-private partnerships, and AI tools for effective manufacturing implementation.

Keywords: SCM · Manufacturing · Industry 4.0 · COVID-19
1 Introduction

The onset of COVID-19 in 2020 has detrimentally impacted countless lives and industries across the globe. In terms of manufacturing and supply chain management, global supply chains (networks of suppliers, manufacturers, retailers and distributors) that had adopted lean manufacturing practices with streamlined inventories suddenly found themselves crippled by shortages generated by lockdowns in China and other Southeast Asian supplier countries. Shocks generated by the pandemic, such as overcompensated ordering, lack of information sharing, and lack of collaboration between the private and public sectors, have forced various industries to seek alternative solutions in order to survive. Many researchers, however, have also identified these forced changes as opportunities for growth. While each industry experiences various difficulties due to inadequate visibility from upstream to downstream channels, many of the available solution methods are shared. Prediction models used in conjunction with automation and modelling techniques allow many manufacturing
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1242–1251, 2021. https://doi.org/10.1007/978-3-030-68154-8_106
Industry 4.0 Approaches for Supply Chains Facing COVID-19
1243
factories to continue operations, and flexible supplier selection paired with collaborative geographical tracking systems would have allowed manufacturers to predict disruptions and adjust their planning areas around impacted areas. In a few cases, some of these digital solutions were preemptively prepared, which enabled supply chains in these countries to continue operations. Such scenarios have led to a future outlook of “resilient” supply chains, in which sourcing and manufacturing capacities are automatically prepared for ahead of time across a variety of international channels, proceeding regardless of the situation at hand. It has also been proposed that most AIdriven technologies and digital platforms used to solve issues created by the pandemic would also strengthen the resiliency of their manufacturing networks against similar, future disruptions.
2 Current Enabling Technologies

Javaid et al. [1] provide insight into technologies that present opportunities for supply chains seeking to adopt Industry 4.0 practices: cloud computing services such as Amazon AWS, Azure, and Google Cloud reduce operating costs and increase efficiency. This is apparent through close examination of supply chain failures that gave rise to a new class of manufacturing technologies using self-learning AI, which researchers hope will eventually enable smart supply chains to operate autonomously. The authors maintain that Big Data methods can forecast the extent of COVID-19, while AI can be used to predict and manage equipment manufacturing. Ivanov [2] demonstrated that simulations assessing the supply chain effects of COVID-19 could capture the effects of the pandemic across multiple supply chain levels. Furthermore, while tracing diseases across geographic locales was impossible before 2010, researchers such as Dong et al. [3] at Johns Hopkins University have harnessed Geographic Information Systems (GIS) alongside data mining and machine learning techniques to geographically trace the spread of the COVID-19 pandemic, allowing for preemptive preparation. As detection techniques in other systems were used to isolate the spread of the disease in localized areas, the spread of the pandemic was tracked using live, real-time methods, with immediate and automated alerts sent to key professionals. Wuest et al. [4] suggest further that the pandemic provides strong justification for industries to adopt an "AI-Inspired Digital Transformation," spearheaded by unmanned smart factories, automated supply chains and logistics systems, AI-based forecasts for demand variation, shortages and bottlenecks, and predictive maintenance routines to lessen the severity of internal disruptions.
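The forecasting role described above can be illustrated with a deliberately simple sketch, far simpler than the Big Data methods the authors have in mind: exponential smoothing over a weekly order series, flagging periods where observed demand departs sharply from the forecast. All numbers, the threshold, and the smoothing constant are hypothetical.

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: one-step-ahead forecasts for each period."""
    forecasts = [series[0]]  # seed the first forecast with the first observation
    for x in series[:-1]:
        forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
    return forecasts

def detect_disruption(series, alpha=0.3, threshold=0.5):
    """Return periods where demand deviates from forecast by more than `threshold` (relative)."""
    f = ses_forecast(series, alpha)
    return [t for t, (x, fx) in enumerate(zip(series, f))
            if fx > 0 and abs(x - fx) / fx > threshold]

demand = [100, 102, 98, 101, 99, 180, 175, 60]  # hypothetical weekly orders; shock from week 5
print(detect_disruption(demand))
```

A production system would replace the fixed threshold with a statistically calibrated control limit and feed the flags into downstream planning.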
3 Supply Chain Characteristics

Ivanov [2] further observed that supply chains are characterized by different challenges corresponding to their respective objectives, noting that supply chains affected by COVID-19 can be categorized as humanitarian supply chains and commercial supply chains. Altay et al. [5] found that post-disaster humanitarian logistics differ greatly from pre-disaster logistics, or even commercial logistics.
S. Reong et al.

3.1 Humanitarian Supply Chains
According to [5], humanitarian supply chains are often hastily formed in order to respond to disaster relief needs, and as a result possess much higher levels of complexity than corporate supply chains. They are also much more sensitive to disruptions. To mitigate these detrimental effects, the authors suggested that, prior to a disaster, a humanitarian supply chain should be designed for one of two orientations: (1) flexibility, which can quickly adapt to change, or (2) stability, which follows a traditional hierarchical design and maximizes efficiency. For pandemic response teams, the necessity of rapidly identifying sites for mass vaccination requires the key success factors of communication and efficient vertical integration. Response planners must therefore determine whether to prepare a static, efficient setup or a dynamic organization able to absorb potential aftershocks subsequent to the first disaster. It was then suggested that the post-disaster phase be characterized by effective collaboration, transparency, and accountability. Local governments were identified as strong enablers of both disaster preparedness and response, providing support through advance planning and stronger coordination. Through close and open working relationships, disaster relief supply chains would be better assured of performing effectively. Furthermore, Altay et al. [9] noted that humanitarian supply chains are often hastily formed and are vulnerable to disruptions. These disruptions are further intensified after the effects of a disaster and can often prevent aid from reaching the intended recipients of relief efforts.

3.2 Commercial Supply Chains
While commercial supply chains have a more straightforward structure and lack the ad hoc, impromptu complexity of humanitarian supply chains, they nevertheless carry their own challenges. Unlike their more flexible counterparts, commercial supply chains must adhere to uniform standards while maintaining an effective level of financial performance. Their primary goal is to ensure the continued profitability of both suppliers and retailers, in addition to maintaining brand loyalty with consumers. Rigid scheduling and lack of communication across the levels of each supply chain thus result in disruptions at multiple levels and underutilized potential. Wuest et al. [4] found that commercial industries were impacted at different product life stages by the onset of COVID-19, as shown in Fig. 1. It was further noted that the most affected industries were the service and hospitality industries. In addition, the challenges faced by commercial supply chains remain widely varied in nature: automotive and aircraft manufacturing plants have closed due to safety issues and the lack of remote management capacities. Since private industry has proven incapable of matching demand, many countries such as the US have made use of state interventionist policies to ensure supply-side capacity. One particular example is the legal requirement on GM and General Electric to switch from the production of non-essential consumer goods to that of medical supplies.
Fig. 1. Manufacturing and supply networks affected by COVID-19 at different stages (beginning, middle, and end of life, across automotive, pharmaceuticals, aircraft, and defense manufacturing), adapted from Wuest et al. [4]
4 Solution Characteristics

Changes to manufacturing and supply chains in 2020 have been compared to a "global reset," according to the most recent meeting of the World Economic Forum's Global Lighthouse Network [6]. A performance survey of the 54 leading advanced manufacturers in the world found the relevant manufacturing shifts to be (1) agility and customer centricity, (2) supply chain resilience, (3) speed and productivity, and (4) eco-efficiency. The first three shifts, which are most relevant to the COVID-19 pandemic, are shown in Table 1.

Table 1. Three necessary industry shifts, adapted from the WEF 2020 Agenda.

Global changes | Necessary industry shifts
Demand uncertainty and disruptions | Agility; customer centricity
National security interests, trade barriers, and logistics disruption | Supply chain resilience
Disruption of global manufacturing; forced transition to remote management and digital collaboration; physical distancing regulations; workforce displacement and unbalanced growth; economic recession (costs must be reduced) | Speed and productivity
By virtue of this success, the implementation of effective technology-enabled solutions should be tailored to address these issues. The points below discuss both the implementation methods explored by researchers and examples used by real industries during the COVID-19 pandemic.

4.1 Supply Chain Agility and Resilience
Swafford et al. [7] defined supply chain agility quantitatively as an enterprise's ability to reduce its own lead time and respond to shifts in supply and demand, namely, how quickly its rates of procurement and manufacturing activity can be changed. Agility is often used in conjunction with resilience, which Sheffi et al. [8] define as the ability of an enterprise to bounce back from a disruption, characterized by redundancy (reserve resources such as safety stock) and flexibility (the ability to redirect material flows through alternative channels). In line with these definitions, Altay et al. [9] established that humanitarian supply chain activities are categorized into pre-disaster and post-disaster activities. Using survey data, they found that while agility and flexibility were effective for pre-disaster preparations, only flexibility proved significant for post-disaster activities. These findings suggest that the appropriate objectives must be established for implemented supply chain technologies before, during, and after major disruptions, and that shifting between phases necessitates adequate prediction and detection systems. One proposed solution method comes from Ivanov and Dolgui [10], who, prioritizing supply chain resilience and recovery planning, introduced the concept of a "digital supply chain twin" alongside a data-driven supply chain risk modelling framework. Within this framework, the digital supply chain twin provides decision-making support through machine learning analysis and modelling software. Marmolejo-Saucedo et al. [15, 16] noted, however, that many papers present the term "digital twin" incorrectly; rather, this area of research calls for valid statistical analysis and information collection through agent-based simulation.
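To make the twin-style what-if analysis concrete, the toy model below (not the framework of [10], and with entirely hypothetical capacities and horizons) compares total unmet demand when a primary supplier shuts down for several periods, with and without a backup channel standing in for routing flexibility.

```python
def simulate(periods, demand, primary_cap, backup_cap, outage):
    """Toy supply chain run: total unmet demand over the horizon.

    `outage` is the set of periods in which the primary supplier delivers
    nothing; a backup channel of capacity `backup_cap` represents flexibility.
    """
    unmet = 0
    for t in range(periods):
        supply = (0 if t in outage else primary_cap) + backup_cap
        unmet += max(0, demand - supply)
    return unmet

# Hypothetical scenario: 12 periods, demand 100/period, primary down in periods 3-6.
rigid = simulate(12, 100, primary_cap=100, backup_cap=0, outage={3, 4, 5, 6})
flexible = simulate(12, 100, primary_cap=100, backup_cap=60, outage={3, 4, 5, 6})
print(rigid, flexible)
```

Running many such scenarios against a model kept synchronized with live data is, in essence, what the digital-twin proposal automates.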
In particular, the benefits of appropriate decision-making supported by early warning systems cannot be overlooked. As established above, the most effective supply chain preparation is preliminary, pre-disruption preparation. For early warning systems to be effective, a degree of cooperation must take place on both a domestic and a cross-border level. A CDC report by Lin et al. [11] documents the use of cross-departmental surveillance in a real-time database in Taiwan. The authors describe how the Central Epidemic Command Center (CECC) partnered with the National Health Insurance administration's confidential, cloud-based patient database (updated at 24-h intervals) and with the Customs and Immigration database in order to identify and track persons with high infection risks. Anticipating shortages, the Taiwanese government suspended mask exports and funded Mask Finder, a mobile phone application that identifies local mask distribution points and their current stocks. Next, in order to prevent overbuying, the
government implemented a mask rationing system tied to each resident's identification card. This case study demonstrates how public and private sector cooperation, augmented by sensor technologies and database systems, assists essential resource allocation.

4.2 Supply Chain Productivity
Speed and productivity, under the definition set forth by [6], are associated with maintaining production and distribution goals despite new challenges and constraints. An essential direction for analysis involves identifying the causes of success in the supply chains of certain nations during the COVID-19 crisis, and the causes of failure in others. Dai et al. [12] remarked that a lack of data transparency greatly limited mobility in the United States PPE supply chain, where supply chain members were left in the dark concerning the capacities, locations, and reliability of other members. In fact, such information was intentionally kept private as trade secrets. Such an approach can be counterproductive to the reliability of a supply chain, much as a single point of failure can cripple even the most resilient network. To address this problem, the authors supported public-private partnerships, much like those observed by [11] in Taiwan, and suggested the use of digital management and cyber-physical systems in the production line. One novel data transparency and validation system for supply chains is the MiPasa project, which uses the IBM blockchain platform and cloud to verify information sources for analysts seeking to map the appropriate COVID-19 response [13]. HACERA, the startup that owns MiPasa, has collaborated with healthcare agencies such as the CDC, the WHO, and the Israeli Public Health Ministry to make peer-to-peer data ledgers possible. Accordingly, researchers worldwide have proposed similar uses of blockchain technology to verify and share information between supply chain members. Unlike present methods, which require costly and time-consuming verification procedures, blockchain technology enables data sharing between members with relative ease and increased trustworthiness, tracked with almost no downtime.
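The tamper-evidence property underlying such platforms can be sketched with a minimal hash-chained ledger. This is a toy stand-in, not the MiPasa or IBM Blockchain API, and the record fields are invented: each record carries the hash of its predecessor, so any later alteration is detectable.

```python
import hashlib
import json

def append_record(chain, payload):
    """Append `payload` (a dict) linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    # The hash covers the payload and the link to the previous record.
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every link; returns False if any record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

ledger = []
append_record(ledger, {"site": "plant-A", "masks_on_hand": 12000})  # hypothetical entries
append_record(ledger, {"site": "plant-B", "masks_on_hand": 7300})
print(verify(ledger))
ledger[0]["payload"]["masks_on_hand"] = 99999  # tamper with a record
print(verify(ledger))
```

Real blockchain platforms add distributed consensus and access control on top of this basic chaining idea.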
5 Direct Supply Chain Impacts and Research Strategies

5.1 Vaccine Development and Distribution
As of September 2020, ten vaccine candidates for COVID-19 prevention were reported to have entered clinical trials [19]. The main commercial players in the development of these vaccines include CanSino Biologics, Sinovac Biotech, Novavax, BioNTech/Pfizer, Inovio Pharmaceuticals, and Symvivo. Global attention is focused on several specific qualities of the vaccine candidates that take the lead, as these characteristics will heavily influence the issues faced by the manufacturing, supply chain, and end-user parties. Specifically, factors such as storage temperature, shelf life, and the number of required doses per patient were stated to heavily impact
implementation in global supply chains. In a conference addressing the supply chain implications of a COVID-19 vaccine, CanSino CFO Jing Wang stated that the most critical logistical challenge is the cold chain [17]. Namely, whether a vaccine can be maintained in cold storage at 2–8 °C, or must be kept below -60 °C, will shape availability and cold chain solution methodologies. The latter option historically displayed strict limitations, due to available technology and supply constraints, during distribution of the Ebola vaccine. Alex de Jonquières, chief of staff at Gavi, the Vaccine Alliance, also reflected that shelf life would ultimately determine whether the vaccines could be stored in central facilities or regional warehouses. Lastly, while a single dose is ideal, multiple-dose requirements further compound the quantity problem. One recent leading candidate for which this is anticipated is that of BioNTech/Pfizer, whose 2-dose regimen has been confirmed [20]. Thus, several implications exist for researchers seeking to model distribution solutions based on the most promising COVID-19 vaccine candidates. Cold chain and ultra-cold chain capacity and their available variants will be a major factor; Lin et al. [21] have recently formed a cold chain decision-making model, finding that certain cold chain transport combinations are more viable than others under specific constraints. Nevertheless, the authors noted, little research has yet been published on the subject. The shelf life of the leading candidates will determine whether traditional central warehouse network models deserve consideration, or whether more innovative solutions such as cross docking and last-mile delivery logistics will grow in popularity.
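In the spirit of such a decision model (a much-simplified sketch, not the formulation of [21]), transport options can be filtered by the vaccine's storage-temperature and transit-time constraints, and the cheapest surviving option selected. All option names and figures are hypothetical.

```python
# Hypothetical options: (name, lowest temperature maintainable in °C,
#                        transit days, cost per 1000 doses in USD)
OPTIONS = [
    ("refrigerated truck", 2, 4, 800),
    ("ultra-cold air freight", -70, 1, 5200),
    ("passive cooler + courier", -20, 2, 1900),
]

def cheapest_viable(required_temp, max_days, options=OPTIONS):
    """Return the lowest-cost option that can hold `required_temp` within `max_days`."""
    viable = [(cost, name) for name, temp, days, cost in options
              if temp <= required_temp and days <= max_days]
    return min(viable)[1] if viable else None

print(cheapest_viable(required_temp=8, max_days=4))    # standard 2-8 °C cold chain
print(cheapest_viable(required_temp=-60, max_days=4))  # ultra-cold chain
```

A full model would combine legs into multi-stage routes and add capacity and wastage constraints, which is where the cited formulation goes well beyond this sketch.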
At the same time, the practicality of these latter methods must also be addressed, as limitations on how far healthcare workers can safely travel to collect or administer treatment will be readily apparent in each country. More than ever, proper planning and organization will be necessary to prevent wastage and deterioration. Currently, the owners of the other vaccine candidates have yet to release their distribution requirements.

5.2 Personal Protective Equipment (PPE)
According to Park et al. [18], the ongoing shortage of PPE materials stems from the offshoring of PPE production to the People's Republic of China, where factory shutdowns have caused global shortages. Under the just-in-time strategy, national stockpiles of materials used in PPE products were continuously reduced in order to make efficiency gains. While this is common practice in many sectors, it proved problematic in the event of a disease outbreak. As a result of the COVID-19 pandemic, global supply chains are experiencing temporary shortages until the PPE supply can be renewed. A summary of strategies selected by the US Centers for Disease Control and Prevention for PPE supply optimization [22], as pertaining to systemic objectives, is adapted in Table 2.
Table 2. Summary strategies to optimize the supply of PPE during shortages, selected for possible model objectives; adapted from the CDC guide, July 2020.

PPE type: All PPE
Conventional capacity:
• Use telemedicine whenever possible
• Limit number of patients going to hospital/outpatient settings
• Limit face-to-face health care professional encounters with patients
Contingency capacity:
• Selectively cancel elective and nonurgent procedures and appointments where PPE is typically used

PPE type: N95 respirators and facemasks
Conventional capacity:
• Implement just-in-time fit testing
• Extend the use of N95 respirators by wearing the same N95 for repeated close-contact encounters with several patients (within reasonable limits)
• Restrict facemask usage to health care professionals, rather than asymptomatic patients (who might use cloth coverings) for source control
Contingency capacity:
• Facilities communicate with local healthcare coalitions and public health partners to identify additional supplies
• Track facemasks in a secure and monitored site and provide facemasks to symptomatic patients upon check-in at entry points
Furthermore, Park et al. maintain that the main bottlenecks in the PPE supply chain include raw material shortages (such as polypropylene), lack of production infrastructure, export bans, and transport constraints caused by quarantine measures or limited workforce capacity. Research on alternate sourcing solutions is thus a matter of concern for all related industries. A segment of research in 2020, such as that of Campos et al. [23], suggests growing interest in the reuse of PPE masks and respirators. In such a scenario, the local scale and collection methodologies of reverse logistics activities can also be explored.
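Alternate-sourcing questions of this kind can be posed as a small allocation problem. The greedy sketch below uses invented supplier names and numbers, not data from [18] or [23]; with linear costs and no fixed ordering costs, filling demand from the cheapest available capacity first is optimal.

```python
def allocate(demand, suppliers):
    """Fill `demand` units from suppliers [(name, capacity, unit_cost)], cheapest first.

    Returns (allocation dict, unmet units). Greedy allocation is optimal here
    because costs are linear and there are no fixed ordering costs.
    """
    allocation, remaining = {}, demand
    for name, capacity, _cost in sorted(suppliers, key=lambda s: s[2]):
        take = min(capacity, remaining)
        if take:
            allocation[name] = take
            remaining -= take
        if not remaining:
            break
    return allocation, remaining

# Hypothetical N95 sources, including a reprocessed (reverse logistics) stream.
suppliers = [("offshore", 50000, 0.20), ("domestic-new", 30000, 0.55),
             ("reprocessed", 20000, 0.35)]
print(allocate(80000, suppliers))
```

Adding fixed contracting costs or reliability penalties would turn this into a mixed-integer problem requiring a proper solver.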
6 Conclusion

Due to its point of origin, scale, and difficulty of treatment, COVID-19 has created some of the most significant and widespread global supply chain disruptions in modern times. Supply chain managers and IT developers seeking to weather the effects of the pandemic must first distinguish the different challenges faced by their humanitarian or commercial enterprise, and then assess whether they are in a pre-disruption or disruption state. If the supply chain is in a pre-disruption state, the use of modeling systems and prediction technologies will allow preparation of alternative material flow routes, which increases supply chain resilience. Whether the supply chain will be able to adapt to such a disruption, however, will depend not only on the level of technological expertise available, but also on information-sharing attitudes in the industry and on the domestic level of public-private cooperation.
Lastly, researchers seeking to model the COVID-19 vaccine supply chain should investigate the cold chain, shelf life, and dosage requirements of the leading candidates, which will determine the relevance of traditional warehousing network models versus more innovative last-mile delivery solutions. PPE supply chain modelers may find direction in alternate procurement, or in reverse logistics models that facilitate the sterilization and reuse of protective equipment.
References

1. Javaid, M., Haleem, A., Vaishya, R., Bahl, S., Suman, R., Vaish, A.: Industry 4.0 technologies and their applications in fighting COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. (2020)
2. Ivanov, D.: Predicting the impacts of epidemic outbreaks on global supply chains: a simulation-based analysis on the coronavirus outbreak (COVID-19/SARS-CoV-2) case. Transp. Res. Part E Logist. Transp. Rev. 136, 101922 (2020)
3. Dong, E., Du, H., Gardner, L.: An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20(5), 533–534 (2020)
4. Wuest, T., Kusiak, A., Dai, T., Tayur, S.R.: Impact of COVID-19 on manufacturing and supply networks: the case for AI-inspired digital transformation. Available at SSRN 3593540 (2020)
5. Altay, N., Gunasekaran, A., Dubey, R., Childe, S.J.: Agility and resilience as antecedents of supply chain performance under moderating effects of organizational culture within the humanitarian setting: a dynamic capability view. Prod. Plann. Control 29(14), 1158–1174 (2018)
6. Betti, F., De Boer, E.: Global Lighthouse Network: Four Durable Shifts for a Great Reset in Manufacturing. World Economic Forum, Cologny (2020)
7. Swafford, P.M., Ghosh, S., Murthy, N.: The antecedents of supply chain agility of a firm: scale development and model testing. J. Oper. Manage. 24(2), 170–188 (2006)
8. Sheffi, Y., Rice, J.B., Jr.: A supply chain view of the resilient enterprise. MIT Sloan Manage. Rev. 47(1), 41 (2005)
9. Altay, N., Gunasekaran, A., Dubey, R., Childe, S.J.: Agility and resilience as antecedents of supply chain performance under moderating effects of organizational culture within the humanitarian setting: a dynamic capability view. Prod. Plann. Control 29(14), 1158–1174 (2018)
10. Ivanov, D., Dolgui, A.: A digital supply chain twin for managing the disruption risks and resilience in the era of Industry 4.0. Prod. Plann. Control, 1–14 (2020)
11. Lin, C., Braund, W.E., Auerbach, J., Chou, J.H., Teng, J.H., Tu, P., Mullen, J.: Policy decisions and use of information technology to fight 2019 novel coronavirus disease, Taiwan (2020)
12. Dai, T., Zaman, M.H., Padula, W.V., Davidson, P.M.: Supply chain failures amid Covid-19 signal a new pillar for global health preparedness (2020)
13. Singh, G., Levi, J.: MiPasa project and IBM Blockchain team on open data platform to support Covid-19 response, March 2020. https://www.ibm.com/blogs/blockchain/2020/03/mipasa-project-and-ibm-blockchain-team-on-open-data-platform-to-support-covid-19-response/. Accessed Sept 2020
14. Intelligent Computing & Optimization, Conference Proceedings ICO 2018. Springer, Cham. ISBN 978-3-030-00978-6
15. Marmolejo-Saucedo, J.A., Hurtado-Hernandez, M., Suarez-Valdes, R.: Digital twins in supply chain management: a brief literature review. In: International Conference on Intelligent Computing & Optimization, pp. 653–661. Springer, Cham (2019)
16. Intelligent Computing and Optimization, Proceedings of the 2nd International Conference on Intelligent Computing and Optimization 2019 (ICO 2019). Springer International Publishing. ISBN 978-3-030-33585-4
17. de Jonquières, A.: Designing the Supply Chain for a COVID-19 Vaccine. Doctoral dissertation, London Business School (2020)
18. Park, C.Y., Kim, K., Roth, S.: Global shortage of personal protective equipment amid COVID-19: supply chains, bottlenecks, and policy implications (2020)
19. Koirala, A., Joo, Y.J., Khatami, A., Chiu, C., Britton, P.N.: Vaccines for COVID-19: the current state of play. Paediatr. Respir. Rev. 35, 43–49 (2020)
20. Walsh, E.E., Frenck, R., Falsey, A.R., Kitchin, N., Absalon, J., Gurtman, A., Swanson, K.A.: RNA-based COVID-19 vaccine BNT162b2 selected for a pivotal efficacy study. medRxiv (2020)
21. Lin, Q., Zhao, Q., Lev, B.: Cold chain transportation decision in the vaccine supply chain. Eur. J. Oper. Res. 283(1), 182–195 (2020)
22. Centers for Disease Control and Prevention: Summary Strategies to Optimize the Supply of PPE During Shortages. Centers for Disease Control and Prevention (US), July 2020
23. Campos, R.K., Jin, J., Rafael, G.H., Zhao, M., Liao, L., Simmons, G., Weaver, S.C., Cui, Y.: Decontamination of SARS-CoV-2 and other RNA viruses from N95 level meltblown polypropylene fabric using heat under different humidities. ACS Nano 14(10), 14017–14025 (2020)
Ontological Aspects of Developing Robust Control Systems for Technological Objects

Nataliia Lutskaya1, Lidiia Vlasenko1, Nataliia Zaiets1, and Volodimir Shtepa2

1 Automation and Computer Technologies of Management Systems, Automation and Computer Systems, National University of Food Technologies, Kiev, Ukraine
[email protected]
2 Department of Higher Mathematics and Information Technology, Polessky State University, Pinsk, Belarus
Abstract. This paper demonstrates the ontological aspects of designing efficient control systems for technological objects operating in an uncertain environment. Design and monitoring of the control system are outlined as the two basic tasks, on the basis of the covered subject and problem domains of the research as well as the life cycle of the system. The subject domain, which consists of the ontology of objects and the ontology of processes, is described using a system-ontological approach. The peculiarity of the developed ontological system lies in its knowledge of the uncertainty of technological objects and the conditions of their operation. The ontological system, which underlies the further development of an intelligent decision support system, is formed alongside the ontology of objectives. The advantage of ontology-based design lies in the scientific novelty of the knowledge presentation model and its practical relevance for designers, developers, and researchers of control systems for technological objects operating in uncertain environments.

Keywords: Ontological system · Control system · Technological object · Subject domain
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1252–1261, 2021. https://doi.org/10.1007/978-3-030-68154-8_107

1 Introduction

Nowadays the synthesis of an efficient control system (CS) for a technological object (TO) is still a creative process, completely dependent on the personal preferences of the CS designer. In the first place, this can be explained by the determining role of the designer's initial subject-domain knowledge and the empirical knowledge obtained on its basis. Although the stages of developing an efficient control system for a technological object were formalized long ago [1, 2], they need to be rethought given the current diversity of methods and approaches. In addition, technological objects operating in an uncertain environment require a generalized methodology based on the life cycle (LC) of the control system for the technological object. A robust controller whose structure and/or parameters are calculated in accordance with the H2/H∞-criterion [3, 4] becomes the control device of such a CS. However, changing operating conditions and the evolution of the TO lead to a change in the uncertainty environment within which the robust controller was engineered. Consequently, the efficiency of the system as a whole decreases, and reconfiguration of the control system for the technological object becomes a necessity. Thus, the data and subject domain knowledge require formalization for use in the final system, which can be implemented by means of a decision support subsystem (DSS).
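The H∞ part of this criterion has a simple computational reading: the peak gain of a transfer function over frequency. The sketch below approximates that norm for a hypothetical first-order plant by a dense frequency sweep; robust synthesis itself requires dedicated tools, so this only illustrates the quantity being bounded.

```python
import numpy as np

def hinf_norm(num, den, w_max=1e3, n=200000):
    """Approximate the H-infinity norm of G(s) = num(s)/den(s) (polynomial
    coefficients, highest power first) as the peak of |G(jw)| on a grid."""
    w = np.linspace(0.0, w_max, n)
    s = 1j * w
    g = np.polyval(num, s) / np.polyval(den, s)
    return float(np.max(np.abs(g)))

# Hypothetical stable plant G(s) = 2 / (s + 4); its peak gain is at w = 0, |G| = 0.5.
print(hinf_norm([2.0], [1.0, 4.0]))
```

A robust design then keeps this norm of the relevant closed-loop transfer function below a prescribed bound over the whole modelled uncertainty set.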
2 Design of the Ontological System

2.1 Concept of an Ontological System
This research uses an approach based on the following crucial system-ontological techniques: abstraction and instantiation, composition and decomposition, structuring and classification [5]. The ontological approach to the design of the CS, including its software part, is assumed to be a multidisciplinary issue of the formation, presentation, processing, and analysis of knowledge and data whose models describe the structure and interrelation of the objects of the subject domain (SD) [6]. Unlike the empirical approach, this approach implies a clear systematization of the SD knowledge, including interdisciplinary knowledge [7–9]. Fig. 1 shows the components of the subject domain of the research, namely the control system for a technological object operating in an uncertain environment. The problem domain presented in Fig. 2 forms the objectives, which are described in the objective ontology.

Fig. 1. Components of the ontology of the subject domain of design and monitoring of the efficient control systems operating in uncertain environment (the ontologies of objects, processes, and objectives together form the ontological system).
N. Lutskaya et al.
Ontology forms the framework of the knowledge base by describing the basic concepts of the subject domain, and it serves as the basis for the development of intelligent decision support systems. Today, ontologies can use different models of knowledge presentation, for instance semantic networks or frame networks. Recently, descriptive logic subsets and the OWL2 language dialects [10–12] have become a popular formal basis for the development of subject domain ontologies. The means of formal description and ontology development allow the developed ontology to be stored, edited, verified, transmitted, and integrated in different formats.
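As a minimal illustration of such a machine-readable subject-domain description, a few facts can be stored as plain triples and queried. The class and relation names below are invented for the example and are not OWL2 syntax.

```python
# Subject-domain facts as (subject, predicate, object) triples.
TRIPLES = {
    ("RobustController", "is_a", "ControlDevice"),
    ("ControlDevice", "part_of", "ControlSystem"),
    ("ControlSystem", "controls", "TechnologicalObject"),
    ("TechnologicalObject", "has_property", "Uncertainty"),
    ("TechnologicalObject", "has_property", "Nonlinearity"),
}

def objects_of(subject, predicate, triples=TRIPLES):
    """Query: all objects related to `subject` via `predicate`."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

def is_a_transitive(entity, cls, triples=TRIPLES):
    """Follow is_a/part_of links to check membership in a broader concept."""
    frontier = {entity}
    while frontier:
        nxt = {o for s, p, o in triples if s in frontier and p in ("is_a", "part_of")}
        if cls in nxt:
            return True
        if nxt <= frontier:  # no new concepts reached
            return False
        frontier |= nxt
    return False

print(objects_of("TechnologicalObject", "has_property"))
print(is_a_transitive("RobustController", "ControlSystem"))
```

An OWL2 reasoner performs this kind of transitive inference, plus consistency checking, over a far richer logic than this toy traversal.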
Fig. 2. The problem domain of design and monitoring of the efficient control systems operating in uncertain environment: technological regulations (minimum and maximum values bracketing the point of extremum), operating criteria (diffused by production levels and time), and external-environment uncertainty act on the control system and technological object (itself nonlinear, uncertain, and evolving), with consequences including decreased product quality, increased energy consumption, loss of rigidity, and readjustment of the control system.
2.2 Engineering Features
Two generalized objectives were raised while delineating the problem domain of the research (Figs. 1, 2): designing the control system for a technological object in an uncertain environment, and monitoring the control system for a technological object. These objectives are the basis for forming the life cycle of the CS for a TO. The next stage is the development of an ontological system for better understanding of the subject domain, stating and capturing the general knowledge and its structural relationships, and clearly conceptualizing the DSS software that describes the semantics of the data. The ontological approach is based on the concept of the ontological system (OnS), an ontological tool for supporting the applied problems, described by means of the tuple:
Ontological Aspects of Developing Robust Control Systems
OnS = ⟨O_O^SD, O_P, O_T⟩   (1)
The subject domain SD consists of two parts: the ontology of the objects O and the ontology of the processes O_P. The former defines the static terms, definitions and relationships of the subject domain, while the latter defines the dynamics (events and duration) of the SD. The ontology of the processes O_P can be constructed in accordance with the operation of the objects of the subject domain, or in accordance with the objectives of the problem domain. This work proposes to develop the ontology of processes in accordance with the life cycle (LC) of the control system (CS) for a TO and the objectives of the problem domain O_T. Let us consider the CS for a TO in terms of its LC. Like any complex system, the CS for a TO passes through at least three stages: origination, operation, and expiry. Origination is connected with the CS design process and is included in the design and operation LC of the industrial control system (ICS) of the technological process (TP) and its parts, including automation equipment (AE). On the other hand, the LC of the CS is associated with the operation of the TO, which also develops on the principle of evolution. The incompleteness of the LC undermines the optimality of decisions on designing an efficient CS for a TO, so it must be taken into account when designing efficient CSs for TOs. Let us describe the LC of the CS for a TO with the following tuple:

C_CS = ⟨P(LC_CS), {S}, R, T⟩   (2)
where P(LC_CS) stands for the aim, requirement or assignment of the CS; {S} is the set of stages of the life cycle of the CS; R is the result of the operation of the CS; T is the life cycle time. Such dependence reflects the orientation of the CS for a TO both towards the aim (assignment) of the system and towards the end result of its operation. Thus, if the system loses its assignment or stops meeting the requirements, it goes to the final stage of the life cycle, its expiry. The LC of the efficient CS for a TO is divided into the following stages (Fig. 3). The aim and criteria of control are selected and the limitations for the system are determined at the stage of defining the requirements for the system. The input documents are the technological regulations and the technical design specifications. The idea of creating the system is substantiated and the input and output variables of the CS are selected at the stage of formulating the concept of the CS. The end result of this stage is a set of recommendations for creating the CS for the separate parts of the TO, with specified characteristics of each system and the sources and resource limitations for its development and operation. At the third and subsequent stages of the LC, the design of the CS for a TO is carried out through a number of traditional activities: the study of the TO; identification of a mathematical model of the TO; design of the CS; and CS modeling. However, this work proposes to choose the CS from a variety of alternative solutions comprising different structures of the CS; the final choice is the decision of a designer. In addition, the design process should be model-oriented,
N. Lutskaya et al.
where the system model is used to design, simulate, verify and validate the programmed code (similar to the DO-178B standard for embedded systems).
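The life-cycle tuple (2) can be sketched as a small data structure; the field and stage names below are illustrative assumptions, not part of the authors' formalism:

```python
from dataclasses import dataclass, field

@dataclass
class ControlSystemLC:
    """Sketch of the tuple C_CS = <P(LC_CS), {S}, R, T>."""
    purpose: str                 # P(LC_CS): aim / requirement of the CS
    stages: list = field(default_factory=lambda: ["origination", "operation", "expiry"])
    result: str = ""             # R: result of the operation of the CS
    time: float = 0.0            # T: life cycle time
    stage: str = "origination"

    def lose_purpose(self):
        # If the system loses its assignment or stops meeting the
        # requirements, it goes to the final stage of the life cycle.
        self.stage = "expiry"

cs = ControlSystemLC(purpose="stabilize product temperature")  # hypothetical aim
cs.lose_purpose()
print(cs.stage)   # expiry
```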
[Figure: life-cycle stages with feedback loops. Requirements from the upper levels and restrictions for the AE feed into: defining the requirements to the CS; formulation of the concept of the CS (with a reformulating loop); development of the MM of the TO; development of the structure and parameters of the CS; simulation of the CS for a TO (with a verification loop); implementation and validation of the CS; monitoring the CS (with a redesigning loop); and elimination of the CS.]
Fig. 3. Life cycle of the CS for TO.
Unlike the previous stage, where verification was only performed on the ready-built process models, at the implementation stage the assumptions, calculations and conclusions made at the previous stages are verified. That is, the reliability of the actual costs of the selected alternative solution is assessed. At the stage of operation and monitoring, the implemented CS for a TO is subjected to final evaluation through theoretical and manufacturing research. The peculiarity of this stage lies in monitoring the system and detecting its "aging". The "aging" of the CS manifests itself as lower efficiency of the CS for a TO and can eventually lead to a system failure, the divergence of the time response being one of its manifestations for control systems. The evolution of the TO or its environment may be the reason for the decrease in efficiency of such systems. A separate issue here is the failure of individual elements of the AE [13]. Elimination of the CS is directly related to the termination of the TO operation, when the technological process is physically or morally outdated and its restoration is futile for technical and economic reasons. Returning to previous stages of the LC increases the flexibility and adaptability of the created system. Thus, for the efficient design of the CS for a TO, the ontological system is to be considered in terms of ontological objectives which reflect the LC of the CS. The proposed ontological system provides for conducting ontological studies in order to identify all the factors which affect the structure of the control system for the TO operating in an uncertain environment. The model also takes into account the LC of the CS, as well as the peculiarities of developing robust CSs for TOs, alternative solutions for which will form the basis of the CS structures.
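The "aging" check described above can be sketched as monitoring the drift of a performance index, here a sliding-window mean of the absolute control error; the window size and threshold ratio are illustrative assumptions, not values from the paper:

```python
def aging_detected(errors, window=5, ratio=2.0):
    """Flag 'aging' when the recent mean absolute control error exceeds
    the baseline (first window) by the given ratio."""
    if len(errors) < 2 * window:
        return False
    baseline = sum(abs(e) for e in errors[:window]) / window
    recent = sum(abs(e) for e in errors[-window:]) / window
    return recent > ratio * baseline

healthy = [0.1, -0.1, 0.12, -0.08, 0.1, 0.09, -0.11, 0.1, -0.1, 0.1]
drifting = healthy[:5] + [0.4, -0.5, 0.45, -0.42, 0.5]
print(aging_detected(healthy), aging_detected(drifting))  # False True
```

In practice the monitored index would come from the running CS (e.g. an integral error criterion), and a detected drift would trigger the redesigning loop of Fig. 3.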
2.3 Model of the Ontological System
In accordance with the previous section and Fig. 1, an ontological system for the effective functioning of the CS for a TO operating in an uncertain environment has been developed. The ontological system consists of three ontologies which are interlinked by the relevant objectives arising from the problem domain of the research. Figure 4 shows an ontology fragment of the subject domain of the research, described by the following tuple:

O = ⟨X, R, F⟩   (3)

where X = {x_i}, i = 1..n, is a finite non-empty set of concepts (subject domain concepts); R = {r_i}, i = 1..m, is a finite set of semantically significant relationships between concepts; F: X × R is a finite set of interpretation functions preassigned on concepts and relationships. The basic concepts of the subject domain X have been determined as follows: a technological object, standards, a life cycle, an individual, as well as the controlling part of the ICS, which consists of software and hardware facilities. These subject domain concepts are substantiated by their direct impact on the subject domain and research objectives. The semantically significant relationships between concepts forming the set R are shown on the ontology O (Fig. 4). The interpretation functions on the ontologies are shown by means of the corresponding arrows. In addition, the model is divided into reference levels for better structuring of the categories and for linking ontologies. The relationship between the schemes in the single ontological system is indicated with the help of ovals (links) and rectangles (acceptance) with numbers corresponding to the following pattern: Ontology(Scheme Sheet).Level.Relationship number. For example, P1.L1.00 corresponds to a reference to the ontology of processes (O stands for the ontology of objects, T for the ontology of objectives (tasks), P for the ontology of processes), sheet 1, to the concept located at the first level with the relationship number 00. The last number is unique within the entire ontological system, and it can also be used to trace the relationships between ontological subsystems.
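The cross-ontology reference pattern can be sketched as a small parser (an illustrative helper, not part of the paper's tooling):

```python
import re

def parse_ref(ref):
    """Parse a cross-ontology reference like 'P1.L5.01' into its parts:
    ontology letter (O objects, T objectives/tasks, P processes),
    optional scheme sheet, level, and relationship number."""
    m = re.fullmatch(r"([OTP])(\d*)\.L(\d+)\.(\d+)", ref)
    if not m:
        raise ValueError(f"bad reference: {ref}")
    onto, sheet, level, rel = m.groups()
    return {"ontology": onto, "sheet": sheet or None,
            "level": int(level), "relation": rel}

print(parse_ref("P1.L5.01"))
```

References such as O.L3.07 (no sheet number) are also accepted by the same pattern.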
[Figure: graph of subject-domain concepts across reference levels L0–L6. The top-level concept Automation (L0) links the Industrial control system, the Technological object, Standards, the Individual and the Life cycle of the ICS (L1). Lower levels detail: standards (national standards of ICS, metrology standards, IT standards, food quality standards, systems engineering standards, ISO 9001, ISO/IEC 15288:2005, GOST 34.601-90, GOST 34.201-89, DSTU 2503-94, CRISP-DM); roles (Designer, Operator); the type of process and the nature of the functioning of the TO and its LC; hardware (control AE, field AE, control device); software (custom software, mathematical support, information support); and regulatory artifacts (technological regulation, terms of reference, functional requirements, requirements for mathematical support, the logical part, and the regulatory part with its structure and parameters). Edge legend: 0 is a categorical relationship; 1 is an integer part; 2 is a kind-of relation; 3 is a class–subclass (set–subset) relation; 4 is a set–element relation; 5 is an attribute relationship; 6 is an equality relationship; 7 is an initialization relationship; 8 is the relationship of the process behavior; 9 is the end-of-process relationship.]
Fig. 4. Ontology of objects of the subject domain.
The ontology of objectives consists of a general objective which is divided into tasks, sub-tasks and methods of their solution, as well as the task solver, which is represented in the ontology of processes. Two tasks have been identified according to the LC of the CS: designing the CS, and monitoring the AE and the CS; these in turn are divided into smaller parts. The objective of monitoring the efficiency of the CS is a part of the objective of monitoring and fault identification of the automation equipment (AE). A fragment of the ontology of objectives is shown in Fig. 5. The main methods for solving the problem of designing the CS for a TO operating in an uncertain environment are as follows: methods of identifying the mathematical model (MM) of the TO, which also include the identification of the uncertainties of the TO; methods of optimal and robust synthesis; and robustness testing methods. The ontology of processes (a fragment is presented in Fig. 6) has been built on a tuple similar to (3), in accordance with the selected objectives of the problem domain. The
processes at the lower L5 level correspond to the sequence of activities and operations which contribute to the solution of the corresponding sub-tasks of the ontological research.
[Figure: the generalized objective (L1) splits into Task 1 (designing an effective CS) and Task 2 (monitoring of AE and CS), together with the task solver and the methods for solving tasks. Task 1 divides into Task 1.1 (designing hardware) and Task 1.2 (software design), the latter into Task 1.2.1 (designing the logical part) and Task 1.2.2 (design of the regulatory part), with sub-tasks Task 1.2.2.1 (preliminary analysis of the TO), Task 1.2.2.2 (identification of the MM of the TO) and Task 1.2.2.3 (synthesis of the CS of the TO). The method branches include: methods of identification of the MM of the TO (structural and parametric identification by transfer function, by state space, by regression MM; defining the region of uncertainty); CS synthesis methods on linear MM (synthesis of optimal CS, of adaptive systems, of local regulators, and of robust control: structural, parametric and mixed, including the entropy approach, the 2-Riccati approach, the LMI approach, the loop-shaping approach, mu-synthesis and non-smooth optimization; single-loop P/I/PI/PD/PID and multi-loop structures); robust stability methods (Kharitonov's theorem, the principle of zero exclusion, the small gain theorem, the probabilistic approach, Lyapunov functions, mu-analysis); CS analysis methods; monitoring and prediction methods; and optimization methods (static and dynamic).]
Fig. 5. Ontology of objectives.
3 Results and Discussion

The task of synthesizing the CS is divided into three subtasks (Fig. 5, Tasks 1.2.2.1–1.2.2.3): preliminary analysis of the TO, identification of the mathematical model of the TO, and synthesis of the control system. Each component of the task has corresponding processes (Fig. 6) that must be performed to achieve it. For example, when a basic TO model is used, six actions must be performed to obtain the MM of the TO. In contrast to obtaining an MM of the TO without considering uncertainty, action 6, determining the region of uncertainty, has been introduced, which
can be performed by the procedures described in [14]. A feature of the proposed approach to the synthesis of the regulatory part of the CS is the testing of alternative structures of the CS and the choice of an effective one based on decision-making theory. Using the developed ontological model, it is possible to build a decision support system based on the acquired knowledge of the problem area. This approach to the development of the CS reduces design time thanks to the structured and automated procedures that are embedded in the ontological model. When synthesizing the CS, modern methods of robust TO control are taken into account; these are formalized in the ontology of tasks, and the main interrelationships with traditional methods are indicated.
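Choosing an effective structure from the alternatives can be sketched with a simple weighted-sum decision rule; the criteria, weights and scores below are invented for illustration, and the paper's decision-theoretic procedure may differ:

```python
# Alternative CS structures scored against criteria (higher is better).
# All names and numbers are hypothetical.
criteria_weights = {"robust_stability": 0.5, "performance": 0.3, "cost": 0.2}
alternatives = {
    "PID single-loop":   {"robust_stability": 0.6, "performance": 0.5, "cost": 0.9},
    "H-infinity robust": {"robust_stability": 0.9, "performance": 0.8, "cost": 0.4},
}

def score(alt):
    """Weighted sum of the criterion scores of one alternative."""
    return sum(criteria_weights[c] * v for c, v in alternatives[alt].items())

best = max(alternatives, key=score)
print(best)   # H-infinity robust
```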
[Figure: the process of designing the continuous part, split according to whether an MM of the TO is available. Stage 1, preliminary analysis: Action 1, division into subsystems; Action 2, definition of regulated variables, control actions and basic disturbances. Stage 2, identification of the MM of the TO: Action 1, preparation and conduct of the experiment; ...; Action 5, evaluation (verification) of the MM; Action 6, defining the area of uncertainty of the MM. Stage 3, development of the CS: Action 1, definition of control criteria and restrictions; Action 2, definition of the set of CS structures; Action 3, sorting alternatives of the CS by criteria; Action 4, determination of CS parameters and modeling; Action 5, selection and implementation of the CS; Action 6, CS verification.]
Fig. 6. Ontology of processes (fragment).
4 Conclusion

The analysis of the subject and problem domains of the synthesis of the CS for a TO in an uncertain environment has been carried out. On its basis, a system-ontological approach to the efficient design and monitoring of the CS for a TO operating in an uncertain environment has been proposed. The ontological system consists of three ontologies: objects, objectives, and processes. The conducted research revealed the relationships between subject domain objects, objectives, methods for solving them and the ontology of processes. The advantage of the given approach lies in the scientific novelty of the knowledge presentation model, as well as practical importance for both the
researchers and developers of the CS for a TO in an uncertain environment and for designers of the CS for a TO. The ontological system forms the basis for the further development of an intelligent decision support system for the efficient operation of the CS for a TO operating in an uncertain environment. The authors declare no conflict of interest. All authors contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript.
References
1. McMillan, G.K., Considine, D.M.: Process/Industrial Instruments and Controls Handbook, 5th edn. McGraw-Hill Professional, New York (1999)
2. Levine, W.S. (ed.): The Control Handbook: Control System Applications, 2nd edn. CRC Press (2011)
3. Lutskaya, N., Zaiets, N., Vlasenko, L., Shtepa, V.: Effective robust optimal control system for a lamellar pasteurization-cooling unit under the conditions of intense external perturbations. Ukrainian Food J. 7(3), 511–521 (2018)
4. Korobiichuk, I., Lutskaya, N., Ladanyuk, A., et al.: Synthesis of optimal robust regulator for food processing facilities. In: Automation 2017: Innovations in Automation, Robotics and Measurement Techniques. Advances in Intelligent Systems and Computing, vol. 550, pp. 58–66. Springer International Publishing (2017)
5. Takahara, Y., Mesarovic, M.: Organization Structure: Cybernetic Systems Foundation. Springer Science & Business Media (2012)
6. Fernandez-Lopez, M., Gomez-Perez, A.: Overview and analysis of methodologies for building ontologies. Knowl. Eng. Rev. 17(02), 129–156 (2003)
7. Baader, F., Calvanese, D., McGuinness, D.L., et al.: The Description Logic Handbook: Theory, Implementation, Applications. Cambridge University Press (2003)
8. Palagin, A., Petrenko, N.: System-ontological analysis of the subject area. Control Syst. Mach. 4, 3–14 (2009)
9. Smith, B.: Ontology. In: Blackwell Guide to the Philosophy of Computing and Information, pp. 61–64. Blackwell (2003)
10. OWL 2 Web Ontology Language Document Overview, 2nd edn. W3C, 11 December 2012
11. OWL Web Ontology Language Guide. W3C Recommendation, 10 February 2004. https://www.w3.org/TR/owl-guide/
12. Protege Homepage. https://protege.stanford.edu/
13. Zaiets, N., Vlasenko, L., Lutska, N., Usenko, S.: System modeling for construction of the diagnostic subsystem of the integrated automated control system for the technological complex of food industries. In: ICMRE 2019, Rome, Italy, pp. 93–98 (2019)
14. Lutska, N.M., Ladanyuk, A.P., Savchenko, T.V.: Identification of the mathematical models of the technological objects for robust control systems. Radio Electron. Comput. Sci. Control 3, 163–172 (2019)
15. Voropai, N.I.: Multi-criteria decision making problems in hierarchical technology of electric power system expansion planning. In: Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866, pp. 362–368. Springer (2019)
16. Alhendawi, K.M., Al-Janabi, A.A., Badwan, J.: Predicting the quality of MIS characteristics and end-users' perceptions using artificial intelligence tools: expert systems and neural network. In: Intelligent Computing and Optimization. ICO 2019. Advances in Intelligent Systems and Computing, vol. 1072, pp. 18–30. Springer (2020)
A New Initial Basis for Solving the Blending Problem Without Using Artificial Variables

Chinchet Boonmalert, Aua-aree Boonperm, and Wutiphol Sintunavarat

Department of Mathematics and Statistics, Faculty of Science and Technology, Thammasat University, Pathum Thani 12120, Thailand
[email protected], {aua-aree,wutiphol}@mathstat.sci.tu.ac.th
Abstract. The blending problem is a production problem that can be formulated as a linear programming model and solved by the simplex method, which begins with choosing an initial set of basic variables. In the blending problem it is not easy to choose basic variables in practice, since the origin is not a feasible point. Therefore, artificial variables are usually added in order to obtain an initial basic feasible solution, which increases the size of the problem. In this paper, we present a new initial basis that requires no artificial variables. The first step of the proposed technique is to rewrite the blending problem; it is then divided into sub-problems depending on the number of products. In each sub-problem, the variable associated with the maximum profit, together with all slack variables, is selected as basic. This selection guarantees that a dual feasible solution is obtained. Therefore, artificial variables are not required.

Keywords: Linear programming model · Blending problem · Artificial-free technique · Dual simplex method
1 Introduction

Over the past decade, the blending problem has been one of the well-known optimization problems related to the production process of blending a large number of raw materials to obtain many types of products. It was first mentioned in 1952 by Charnes et al. [1], who proposed a linear programming problem to find the mix of fuels and chemicals in the airline business. There is a lot of research involving the blending problem, such as the blending of tea, milk and coal (see more details in [2–4]). The blending problem can be formulated as a linear programming model and solved by the simplex method. To use this method, the canonical form is transformed into the standard form by adding slack and surplus variables. Then, the set of basic variables must be chosen, considering both the invertibility of the basis matrix and the feasibility of the corresponding solution. For a large problem, it is hard to choose a basic matrix that gives a feasible solution. Thus, a main topic of research on improving the simplex method is to propose methods for choosing the initial basis or the initial basic feasible solution.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1262–1271, 2021. https://doi.org/10.1007/978-3-030-68154-8_108
The well-known methods used to find a basic feasible solution are the two-phase and big-M methods. These methods start by adding artificial variables to choose the initial basis. However, adding artificial variables to the problem not only makes the problem larger but also increases the number of iterations. Consequently, techniques for finding a basic feasible solution without adding artificial variables, called artificial-free techniques, have been widely investigated. In 1997, Arsham [5] proposed a two-phase method in which the first phase starts with an empty set of basic variables; basic variables are then entered one by one until the set is full. The second phase is the original simplex method, which finds the optimal solution starting from the basic feasible solution of the first phase. In 2015, Gao [6] gave a counterexample in which Arsham's algorithm reports a feasible solution for an infeasible problem. Nowadays, various artificial-free techniques have been proposed [7–10]. In this research, we focus on solving the blending problem by the simplex method. However, since the origin is not a feasible solution, artificial variables are normally required, which expands the size of the problem. Therefore, to avoid the step of finding a basic feasible solution by adding artificial variables, we propose a technique for choosing an initial basis without artificial variables. First, the original form of the blending problem is rewritten and divided into a master problem and sub-problems. Then, all sub-problems are considered to find a basic feasible solution. The main contribution of this research is not only the division of the original problem but also an algorithm for choosing the initial basis of each sub-problem which avoids adding artificial variables. This paper is organized as follows: Sect. 2 briefly reviews the blending problem.
Section 3 describes the details of the proposed method. Section 4 gives a numerical example showing the use of the proposed method, and the final section concludes.
2 Blending Problem

The blending problem is a production problem that blends m types of raw materials into n types of products under limits on each raw material, the demand for each product, and the mixing ratios. Let M and N be the set of raw materials and the set of products, respectively. Parameters a_ij and ā_ij denote the smallest and largest proportions of raw material i allowed in product j; p_j and c_i are the selling price of product j and the cost of raw material i, respectively. Moreover, s_i represents the available amount of raw material i, and d_j represents the demand for product j. The decision variable x_ij represents the amount of raw material i used to produce product j. The objective of the blending problem is to maximize the profit. Thus, the linear programming model of the blending problem can be written as follows:
max Σ_{j∈N} p_j Σ_{i∈M} x_ij − Σ_{i∈M} c_i Σ_{j∈N} x_ij   (1)
s.t. Σ_{j∈N} x_ij ≤ s_i   ∀i ∈ M   (2)
x_ij ≥ a_ij Σ_{k∈M} x_kj   ∀i ∈ M, ∀j ∈ N   (3)
x_ij ≤ ā_ij Σ_{k∈M} x_kj   ∀i ∈ M, ∀j ∈ N   (4)
Σ_{i∈M} x_ij = d_j   ∀j ∈ N   (5)
x_ij ≥ 0   ∀i ∈ M, ∀j ∈ N   (6)
In the above model, the objective function (1) maximizes the profit. Constraint (2) states that the amount of each raw material used must not exceed the available supply. Constraints (3)–(4) force the proportion of each material in product j to lie in the interval [a_ij, ā_ij]. Constraint (5) forces the total amount of raw materials in each product to meet its demand exactly. Finally, constraint (6) gives the domain of the decision variables.
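A sketch of assembling the inequality constraints of model (1)–(6) as plain Python lists, on tiny invented data (two materials, one product; all numbers are hypothetical):

```python
# Build the blending LP (1)-(6) for m=2 materials, n=1 product.
p = [10.0]            # selling price of each product
c = [4.0, 2.0]        # cost of each raw material
s = [100.0, 80.0]     # supply of each material
lo = [[0.3], [0.4]]   # a_ij lower proportion bounds
hi = [[0.7], [0.8]]   # a-bar_ij upper proportion bounds
m, n = 2, 1

# Variables ordered as x_ij with i varying fastest: x11, x21.
profit = [p[j] - c[i] for j in range(n) for i in range(m)]   # objective (1)

rows, rhs = [], []
for i in range(m):                       # (2) supply rows
    rows.append([1.0 if ii == i else 0.0 for jj in range(n) for ii in range(m)])
    rhs.append(s[i])
for j in range(n):
    for i in range(m):                   # (3) lower: a_ij*sum - x_ij <= 0
        rows.append([(lo[i][j] - (ii == i)) if jj == j else 0.0
                     for jj in range(n) for ii in range(m)])
        rhs.append(0.0)
for j in range(n):
    for i in range(m):                   # (4) upper: x_ij - abar_ij*sum <= 0
        rows.append([((ii == i) - hi[i][j]) if jj == j else 0.0
                     for jj in range(n) for ii in range(m)])
        rhs.append(0.0)

print(profit, len(rows))   # [6.0, 8.0] 6
```

The demand equalities (5) would be appended as separate equality rows; any LP solver accepting inequality and equality blocks can then be applied.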
3 The Proposed Method

Before presenting the process of the proposed method, we rewrite the general form of the blending problem (1)–(6) in the following form:

max Σ_{j∈N} p_j Σ_{i∈M} x_ij − Σ_{i∈M} c_i Σ_{j∈N} x_ij   (7)
s.t. Σ_{j∈N} x_ij ≤ s_i   ∀i ∈ M   (8)
a_ij Σ_{k∈M} x_kj − x_ij ≤ 0   ∀i ∈ M, ∀j ∈ N   (9)
x_ij − ā_ij Σ_{k∈M} x_kj ≤ 0   ∀i ∈ M, ∀j ∈ N   (10)
Σ_{i∈M} x_ij = d_j   ∀j ∈ N   (11)
x_ij ≥ 0   ∀i ∈ M, ∀j ∈ N   (12)
In this paper, we reconstitute the above general model into the following model:
max Σ_{j=1}^{n} c_j^T x̂_j   (13)
s.t. A_S x ≤ s   (14)
(A_j − I_m) x̂_j ≤ 0   ∀j   (15)
(I_m − Ā_j) x̂_j ≤ 0   ∀j   (16)
1_m^T x̂_j = d_j   ∀j   (17)
x̂_j ≥ 0   ∀j   (18)

where x̂_j = [x_1j x_2j … x_mj]^T for all j ∈ N and x stacks all x̂_j, A_S is the coefficient matrix of inequality (8), s = [s_1 s_2 … s_m]^T, I_m is the identity matrix of dimension m × m, 1_m = [1]_{m×1}, A_j = [a_1j a_2j … a_mj]^T 1_m^T for all j ∈ N, Ā_j = [ā_1j ā_2j … ā_mj]^T 1_m^T for all j ∈ N, and c_j = [c_ij]_{m×1} with c_ij = p_j − c_i for all i ∈ M and j ∈ N. Since the set of all decision variables can be partitioned according to the number of product types, the above problem can be divided into the master problem

max c^T x
s.t. A_S x ≤ s   (19)
x ≥ 0

and n sub-problems, where each Sub-problem j ∈ N is as follows:
max c_j^T x̂_j   (20)
s.t. (A_j − I_m) x̂_j ≤ 0   (21)
(I_m − Ā_j) x̂_j ≤ 0   (22)
1_m^T x̂_j = d_j   (23)
x̂_j ≥ 0   (24)

To choose the initial basis for the blending problem, we consider all sub-problems. First, we transform Sub-problem j to the standard form, so 2m slack variables are added to this sub-problem. Let Ã_j be the coefficient matrix of constraints (21)–(22), and let Â_j and b_j be the coefficient matrix and the right-hand-side column vector of constraints (21)–(23), respectively.
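The structure of A_j (and likewise Ā_j) can be checked numerically: it is the outer product of the column of proportion bounds with a row of ones, so row i of A_j x̂_j equals a_ij times the total amount of product j. A small sketch with invented bounds (chosen as exact binary fractions so the arithmetic is exact):

```python
# A_j = [a_1j, ..., a_mj]^T * 1_m^T (outer product), hence
# (A_j x)_i = a_ij * sum(x).  Bounds are invented for m = 3 materials.
a_col = [0.25, 0.5, 0.125]                   # lower proportion bounds a_ij
A_j = [[a * 1.0 for _ in range(3)] for a in a_col]

x = [30.0, 50.0, 20.0]                       # x-hat_j, total product = 100
Ax = [sum(A_j[i][k] * x[k] for k in range(3)) for i in range(3)]
print(Ax)   # [25.0, 50.0, 12.5] == [a * 100 for a in a_col]
```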
For choosing basic variables, all slack variables are selected first. However, they are not enough to form a complete basis. Hence, to avoid adding an artificial variable, one decision variable is also selected. We now present the new technique for choosing this remaining basic variable. First, for each j ∈ N, we let

l_j = arg max{c_j^T}   (25)

i.e., l_j is the index of the largest component of c_j. Then, the initial basic variables of Sub-problem j are constructed as x_B = [x_{l_j j} s_1 s_2 … s_2m]^T. Let Â_{:,l_j} be the l_j-th column of Â_j. Then the basis B_j, its inverse B_j^{-1} and the non-basic matrix N_j of Sub-problem j can be written accordingly. Since B_j^{-1} b_j contains negative components, the primal problem is not feasible at this basis. However, c_B^T B_j^{-1} N_j − c_N^T ≥ 0, thus the dual problem of Sub-problem j is feasible. The initial tableau can be constructed as in Table 1. Since B_j^{-1} b_j has negative components while c_B^T B_j^{-1} N_j − c_N^T ≥ 0, the dual simplex method can start without using artificial variables for solving Sub-problem j. After the optimal solutions to all sub-problems are found, if they satisfy the master problem then they constitute the optimal solution to the original blending problem.
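The choice (25) can be sketched directly. In a simplified view of the tableau algebra (a sketch, not the paper's full derivation), taking the variable with the largest objective coefficient as the basic decision variable makes the reduced cost of each remaining decision variable equal to c_{l_j} − c_i ≥ 0, which is exactly the sign condition the dual simplex needs. Using the coefficients of Sub-problem 1 from the example in Sect. 4:

```python
# c_1 for Sub-problem 1 of the illustrative example: c_i1 = p_1 - c_i.
c = [63.0, 68.0, 65.0]

l = max(range(len(c)), key=lambda i: c[i])   # (25): index of max profit
# (0-based index 1 corresponds to l = 2 in the paper's 1-based notation.)
reduced = [c[l] - c[i] for i in range(len(c)) if i != l]

print(l, reduced)            # 1 [5.0, 3.0]
assert all(r >= 0 for r in reduced)          # dual feasibility condition
```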
Table 1. Initial tableau of simplex method for sub-problem j of the blending problem. z xBlj T
z
1 0
xBlj 0 I2m þ 1
xN l RHS 1 1 cTBl Blj N l cTN l cTBl Blj bj j j 1 1 Blj Nl Blj bj
4 An Illustrative Example

In this section, we give an illustrative example showing the numerical results obtained by our algorithm.

Example 1. Consider the following blending problem:

max 71(x11 + x21 + x31) + 83(x12 + x22 + x32) − 8(x11 + x12) − 3(x21 + x22) − 6(x31 + x32)
s.t. x11 + x12 ≤ 1684
x21 + x22 ≤ 1793
x31 + x32 ≤ 1348
x11 ≥ 0.1315(x11 + x21 + x31),  x21 ≥ 0.3710(x11 + x21 + x31),  x31 ≥ 0.3024(x11 + x21 + x31)
x11 ≤ 0.9231(x11 + x21 + x31),  x21 ≤ 0.9510(x11 + x21 + x31),  x31 ≤ 0.7979(x11 + x21 + x31)
x12 ≥ 0.1153(x12 + x22 + x32),  x22 ≥ 0.2425(x12 + x22 + x32),  x32 ≥ 0.3064(x12 + x22 + x32)
x12 ≤ 0.5343(x12 + x22 + x32),  x22 ≤ 0.6090(x12 + x22 + x32),  x32 ≤ 0.9347(x12 + x22 + x32)
x11 + x21 + x31 = 325
x12 + x22 + x32 = 410
x11, x21, x31, x12, x22, x32 ≥ 0
Thus, the above model can be written as follows:

max 63x11 + 75x12 + 68x21 + 80x22 + 65x31 + 77x32
s.t. x11 + x12 ≤ 1684
x21 + x22 ≤ 1793
x31 + x32 ≤ 1348
x11 + x21 + x31 = 325
x12 + x22 + x32 = 410
−0.8685x11 + 0.1315x21 + 0.1315x31 ≤ 0
0.3710x11 − 0.6290x21 + 0.3710x31 ≤ 0
0.3024x11 + 0.3024x21 − 0.6976x31 ≤ 0
0.0769x11 − 0.9231x21 − 0.9231x31 ≤ 0
−0.9510x11 + 0.0490x21 − 0.9510x31 ≤ 0
−0.7979x11 − 0.7979x21 + 0.2021x31 ≤ 0
−0.8847x12 + 0.1153x22 + 0.1153x32 ≤ 0
0.2425x12 − 0.7575x22 + 0.2425x32 ≤ 0
0.3064x12 + 0.3064x22 − 0.6936x32 ≤ 0
0.4657x12 − 0.5343x22 − 0.5343x32 ≤ 0
−0.6090x12 + 0.3910x22 − 0.6090x32 ≤ 0
−0.9347x12 − 0.9347x22 + 0.0653x32 ≤ 0
x11, x12, x21, x22, x31, x32 ≥ 0
Then, the model can be divided into one master problem

max 63x11 + 75x12 + 68x21 + 80x22 + 65x31 + 77x32
s.t. x11 + x12 ≤ 1684
x21 + x22 ≤ 1793
x31 + x32 ≤ 1348
x11, x12, x21, x22, x31, x32 ≥ 0

and two sub-problems as follows:

Sub-problem 1:
max 63x11 + 68x21 + 65x31
s.t. −0.8685x11 + 0.1315x21 + 0.1315x31 ≤ 0
0.3710x11 − 0.6290x21 + 0.3710x31 ≤ 0
0.3024x11 + 0.3024x21 − 0.6976x31 ≤ 0
0.0769x11 − 0.9231x21 − 0.9231x31 ≤ 0
−0.9510x11 + 0.0490x21 − 0.9510x31 ≤ 0
−0.7979x11 − 0.7979x21 + 0.2021x31 ≤ 0
x11 + x21 + x31 = 325
x11, x21, x31 ≥ 0

Sub-problem 2:
max 75x12 + 80x22 + 77x32
s.t. −0.8847x12 + 0.1153x22 + 0.1153x32 ≤ 0
0.2425x12 − 0.7575x22 + 0.2425x32 ≤ 0
0.3064x12 + 0.3064x22 − 0.6936x32 ≤ 0
0.4657x12 − 0.5343x22 − 0.5343x32 ≤ 0
−0.6090x12 + 0.3910x22 − 0.6090x32 ≤ 0
−0.9347x12 − 0.9347x22 + 0.0653x32 ≤ 0
x12 + x22 + x32 = 410
x12, x22, x32 ≥ 0

For Sub-problem 1,
since l = arg max{c_1^T} = 2 (the largest objective coefficient, 68, belongs to x21), the basis is constructed associated with x_B = [x21 s1 s2 s3 s4 s5 s6]^T as follows:
The initial tableaux of Sub-problems 1 and 2 are constructed in Tables 2 and 3:
Table 2. The initial tableau of Sub-problem 1.

x_B | z | x21 | s1 | s2 | s3 | s4 | s5 | s6 | x11 | x31 | RHS
z   | 1 | 0   | 0  | 0  | 0  | 0  | 0  | 0  | −7  | −5  | 22100
x21 | 0 | 1   | 0  | 0  | 0  | 0  | 0  | 0  | 1   | 1   | 325
s1  | 0 | 0   | 1  | 0  | 0  | 0  | 0  | 0  | −1  | 0   | −42.722
s2  | 0 | 0   | 0  | 1  | 0  | 0  | 0  | 0  | 1   | 1   | 204.4269
s3  | 0 | 0   | 0  | 0  | 1  | 0  | 0  | 0  | 0   | −1  | −98.2894
s4  | 0 | 0   | 0  | 0  | 0  | 1  | 0  | 0  | 1   | 0   | 299.9998
s5  | 0 | 0   | 0  | 0  | 0  | 0  | 1  | 0  | −1  | −1  | −15.9281
s6  | 0 | 0   | 0  | 0  | 0  | 0  | 0  | 1  | 0   | 1   | 259.3024
After the initial tableau is constructed, the dual simplex method is used for solving each sub-problem. The optimal solutions to Sub-problem 1 and Sub-problem 2 are found at x̂_1 = (42.7375, 183.9825, 98.28) and x̂_2 = (47.273, 237.103, 125.624). This solution also satisfies the master problem; therefore, it is the optimal solution to the original problem. The number of iterations of
Table 3. The initial tableau of Sub-problem 2.

x_B | z | x22 | s1 | s2 | s3 | s4 | s5 | s6 | x12 | x32 | RHS
z   | 1 | 0   | 0  | 0  | 0  | 0  | 0  | 0  | −5  | −3  | 32800
x22 | 0 | 1   | 0  | 0  | 0  | 0  | 0  | 0  | 1   | 1   | 410
s1  | 0 | 0   | 1  | 0  | 0  | 0  | 0  | 0  | −1  | 0   | −47.2711
s2  | 0 | 0   | 0  | 1  | 0  | 0  | 0  | 0  | 1   | 1   | 310.5865
s3  | 0 | 0   | 0  | 0  | 1  | 0  | 0  | 0  | 0   | −1  | −125.6288
s4  | 0 | 0   | 0  | 0  | 0  | 1  | 0  | 0  | 1   | 0   | 219.0499
s5  | 0 | 0   | 0  | 0  | 0  | 0  | 1  | 0  | −1  | −1  | −160.3011
s6  | 0 | 0   | 0  | 0  | 0  | 0  | 0  | 1  | 0   | 1   | 383.2287
each sub-problem is only two, while the simplex method uses 12 iterations. Moreover, 15 slack variables and 2 artificial variables are added before the simplex method starts. The comparison of the simplex method and the proposed method on this example is shown in Table 4.

Table 4. Comparison between the simplex method and the proposed method.

Method                            Size of problem   Number of iterations
Simplex method (Phase I)          17 × 23           9
Simplex method (Phase II)         17 × 21           3
Proposed method (Sub-problem 1)   7 × 10            2
Proposed method (Sub-problem 2)   7 × 10            2
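The mechanics behind these iteration counts can be illustrated on a toy problem. The sketch below is not the paper's blending model; the data are invented for illustration. Starting from the all-slack basis, the tableau is dual feasible (nonnegative cost row) but primal infeasible (negative right-hand sides), and the dual simplex restores primal feasibility in a couple of pivots without any artificial variables, which is the same pattern the proposed method exploits.

```python
# Minimal dual simplex sketch (illustrative toy data, not the paper's model):
# minimize 3*x1 + 2*x2  subject to  x1 + x2 >= 2,  x1 + 3*x2 >= 3,  x >= 0.
# Rewriting ">=" rows as "<=" gives negative right-hand sides, so the
# all-slack basis is primal infeasible but dual feasible (costs >= 0).

def dual_simplex(tableau, basis):
    """tableau: rows [a1 .. an | rhs]; row 0 is the cost row.
    Assumes the data stay dual feasible and the problem is feasible."""
    m = len(tableau) - 1
    n = len(tableau[0]) - 1
    while True:
        # Leaving row: most negative right-hand side.
        r = min(range(1, m + 1), key=lambda i: tableau[i][-1])
        if tableau[r][-1] >= -1e-12:
            break                       # primal feasible -> optimal
        # Entering column: dual ratio test over negative pivot candidates.
        cands = [j for j in range(n) if tableau[r][j] < 0]
        j = min(cands, key=lambda k: tableau[0][k] / -tableau[r][k])
        piv = tableau[r][j]
        tableau[r] = [v / piv for v in tableau[r]]
        for i in range(m + 1):
            if i != r and tableau[i][j] != 0:
                f = tableau[i][j]
                tableau[i] = [a - f * b for a, b in zip(tableau[i], tableau[r])]
        basis[r - 1] = j
    return tableau, basis

T = [[3.0, 2.0, 0.0, 0.0, 0.0],       # cost row
     [-1.0, -1.0, 1.0, 0.0, -2.0],    # -x1 -   x2 + s1 = -2
     [-1.0, -3.0, 0.0, 1.0, -3.0]]    # -x1 - 3*x2 + s2 = -3
T, basis = dual_simplex(T, [2, 3])    # start with slacks s1, s2 basic
x = [0.0, 0.0]
for row, j in zip(T[1:], basis):
    if j < 2:
        x[j] = row[-1]
# The method converges in two pivots: x = (0, 2) with objective value 4.
```

As in the paper's example, only two dual simplex iterations are needed, and no Phase I is required because the starting basis is already dual feasible.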
5 Conclusion

In this paper, the original blending problem is rewritten and divided into a master problem and sub-problems, where each sub-problem is smaller than the original problem. After the division, the sub-problems, whose number depends on the number of products, are considered. To find the optimal solution of each sub-problem, an initial basis is chosen to construct the initial simplex tableau. The proposed rule selects, for each sub-problem, the one variable with the maximum objective-function coefficient together with all slack variables as the initial basic variables. This selection yields a dual feasible solution, so the dual simplex method can be performed without artificial variables. Because the algorithm does not use artificial variables, the phase for finding an initial feasible solution is omitted, which is the main advantage of our algorithm.

Acknowledgment. This work was supported by Thammasat University Research Unit in Fixed Points and Optimization.
Review of the Information that is Previously Needed to Include Traceability in a Global Supply Chain

Zayra M. Reyna Guevara1, Jania A. Saucedo Martínez1, and José A. Marmolejo2

1 Universidad Autónoma de Nuevo León, 66451 San Nicolás de los Garza, México
{zayra.reynagvr,jania.saucedomrt}@uanl.edu.mx
2 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, 03920 Ciudad de México, México
[email protected]
Abstract. Caring for and preserving the quality of products throughout the supply chain has become a challenge for global trade, as has backing up information so that it is available to the clients or authorities that require it. Global chains are made up of different stages and processes, including production, transformation, handling, and transportation from origin to destination. These stages are executed by different actors, and therefore continuous, effective communication between them is required. For this reason, the initial purpose of this research is to identify what prior information is required to include traceability processes that support an integrated supply chain and thus reduce potential risks to the integrity of the marketed product. An analysis of available traceability technologies that have been used in global supply chains is included. The information was collected through a review of the literature on traceability and quality, which made it possible to identify, in a general way, the main stakeholders in supply chains. This provides a frame of reference on the possible limitations of the research and a starting point for the proposal of an efficient traceability process according to the client's requirements.

Keywords: Traceability · Global supply chain · Technology · Quality
1 Introduction

In a market context with a tendency to expand, logistics plays a key role: it constantly seeks to satisfy demand at the best possible cost without neglecting customer care and service, which requires delivering products with the best possible quality and safety and giving customers access to the information related to the purchased product. Currently, all of this is possible thanks to the integration of technological tools in supply chains together with the application of traceability processes, which have become a requirement in international trade for better
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1272–1280, 2021. https://doi.org/10.1007/978-3-030-68154-8_109
control and risk management, especially in agro-industrial chains. The food and pharmaceutical industries must have traceability systems that allow the identification of products and processes, with proper backup of that information. The production chain must comply with both local and international regulations as well as the expected quality standards; risks must also be reduced as much as possible by giving each participant access to information on the monitoring and management of their products, which is achieved when traceability systems are integrated and working properly. Tracking tools allow the product to be followed at each stage of the transformation process and during its distribution throughout the supply chain by maintaining information support, which includes the implementation of new technologies. Since our study is in its initial phase, we can use simulation to start running tests. Evolutionary prototyping is a popular system development technique that can give users a concrete sense of software systems that have not yet been implemented [1]. Traceability in the supply chain allows the identification of information related to the monitoring and tracing of traceable units during the production and marketing process [10]. Since a supply chain is made up of various actors according to each stage, including producers, suppliers, packers, transporters, traders, and consumers, a traceability system supports the integration of the chain so that demand can be satisfied and, if necessary, corrective actions taken. All this is in an effort to seek better marketing opportunities, ensuring compliance with health and quality standards regardless of the type of transport used to reach the destination.
All stages of the chain should be documented and managed so that it is possible to trace them back to the origin of the product, step by step, and ensure that there has been no contamination… This traceability minimizes the risk of fraud at all stages and is a very important part of the inspection process of certifying organizations [5]. An analysis of available methodologies and technologies is included that may be useful to identify the risks involved in the chain and to show how traceability processes help ensure product quality by determining which stages already have safe handling and which stages are vulnerable. Finally, the use of tools or technologies that support the backup of information from process documentation will be proposed.
2 Background

Logistics is defined as the set of activities and processes necessary to ensure the delivery of merchandise from the place of its production to the point where the product is marketed or delivered to the final consumer. This involves inspection and quality control of products, and in terms of geographical scope it can concern urban, domestic, or foreign trade [2]. In this way, it is understood that there are customers who are willing to pay more if the product they are purchasing is of higher quality and they can stay informed.
Z. M. Reyna Guevara et al.
Traceability is the set of pre-established and self-sufficient procedures that allow knowing the history, location, and trajectory of a product or batch of products throughout the supply chain, at a given time and through specific tools [6]. According to ISO 9000:2000, quality is the degree to which a set of characteristics meets requirements. As a precedent, proposals were found that involve the use of technologies in the management of traceability processes and that conclude that a traceability system must be highly automated for better management and control of all movements involved in the supply chain; for the distribution of a product, it must be supported by systems that allow continuous monitoring and that function globally and effectively interconnected throughout the food chain, thus achieving a high degree of safety for consumers [9]. Due to the complexity of operations in international trade and the demand for high quality standards, there is a need for adequate risk management, information exchange, and traceability processes that help verify the safe handling of the product along its entire path through the supply chain and reduce risks that may affect quality; traceability provides the information necessary for compliance with international standards. Depending on the chain in which we are working, a specific strategy must be designed that adapts to its needs; therefore, each stage must be analyzed from origin to destination, identifying vulnerable stages by pointing out the variables and critical points. For this, a review of available technological tools has been made (see Table 3). In terms of risks, a study conducted by Dr. J. Paul Dittman for UPS Capital Corporation in 2014 identified the following vulnerabilities in the supply chain, ranking quality risks at the top of the list. Long global supply chains can make it extremely difficult to recover from quality problems [7].

1. Quality.
2. Inventory.
3. Natural disasters.
4. Economy.
5. Loss in transit.
6. Delays.
7. Computer security.
8. Intellectual property.
9. Political instability.
10. Customs.
11. Terrorism.
According to the level of implementation, some advantages of using traceability processes in our supply chain are:

– Documentation and support of the product history against possible claims.
– Assistance in the search for the cause of a nonconformity.
– Help in the control and management of processes.
– Provision of information to regulators and customers.
– Reduction of manual controls in ports.
– Ease of identifying defective products and withdrawing products in the event of possible incidents.
– Support for the product safety and quality control system.
– Helps reduce risks related to food safety.
3 Methodology

3.1 Identification of Variables and Critical Points
There are several methodologies, addressed by the authors listed in Table 1, that help identify the variables and critical points that affect a supply chain.

Table 1. Methodologies for identification of variables and critical points.

Author | Title | Methodology
Heizer, J. and Render, B., 2009 | Principios de administración de operaciones, 7th edn. | Factor qualification method; Quality function deployment (QFD); Total quality management (TQM)
Stanley, R., Knight, C., Bodnar, F., 2011 | Experiences and challenges in the development of an organic HACCP system | Hazard analysis and critical control points (HACCP)
Rodríguez, E., 2018 | Identificación de prácticas en la gestión de la cadena de suministro sostenible para la industria alimenticia | Network or lattice analysis
The factor rating method is an excellent tool for dealing with evaluation problems such as country risk or supplier selection [11]. The QFD (quality function deployment) methodology refers, first, to determining what will satisfy the customer and, second, to translating the customer's wishes into a target design. The idea is to gain a good understanding of the customer's wishes and to identify alternative process solutions. QFD is used early in the design process to help determine what will satisfy the customer and where to deploy quality efforts [11]. TQM (total quality management) refers to the emphasis that an entire organization places on quality, from supplier to customer. TQM emphasizes management's commitment to continually direct the entire company toward excellence in all aspects of products and services that are important to the customer [11]. The use of quality assurance based on hazard analysis and critical control points (HACCP) has a well-established place in controlling safety hazards in food supply chains. It is an assurance system based on the prevention of food safety problems and is
accepted by international authorities as the most effective means of controlling foodborne diseases [12]. From the results of a qualitative analysis, it is possible to apply the technique of network or reticular analysis, which allows identifying the relationships between the different practices, dimensions, and categories of analysis [13]. Therefore, for this research, we must first start by looking at the type of supply chain that we will be working with. Once this step has been carried out, we will be able to select an appropriate methodology to identify the existing variables.

3.2 Supply Chain Modeling
To continue to the next stage, we will select a tool that allows us to model the value chain in detail and thus verify the traceability aspects involved. Table 2 lists some of the tools available for this purpose.

Table 2. Characterization of the supply chain.

Author | Title | Tool
Martínez, K. et al., 2018 | Caracterización de la cadena de suministro de la Asociación Ruta de la Carne en el departamento de Boyacá | Analysis of basic functions; SCOR
García-Cáceres, R. et al., 2014 | Characterization of the supply and value chains of Colombian cocoa | Strategic supply chain management (SCM) framework
Pizzuti, T. et al., 2014 | Food Track & Trace ontology for helping the food traceability control | Analysis of basic functions; Business process management notation (BPMN)
Giraldo, J. et al., 2019 | Simulación discreta y por agentes de una cadena de suministro simple incluyendo un Sistema de Información Geográfica (SIG) | Simulation
Sarli, J. et al., 2018 | Un modelo de interoperabilidad semántica para simulación distribuida de cadenas de suministro | Simulation
The analysis of basic functions helps identify the characteristics of the supply chain, starting from qualitative and quantitative evaluations that include interviews and surveys applied to the members of the chain [23]. We can represent the current state of the supply chain through the geographic map, thread diagram, and process diagram used in the SCOR model, which describes the existing process by identifying sources, manufacturing sites, and distribution centers using the process categories [23].
The Object Management Group noted in 2010 that BPMN is a notation specifically designed to coordinate the sequence of processes and the messages that flow between the different participants in a set of related activities. The analysis of a generic supply chain underscores that as a product passes from the primary producer to the end customer, it undergoes a series of transformations, each involving different agents. Each agent must collect, maintain, and share information to enable food traceability. The registration and management of this information is facilitated through the use of an information system generated from the BPMN model of the supply chain [25]. A well-known system modeling technique is simulation, which allows imitating the operation of various kinds of facilities and processes in the real world; there is no unique way to model by simulation, as this depends on the characteristics of the system, the objectives of the study, and the information available [26]. In the Sarli study published in 2018, distributed simulation appears as a more appropriate configuration for simulating a supply chain, since it allows the reuse of previously existing simulators of the chain members, thus maintaining independence and avoiding the need to build a single simulator that understands the behavior of all participants. In this way, each member preserves its business logic, uses its own simulation model, and shares the minimum amount of information necessary, which allows it to modify its internal logic without affecting the rest [27]. Having mentioned the above, we can start the characterization of the supply chain using the analysis of basic functions, at first with qualitative or quantitative evaluations, and once we have that information we can proceed to make a geographic map, a thread diagram, and the process diagram from the SCOR model.
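The requirement that each agent collect, maintain, and share information about its transformation step can be sketched as a simple data structure. The sketch below is illustrative only; the class name and record fields are assumptions, not taken from any cited system. Each traceable unit keeps references to its input units, so a finished lot can be traced back, step by step, to its origin.

```python
from dataclasses import dataclass, field

@dataclass
class TraceableUnit:
    """One lot at one stage of the chain (fields are illustrative)."""
    unit_id: str
    agent: str                                   # producer, packer, trader, ...
    stage: str
    inputs: list = field(default_factory=list)   # upstream TraceableUnits

    def trace_back(self):
        """Walk the chain back to the origin, most recent stage first."""
        history, stack = [], [self]
        while stack:
            u = stack.pop()
            history.append((u.stage, u.agent, u.unit_id))
            stack.extend(u.inputs)
        return history

# One path through a simplified chain: harvest -> packing -> retail.
raw = TraceableUnit("LOT-001", "producer A", "harvest")
packed = TraceableUnit("PKG-007", "packer B", "packing", inputs=[raw])
sold = TraceableUnit("SKU-042", "trader C", "retail", inputs=[packed])
# sold.trace_back() yields the stages retail, packing, harvest in order.
```

Because each unit may list several inputs, the same walk also handles blended lots, where one packed unit traces back to multiple harvest lots.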
Our next step can be a simulated supply chain; we consider it an appropriate option for our study since it would allow us to make changes or new proposals flexibly, testing them against the supply chain as appropriate.

3.3 Technology for Traceability
For the next stage, a technology for traceability will be selected. Table 3 shows some of the technological tools used in traceability processes. The most common is RFID technology, while the most recent, still in an exploratory phase in supply chains, is blockchain technology; we will therefore select the one or ones that allow us to propose improvements to the traceability processes applicable to our future case study.
Table 3. Technological tools for traceability.

Author | Title | Technology
Catarinucci, L. et al., 2011 | RFID and WSNs for traceability of agricultural goods from Farm to Fork: Electromagnetic and deployment aspects on wine test-cases | RFID
Zou, Z. et al., 2014 | Radio frequency identification enabled wireless sensing for intelligent food logistics | RFID
Mainetti, L. et al., 2013 | An innovative and low-cost gapless traceability system of fresh vegetable products using RF technologies and EPCglobal standard | RFID; NFC
Mack, M. et al., 2014 | Quality tracing in meat supply chains | DataMatrix; unique quality identification
Zhao, Y. et al., 2014 | Recent developments in application of stable isotope analysis on agro-product authenticity and traceability | Isotope analysis
Consonni, R., Cagliani, L., 2010 | Nuclear magnetic resonance and chemometrics to assess geographical origin and quality of traditional food products | Chemometry and NIRS
De Mattia, F. et al., 2011 | A comparative study of different DNA barcoding markers for the identification of some members of Lamiacaea | DNA barcode
Casino, F. et al., 2019 | Modeling food supply chain traceability based on blockchain technology | Blockchain
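Blockchain-based traceability, the most recent option among these technologies, can be illustrated with a toy hash chain. This is a minimal sketch, not the model of Casino et al.: each traceability event stores the hash of the previous event, so any later tampering with a stored record breaks the chain and becomes detectable.

```python
import hashlib, json

def add_event(chain, data):
    """Append a traceability event linked to the previous one by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"data": data, "prev": prev_hash, "hash": digest})

def verify(chain):
    """Recompute every link; returns False if any stored record was altered."""
    prev = "0" * 64
    for ev in chain:
        expected = hashlib.sha256(
            json.dumps({"data": ev["data"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if ev["hash"] != expected or ev["prev"] != prev:
            return False
        prev = ev["hash"]
    return True

chain = []
add_event(chain, {"stage": "harvest", "lot": "LOT-001"})
add_event(chain, {"stage": "transport", "lot": "LOT-001", "temp_ok": True})
ok_before = verify(chain)              # intact chain verifies
chain[0]["data"]["lot"] = "LOT-999"    # tamper with the first record
ok_after = verify(chain)               # tampering is detected
```

A real blockchain adds distributed consensus and replication on top of this linking; the sketch only shows the tamper-evidence property that motivates its use for traceability.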
A comparative evaluation of these available methodologies and tools will be carried out to define which would be the most appropriate for this research. In the final stage, we intend to analyze the data, measure whether the objectives were achieved and whether it was possible to identify the variables and critical points, and thereby generate the discussion of the results and the dissemination of the research.
4 Conclusions

We also need to find specific information about regulatory requirements. The commitment of those responsible for each phase of the supply chain is also required to achieve full traceability. A good understanding of the business problem and a clear definition of the objectives to be achieved are required; since logistics is not the same for all organizations, traceability processes must be flexible and adapted to each case as necessary, taking into account the requirements of the client, demand, implementation capacity, and the design of the supply chain. In this order of ideas, we understand that the level of implementation of traceability will depend on the type of supply chain and the ability to adapt to the expectations and
demands of the client with the best possible technology, since global markets demand it and there are also clients who will be willing to pay more for a higher quality product and to stay informed about it.
References
1. Zhang, X., Lv, S., Xu, M., Mu, W.: Applying evolutionary prototyping model for eliciting system requirement of meat traceability at agribusiness level. Food Control 21, 1556–1562 (2010)
2. Montanez, L., Granada, I., Rodríguez, R., Veverka, J.: Guía logística. Aspectos conceptuales y prácticos de la logística de cargas. Banco Interamericano de Desarrollo, Estados Unidos (2015)
3. Barbero, J.: La logística de carga en América Latina y el Caribe, una agenda para mejorar su desempeño. Banco Interamericano de Desarrollo, Estados Unidos (2010)
4. Besterfield, D.: Control de calidad, 8th edn. Pearson Educación, México (2009)
5. ITC: Guía del Exportador de Café, 3rd edn. Centro de Comercio Internacional, agencia conjunta de la Organización Mundial del Comercio y las Naciones Unidas, Suiza (2011)
6. AECOC: La Asociación de Fabricantes y Distribuidores. https://www.aecoc.es/servicios/implantacion/trazabilidad/. Accessed 10 Aug 2020
7. Dittman, J.: Gestión de riesgos en la cadena de suministro global, un informe de la Facultad de Administración de la Cadena de Suministro en la Universidad de Tennessee. Patrocinado por UPS Capital Corporation, Estados Unidos (2014)
8. Forero, N., González, J., Sánchez, J., Valencia, Y.: Sistema de trazado para la cadena de suministro del café colombiano, Colombia (2019)
9. Sosa, C.: Propuesta de un sistema de trazabilidad de productos para la cadena de suministro agroalimentaria, España (2017)
10. Rincón, D., Fonseca, J., Castro, J.: Hacia un marco conceptual común sobre trazabilidad en la cadena de suministro de alimentos, Colombia (2017)
11. Heizer, J., Render, B.: Principios de administración de operaciones, 7th edn. Pearson Educación, México (2009)
12. Stanley, R., Knight, C., Bodnar, F.: Experiences and challenges in the development of an organic HACCP system, United Kingdom (2011)
13. Rodríguez, E.: Identificación de prácticas en la gestión de la cadena de suministro sostenible para la industria alimenticia, Colombia (2018)
14. Badia-Melis, R., Mishra, P., Ruiz-García, L.: Food traceability: new trends and recent advances. A review. Food Control 57, 393–401 (2015)
15. Catarinucci, L., Cuiñas, I., Expósito, I., Colella, R., Gay-Fernández, J., Tarricone, L.: RFID and WSNs for traceability of agricultural goods from farm to fork: electromagnetic and deployment aspects on wine test-cases. In: SoftCOM 2011, 19th International Conference on Software, Telecommunications and Computer Networks, pp. 1–4. Institute of Electrical and Electronics Engineers (2011)
16. Zou, Z., Chen, Q., Uysal, I., Zheng, L.: Radio frequency identification enabled wireless sensing for intelligent food logistics. Philos. Trans. R. Soc. A 372, 20130302 (2014)
17. Mainetti, L., Patrono, L., Stefanizzi, M., Vergallo, R.: An innovative and low-cost gapless traceability system of fresh vegetable products using RF technologies and EPCglobal standard. Comput. Electron. Agric. 98, 146–157 (2013)
18. Mack, M., Dittmer, P., Veigt, M., Kus, M., Nehmiz, U., Kreyenschmidt, J.: Quality tracing in meat supply chains. Philos. Trans. R. Soc. A 372, 20130302 (2014)
19. Zhao, Y., Zhang, B., Chen, G., Chen, A., Yang, S., Ye, Z.: Recent developments in application of stable isotope analysis on agro-product authenticity and traceability. Food Chem. 145, 300–305 (2014)
20. Consonni, R., Cagliani, L.: Nuclear magnetic resonance and chemometrics to assess geographical origin and quality of traditional food products. Adv. Food Nutr. Res. 59, 87–165 (2010)
21. De Mattia, F., Bruni, I., Galimberti, A., Cattaneo, F., Casiraghi, M., Labra, M.: A comparative study of different DNA barcoding markers for the identification of some members of Lamiacaea. Food Res. Int. 44, 693–702 (2011)
22. Casino, F., Kanakaris, V., Dasaklis, T., Moschuris, S., Rachaniotis, N.: Modeling food supply chain traceability based on blockchain technology. IFAC PapersOnLine 52, 2728–2733 (2019)
23. Martínez, K., Rivera, L., García, R.: Caracterización de la cadena de suministro de la Asociación Ruta de la Carne en el departamento de Boyacá. Universidad Pedagógica y Tecnológica de Colombia, Colombia (2018)
24. García-Cáceres, R., Perdomo, A., Ortiz, O., Beltrán, P., López, K.: Characterization of the supply and value chains of Colombian cocoa. DYNA 81, 30–40 (2014). Universidad Nacional de Colombia, Colombia
25. Pizzuti, T., Mirabelli, G., Sanz-Bobi, M., Goméz-Gonzaléz, F.: Food track & trace ontology for helping the food traceability control. J. Food Eng. 120, 17–30 (2014)
26. Giraldo-García, J., Castrillón-Gómez, O., Ruiz-Herrera, S.: Simulación discreta y por agentes de una cadena de suministro simple incluyendo un Sistema de Información Geográfica (SIG). Información tecnológica 30, 123–136 (2019)
27. Sarli, J., Leone, H., Gutierrez, M.: SCFHLA: Un modelo de interoperabilidad semántica para simulación distribuida de cadenas de suministro. RISTI, Revista Ibérica de Sistemas y Tecnologías de Información 30, 34–50 (2018)
Online Technology: Effective Contributor to Academic Writing

Md. Hafiz Iqbal1, Md Masumur Rahaman2, Tanusree Debi3, and Mohammad Shamsul Arefin3

1 Government Edward College, Pabna, Bangladesh
[email protected]
2 Bangladesh Embassy, Bangkok, Thailand
3 Chittagong University of Engineering and Technology, Chittagong, Bangladesh
Abstract. This study explores the potential contributors to online technology in academic writing and designs a policy for using online technology. Focus group discussion and a survey (n = 151) were used for variable selection, questionnaire development, and data collection. A mini-experiment was conducted at the Department of Economics, Government Edward College, Pabna, Bangladesh, and the survey was conducted at seven different colleges, also in Pabna district. Students' socio-economic-demographic characteristics such as age, gender, religion, parents' income, daily expenses, household composition, and level of education are major determinants of effective usage of online technology. Tutorial sessions on using online technology in writing, cost-effective internet packages for students, institutional and infrastructural support, motivation, students' residential location, and electronic gadgets provided to students at a subsidized price are important contributors to the use of online technology. Online technology-aided academic writing is a joint effort of students, teachers, college administration, the university authority, and the Government.

Keywords: Online technology · Academic writing · Knowledge management · Massive open online course

1 Introduction

Academic writing without cut, copy, and paste has great importance in every research and educational institution, where plagiarism-induced writing is intolerable [1, 2]. Online technology plays a significant role in producing quality writing. It facilitates the provision of e-mentoring, e-libraries, and e-discussion forums [3, 4]. The most common and popular forms of online technology include Dropbox, Google Drive, YouTube, Facebook, online databases, reflective journals, blogs, chat rooms for academic discussion, wikis, Skype, WhatsApp, Zoom, and social media groups [5]. Students' academic and research activities at the tertiary level are strongly influenced by online technology because of its significant role in generating diversified learning strategies [6]. It develops students' presentation, writing, searching, concept
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1281–1294, 2021. https://doi.org/10.1007/978-3-030-68154-8_110
development, critical thinking, citation, and referencing skills [7]. Besides, mentees or students can send assignments to their respective mentors and academic supervisors or teachers for assessment of their academic tasks through Dropbox, Google Drive, and WhatsApp. They can share necessary files, documents, relevant articles, information, and data with other researchers, academicians, and policymakers by using online technology. Proper utilization of online technology also helps in selecting, designing, and developing the questionnaire, survey technique, study area, theory, concepts, research questions and objectives, research approach and philosophy, methodology, data management technique, and interview questions, and in obtaining proofreading from teachers, researchers, experts, and mentors. Thus, online technology has resulted in the emergence of new forms of learning genres and literature that empower students' level of cognition through collaboration among students, teachers, and researchers [8]. Considering the importance of online technology, a British proverb reveals that "if students or learners do not effectively make a bridge between their ideas and online technology, they might as well live in a cave or the ground shell of a space shuttle" [9, p. 1]. Modern education and research always place emphasis on online technology for significant academic writing [10]. At present, students are accustomed to handling computers, the internet, smartphones, and tablets. Besides, they have proficiency in intelligent computing, which motivates them to use online and digital technologies. Optimum benefit from online technology in academic writing requires its proper utilization. Having no prior application knowledge of online technology may reduce students' and researchers' interest in and satisfaction with it.
This situation may narrow the scope of collaborative learning in a broader sense and restrict its ability to produce quality academic writing. Besides, motivation towards greater usage of online technology in academic writing stalls when it faces challenges. Potential challenges to using this technology include learners' limited ability to purchase essential electronic gadgets and internet packages, insufficient support for designing online technology-mediated curricula and assessment techniques, a lack of motivation and encouragement from teachers for more usage of online technology in writing, students' inability to handle online technology, and interrupted internet connections [11, 12]. Authenticity verification, security, and fraud prevention are other challenges to continuing with online technology [13, 14]. Graduate college students of the National University (NU) of Bangladesh are not habituated to online technology for their academic writing. They prefer verbal instruction from their respective teachers, printed textbooks, newspapers, periodicals, and predatory journals. Having no sound application knowledge of online technology in academic writing, they resort to cut, copy, and paste practices to complete their term papers and other academic assignments and fail to show their potential and capability in assignments and term papers [15]. Moreover, they fail to develop critical thinking and constructive, innovative, and creative ideas, and thus their academic assignments lose scientific quality and originality. At the same time, they fail to present novel ideas and to merge different research paradigms such as epistemological stances and ontological beliefs. To promote online technology in academic writing, this study tries to fulfill the following two objectives: detect potential contributors to more
Online Technology: Effective Contributor to Academic Writing
1283
usage of online technology at the graduate level under the NU of Bangladesh, and develop policy options for using online technology in academic writing. Online technology has gained popularity in academic writing in many universities and research institutions due to its significant role in better learning, research, and education. This study mainly focuses on the benefits of online learning in academic writing and also provides an overview of online technology and academic writing. The study is significant in several ways. It contributes to current research on the effect of online technology in academic writing in two ways. First, we provide quantitative evidence on the effect of online technology in academic writing, which has received little attention in the previous literature. Our findings highlight the effect of online technology on writing. Second, we conduct a mini-experiment, which allows us to consider two groups, a treated (experimental or study) group and a usual (control) group, for proper empirical investigation. Hence, we are able to measure the causal effects of online technology on academic writing. In particular, the study gives a fuller understanding of the use of online technology in academic writing and provides an effective guideline for future researchers and policymakers in this field. The remainder of this paper is organized as follows: Sect. 2 covers a brief literature review. Section 3 presents the theoretical motivation, and Sect. 4 highlights the methodology and research plan. The results and discussion are provided in Sect. 5. Section 6 concludes by offering recommendations and policy implications.
2 Literature Review
Online technology enhances students' writing skills. It further helps to build strong academic collaboration among students, teachers, researchers, mentors, and others. Research work, communication, and social and cultural competencies gain more force when they are handled with online technology [16, 17]. It works as a catalyst to reduce plagiarism in academic writing and helps to locate research opportunities around the world [18]. It is treated as a more effective, significant, and powerful tool that facilitates concept building. Students get good grades when they are involved in online technology-mediated academic writing [19]. It supports pedagogical tasks and is considered a means of inclusive, lifelong learning. The principal objective of this technology is to generate quality writing. Moreover, students get easy access to the learning platform through online technology, which enhances higher-order learning, thinking, and writing skills [20]. Based on the numerous benefits of online technology, [21] places more stress on online technology-induced academic writing than on traditional modes of writing because it is a suitable platform for distance-mode learning. Students can access learning from anywhere and at any time through online technology [22]. The use of online technology in academic writing through e-discussion forums, social media groups, wikis, Skype, blogs, and Zoom is well recognized [23]. For instance, blog-mediated concepts and feedback play an important role in academic writing [24]. The blog is an important contributor to students' learning and academic writing because it provides critical comments and constructive feedback [25, 26]. It develops learners' cognitive level, enhances capacity building in academic writing, and fosters literacy
1284
Md. H. Iqbal et al.
skills [27]. It also develops the general concept of any particular issue, motivates students towards copy-and-paste-free writing, and enhances students' confidence along with their perceived strengths and weaknesses in academic writing. Like blogs, Skype generates a new dimension of the academic writing approach. Generally, traditional writing (such as library-oriented and class notes-based academic writing) requires a longer time and a large number of academic resources (such as class notes, textbooks, newspapers, and articles) to finalize a manuscript [28]. Skype-mediated students perform better in academic writing than those following traditional academic writing practices [29]. Wikis also have considerable impacts on academic writing: they make students pay closer attention to the formal aspects of academic writing through collectively contributing or mutually supportive approaches [30]. The wiki is an emergent technology that increases writing capacity; this web-based authoring instrument is used in academic writing to work in collaboration with others [31]. Wiki-assisted writing can contribute to raising awareness of the audience and increasing the use of interpersonal metadiscourse [32]. The reflective journal, in turn, plays a vital role in eliminating typos and producing error-free writing. Through the reflective journal, students get relevant and constructive feedback from researchers, scholars, key persons, and teachers that meets their objectives [33]. Like the reflective journal, the chat room also has positive impacts on concept building for academic writing. Online chat rooms function well for regular interactions between students and librarians over essential, valid, and reliable bibliographic databases and desirable e-journals and books [34]. Students who are habituated to the chat room for concept development perform better in academic writing than students who have not used a chat reference service [35].
Assessment of the effectiveness of online technology in learning is highly important for teachers, college administrators, the authority of the NU of Bangladesh, and the government in implementing, planning, and modifying online technology-based learning for better academic writing. The existing literature highlights the effectiveness of online technology in learning from the perspectives of different countries, but very few studies have focused on its effectiveness in academic writing in Bangladeshi educational institutions such as the NU of Bangladesh. This study tries to cover this gap.
3 Theoretical Motivation
A society can utilize an alternative option when it fails to satisfy a desirable condition with its traditional or existing option. From this point of view, Kelvin Lancaster and Richard G. Lipsey first formalized the theory of the second best in their 1956 article "The General Theory of Second Best", following earlier work by James E. Meade [36]. In this viewpoint, online technology works as an effective contributor to academic writing in place of traditional writing practice. The theory of the second best explains why students use online technology in their academic writing practice. The theory is based on Pareto optimality (academic writing by online technology cannot increase writing efficiency without reducing traditional writing practice). A certain social dividend can be gained by a movement from a Pareto non-optimal allocation to a
Pareto optimal allocation [37]. Therefore, the optimality of the Pareto condition is often considered an efficient usage of online technology in academic writing. For a better understanding, the fundamental feature of this theory can be presented for a single-student case. The first-order condition for Pareto optimality is obtained by maximizing the quality of the student's academic writing subject to the associated writing production function. The Lagrangian function can be written as

V = W(x_1, x_2, …, x_n) − sF(x_1, x_2, …, x_n; q^0)   (1)

and equating the partial derivatives to zero gives

∂V/∂x_i = W_i − sF_i = 0, i = 1, …, n   (2)

where W_i = ∂W/∂x_i and F_i = ∂F/∂x_i. It follows that

W_i / W_j = F_i / F_j, i, j = 1, …, n   (3)

If Eq. (3) is satisfied, the rate of substitution between the traditional writing approach and the online technology-mediated writing approach will equal the corresponding rate of writing transformation.

Assume that an educational institution fails to promote an online technology-mediated academic writing practice on its campus, so that one of the conditions of Eq. (3) cannot hold. Under this situation, we can impose an additional constraint on the first condition, expressed as

W_1 − kF_1 = 0   (4)

where k is a positive but arbitrary constant that will, in general, produce optimal values different from those calculated from Eq. (2) and the writing production function. The conditions for a second-best solution are obtained by maximizing online technology-mediated academic writing practice subject to the production function and Eq. (4). The Lagrangian function becomes

V = W(x_1, x_2, …, x_n) − sF(x_1, x_2, …, x_n; q^0) − c(W_1 − kF_1)   (5)

where s and c are undetermined multipliers. Taking the partial derivatives of V and setting them equal to zero:

∂V/∂x_i = W_i − sF_i − c(W_{1i} − kF_{1i}) = 0, i = 1, …, n   (6)

∂V/∂s = −F(x_1, x_2, …, x_n; q^0) = 0   (7)

∂V/∂c = −(W_1 − kF_1) = 0   (8)

Unless c = 0 — which is ruled out by the assumption in Eq. (4) — the Pareto conditions cannot be recovered from Eq. (6). Rearranging Eq. (6) and dividing the ith equation by the jth gives

W_i / W_j = [sF_i + c(W_{1i} − kF_{1i})] / [sF_j + c(W_{1j} − kF_{1j})], i, j = 1, …, n

It is not possible to know a priori the signs of the cross partial derivatives W_{1i}, W_{1j}, F_{1i}, and F_{1j}. Hence, one may not expect the usual Pareto conditions to be required for the attainment of the second-best optimum [38]. People's perceptions of certain actions and their effectiveness must therefore be elicited through experiment, survey, observation, interview, and judgment to attain the second-best optimum. Thus, the following sections discuss students' perceptions of online technology and its effectiveness in academic writing through a few research instruments: an experiment, focus group discussions (FGD), and a questionnaire survey.
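The derivation above can be checked symbolically. The sketch below is a minimal SymPy illustration, not part of the study: it assumes a hypothetical log welfare function W and a linear writing production constraint F for two inputs, and verifies that setting the multiplier c = 0 in the second-best first-order conditions of Eq. (6) recovers the Pareto condition W_i/W_j = F_i/F_j of Eq. (3).

```python
# Symbolic check of the second-best first-order conditions (Eq. 6):
# with c = 0 they collapse to the Pareto conditions of Eqs. (2)-(3).
# W and F below are assumed illustrative forms, not the paper's.
import sympy as sp

x1, x2, s, c, k = sp.symbols('x1 x2 s c k', positive=True)
W = sp.log(x1) + sp.log(x2)   # assumed welfare from writing inputs
F = x1 + 2*x2 - 10            # assumed writing production constraint (= 0)

W1, W2 = sp.diff(W, x1), sp.diff(W, x2)
F1, F2 = sp.diff(F, x1), sp.diff(F, x2)

# Second-best Lagrangian (Eq. 5) with the extra constraint W1 - k*F1 = 0
V = W - s*F - c*(W1 - k*F1)
foc1 = sp.diff(V, x1)         # Eq. (6) for i = 1
foc2 = sp.diff(V, x2)         # Eq. (6) for i = 2

# Setting c = 0 gives the first-best (Pareto) allocation,
# at which W1/W2 = F1/F2 as in Eq. (3).
sol = sp.solve([foc1.subs(c, 0), foc2.subs(c, 0), F], [x1, x2, s], dict=True)[0]
ratio_gap = sp.simplify((W1/W2 - F1/F2).subs(sol))
print(sol, ratio_gap)         # ratio_gap = 0 confirms Eq. (3)
```

With any nonzero c the solved allocation no longer satisfies Eq. (3), which is the second-best point made in the text.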
4 Methodology and Research Plan
4.1 Present Study
Pabna district is selected as the study area. It is the southernmost district of Rajshahi Division, bounded by the Natore and Sirajganj districts on the north, the Rajbari and Kushtia districts on the south, the Manikganj and Sirajganj districts on the east, and the Padma River and Kushtia district on the west. A large number of NU-affiliated colleges are located in this district. The graduate programs of these colleges run under traditional academic practice rather than a semester system. Generally, teachers and students of these colleges do not instruct in or follow online technology in academic writing, and the writing process is disrupted as a consequence. Teachers often engage their students in academic writing, but performance is not satisfactory owing to the predominance of traditional academic writing practice. Cut, copy, and paste practices are commonly seen in their academic writing, and students fail to submit their writing tasks within the designated timeframe.
4.2 Mini Experiment
For a better empirical assessment of the effectiveness of online technology in writing, this study conducted a mini-experiment from 7 January to 22 February 2017. It concerned the writing of term papers and other academic assignments in the graduate class of the Economics Department of Pabna Government Edward College. For the experiment, we asked two questions to separate the whole body of graduate students of this department into a treated (experimental or study) group and a usual or
Online Technology: Effective Contributor to Academic Writing
1287
control group. The first question asked whether the student knew about online technology, and the second concerned the effectiveness of online technology in academic writing. A total of 32 students who answered "yes" to both questions were classified into the experimental group, and the remaining 47 students, who answered "no" to both questions, were treated as the control group. The first group was instructed to develop concept building and plagiarism-free academic writing by using different tools of online technology (see Table 1 for more details), and the second group was requested to follow traditional writing practice. Both groups were assigned the same writing tasks, such as group work in the classroom, homework (writing a summary, problem statement, literature review, results and discussion, and recommendations), and report writing on a particular topic. The grading scheme of the writing assignment carried a 10% mark for participation, a 20% mark for the homework assignment, and a 50% mark for short report writing. Our results showed that the experimental group performed better in academic writing and obtained higher grades in their writing tasks (average score 82%) than the usual group (average score 71%). This result motivated us to run the questionnaire survey.
Table 1. Online technology used in the mini-experiment

Web City: designated to upload and download essential reading materials, students' assignments, PowerPoint presentations (PPT), and feedback
DropBox: designated to back up students' important files and class notes and to share concept notes and assignments with teachers
Skype: designated for meetings relevant to academic discussions with teachers, classmates, scholars, and researchers
Google Drive: designated to store relevant reading materials, journal articles, and data sets
YouTube: designated to run tutorial sessions relevant to useful research and academic issues
Blogs: designated for concept building
E-discussion Forum: designated for interaction with others to develop ideas, concepts, and critical thinking
Kahoot: designated to evaluate students' performance on academic issues through quizzes on verb forms, correction, punctuation marks, typos, and grammar
Social Media Group: designated to identify assignment topics by e-discussion
Reflective Journal: designated for proofreading
Chat Room: designated to assess content and academic writing by experts and to share experience and ideas
Wiki: designated to back up and modify academic writing
Zoom: designated for academic meetings with teachers and classmates
WhatsApp: designated to share relevant academic documents and writing materials
Podcast: designated to record and listen to the teacher's lecture
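The experimental-versus-control comparison above (n = 32 with average score 82% against n = 47 with average 71%) can be tested formally with a two-sample test. The sketch below uses Welch's t-test on simulated marks drawn around the reported group means; the individual scores are invented for illustration and are not the study's data.

```python
# Welch's two-sample t-test comparing writing scores of the
# experimental (online-technology) and control (traditional) groups.
# Scores are simulated around the reported means (82% vs 71%);
# the real per-student marks are not published in the paper.
import math
import random
import statistics

random.seed(42)
experimental = [random.gauss(82, 6) for _ in range(32)]
control = [random.gauss(71, 6) for _ in range(47)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(experimental, control)
print(f"t = {t:.2f}")  # a large positive t favours the experimental group
```

A causal reading of such a difference still depends on how the groups were formed; here group membership was self-selected by the two screening questions, which is worth keeping in mind.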
4.3 Techniques and Tools of Variable Selection and Data Collection
Because no prior attempt had been made in Bangladesh to compare the effect of online technology with traditional academic writing, and no database existed, instruments for both quantitative and qualitative data were applied for a better empirical assessment. Variables were selected from the existing literature and from focus group discussions (FGD), and data were collected through a survey using a semi-structured questionnaire in a few NU-affiliated colleges in the Pabna district. A systematic search of electronic databases covering the period 1999 to 2020 was performed to develop the concept of online technology and its role in academic writing. Studies related to the effectiveness of online technology were selected at random to develop its concepts, strategies, and drivers. Three attributes, institutional and infrastructural supports (ins_inf_sup), motivation (mtv), and students' residential location (stu_resi_loc), were selected from [39–41]. The study arranged two FGDs of 7–8 participants each (undergraduate and graduate students, teachers, parents, and college administrators), held on 24 and 25 March 2017 at Government Edward College and Government Shaheed Bulbul College of Pabna district. The objectives of the FGDs were to select variables and design a relevant questionnaire for data collection. Three further attributes, tutorial sessions for using online technology in academic writing (tut_se_on_te), cost-effective internet packages for students (co_ef_in_pac), and provision of electronic gadgets to students at subsidized prices (gad_sub_pri), were selected from our two FGDs. The surveys were conducted through interviews based on a semi-structured questionnaire in seven NU-affiliated colleges from 7 May to 19 August 2017. A purposive random sampling method was applied to the students of the seven targeted colleges.
Three colleges (Government Edward College, Pabna Government Mohila [women] College, and Ishwardi Government College) were selected from two urban areas, and 82 respondents from these colleges participated in the survey. Another 69 respondents were surveyed from Dr. Zahurul Kamal College in the Sujanagar sub-district, Haji Jamal Uddin College in the Bhangura sub-district, Chatmohar College in the Chatmohar sub-district, and Bera College in the Bera sub-district. The selection of respondents was kept as random as possible; however, there remained a possibility of sampling errors, so the following procedures were taken to reduce survey bias. All interviews were conducted by trained data collectors. All respondents were briefed on the importance of online technology in academic writing. Sufficient time was allowed for each interview, and the data collectors did not engage in personal or irrelevant conversation that might anchor or influence the respondents' answers. The questionnaire consisted of a few sections: the first covered the socio-economic-demographic (SED) characteristics of respondents, and the second highlighted the determinants of online technology in academic writing. The collected data contain many dummy responses, to which a random parameter logit (RPL), or basic, model was applied [42]. In the first step of the estimation, the RPL was run for the participation decision in online technology, where 1 indicates a positive perception of online technology for writing excellence and 0 indicates otherwise; these responses are
Online Technology: Effective Contributor to Academic Writing
1289
regressed on explanatory variables or attributes (such as ins_inf_sup, mtv, stu_resi_loc, tut_se_on_te, co_ef_in_pac, and gad_sub_pri). All of our proposed attributes and variables are included in the following regression model:

Y_i = β_0 + β_1 X_i + ε_i   (9)

Equation (9) is known as the RPL model when we use only the attributes, and as the MNL model when we use both the attributes and the SED characteristics [43]. The use of online technology in academic writing (us_olt_ac_wri) is treated as the outcome variable in both models.
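A binary participation decision of this kind is typically fitted by maximum likelihood with a logit link. The sketch below is a plain logistic regression fitted by gradient ascent on simulated data — a simplified stand-in, not the mixed-logit (RPL) estimator itself; the attribute names echo the paper's codes, but every value is synthetic.

```python
# Minimal logistic regression (Eq. 9 with a logit link) fitted by
# gradient ascent on simulated data. Attribute names follow the
# paper's codes; the data and "true" coefficients are invented.
import math
import random

random.seed(0)

def simulate(n=500):
    """Draw n respondents with a known positive effect of both attributes."""
    X, y = [], []
    for _ in range(n):
        ins_inf_sup = random.random()   # institutional/infrastructural support
        tut_se_on_te = random.random()  # tutorial sessions on online technology
        logit = -1.0 + 2.0 * ins_inf_sup + 1.5 * tut_se_on_te
        p = 1 / (1 + math.exp(-logit))
        X.append((1.0, ins_inf_sup, tut_se_on_te))  # 1.0 = intercept term
        y.append(1 if random.random() < p else 0)
    return X, y

def fit_logit(X, y, lr=0.5, epochs=2000):
    """Maximize the Bernoulli log-likelihood by batch gradient ascent."""
    beta = [0.0] * len(X[0])
    n = len(y)
    for _ in range(epochs):
        grads = [0.0] * len(beta)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(b * v for b, v in zip(beta, xi))))
            for j, v in enumerate(xi):
                grads[j] += (yi - p) * v   # score contribution
        beta = [b + lr * g / n for b, g in zip(beta, grads)]
    return beta

X, y = simulate()
b0, b1, b2 = fit_logit(X, y)
print(f"intercept={b0:.2f}, ins_inf_sup={b1:.2f}, tut_se_on_te={b2:.2f}")
```

An RPL model additionally lets the coefficients vary randomly across respondents, which requires simulation-based estimation beyond this sketch.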
5 Results and Discussion
5.1 Descriptive Statistics of the Variables in the Model
Collected data from the survey in seven NU-affiliated colleges of Pabna district yield the basic descriptive statistics of the major SED characteristics (see Table 2 for more details).

Table 2. Brief descriptive statistics of major SED variables

Variable                                  Minimum   Maximum   Mean       Std. deviation
Age of respondents (years)                20        25        22.33      1.373
Parents' monthly income (Tk.)             20000     90000     43791.67   19023.965
Daily expense (Tk.)                       20        150       70.83      45.076
Household composition (family members)    4         12        6.88       2.028
Level of education (years of schooling)   16        17        16.50      0.511
(Source: Field Survey)
Of the 151 students surveyed in the seven NU-affiliated colleges, 119 (78.8%) were male and 32 (21.2%) were female. Among them, 92% of respondents belonged to Muslim households and the remaining 8% to Hindu households. One hundred students (66%) argued that online technology could play a significant role in academic writing, but 51 respondents (34%) were not interested in online technology owing to low family income, scarcity of internet service, inability to purchase a smartphone (Android or iOS), the high price of internet packages, low bandwidth, and discomfort with online technology. All surveyed students strongly pointed out that they did not get any support from their respective institutions for better utilization of online technology in academic writing. Table 2 outlines the summary statistics of the study. The respondents' average age and their parents' average monthly income were 22.33 years and Bangladeshi Taka 43,792, respectively. All respondents were studying at the undergraduate or graduate level, and their average daily expense was estimated at Taka 71 per person. The majority of the respondents have more than five family members.
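Summary statistics of the kind reported in Table 2 can be reproduced directly from the raw survey responses. A short stdlib sketch on invented values (not the field data) follows:

```python
# Compute min, max, mean, and sample standard deviation per SED
# variable, mirroring Table 2. The values below are illustrative
# placeholders, not the study's survey responses.
import statistics

survey = {
    "age_years": [20, 21, 22, 22, 23, 24, 25],
    "daily_expense_tk": [20, 50, 70, 75, 90, 120, 150],
}

for name, values in survey.items():
    print(f"{name}: min={min(values)}, max={max(values)}, "
          f"mean={statistics.mean(values):.2f}, "
          f"sd={statistics.stdev(values):.3f}")
```

Note that `statistics.stdev` is the sample (n − 1) standard deviation, which matches the usual reporting convention for survey data.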
5.2 Results of Models
The coefficients of the proposed attributes of online technology express the students' likelihood of choosing online technology-mediated academic writing. Their signs, significance levels, and magnitudes indicate the effectiveness of online technology in academic writing. All coefficients are statistically significant at the 1%, 5%, or 10% level. The estimated results suggest that online technology-mediated writing produces more plagiarism-free and innovative writing than the traditional writing approach; thus, online technology has a considerable causal effect on academic writing. Results for all 151 respondents from the RPL and MNL models are presented in Table 3. The RPL model shows the result when only the proposed attributes of online technology are included. The MNL model shows that the SED characteristics, along with the proposed attributes, are significant determinants of online technology. However, it is not possible to establish the gender-online technology and religion-online technology relationships: relatively few female and Hindu students participated in the survey, which may explain the insignificant relationships between gender and the use of online technology and between religion and the use of online technology. Generally, students' age, their parents' income, their daily expenses, and their household compositions are sensitive to online technology [44]. Similar results have been obtained in the literature for the respondents' level of education (years of schooling). For instance, Woodrich and Fan [45] found that a 1% increase in the level of education leads to an increase in the use of online technology in academic writing. Higher-level education requires citation, referencing, summarizing, concept note development, and paraphrasing for innovative academic writing.
The log-likelihood test was used to determine the acceptance or rejection of each variable [46]. The goodness of fit (R²) also improves when the covariates are added [47]. A pseudo-R² in the range of 0.20–0.30 is comparable to an adjusted R² in the range of 0.70–0.90 in the ordinary least squares (OLS) method [48]. Thus, the RPL and MNL models with the covariates are deemed good regression models.
5.3 Results of the Correlation Matrix
The estimated correlations also support the results generated from the RPL and MNL models. There are various relationships between the outcome variable and the proposed attributes, but their intensities differ, ranging from very strong (r ≥ 0.8) through strong (r ≥ 0.6) and moderate (r ≥ 0.4) to weak (r ≥ 0.2) correlation coefficients (see Table 4 for more details). All the proposed attributes are positively related to online technology in academic writing at conventional significance levels, except students' residential location, which is negatively associated with it. Among all the attributes, institutional and infrastructural supports, tutorial sessions for online technology, and cost-effective internet packages have a very strong association with online technology in academic writing.
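The strength bands used above can be applied programmatically. The sketch below computes a Pearson correlation from scratch and labels it with those bands; the two binary series are invented for illustration, not the survey responses.

```python
# Pearson correlation with the strength bands used in the text:
# very strong (|r| >= 0.8), strong (>= 0.6), moderate (>= 0.4), weak (>= 0.2).
import math

def pearson(xs, ys):
    """Pearson product-moment correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strength(r):
    """Classify |r| into the bands quoted in the text."""
    a = abs(r)
    if a >= 0.8: return "very strong"
    if a >= 0.6: return "strong"
    if a >= 0.4: return "moderate"
    if a >= 0.2: return "weak"
    return "negligible"

us_olt = [1, 0, 1, 1, 0, 1, 1, 0]    # invented usage indicator
ins_sup = [1, 0, 1, 1, 0, 1, 0, 0]   # invented support indicator
r = pearson(us_olt, ins_sup)
print(f"r = {r:.3f} ({strength(r)})")  # prints "r = 0.775 (strong)"
```

For dummy variables like these, the Pearson coefficient coincides with the phi coefficient, so the same code applies to the binary survey responses.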
Table 3. Regression results of the survey

Attributes/Variables    RPL model:              RPL model:           RPL model:          MNL model:
                        Sub-district colleges   District colleges    All colleges        All colleges
Intercept               21.613* (0.309)         13.093** (0.019)     24.499** (0.010)    32.071* (0.546)
ins_inf_sup             0.421 (0.431)           0.271* (0.150)       0.178** (0.547)     0.119** (0.031)
mtv                     −0.221*** (0.680)       0.375** (0.003)      0.110* (0.326)      0.210* (0.001)
stu_resi_loc            −0.347 (0.037)          −0.175* (0.172)      −0.109* (0.342)     −0.113* (0.090)
tut_se_on_te            0.848*** (0.731)        0.182*** (0.511)     0.020** (0.437)     0.120** (0.022)
co_ef_in_pac            0.131** (0.130)         0.296*** (0.221)     0.216* (0.738)      0.129*** (0.067)
gad_sub_pri             0.196** (0.608)         0.132* (0.362)       0.093** (0.586)     0.093*** (0.245)
age                     –                       –                    –                   0.009* (0.002)
gender                  –                       –                    –                   0.221 (0.023)
religion                –                       –                    –                   0.452 (0.165)
parents' income         –                       –                    –                   0.130* (0.459)
daily expense           –                       –                    –                   0.062** (0.683)
household composition   –                       –                    –                   0.072*** (0.453)
level of education      –                       –                    –                   −0.172*** (0.002)
Log-likelihood          −479.9231               −371.5701            −360.0361           −506.4572
Goodness of fit         0.339                   0.394                0.402               0.273
Observations (n)        69                      82                   151                 151

Note. Standard errors are reported in parentheses. * Significant at the 1% level. ** Significant at the 5% level. *** Significant at the 10% level.
Table 4. Correlation matrix of relevant attributes for online technology

                us_olt_ac_wri   ins_inf_sup   mtv          stu_resi_loc   tut_se_on_te   co_ef_in_pac   gad_sub_pri
us_olt_ac_wri   1
ins_inf_sup     0.831**         1
mtv             0.518*          0.327         1
stu_resi_loc    −0.623***       0.442**       −0.678       1
tut_se_on_te    0.891**         0.339*        0.589*       0.348*         1
co_ef_in_pac    0.883*          0.125**       −0.423**     0.449          0.889*         1
gad_sub_pri     0.567**         0.254***      −0.798***    0.673**        0.805*         0.810***       1

Note. * Significant at the 1% level. ** Significant at the 5% level. *** Significant at the 10% level.
6 Conclusion and Policy Implications
This study explores the potential contributors to online technology in academic writing in NU-affiliated colleges of Bangladesh and develops policy options for using online technology. We used cross-sectional data collected from a few NU-affiliated colleges in the Pabna district. FGDs helped us to select the attributes and design the questionnaire, and we relied on a mini-experiment and two types of regression model for proper empirical assessment. We found that students were highly aware of plagiarism-free academic writing and wanted to produce online technology-mediated writing. Our empirical assessment also supports the effectiveness of online technology, based on the significant effects of institutional and infrastructural support, students' residential location, motivation towards greater usage of online technology in academic writing, tutorial sessions for using online technology, cost-effective internet packages for students,
and the provision of electronic gadgets to students at subsidized prices. Some SED characteristics also work as accelerators of online technology in writing [49]. A paradigm shift from the traditional writing approach to the online-mediated writing approach is essential to develop an environment of online technology in any educational institution. It requires appropriate guidelines, strategies, policies, and the joint efforts of stakeholders such as class teachers, college administrators, the authority of the NU of Bangladesh, and government intervention. For instance, the government can formulate an online technology-mediated educational policy for college education. It can support training facilities for college teachers to habituate them to online technology in education, and it can provide free electronic gadgets such as tablets, smartphones, and laptops to students, or subsidize their purchase at a lower price. Likewise, college authorities can provide uninterrupted Wi-Fi, multimedia projector-supported classrooms, and a continuous power supply for better access to online technology. The NU of Bangladesh can redesign its syllabus and curriculum with special attention to online technology-mediated academic writing. Lastly, class teachers can encourage and motivate their students to make more use of online technology in academic writing and can arrange tutorial sessions so that students can use online technology effectively in their writing tasks. The study is not free from certain limitations. As the concept is new, an in-depth study and a wider questionnaire survey are needed, and further research in this field is essential for a better assessment. The limited survey coverage in a few colleges may narrow the scope, concept, and effectiveness of online technology in academic writing.
However, we applied different approaches that together support the validity, reliability, and consistency of our empirical findings.
Acknowledgments. We gratefully acknowledge Assistant Professor Dr. Renee Chew Shiun Yee, School of Education, University of Nottingham, Malaysia Campus (UNMC), for her significant comments and suggestions in developing the concept and methodology of this study. We also thank the editor(s) and the anonymous referees for their valuable and constructive suggestions for improving the draft.
References 1. Fahmida, B.: Bangladesh tertiary level students’ common errors in academic writing. BRAC University, Dhaka (2020). https://dspace.bracu.ac.bd/xmlui/bitstream/handle/10361/252/ 08163004.PDF?sequence=4 2. Iqbal, M.H.: E-mentoring: an effective platform for distance learning. E-mentor 2(84), 54–61 (2020) 3. Macznik, A.K., Ribeiro, D.C., Baxter, G.D.: Online technology use in physiotherapy teaching and learning: a systematic review of effectiveness and users’ perceptions. BMC Med. Educ. 15(160), 1–2 (2015) 4. Iqbal, M.H., Ahmed, F.: Paperless campus: the real contribution towards a sustainable low carbon society. J. Environ. Sci. Toxicol. Food Technol. 9(8), 10–17 (2015)
Online Technology: Effective Contributor to Academic Writing
1293
A Secured Electronic Voting System Using Blockchain

Md. Rashadur Rahman, Md. Billal Hossain, Mohammad Shamsul Arefin(✉), and Mohammad Ibrahim Khan

Department of CSE, CUET, Chattogram 4349, Bangladesh
[email protected], [email protected], [email protected], muhammad [email protected]
Abstract. The foundation of sustainable democracy and good governance is the transparency and credibility of elections. For the last several years, electronic voting systems have gained much popularity and have been of growing interest. E-voting has been considered a promising solution to many challenges of traditional paper-ballot voting. Conventional electronic voting systems are vulnerable due to the centralization of the information system. Blockchain is one of the most secure public ledgers for preserving transaction information and also allows transparent transaction verification. It is a continuously growing list of data blocks which are linked and secured using cryptography. Blockchain is emerging as a very promising technology for e-voting as it satisfies the essential requirements for conducting a fair, verifiable and authentic election. In this work, we propose a blockchain-based voting mechanism based on a predetermined turn for each node to mine a new block in the blockchain, rather than performing excessive computation to gain the chance to mine a block. We analyze two possible conflict situations and propose a resolving mechanism. Our proposed voting system ensures voter authentication, anonymity of the voter, data integrity and verifiability of the election result.
Keywords: Blockchain · E-voting · Bitcoin · Consensus mechanism

1 Introduction
Our modern representative democracy has operated on the voting mechanism since the 17th century [1]. The security of an election is the most fundamental issue and is a matter of national security in every democracy. Many voting systems have been developed to conduct fair elections which are acceptable to all groups of people involved in them. With the rapid expansion of information technology, electronic voting (e-voting) or online voting has become a new phenomenon. For the last several years e-voting systems have been of growing interest in many sectors as they provide many conveniences over traditional paper voting [2].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1295–1309, 2021. https://doi.org/10.1007/978-3-030-68154-8_111
1296
Md. R. Rahman et al.
Neumann [3] proposed several fundamental criteria for a reliable e-voting system, which include integrity of the system, data reliability and integrity, anonymity of the voter, and operator authentication. Employment of e-voting systems may encounter several issues such as transparency of the voting, data integrity, ballot secrecy, fraud handling, and reliability [4]. The centralization of voting data makes traditional e-voting systems quite vulnerable to many kinds of attacks, as it provides a single point of failure. It is very difficult to ensure the anonymity of the voter in e-voting by using encryption alone. Therefore, designing a more secure and feasible e-voting system has become an emerging research area. Blockchain technology can be an auspicious solution to the security concerns of voting systems. Blockchain is a decentralized, distributed, immutable, incontrovertible public ledger technology. It was first introduced by Satoshi Nakamoto in 2008, and its first application was Bitcoin, a currency system that can be exchanged securely based only on cryptography. The fundamental characteristics of this emerging technology are:
1. It is a decentralized ledger technology. The chain is stored in many different locations. Since the ledger is replicated and distributed over many nodes, there is no single point of failure. This provides verifiability and ensures availability.
2. A block in the blockchain is chained to the immediately previous block through a reference that is a hash value of the previous block, called the parent block. This makes the chain immutable and tamper-proof.
3. A consensus must be reached by most of the network nodes before a proposed new block becomes a permanent entry in the chain.
Blockchain is an ordered data structure composed of data chunks called blocks. Each block contains transactions as the block data. Blocks are cryptographically linked with each other.
Blockchain technology does not store the data in a single centralized server. In this paper, we propose a blockchain-based voting system for storing votes securely. The key contributions of our work are summarized as: (i) We have proposed a voting system based on blockchain technology which satisfies the essential properties of e-voting. For verification, we implemented our proposed system on the Windows platform. (ii) We proposed a consensus mechanism based on a predefined turn for nodes to mine blocks in the network. We analyzed two conflicting situations and proposed a mechanism for resolving them.
2 Related Works
Electronic voting is gaining popularity day by day. It offers convenience to the voters and the authority over the conventional paper-ballot process. Security is the prime challenge in electronic voting [2]. Some research has been done
in the domain of e-voting [5,6]. Ofori et al. presented an OVIS-based online voting system in [8]. Most of the earlier works were usually based on a signature mechanism, and the votes were stored in a centralized server. These e-voting systems have proven vulnerable to many threats such as DDoS attacks and Sybil attacks [7]. Blockchain is the most suitable technology for storing the cast votes securely. There exist very few research works on e-voting on blockchain. Some projects of e-voting on blockchain are shown in [9]. In [10], Ayed proposed a method of storing the votes in blocks in different chains. In that work, the first transaction added to the block is a special transaction that represents the candidate, and one chain is allocated per candidate. If there are n candidates, there will be n blockchains. This entails considerable processing and storage overhead because a dedicated blockchain is assigned to every candidate. Smart contracts are irreversible and traceable applications which execute in a decentralized environment like a blockchain. No one can change or manipulate the code or its execution behavior once it is deployed. Hjalmarsson et al. in [11] proposed the election as a smart contract: the election process is represented by a set of smart contracts, instantiated by the election administrators on the blockchain. Their implementation was based on the Ethereum platform. Ethereum-based frameworks such as Quorum and Geth do not support concurrent execution of transactions, which limits scalability and speed. Exonum is cryptocurrency-based and paid, which makes it very expensive for large-scale implementation. Lee et al. proposed a method where votes can be stored in the blockchain while preserving the anonymity of the voter in [12]. In this work they assumed that there is a trusted authentication organization which authenticates the validity of the voters.
The organization will provide a unique key against each voter's identity. The voter has to vote using the given key, so the identity of the voter and his or her information are not stored in the blockchain. The main limitation of the work is that the system completely assumes that all the voters are eligible, so a voter can cast his or her vote multiple times; the system validates the cast votes only at the end of the voting operation. Yi proposed a blockchain-based e-voting scheme based on distributed ledger technology (DLT) [13]. For providing authentication and non-repudiation, they used a user credential model based on elliptic curve cryptography (ECC). This system allows a voter to change the cast vote before the deadline; this can be used as a way of vote alteration, which violates the basic security of the e-voting system. Bellini et al. proposed an e-voting service based on blockchain infrastructure [14]. In their system, a service configuration defined by the end user is automatically translated into a cloud-based deployable bundle. Most of the existing works have directly adopted Bitcoin's consensus mechanism, where nodes compete with each other to solve a cryptographic puzzle to mine a block and get an incentive. But in the case of a public election, no incentive is necessary, so the competition among the nodes to mine a new block is quite redundant and wastes a lot of computing power on every node. Our
proposed blockchain-based voting system is based on a predetermined turn for each node to mine a new block. Our system stores all the voting information in a single blockchain. It does not require any cryptocurrency to operate, and an eligible voter can vote only once in the system.
3 Methodology
The overall architecture of our proposed voting system is shown in Fig. 1. The primary focus of this work is to store the voters' votes securely in the blockchain and make sure that the voting information is immutable. Each center in the election has a node in the network. Each time a vote is cast, first the validity of the vote is checked. The system ensures that all the voters participating in the voting are valid. Each center or node has its own copy of the blockchain, where all the cast votes are stored.

Fig. 1. The system architecture overview of our proposed blockchain based voting system
3.1 Blockchain Structure
The structure of the blockchain in our proposed system is shown in Fig. 2. Each block contains transactions as the block data. Blocks are cryptographically linked with each other. Each block has the hash of the previous block. Almost all the data of the previous block including the hash value are used to generate the hash value of the current block and so on.
Fig. 2. Basic structure of connections of blocks in blockchain
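The block layout of Fig. 2 can be sketched as a small Python structure (field names are illustrative, not the paper's exact implementation); each block's previous-hash field is computed over the full contents of its parent:

```python
import hashlib
import json

def sha256_of(block: dict) -> str:
    # Serialize the block deterministically and hash all of its fields,
    # including its transactions (cf. Sect. 3.4).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(index, previous_hash, transactions, proof, timestamp=0.0):
    return {
        "index": index,
        "previous_hash": previous_hash,
        "timestamp": timestamp,
        "transactions": transactions,
        "proof": proof,
    }

# A three-block toy chain: each block references the hash of its parent.
genesis = make_block(0, "", [], 0)
block1 = make_block(1, sha256_of(genesis), [{"candidate": "A", "vote_key": "K1"}], 0)
block2 = make_block(2, sha256_of(block1), [{"candidate": "B", "vote_key": "K2"}], 0)
chain = [genesis, block1, block2]
```

Because previous_hash covers every field of the parent, editing any field of an earlier block invalidates all later references in the chain.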
3.2 Voter Verification and Authentication
Our proposed voting system assumes that there is an authentication organization to authenticate the eligible voters. The authentication organization handles the registration of the voters. When a voter makes a vote request to the system by entering the vote key given by the authentication organization, the system first checks the vote key against the registered voter database provided by the authentication organization. If the voter is valid and has not yet voted in the election, the vote is considered a valid vote and added to the open transactions. The organization has all the information of the registered voters. When voting, the voter does not have to give any private information, only the vote key, which is known only to that particular voter. Thus the anonymity of the voter is well preserved. The voter is able to verify whether his or her vote was cast by means of the vote key.

3.3 Blockchain Transactions
Each block in a blockchain has a single data field, called its transactions; all data stored in the blockchain are in this transaction field. In our case the votes are stored as the data of the blocks, in other words, votes are stored as transactions in our blockchain. These vote data are very sensitive and should not be trusted to the hands of third parties [15]. After a vote is cast, it is stored as a transaction in the block. The size of the transaction field is variable, so a block can store a variable number of transactions (Fig. 3). Open Transactions. Open transactions are the transactions in the blockchain system which have not yet been mined or added to the blockchain. In our system the votes are first stored in the open transactions before mining (Fig. 4). Once a voter casts his or her vote, if the voter information is valid, the vote is saved as a transaction in the open transactions list. Once the block is mined, all the open transactions are stored in the block and the open transactions list becomes empty. The process of storing votes in the open transactions is shown in Fig. 5. First the key of the particular center or node must be loaded or created.
Fig. 3. Transactions inside a block

Fig. 4. Flow of a cast vote through open transactions.
Each center must have a public key and a private key. The keys must be loaded before any vote is cast from any particular center. After creating or loading the center key, the voter casts his or her vote with the voting key provided by the authentication organization: the voter inputs the key and selects a candidate to vote for. Signing the Transactions. Each vote is treated as a transaction in the blockchain. Each transaction has a field named signature, which is created by the center and used for further validation of the transactions. It is quite impossible for an attacker to alter any block information in the blockchain: any single alteration of the block information results in a completely changed hash, so the chain would be completely broken. Each transaction gets its signature, which is created by the private key of the node along with the candidate and vote key of the cast vote. The signature can only be verified by the public key of the center. If any information of any vote is altered, the verification process will fail because the signature will no longer be valid for the changed transaction. All the transactions or votes in the open transactions are verified before mining, so this signature ensures the integrity of all the open transactions (Fig. 6).
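The paper signs each transaction with the center's private key and verifies it with the public key. As a stdlib-only stand-in for that asymmetric scheme (Python's standard library has no public-key signing), the sign-then-verify flow can be illustrated with an HMAC keyed by a secret center key; the names and fields here are illustrative assumptions:

```python
import hashlib
import hmac

CENTER_KEY = b"center-1-secret-key"  # stand-in for the center's private key

def sign_vote(candidate: str, vote_key: str, key: bytes = CENTER_KEY) -> str:
    # Tag computed over the transaction fields: changing either field
    # changes the tag, so tampering is detectable at verification time.
    message = f"{candidate}|{vote_key}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_vote(candidate: str, vote_key: str, signature: str,
                key: bytes = CENTER_KEY) -> bool:
    # Constant-time comparison of the recomputed tag with the stored one.
    return hmac.compare_digest(sign_vote(candidate, vote_key, key), signature)

sig = sign_vote("Candidate-A", "QFYPLMBTLTDN")
```

A real deployment would replace the HMAC with an actual signature scheme (e.g. RSA or ECDSA), so that verification needs only the center's public key, as the paper describes.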
Fig. 5. Storing vote in the open transactions

Fig. 6. Creation and validation of signature of a transaction
3.4 Hashing the Blocks
Each block has a field named previous hash; through this field one block is chained to its prior block, and thus the integrity of the overall chain is preserved. The hashing algorithm used is the Secure Hashing Algorithm (SHA-256): messages of up to 2^64 bits are transformed into digests or hash values of 256 bits (32 bytes). It is quite impossible to forge the hash value.

SHA-256 : B^1 ∪ B^2 ∪ ... ∪ B^(2^64) → B^256,  M ↦ H
All the fields of the previous block are used as the input message strings for the hashing function, including all the transactions of the previous block. The function maps all this information into a fixed message digest or hash of 256 bits (32 bytes). Before mining any particular block, this hash is generated and used as the previous-hash field of the block being mined (Fig. 7).

Fig. 7. Generation of the hash value of a block using SHA-256
3.5 Proposed Mechanism for Consensus
Blockchain has many use cases outside Bitcoin. In our work a methodology is designed to store the votes securely inside blocks. The main distinction of our work is that in most other blockchain-based voting systems a reward is given to the mining node: each node in the network tries to mine a new block, and the winning node has to solve a cryptographic puzzle to find the nonce or proof which produces the required hash for the block to be mined. In our case there need not be any mining competition among the nodes. The procedure of our proposed consensus is shown in Algorithm 1. Each node gets its turn Tn to mine a block and add it to the blockchain. The mining node generates a valid proof number which is used to generate a hash meeting a specific condition, and which is later used to verify the block by all the peer nodes. Each transaction tx in the open transactions is checked for verification against the signature of that particular transaction. If all the transactions in the open transactions are valid, the process proceeds to the next step; otherwise the block is rejected. The hash is generated with the Secure Hashing Algorithm (SHA-256), using the information of the previous block. For each block in the blockchain stored in this particular node (self.chain), the validity is checked. If all the blocks in the chain are valid, the process proceeds; otherwise the block is rejected. The new block is then broadcast to the connected peer nodes (self.peer_nodes). Finally, if no conflict situation exists, the new block is appended to the blockchain and all the open transactions are cleared.
Algorithm 1. Proposed Consensus Mechanism
1: for turn Tn of a Node_i in NODE_TURNS do
2:   previous_hash ← Hash_SHA256(self.chain[chain_length − 1])
3:   proof ← 0
4:   guess_hash ← Hash_SHA256(self.open_transactions, previous_hash, proof)
5:   while guess_hash[0:3] != "000" do
6:     proof ← proof + 1
7:     guess_hash ← Hash_SHA256(self.open_transactions, previous_hash, proof)
8:   end while
9:   for each transaction tx in self.open_transactions do
10:    if Verify_Transaction(tx) = false then
11:      return
12:    end if
13:  end for
14:  block ← Create_Block(self.chain, previous_hash, self.open_transactions, proof)
15:  for each block_i in self.chain do
16:    if block_i.previous_hash != Hash_SHA256(self.chain[block_i.index − 1]) then
17:      return
18:    end if
19:    if block_i.proof is not valid then
20:      return
21:    end if
22:  end for
23:  for each node_i in self.peer_nodes do
24:    if Broadcast_Block_To(node_i) = Null then
25:      raise conflicts and return
26:    end if
27:  end for
28:  if conflict = Null then
29:    append the block to self.chain
30:    self.open_transactions ← [ ]
31:  end if
32: end for
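The proof search of lines 3–8 of Algorithm 1 — incrementing the proof until the SHA-256 guess hash starts with "000" — can be sketched in Python as follows (function names are illustrative):

```python
import hashlib
import json

def guess_hash(open_transactions, previous_hash, proof) -> str:
    # Hash the open transactions, the previous hash, and the candidate proof.
    payload = json.dumps([open_transactions, previous_hash, proof],
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def find_proof(open_transactions, previous_hash, prefix: str = "000") -> int:
    # Increment the proof until the guess hash meets the "000" condition;
    # with a 3-hex-digit target this takes about 16^3 = 4096 hashes on average.
    proof = 0
    while not guess_hash(open_transactions, previous_hash, proof).startswith(prefix):
        proof += 1
    return proof

txs = [{"candidate": "A", "vote_key": "QFYPLMBTLTDN"}]
proof = find_proof(txs, "previous-hash")
```

Only the node whose turn it is runs this search, which is what reduces the network-wide cost from O(n) to O(1) in the comparative analysis of Sect. 4.1.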
3.6 Conflicts in the Network
In a practical blockchain with many nodes situated in different locations, there is a probability that some of the nodes cannot receive a broadcast block. This can occur for many reasons, such as network congestion or power failure. If such a situation occurs, the affected node will have a shorter chain than the rest of the nodes. This situation is termed a conflict. There are two possible types of conflicts: (i) any of the nodes except the broadcasting node has the shorter chain; (ii) the broadcasting node has the shorter chain. Fig. 8a shows the conflict situation where a node in the network other than the broadcasting node has the shorter chain. In Fig. 8b the broadcasting node has a shorter chain than the network.
Fig. 8. (a) Conflict when any node has shorter chain than the network; (b) conflict when the broadcasting node has shorter chain than the network
Resolving Conflicts. Earlier we mentioned two different types of conflicts in the network. Any new block is broadcast to all the connected nodes or centers. During the broadcasting of the block, any node which has the shorter chain is informed that a conflict exists. The steps of detecting conflicts are: Step-1: If the index of the broadcasted block > the index of the last block in the chain + 1, then the receiving node has the shorter chain. Step-2: If the index of the broadcasted block ≤ the index of the last block in the chain, then the broadcasting node has the shorter chain. A conflict is resolved by adopting the longest valid chain in the network: for each peer node, the node's chain (node_chain) is fetched; if len(node_chain) > local_chain_len and Verification(node_chain) = true, then winner_chain ← node_chain; finally, self.chain ← winner_chain.
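Under these rules, resolution amounts to adopting the longest peer chain that passes verification. A minimal Python sketch with hypothetical names (the chains are abbreviated to lists of block indices, and the verification predicate is injected):

```python
def resolve_conflicts(local_chain, peer_chains, is_valid):
    # Keep the local chain unless some peer offers a strictly longer
    # chain that also passes full verification.
    winner = local_chain
    for chain in peer_chains:
        if len(chain) > len(winner) and is_valid(chain):
            winner = chain
    return winner

local = [0, 1, 2]
peers = [[0, 1], [0, 1, 2, 3], [0, 1, 2, 3, 4]]
# Pretend the 5-block chain fails verification, e.g. because of a broken hash link.
resolved = resolve_conflicts(local, peers, is_valid=lambda c: len(c) <= 4)
```

Here the 4-block peer chain wins: it is longer than the local chain and valid, while the longest chain is rejected by verification.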
4 Implementation and Experiment
We implemented the system in Python. For the user interface of the system we used Vue.js. We simulated our proposed system on 100 virtual nodes or servers. Each node acts as a voting center in our system. Information transfer among the nodes is done through a REST API. The system interface of a node is shown in Fig. 9, where a voter has to enter his or her valid voter key provided by the authentication organization to cast a vote. Once a block is mined in the chain, the mined block is broadcast to all the nodes in the system. Each node adds the newly mined block to its own copy of the chain. The chain stored in one
Fig. 9. Interface of our proposed system
Fig. 10. Blocks mined in the blockchain
of the nodes is shown in Fig. 10. There exist many challenges and security requirements in developing a secure e-voting system; some comprehensive security requirements of contemporary e-voting systems are summarized in [16]. Our implemented blockchain based voting system satisfies several essential criteria for transparent and trusted electronic voting. These are listed below:

Authentication: The system only allows authenticated voters who are already registered to vote. Our system does not contain the registration process. For simulating an election, we used about 1,000 voters in the voting process. We tried to input the same vote key from multiple nodes, but the system automatically detects this and does not permit multiple casts. The system is thus able to verify voters and makes sure that no two votes are cast with a single vote key.

Anonymity of voter: Voters in the system cast their vote by using their vote key. No identity of the voter is revealed; the voter remains anonymous during and after the election. The stored chain of a node can be seen in Fig. 10, where a vote in block no. 2 is shown. No information about the voter is revealed except the key (QFYPLMBTLTDN). This ensures the anonymity of the voter in our system.

Data integrity: The system makes sure that once a vote is cast, it can never be changed. Each block is linked by hash to the prior block. To verify this, we performed manual manipulation of the local chain of several nodes. An example is shown in Fig. 11: we intentionally changed the proof number of a block from 17 to 23 (Fig. 11b). After the change, we tried to cast a vote and mine a block from the manipulated node; the system detected the manipulation, immediately marked the node as a violated node, and rejected mining (Fig. 12).

Verifiability: There are 100 nodes in our implemented system and each node stores a valid copy of the chain. Thus each node knows how many votes are cast in the system. When the final result of the election is counted, the system makes sure all the valid nodes agree on the final count of the votes. So the system is verifiable and makes sure that all votes are counted correctly.
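The tamper-detection check described above (changing a block's proof from 17 to 23 breaks the chain) can be reproduced in a few lines: re-hash every block's predecessor and compare it with the stored previous-hash reference. The names and the toy chain are illustrative:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def chain_is_valid(chain) -> bool:
    # Every block must reference the hash of the *current* contents
    # of its predecessor; an edit anywhere earlier breaks a link.
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = {"index": 0, "previous_hash": "", "transactions": [], "proof": 17}
block1 = {"index": 1, "previous_hash": block_hash(genesis),
          "transactions": [{"candidate": "A", "vote_key": "K1"}], "proof": 8}
chain = [genesis, block1]

ok_before = chain_is_valid(chain)  # untampered chain passes
genesis["proof"] = 23              # the manipulation: proof 17 -> 23
ok_after = chain_is_valid(chain)   # the broken link is detected
```

Any node running this check can therefore reject a manipulated peer without trusting it.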
Fig. 11. Manipulation of data in the blockchain stored in a node: (a) before manipulation; (b) after manipulation (changing proof from 17 to 23)
Fig. 12. The system rejects the mining request from the manipulated node
The system is secure against DDoS (Distributed Denial-of-Service) attacks and Sybil attacks, as blockchain technology by its nature is secured against DDoS attacks [2]. In the proposed e-voting system, participants are not allowed to generate their own identities and cast a vote. In a Sybil attack [17] an individual creates a large number of fake identities to disrupt the network. As our proposed blockchain network does not allow creation of identities, no individual has access to create one.

4.1 Comparative Analysis of Using the Proposed Mechanism
There are several approaches to consensus in blockchain-based systems. Different consensus mechanisms are used in different use cases of blockchain [18]. In our proposed system, the consensus mechanism used is not exactly Bitcoin's proof of work (PoW). Most of the existing works in this domain directly adopted the consensus mechanism of Bitcoin, in which all the mining nodes compete with one another to solve a cryptographic puzzle to generate a new valid hash for the block to be mined. This mechanism consumes a lot of computational resources [19], as all the nodes try to compute the proof (Table 1). For the existing works, the consumption of computational resources is proportional to O(n), where n is the number of nodes in the system. In our system only one node gets its turn to mine a new block, rather than all the nodes trying to get the chance to mine. So the consumption of computational resources is proportional to O(1), as only one node tries to mine a new block
Table 1. Comparison with the existing consensus mechanism.

Approach                                                    Computational resource consumption
Existing works which directly adopted Bitcoin's consensus   O(n)
Proposed methodology                                        O(1)
in the system. In this analysis we only considered the complexity of getting the turn to mine a block; we assumed all other processing work in the blockchain is constant.
5 Conclusion
In this work we proposed and implemented an electronic voting system based on blockchain. The votes are stored as transactions inside the blocks, and the blocks are chained together using hashes. The system is decentralized, so there is no single point of failure. Each node mines a new block in the system based on a predetermined turn instead of all nodes competing to solve a puzzle to get the turn. We have also described two potential conflict situations, adopted a conflict-resolving methodology for our system, and showed how conflicts can be resolved. There is scope for future research on this work. The system can be employed to conduct real political or organizational elections to measure its scalability and execution performance. Our proposed voting system can be a solution to the security concerns of electronic voting systems and can be useful in conducting a transparent election which is trusted by all. In this work, the consensus mechanism is not exactly Bitcoin's proof of work; rather, it is a modified version based on a predefined turn for each node or vote center in the blockchain, which saves a great deal of computation.
References
1. Eijffinger, S., Mahieu, R., Raes, L.: Inferring hawks and doves from voting records. Eur. J. Polit. Econ. 51, 107–120 (2018)
2. Taş, R., Tanrıöver, Ö.: A systematic review of challenges and opportunities of blockchain for e-voting. Symmetry 12, 1328 (2020)
3. Neumann, P.G.: Security criteria for electronic voting. In: Proceedings of the 16th National Computer Security Conference, pp. 478–481, Maryland (1993)
4. Esteve, J.B., Goldsmith, B., Turner, J.: International Experience with E-Voting. Available online: https://www.parliament.uk/documents/speaker/digitaldemocracy/IFESIVreport.pdf (accessed on 20 August 2020)
5. Evans, D., Paul, N.: Election security: perception and reality. IEEE Secur. Privacy Mag. 2(1), 24–31 (2004)
6. Chaum, D.: Secret-ballot receipts: true voter-verifiable elections. IEEE Secur. Privacy Mag. 2(1), 38–47 (2004). https://doi.org/10.1109/msecp.2004.1264852
7. Daramola, O., Thebus, D.: Architecture-centric evaluation of blockchain-based smart contract e-voting for national elections. Informatics 7, 16 (2020)
8. Ofori-Dwumfuo, G.O., Paatey, E.: The design of an electronic voting system. Res. J. Inf. Technol. 3, 91–98 (2011)
9. Curran, K.: E-voting on the blockchain. J. Br. Blockchain Assoc. 1, 1–6 (2018)
10. Ayed, A.B.: A conceptual secure blockchain based electronic voting system. Int. J. Netw. Secur. Appl. 9, 1–9 (2017)
11. Hjalmarsson, F., Hreiðarsson, G., Hamdaqa, M., Hjalmtysson, G.: Blockchain-based e-voting system. In: 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pp. 983–986. IEEE (2018)
12. Lee, K., James, J.I., Ejeta, T.G., Kim, H.J.: Electronic voting service using blockchain. J. Digit. Forensics Secur. Law 11, 123 (2016)
13. Yi, H.: Securing e-voting based on blockchain in P2P network. J. Wireless Commun. Netw. 2019, 137 (2019)
14. Bellini, B., Ceravolo, P., Damiani, E.: Blockchain-based e-vote-as-a-service. In: 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pp. 484–486. IEEE (2019)
15. Zyskind, G., Nathan, O., Pentland, A.: Decentralizing privacy: using blockchain to protect personal data. In: 2015 IEEE Security and Privacy Workshops, pp. 180–184. IEEE (2015)
16. Wang, K.H., Mondal, S.K., Chan, K., Xie, X.: A review of contemporary e-voting: requirements, technology, systems and usability. Data Sci. Pattern Recogn. 1, 31–47 (2017)
17. Douceur, J.R.: The Sybil attack. In: International Workshop on Peer-to-Peer Systems, WA, USA (2002)
18. Zheng, Z., Xie, S., Dai, H.N., Chen, X., Wang, H.: Blockchain challenges and opportunities: a survey. Int. J. Web Grid Serv. 14(4), 352–375 (2018)
19. Bach, L.M., Mihaljevic, B., Zagar, M.: Comparative analysis of blockchain consensus algorithms. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1545–1550. IEEE (2018)
Preconditions for Optimizing Primary Milk Processing

Gennady N. Samarin1,2, Alexander A. Kudryavtsev1, Alexander G. Khristenko3, Dmitry N. Ignatenko4, and Egor A. Krishtanov5

1 Federal State Budgetary Scientific Institution "Federal Scientific Agroengineering Center VIM" (FSAC VIM), Moscow, Russia ([email protected], [email protected])
2 Northern Trans-Ural State Agricultural University, Tyumen, Russia
3 Novosibirsk State Agrarian University, Novosibirsk, Russia ([email protected])
4 Prokhorov General Physics Institute of the Russian Academy of Sciences, Moscow, Russia ([email protected])
5 St. Petersburg State Agrarian University, Saint Petersburg, Pushkin, Russia ([email protected])
Abstract. In Russia, 99% of freshly drawn milk produced on farms is cooled with artificial cold produced by refrigeration machines, which consume large amounts of electricity (up to 0.00944 kWh per 1 kg of milk when cooled from 30…35 °C to 3…5 °C). Therefore, the goal of this work is to optimize the process of primary milk processing using energy efficiency as a target function, and the objective is to study the layered souring of milk in an open container without a mixing system. From the experimental studies of layered milk souring in an open container without a mixing system, the dependences of the changes in titratable acidity and in the number of bacteria in the lower and upper layers of milk in the container at a certain temperature over time have been obtained. As a result of the research conducted, adequate regression equations for the titratable acidity and the number of colony-forming units in unit volumes of the lower milk layer as functions of storage time have been obtained. The practical significance of the research results is that in milk storage tanks, decontamination devices should be installed at the top.

Keywords: Milk · Primary processing · Alternative methods · Milk quality indicators · Specific energy efficiency · Optimization
1 Introduction

Currently, the Russian Federation (RF) is one of the world's largest producers of milk and dairy products, but the share of commercial milk in its total production is low, at 57%. Milk productivity per cow in the RF is about half that of developed countries.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1310–1318, 2021. https://doi.org/10.1007/978-3-030-68154-8_112
The two main challenges facing the dairy industry in the Russian Federation are: reducing the industry's dependence on imported products; and meeting the increased demand for commercial milk. The solution to the first problem stems from the country's food security requirements: the share of domestic products should be 90% of the total volume of commodity resources, and this can be achieved by replacing imported products with domestic ones, that is, by increasing the production of commercial milk. Solving the first problem naturally advances the second one: the production of raw commercial milk grows while the demand for it is maintained and even increased [1, 2]. In milk production, cooling is one of the most important factors. When cooled, the bacteria in the milk do not die but fall into suspended animation. When bacteria get into comfortable temperature conditions, which can occur at any time, including during storage and transportation of milk (sometimes even 10 °C is enough), they begin to multiply intensively, thereby affecting the quality of the product and, accordingly, its cost [3–6]. Combining these two tasks, we can conclude that alternative methods and technical means of killing bacteria are needed to solve them. Analyzing the regulations of different economic zones and countries, we can see the following. For the Russian Federation, Federal Laws (FL) FL-88 and FL-163 "Technical Regulations for Milk and Dairy Products" contain general and specific safety requirements; State Standard (GOST) GOST R 52054-2003 "Cow's milk raw. Specifications" contains general requirements for the production of all types of dairy products [7].
For the European Economic Community (EEC), the following apply: Council Directive 92/46/EEC of 16 June 1992 laying down the health rules for the production and placing on the market of raw milk, heat-treated milk and milk-based products; Council Directive 2002/99/EC laying down the animal health rules governing the production, processing, distribution and introduction of products of animal origin for human consumption; Regulation (EC) No 852/2004 of the European Parliament and of the Council of 29 April 2004 on the hygiene of foodstuffs; Regulation (EC) No 853/2004 of the European Parliament and of the Council of 29 April 2004 laying down specific hygiene rules for food of animal origin; Regulation (EC) No 854/2004 of the European Parliament and of the Council of 29 April 2004 laying down specific rules for the organisation of official controls on products of animal origin intended for human consumption; and Regulation (EC) No 882/2004 of the European Parliament and of the Council of 29 April 2004 on official controls performed to ensure the verification of compliance with feed and food law, animal health and animal welfare rules [8, 9]. For the Eurasian Customs Union (EACU), there are technical regulations (TR): "TR CU 033/2013. Technical Regulations of the Customs Union. On the Safety of Milk and Dairy Products" [7]. The "quantity of mesophilic aerobic and facultatively anaerobic microorganisms" (QMAFAnM) is an estimate of the size of the group of sanitary-indicative microorganisms. Different groups of microorganisms make up the QMAFAnM: bacteria, fungi, yeasts and others. Their total number indicates the sanitary condition of the product and the degree of its contamination with microflora. The best
temperature for QMAFAnM growth is 35…37 °C (aerobic); the temperature boundary of their growth is within 20…45 °C [7]. The general safety criteria in different economic zones and countries in terms of the key quality indicator of raw milk, QMAFAnM, according to literature sources [10–14], are as follows. In the EU, the quality of milk is considered satisfactory if the QMAFAnM value is less than 100·10³ CFU/g. The abbreviation CFU stands for Colony Forming Unit and denotes the number of bacteria that are capable of forming a full-fledged microbial colony [7, 15–17]. For the Customs Union (Republic of Armenia, Republic of Belarus, Republic of Kazakhstan, Kyrgyz Republic, Russian Federation), the QMAFAnM value has to be less than 4000·10³ CFU/g. In the Russian Federation and Ukraine, the QMAFAnM value has to be less than 4000·10³ CFU/g. In order to reduce the number of microorganisms in milk, it is necessary to observe the sanitary rules of automatic milking. However, as we know from literature sources [18–21], even a small number of microorganisms in milk can affect its storage, so it is necessary to take measures to destroy, or suspend the development of, the microorganisms that got into it. The most effective and accessible method on livestock farms is one that involves cooling and heating the milk. The vital activity of many microorganisms found in milk slows down sharply when it is cooled to 8…10 °C and is completely suspended when it is cooled to 2…3 °C. When the temperature drops to minus 20 °C and fluctuates between 20 °C and minus 20 °C, a small fraction of the microorganisms dies [20, 23]. When milk is heated to 50 °C, 68% of microorganisms die, and if the milk is kept for 20 min at this temperature, more than 99% of all microorganisms contained in it are eliminated. To completely destroy the microorganisms and their spores, it is necessary to bring the milk to a temperature of 120 °C and sustain this temperature for 15…20 min.
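The regional QMAFAnM limits quoted above lend themselves to a small comparison routine. A minimal sketch; the function name and region keys are illustrative and not taken from any regulation:

```python
# Hedged sketch: classifying raw milk against the QMAFAnM limits quoted in the
# text (EU: < 100*10^3 CFU/g; Customs Union, RF and Ukraine: < 4000*10^3 CFU/g).
# Function name and region labels are illustrative, not from any standard.

EU_LIMIT_CFU_PER_G = 100e3
CU_LIMIT_CFU_PER_G = 4000e3

def qmafanm_ok(cfu_per_g: float, region: str) -> bool:
    """Return True if the QMAFAnM count satisfies the quoted regional limit."""
    limits = {"EU": EU_LIMIT_CFU_PER_G, "CU": CU_LIMIT_CFU_PER_G}
    return cfu_per_g < limits[region]

print(qmafanm_ok(80e3, "EU"))   # 80*10^3 CFU/g passes the EU limit
print(qmafanm_ok(500e3, "EU"))  # fails the EU limit
print(qmafanm_ok(500e3, "CU"))  # passes the Customs Union limit
```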
The most significant factor influencing the growth and development of bacteria is positive temperature; freezing leads to the slow destruction of the product, as ice crystals break cell membranes [22–24]. Light is not essential for the development of bacteria, and in the presence of ultraviolet light (sunlight) bacteria are destroyed [16, 25]. Depending on their need for oxygen, microorganisms can be classified as aerobic, anaerobic, facultatively aerobic/anaerobic (a typical example being common lactic acid bacteria), and microaerophilic. The process of cooling milk requires high energy costs, special equipment and its centralized maintenance. Alternative methods of primary milk processing include processes such as exposure to ultraviolet or microwave radiation, infrared electric heating, ultrasound, ultra-high pressure, electroprocessing (electrochemical processing), bactofugation, sterilization, and others [1, 7, 21].
Therefore, the goal of this work is to develop and justify the constructive and technological parameters and operation modes of a container-type milk decontamination unit according to energy efficiency criteria. Based on this goal, we have outlined the following objectives: to develop a technological scheme for milk decontamination with alternative variants allowing for repeated exposure; and to justify the constructive and technological parameters and optimal operation modes that ensure the required milk quality indicators at minimal energy costs.
2 Materials and Methods

The real processes of heat and mass exchange in the unit can be described via mathematical modeling; therefore, the mathematical model is defined by a system of three equations (1)–(3).
where:
W is the energy consumption of the unit (pump and radiator), kWh;
NP is the power of the drive of the pump supplying milk to the container (may not be needed at small volumes of milk), kW;
τ1 is the operation time of the milk pump, s;
ΣNU is the total power consumed by the radiator of the unit, kW;
τ2 is the total time of milk irradiation, s;
f1, f2, f3 are functionals;
ΣGMi is the total fractional mass of milk, kg;
ΔP is the pressure drop during pumping between the entrance and exit of the unit, Pa;
FS is the cross-sectional area of the unit, m²;
ΔnMB is the difference in the number of bacteria at the entrance and exit of the unit, units;
ηU is the efficiency of the radiator;
NG is the power of the radiator generator, kW;
GA is the mass of air that comes into contact with the unit, kg;
tIM is the initial milk temperature, K;
tIA is the initial ambient air temperature, K;
FV is the surface area of the container in the unit, m²;
kP is the heat transfer coefficient, W/(m²·K);
ηt is the thermal efficiency of the radiator;
lHi is the fractional factor of freshly drawn milk;
tKi is the souring speed ratio of each fraction of milk in storage, 1/s;
τ3 is the duration of milk storage, s;
GMi is the mass of each fraction of milk, kg;
hUI is the depth of the radiator immersion;
ηM is the bactericidal efficiency of the radiator.
As a target function for optimizing these parameters, we have set minimal energy consumption while achieving the longest duration of milk storage.
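The energy term alone can be illustrated numerically. A minimal sketch, assuming W is simply the sum of the pump and radiator energies implied by the definitions above; the paper's full system (1)–(3) couples further terms, so this is a reading of the nomenclature, not the authors' model:

```python
# Hedged sketch of the unit's energy term only: the text defines W (kWh) through
# the pump (power NP, kW, over time tau1, s) and the radiator (total power
# sum_NU, kW, over irradiation time tau2, s). The additive form below is our
# assumed reading of those definitions.

def unit_energy_kwh(np_kw: float, tau1_s: float, sum_nu_kw: float, tau2_s: float) -> float:
    """W = (NP*tau1 + sum_NU*tau2) / 3600, converting kW*s to kWh."""
    return (np_kw * tau1_s + sum_nu_kw * tau2_s) / 3600.0

# e.g. a 1.5 kW pump running 600 s plus a 0.3 kW radiator running 1800 s
print(round(unit_energy_kwh(1.5, 600, 0.3, 1800), 3))  # 0.4 kWh
```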
All studies on the determination of the physical and chemical characteristics of milk were conducted according to conventional methods and the relevant state standards: GOST R 52054-2003 "Cow's milk raw. Specifications"; GOST 3624-92 "Milk and milk products. Titrimetric methods of acidity determination"; GOST 32901-2014 "Milk and milk products. Methods of microbiological analysis". The microbiological method is based on the reduction of resazurin by redox enzymes secreted into the milk by microorganisms.
3 Results

The purpose of this experiment was to determine the nature of milk souring in different layers of the liquid over time τ. The study of layered freshly drawn milk souring in an open container without mixing was conducted at an ambient air temperature of about 20 °C. During the experiment, we measured three main parameters of the milk: temperature t, titratable acidity TA, and the number of CFUs N in a unit volume of the liquid. The results of the temperature and titratable acidity measurements are presented in Table 1. The titratable acidity of milk is measured in degrees Thörner, °Th.

Table 1. Results of the experimental research no. 1.

Measurement time, τ, h | tUL, °C | tLL, °C | TAUL, °Th | TALL, °Th
10                     | 16.05   | 16.32   | 15.84     | 15.97
14                     | 18.30   | 17.89   | 18.38     | 16.73
18                     | 18.74   | 19.38   | 30.16     | 16.66
22                     | 19.25   | 19.69   | 48.84     | 17.62
6                      | 18.27   | 19.09   | 54.71     | 19.23
10                     | 18.74   | 19.09   | 42.11     | 20.28
14                     | 18.39   | 19.44   | 43.28     | 25.74
18                     | 18.61   | 19.05   | 40.67     | 32.32

(tUL, tLL: temperature of the upper (UL) and lower (LL) milk layers; TAUL, TALL: titratable acidity of the UL and LL milk.)
Figure 1 shows a graphical representation of the results of experiment no. 1. Mathematical processing of the data of experiment no. 1 [26, 27] allowed us to obtain an adequate regression equation (4), which describes the dependence of the titratable acidity in the lower milk layer (TALL, °Th) on its storage time τ, h, with a coefficient of determination R² = 0.944:

TALL = 0.5335·τ² - 2.0043·τ + 17.103  (4)
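The quadratic fit of Eq. (4) can be reproduced from the Table 1 data. A minimal sketch using NumPy; the storage-time axis τ = 1..8 is our assumption, since the table lists clock times at an equal 4-hour spacing:

```python
# Hedged sketch: refitting the quadratic of Eq. (4) to the lower-layer acidity
# data of Table 1. The axis tau = 1..8 is assumed; only the equal spacing of
# the measurements matters for the quality of the fit.
import numpy as np

ta_ll = np.array([15.97, 16.73, 16.66, 17.62, 19.23, 20.28, 25.74, 32.32])  # degrees Thoerner
tau = np.arange(1, 9, dtype=float)

a2, a1, a0 = np.polyfit(tau, ta_ll, deg=2)   # TA_LL ~ a2*tau^2 + a1*tau + a0
pred = np.polyval([a2, a1, a0], tau)
r2 = 1 - np.sum((ta_ll - pred) ** 2) / np.sum((ta_ll - ta_ll.mean()) ** 2)

print("coefficients [a2, a1, a0] =", np.round([a2, a1, a0], 4))
print("R^2 =", round(float(r2), 3))
```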
Fig. 1. The dependence of the change in titratable acidity in the lower and upper milk layers in the container at a certain temperature over time.
The results of temperature studies and CFU numbers in the unit volumes of liquid are presented in Table 2.
Table 2. Results of the experimental research no. 2.

Measurement time, τ, h | tUL, °C | tLL, °C | NUL, million bacteria/ml | NLL, million bacteria/ml
10                     | 16.05   | 16.32   | 0.3                      | 0.3
14                     | 18.30   | 17.89   | 1.75                     | 0.4
18                     | 18.74   | 19.38   | 8.0                      | 0.4
22                     | 19.25   | 19.69   | 20.0                     | 0.4
6                      | 18.27   | 19.09   | 20.0                     | 1.75
10                     | 18.74   | 19.09   | 20.0                     | 1.75
14                     | 18.39   | 19.44   | 20.0                     | 1.75
18                     | 18.61   | 19.05   | 20.0                     | 1.75
22                     | –       | –       | 20.0                     | 20.0

(tUL, tLL: temperature of the upper (UL) and lower (LL) milk layers; NUL, NLL: number of bacteria in the UL and LL milk.)
Figure 2 shows a graphical representation of the results of the experiment no. 2.
Fig. 2. The dependence of changes in the number of bacteria in the lower and upper layers of milk in the container at a certain temperature over time.
Mathematical processing of the experimental data [26, 27] allowed us to obtain an adequate regression equation (5), which describes the dependence of the number of CFUs in unit volumes of liquid in the lower layer of milk, NLL, million bacteria/ml, on its storage time τ, h, with a coefficient of determination R² = 0.9933:

NLL = 0.0261·τ⁵ - 0.5755·τ⁴ + 4.641·τ³ - 16.688·τ² + 26.077·τ - 13.292  (5)
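Similarly, the degree-5 fit of Eq. (5) can be re-derived from the Table 2 counts. A minimal sketch, again assuming an equally spaced axis τ = 1..9:

```python
# Hedged sketch: refitting the degree-5 polynomial of Eq. (5) to the
# lower-layer bacterial counts of Table 2. As with Eq. (4), the axis
# tau = 1..9 is our assumption; the measurements are equally spaced.
import numpy as np

n_ll = np.array([0.3, 0.4, 0.4, 0.4, 1.75, 1.75, 1.75, 1.75, 20.0])  # million bacteria/ml
tau = np.arange(1, 10, dtype=float)

coefs = np.polyfit(tau, n_ll, deg=5)          # highest degree first
pred = np.polyval(coefs, tau)
r2 = 1 - np.sum((n_ll - pred) ** 2) / np.sum((n_ll - n_ll.mean()) ** 2)

print("coefficients (tau^5 ... tau^0):", np.round(coefs, 4))
print("R^2 =", round(float(r2), 4))
```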
4 Discussion

Equations (4) and (5) and their graphical interpretation (Figs. 1 and 2) suggest that milk souring in the container begins with the upper layer. Consequently, when treating milk with ultrasound and microwave or ultraviolet radiation, the milk should be treated top down. The information obtained can be used in the following stages: the development of new technological schemes for the decontamination of milk using alternative methods, and of new machines containing an optimized number of radiators, allowing milk to be decontaminated repeatedly during processing.
5 Conclusions

In the course of the research conducted by the authors, it was found that the JSC "Krasnoe Znamya" farm, located in the Novosokolniki district of the Pskov region of the Russian Federation, consumes 132,500 kWh per year during the storage of 3,900 tons of milk. Milk in an open container begins to sour from its upper layer; therefore, the required location of devices for preserving milk by alternative methods (ultrasound, microwave radiation, ultraviolet and other options), at the top of the tank, was experimentally confirmed. The milk souring process is described by the regression Eqs. (4) and (5).
References 1. Samarin, G., Vasiliev, A.N., Vasiliev, A.A., Zhukov, A., Krishtopa, N., Kudryavtsev, A.: Optimization of technological processes in animal husbandry. In: International Conference on Efficient Production and Processing (ICEPP-2020), E3S Web Conferences, vol. 161, p. 1094 (2020) 2. Samarin, G.N.: Energosberegayushchaya tekhnologiya formirovaniya sredy obitaniya sel'skohozyajstvennyh zhivotnyh i pticy: monografiya [Energy-saving technology for creating the habitat of farm animals and poultry: monograph]/G.N. Samarin. V. P. Goryachkin Moscow state agrarian University, Moscow, 215 p. (2008) . (in Russian) 3. Boor, K.J.: Pathogenic Microorganisms of Concern to the Dairy Industry. . Dairy Food Environ. Sanitation 17, 714–717 (1997) 4. Chambers, J.V.: The microbiology of raw milk. In: Robinson, R.K. (ed.) Dairy Microbiology Handbook, 3rd edn, pp. 39–90 (2002). Wiley, New York 5. Ye, A., Cui, J., Dalgleish, D., et al.: Effect of homogenization and heat treatment on the behavior of protein and fat globules during gastric digestion of milk. J. Dairy Sci. 100(1), 36–47 (2017) 6. Liang, L., Qi, C., Wang, X., et al.: Influence of homogenization and thermal processing on the gastrointestinal fate of bovine milk fat: in vitro digestion study. J. Agric. Food Chem. 65(50), 11109–11117 (2017) 7. Coles, K.: Information Regarding US Requirements for the Importation of Milk and Dairy Products/Washington Department of Agriculture. Food Safety Program. Milknews - News of the dairy market [Electronic resource] – Electron. text data Moscow (2016). https:// milknews.ru/index/novosti-moloko_6294.html. 8. Bisig, W., Jordan, K., Smithers, G., Narvhus, J., Farhang, B., Heggum, C., Farrok, C., Sayler, A., Tong, P., Dornom, H., Bourdichon, F., Robertson, R.: The Technology of pasteurisation and its effects on the microbiological and nutritional aspects of , p. 36. milk International Dairy Federation, IDF, Brussels (2019) 9. 
Mack, G., Kohler, A.: Short and long-rung policy evaluation: support for grassland-based milk production in Switzerland. J. Agric. Econ. 2018, 1–36 (2018) 10. Aiassa, E., Higgins, J.P.T., Frampton, G.K., et al.: Applicability and feasibility of systematic review for performing evidence-based risk assessment in food and feed safety. Crit. Rev. Food Sci. Nutr. 55(7), 1026–1034 (2015) 11. Finnegan, W., Yan, M., Holden, N.M., et al.: A review of environmental life cycle assessment studies examining cheese production. Int. J. Life Cycle Assess. 23(9), 1773– 1787 (2018)
12. Djekic, I., Miocinovic, J., Tomasevic, I., et al.: Environmental life-cycle assessment of various dairy products. J. Cleaner Prod. 68, 64–72 (2014)
13. Depping, V., Grunow, M., van Middelaar, C., et al.: Integrating environmental impact assessment into new product development and processing-technology selection: milk concentrates as substitutes for milk powders. J. Cleaner Prod. 149, 1 (2017)
14. Bacenetti, J., Bava, L., Schievano, A., et al.: Whey protein concentrate (WPC) production: environmental impact assessment. J. Food Eng. 224, 139–147 (2018)
15. Carlin, F.: Origin of bacterial spores contaminating foods. Food Microbiol. 28(2), 177–182 (2011). Special Issue SI
16. Coorevits, A., De Jonghe, V., Vandroemme, J., et al.: Comparative analysis of the diversity of aerobic spore-forming bacteria in raw milk from organic and conventional dairy farms. Syst. Appl. Microbiol. 31(2), 126–140 (2008)
17. Cusato, S., Gameiro, A.H., Corassin, C.H., et al.: Food safety systems in a small dairy factory: implementation, major challenges, and assessment of systems' performances. Foodborne Pathog. Dis. 10(1), 6–12 (2013)
18. Doll, E.V., Scherer, S., Wenning, M.: Spoilage of microfiltered and pasteurized extended shelf life milk is mainly induced by psychrotolerant spore-forming bacteria that often originate from recontamination. Front. Microbiol. 8, 135 (2017)
19. Boor, K.J., Murphy, S.C.: The microbiology of market milks. In: Robinson, R.K. (ed.) Dairy Microbiology Handbook, 3rd edn, pp. 91–122. Wiley, New York (2002)
20. Doyle, C.J., Gleeson, D., Jordan, K., et al.: Anaerobic sporeformers and their significance with respect to milk and dairy products. Int. J. Food Microbiol. 197, 77–87 (2015)
21. O'Riordan, N., Kane, M., Joshi, L., et al.: Structural and functional characteristics of bovine milk protein glycosylation. Glycobiology 24(3), 220–236 (2014)
22. Huck, J.R., Sonnen, M., Boor, K.J.: Tracking heat-resistant, cold-thriving fluid milk spoilage bacteria from farm to packaged product. J. Dairy Sci. 91(3), 1218–1228 (2008)
23. Chavan, R.S., Chavan, S.R., Khedkar, C.D., et al.: UHT milk processing and effect of plasmin activity on shelf life: a review. Compr. Rev. Food Sci. Food Saf. 10(5), 251–268 (2011)
24. Mafart, P., Leguerinel, I., Couvert, O., et al.: Quantification of spore resistance for assessment and optimization of heating processes: a never-ending story. Food Microbiol. 27(5), 568–572 (2010)
25. Luecking, G., Stoeckel, M., Atamer, Z., et al.: Characterization of aerobic spore-forming bacteria associated with industrial dairy processing environments and product spoilage. Int. J. Food Microbiol. 166(2), 270–279 (2013)
26. Samarin, G.N., Vasilyev, A.N., Zhukov, A.A., Soloviev, S.V.: Optimization of microclimate parameters inside livestock buildings. In: Vasant, P., Zelinka, I., Weber, G.W. (eds.) Intelligent Computing & Optimization. ICO 2018. Advances in Intelligent Systems and Computing, vol. 866. Springer, Cham (2018)
27. Özmen, A., Weber, G., Batmaz, İ.: The new robust CMARS (RCMARS) method. In: 24th Mini EURO Conference "Continuous Optimization and Information-Based Technologies in the Financial Sector" (MEC EurOPT 2010), Izmir, Turkey, 23–26 June 2010 (2010)
Optimization of Compost Production Technology

Gennady N. Samarin1,2, Irina V. Kokunova3, Alexey N. Vasilyev2, Alexander A. Kudryavtsev2, and Dmitry A. Normov4

1 Northern Trans-Ural State Agricultural University, Tyumen, Russia ([email protected])
2 Federal State Budgetary Scientific Institution "Federal Scientific Agroengineering Center VIM" (FSAC VIM), Moscow, Russia ([email protected], [email protected])
3 Velikie Luki State Agricultural Academy, Velikie Luki, Pskov region, Russia ([email protected])
4 Kuban State Agrarian University Named After I.T. Trubilin, Krasnodar, Russia ([email protected])
Abstract. Due to the intensive development of livestock production in the world, there is an increase in the amount of waste generated by the production activities of agricultural enterprises. This leads us to formulate new goals: minimizing the cost of livestock products while reducing the negative impact on the environment, and developing promising methods and technical means for the disposal of livestock waste. We note that one of the promising methods of manure utilization, including its liquid and semi-liquid fractions, is composting. To speed up this process and produce high-quality organic composts, special technical means are needed, capable of mixing and grinding the components of the mixtures. Considering this, the creation of compost mixers and the development on their basis of new technologies for the disposal of livestock waste is an urgent task.

Keywords: Environmental safety · Livestock waste · Manure disposal · Organic composts · Composting phases · Compost mixers
1 Introduction

One effective way to dispose of livestock waste is to produce organic composts based on it. Peat, straw, sapropels, logging waste and organic waste from industrial plants, as well as other ingredients, can be used as components of compost mixtures. The additional costs of compost production are paid off by increased crop yields and improved soil fertility [1–5]. Modern composting is an aerobic process and consists of several stages: the lag, mesophilic, thermophilic and maturation stages. The timing of compost maturation and the quality of the fertilizers obtained depend on the course of these processes. At the first stage of composting, microorganisms "adapt" to the type of components of the mixture

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1319–1327, 2021. https://doi.org/10.1007/978-3-030-68154-8_113
and living conditions in the compost heap. At the same stage, organic components begin to decompose, but the total population of microorganisms and the temperature of the mass are still small. In the second phase, the decomposition of the substrates increases and the number of microorganisms grows constantly. At the beginning, simple sugars and carbohydrates are decomposed; once they are depleted, bacteria start processing cellulose and proteins, secreting a complex of organic acids needed to feed other microorganisms [6–11]. The third phase of composting is accompanied by a significant increase in temperature caused by an increase in the number of thermophilic bacteria and their metabolic products. A temperature of 55 °C is harmful to most pathogenic and conditionally pathogenic microorganisms of humans and plants. However, it does not affect the aerobic bacteria, which continue the composting process: the accelerated breakdown of proteins, fats and complex carbohydrates. When food resources run out, the temperature of the compost mass begins to decrease gradually [7, 12–14]. In the fourth stage, mesophilic microorganisms begin to dominate the compost heap, and the temperature indicates the onset of the maturation stage. In this phase, the remaining organic matter forms complexes that are resistant to further decomposition, called humic acids, or humus. The resulting compost is easily loosened and becomes free-flowing. Truck-mounted organic fertilizer spreaders can be used to disperse it onto the fields [8, 14–17]. In order to produce quality organic composts and shorten their maturation, it is necessary to thoroughly mix and grind the components of the mixture both when the heaps are laid down and during their ripening. This improves the oxygen supply to different areas of the compost heap, prevents the formation of "dead zones", and intensifies microbiological processes.
To perform these operations, special equipment is needed: compost aerator-mixers [15, 18–20]. As promising ways of disposing of livestock waste, we note composting with the use of special technical means: aerator-mixers of compost heaps [21–23]. This leads us to formulate new goals: minimizing the cost of livestock products while reducing the negative impact on the environment, and developing promising methods and technical means for the utilization of livestock waste.
2 Materials and Methods

In this article, the authors specify the prerequisites for optimizing compost production technology in different ways [24–26]. The task of optimizing the economic parameters of compost production technology can, in mathematical terms, be reduced to finding the minimum value of the accepted target function, the specific costs of compost production SC′ [27–29]:

SC′ = DC + ES·CI → min,  (1)

where DC are the specific direct costs of compost production, rub/kg;
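The criterion (1) is straightforward to apply once DC and CI are known for each candidate technology. A minimal sketch with made-up cost figures, used only to show how SC′ ranks alternatives:

```python
# Hedged sketch of the target function (1): SC' = DC + ES*CI -> min, with
# ES = 0.15 as given in the text. The candidate variants and their (DC, CI)
# values below are illustrative placeholders, not data from the paper.

ES = 0.15  # standard return rate on capital investments

def specific_costs(dc_rub_per_kg: float, ci_rub_per_kg: float) -> float:
    """SC' = DC + ES * CI, rub/kg."""
    return dc_rub_per_kg + ES * ci_rub_per_kg

variants = {
    "variant A": (0.80, 2.0),  # (DC, CI) in rub/kg, illustrative numbers
    "variant B": (0.70, 3.0),
    "variant C": (0.90, 1.0),
}
best = min(variants, key=lambda v: specific_costs(*variants[v]))
for name, (dc, ci) in variants.items():
    print(f"{name}: SC' = {specific_costs(dc, ci):.3f} rub/kg")
print("optimum:", best)  # variant C
```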
ES is the standard return rate on capital investments, ES = 0.15; CI are the total specific capital investments in the technological process, rub/kg. At present, quite a variety of models of equipment for the aeration of compostable organic mass are produced. These can be machines with their own wheeled or tracked running gear (self-propelled or attached to tractors), as well as tunnel (bridge) agitators, moved along various consoles and driven by electric or gasoline engines. They differ from each other, among other things, by the arrangement of the main working body (the agitator drum), by productivity, and by price [30]. In particular, to improve the quality of operation of the machine, we propose to change the design of the agitator drum. It is known that helical-feed drum devices are broadly used for mixing various loose and viscous components, as well as for grinding and transporting them. The drum usually consists of a hollow pipe and a trunnion with blades, and a screw or helical feeder; the axis of drum rotation is mostly horizontal [31]. Analysis of studies of helical-feed drums of different designs showed that the drums of the 1st variant of the experiment formed a fertilizer distribution diagram with a depression in the central part and two peaks. The presence of cavities and peaks in the distribution diagrams indicates a highly uneven distribution of fertilizers in the middle zone of the feeder working body. Removing the loops of the helical feed in the middle part of the drum (variant 2), installing radial blades instead (variant 3), or installing anti-loops (variant 4) eliminates the depression in the central part of the distribution diagram. In addition, the density of fertilizer distribution in this part is significantly increased, which makes it possible to produce the required form of compost heaps.
The best variant in our case is the 3rd one, as the installation of additional blades not only contributes to the formation of a conical heap profile but also provides better aeration of the compostable mass. Studies of the process of agitation of organic composts using an upgraded heap aerator were carried out on an experimental device developed by the authors; the TG 301 machine (Fig. 1) for compost heap aeration by an Italian manufacturer, which is acceptably priced and sells well on the Russian market, was taken as its analogue. It is a semi-mounted machine, supported by two pneumatic wheels during operation. The machine has a rounded arc-type frame and a main working body in the shape of a rotor, with agitator blades with notches on their edges installed in a circular formation; these cut well into the compostable mass and grind it. However, the agitator does not mix the components of compost mixtures well enough and does not ensure the formation of a heap of a given shape. The machine is equipped with an apron reflector involved in the formation of a heap. It operates with tractors of traction class 1.4. The working body is driven from the tractor power take-off shaft.
3 Results

The results of the study of the parameters of the compost heap agitator are presented in Table 1. Figures 2, 3 and 4 contain graphical representations of the experimental results.
Fig. 1. TG 301 compost heap aerator-mixer.

Table 1. The results of the study of the parameters of the aerator (agitator-mixer) of compost heaps.

No | x1 | x2 | x3 | ω, min⁻¹ | n, units | Vtr, km/h | N1, W·h | N2, W·h | N3, W·h | Navg, W·h
1  |  1 |  1 |  0 | 230      | 6        | 0.2       | 62.98   | 64.29   | 69.54   | 65.60
2  | −1 | −1 |  0 | 170      | 2        | 0.2       | 48.60   | 49.60   | 52.10   | 50.10
3  |  1 | −1 |  0 | 230      | 2        | 0.2       | 60.19   | 60.19   | 67.72   | 62.70
4  | −1 |  1 |  0 | 170      | 6        | 0.2       | 52.28   | 51.74   | 57.67   | 53.90
5  |  1 |  0 |  1 | 230      | 4        | 0.25      | 77.72   | 74.58   | 83.21   | 78.50
6  | −1 |  0 | −1 | 170      | 4        | 0.15      | 45.70   | 46.65   | 50.46   | 47.60
7  |  1 |  0 | −1 | 230      | 4        | 0.15      | 54.14   | 55.27   | 59.78   | 56.40
8  | −1 |  0 |  1 | 170      | 4        | 0.25      | 61.75   | 63.05   | 70.20   | 65.00
9  |  0 |  1 |  1 | 200      | 6        | 0.25      | 71.38   | 68.50   | 76.43   | 72.10
10 |  0 | −1 | −1 | 200      | 2        | 0.15      | 45.01   | 44.08   | 50.11   | 46.40
11 |  0 |  1 | −1 | 200      | 6        | 0.15      | 13.68   | 13.82   | 15.70   | 14.40
12 |  0 | −1 |  1 | 200      | 2        | 0.25      | 3.84    | 3.72    | 4.19    | 3.92
13 |  0 |  0 |  0 | 200      | 4        | 0.2       | 5.92    | 6.05    | 6.54    | 6.17
14 |  0 |  0 |  0 | 200      | 4        | 0.2       | 6.30    | 6.23    | 6.55    | 6.36
15 |  0 |  0 |  0 | 200      | 4        | 0.2       | 5.93    | 6.05    | 6.35    | 6.11

(x1, x2, x3: coded variation levels of the factors ω, n and Vtr; N1, N2, N3, Navg: replicate and average values of the output parameter N.)
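The coded levels (−1, 0, +1) of the design map onto the natural factor values through the usual transform b = (x − x0)/Δx, with centers and half-ranges read directly from Table 1. A minimal sketch:

```python
# Hedged sketch: converting between coded and natural factor values for the
# three-factor design of Table 1. Centers and steps are read from the table:
# omega 200 +/- 30 min^-1, n 4 +/- 2 blades, Vtr 0.20 +/- 0.05 km/h.

FACTORS = {
    "omega": (200.0, 30.0),  # drum rotation frequency, min^-1
    "n":     (4.0, 2.0),     # number of tossing blades
    "Vtr":   (0.20, 0.05),   # forward speed, km/h
}

def to_coded(factor: str, x: float) -> float:
    x0, dx = FACTORS[factor]
    return (x - x0) / dx

def to_natural(factor: str, b: float) -> float:
    x0, dx = FACTORS[factor]
    return x0 + b * dx

print(to_coded("omega", 230))               # run 1 of Table 1 -> 1.0
print(round(to_natural("Vtr", -1), 2))      # 0.15 km/h
```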
Fig. 2. The dependence of the full power consumption of the compost agitator on the frequency of drum rotation and the speed of mass supply.
Fig. 3. The dependence of the full power consumption of the compost agitator on the frequency of blades rotation and the number of tossing blades.
Fig. 4. The dependence of the full power consumption of the compost agitator on the number of tossing blades and the speed of mass supply.
As a result of a multifactor regression analysis based on the results of our research, we obtained the dependence of the full power required for mixing the components of compost mixtures on the rotation frequency of the agitator drum x, the number of tossing blades n, and the speed of supply of the organic mass onto the agitator drum, i.e., the forward speed of the machine Vtr. After a subsequent multifactor regression analysis that excluded insignificant effects, we obtained the regression equation

N = 59.6 + 5.825·b1 + 10.9375·b3 + 2.25·b3²,   (2)

where N is the full power of the aerator (agitator-mixer) of heaps, W·h.

From the presented data we can conclude that model (1) fits well: the coefficient of determination is quite high (R-squared = 99.0818%), and the resulting model explains 97.43% of the variation in N. The model is significant, since there is a statistically significant relationship between the variables. There is no noticeable correlation among the experimental values in the matrix, as the Durbin–Watson (DW) statistic exceeds 1.4.
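The reduced model of Eq. (2) can be re-fitted from the Navg column of Table 1 by ordinary least squares. A sketch, assuming numpy is available; since the printed table may carry transcription artifacts, the coefficients recovered here are not guaranteed to match the published 59.6, 5.825, 10.9375 and 2.25 exactly:

```python
import numpy as np

# Coded design from Table 1 (b1 = drum rotation frequency,
# b2 = blade count, b3 = mass supply speed) and the Navg column.
b = np.array([
    [ 1,  1,  0], [-1, -1,  0], [ 1, -1,  0], [-1,  1,  0],
    [ 1,  0,  1], [-1,  0, -1], [ 1,  0, -1], [-1,  0,  1],
    [ 0,  1,  1], [ 0, -1, -1], [ 0,  1, -1], [ 0, -1,  1],
    [ 0,  0,  0], [ 0,  0,  0], [ 0,  0,  0],
], dtype=float)
N = np.array([65.60, 50.10, 62.70, 53.90, 78.50, 47.60, 56.40, 65.00,
              72.10, 46.40, 14.40, 3.92, 6.17, 6.36, 6.11])

# Design matrix for the reduced model of Eq. (2): 1, b1, b3, b3^2.
X = np.column_stack([np.ones(len(N)), b[:, 0], b[:, 2], b[:, 2] ** 2])
coef, *_ = np.linalg.lstsq(X, N, rcond=None)

# Coefficient of determination of the re-fitted model.
resid = N - X @ coef
r2 = 1.0 - resid.var() / N.var()
```

Swapping other combinations of the b1, b2, b3 terms into the design matrix is a straightforward way to see which effects the original analysis would have flagged as insignificant.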
4 Discussion

Taking into account the values of the coefficients of the obtained mathematical model and analyzing the response surface (Fig. 2), we note that the greatest influence on the power consumed by the machine is exerted by the rotation frequency of the agitator drum and the rate of supply of compostable mass to the working body; as these parameters increase, energy costs grow. From Fig. 3 we can see that the highest power consumption required to mix the compostable organic mass is observed at the maximum rotation rate of the agitator drum (230 min−1) and the maximum rate of feeding the mass to the working body (0.25 km/h). Considering the dependence of energy costs on the drum rotation frequency and the speed of feeding the compostable mass (Fig. 4), the minimum energy intensity is achieved at the minimum speed of movement of the unit and the minimum rotation speed of the agitator drum.
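These extremes can be checked directly against Eq. (2). A quick numerical sketch, taking b1 and b3 as the coded drum-rotation and mass-supply factors (the blade-count factor does not appear in Eq. (2), having been excluded as insignificant):

```python
def full_power(b1: float, b3: float) -> float:
    """Eq. (2): full power N of the aerator over the coded factors."""
    return 59.6 + 5.825 * b1 + 10.9375 * b3 + 2.25 * b3 ** 2

# dN/db3 = 10.9375 + 4.5*b3 > 0 everywhere on [-1, 1], and N grows
# linearly with b1, so the extremes sit at the corners of the domain.
n_min = full_power(-1.0, -1.0)  # minimum drum speed and supply rate
n_max = full_power(+1.0, +1.0)  # maximum drum speed and supply rate
```

This reproduces the qualitative conclusion above: roughly 45.1 at the minimum settings and roughly 78.6 at the maximum, close to the measured 78.50 W·h of experiment no. 5 in Table 1.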
5 Conclusions

Based on the research conducted by the authors, regression Eq. (2) has been obtained, which describes the dependence of the full power required for mixing the components of compost mixtures on the rotation rate x of the drum, the number of tossing blades n, and the rate of feeding the organic mass onto the agitator drum. The results will be used in the design of a three-dimensional model of the upgraded aerator-mixer.
Author Index
A Abdulgalimov, Mavludin, 1156 Abdullah, Saad Mohammad, 681 Abedin, Zainal, 311 Adil, Md., 237, 976 Ahammad, Tanzin, 418 Ahmad, Nafi, 393 Ahmed, Ashik, 681 Ahmed, Md. Raju, 1213 Ahmed, Tawsin Uddin, 865 Aishwarja, Anika Islam, 546 Akhmetov, Bakhytzhan, 463 Akhtar, Mohammad Nasim, 1026 Akhtar, Nasim, 735 Akter, Mehenika, 865 Akter, Suraiya, 976 Ali, Abdalla M., 823, 838 Andersson, Karl, 583, 865, 880, 894 Anufriiev, Sergii, 777 Anwar, A. M. Shahed, 607 Anwar, Md Musfique, 964 Apatsev, Vladimir, 205 Apeh, Simon T., 430 Ara, Ferdous, 583 Arefin, Mohammad Shamsul, 326, 476, 1011, 1071, 1281, 1295 Aris, Mohd Shiraz, 1232 Arnab, Ali Adib, 393 Asma Gani, Most., 976 Ayon, Zaber Al Hassan, 1071 B Baartman, S., 250 Babaev, Baba, 95
Bandyopadhyay, Tarun Kanti, 345 Banik, Anirban, 73, 345 Barua, Adrita, 1111 Basnin, Nanziba, 379 Bebeshko, Bohdan, 463 Belikov, Roman P., 28 Bellone, Cinzia B., 1168 Belov, Aleksandr A., 43 Bhatt, Ankit, 633 Biswal, Sushant Kumar, 345 Biswas, Munmun, 880 Bolshev, Vadim E., 19, 28 Boonmalert, Chinchet, 1262 Boonperm, Aua-aree, 263, 276, 287, 1262 Borodin, Maksim V., 19, 28 Budnikov, Dmitry, 36, 1139 Budnikov, Dmitry A., 440 Bugreev, Victor, 205 Bukreev, Alexey V., 19 Bunko, Vasyl, 1222
C Chakma, Rana Jyoti, 788 Chakraborty, Tilottama, 907 Chaplygin, Mikhail, 135, 369 Chaturantabut, Saifon, 1059 Chekhov, Anton, 205 Chekhov, Pavel, 205 Cheng, L., 250 Chernishov, Vadim A., 19 Chirskiy, Sergey, 73, 196 Chit, Khin Me Me, 1038 Chui, Kwok Tai, 670
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Vasant et al. (Eds.): ICO 2020, AISC 1324, pp. 1329–1332, 2021. https://doi.org/10.1007/978-3-030-68154-8
1330 D Daoden, Kanchana, 954 Das, Avishek, 1111, 1124 Das, Dipankar, 621 Das, Utpol Kanti, 607 Daus, Yu. V., 216 Deb, Deepjyoti, 907 Deb, Kaushik, 311, 530 Deb, Ujjwal Kumar, 224 Debi, Tanusree, 326, 1281 Desiatko, Alona, 463 Dogeev, Hasan, 1156 Drabecki, Mariusz, 118 Duangban, Dussadee, 938 Dzhaparov, Batyr, 1156 E Eva, Nusrat Jahan, 546 F Fardoush, Jannat, 788 Faruq, Md. Omaer, 754 Fister Jr., Iztok, 187 Fister, Iztok, 187 Forhad, Md. Shafiul Alam, 476 G Galib, Syed Md., 476 Godder, Tapan Kumar, 922 Golam Rashed, Md., 621 Golikov, Igor O., 19, 28 Gosh, Subasish, 880 Gupta, Brij B., 670 Gupta, Dipankar, 894 H Hartmann-González, Mariana Scarlett, 506 Hasan, Md. Manzurul, 744 Hasan, Mirza A. F. M. Rashidul, 693 Hasan, Mohammad, 418, 801, 1000 Hassamontr, Jaramporn, 452 Hlangnamthip, Sarot, 154 Hoque, Mohammed Moshiul, 1111, 1124 Hossain, Emam, 894 Hossain, Md. Anwar, 693 Hossain, Md. Arif, 681 Hossain, Md. Billal, 1295 Hossain, Mohammad Rubaiyat Tanvir, 1047 Hossain, Mohammad Shahada, 379 Hossain, Mohammad Shahadat, 583, 647, 659, 865, 880, 894 Hossain, Mohammed Sazzad, 894 Hossain, Shahadat, 744 Hossain, Syed Md. Minhaz, 530
Author Index Hossain, Tanvir, 744 Hossen, Imran, 621 Hossen, Muhammad Kamal, 1011 Hsiao, Yu-Lin, 1242 Hung, Phan Duy, 406 Huynh, Son Bach, 823 I Ignatenko, Dmitry N., 1310 Ikechukwu, Anthony O., 297 Ilchenko, Ekaterina, 135 Imteaj, Ahmed, 1011 Intawichai, Siriwan, 1059 Iqbal, MD. Asif, 1111, 1124 Iqbal, Md. Hafiz, 1281 Islam, Md. Moinul, 1071 Islam, Md. Shariful, 476 Islam, Md. Zahidul, 922 Islam, Muhammad Nazrul, 546, 559 Islam, Nazmin, 754 Islam, Quazi Nafees Ul, 681 J Jachnik, Bartosz, 777 Jahan, Nasrin, 659 Jahara, Fatima, 1111 Jalal, Mostafa Mohiuddin, 559 Jamrunroj, Panthira, 276 Jerkovic, Hrvoje, 708 Joshua Thomas, J., 823, 838, 1082 Jyoti, Oishi, 754 K Kar, Susmita, 735 Karim, Rezaul, 647, 659 Karmaker, Ashish Kumar, 1213 Karmaker, Rajib, 224 Keppler, Joachim, 720 Khan, Md. Akib, 476 Khan, Md. Akib Zabed, 964 Khan, Mohammad Ibrahim, 1295 Khan, Nafiz Imtiaz, 546 Kharchenko, V. V., 216 Kharchenko, Valeriy, 95, 103, 1186, 1195 Khobragade, Prachi D., 907 Khorolska, Karyna, 463 Khristenko, Alexander G., 1310 Khujakulov, Saydulla, 103 Kibria, Hafsa Binte, 1097 Klychev, Shavkat, 95 Kokunova, Irina V., 1319 Kovalev, Andrey, 63, 73, 1186, 1195 Kovalev, Dmitriy, 1186, 1195 Kozyrska, Tetiana, 111
Author Index Kozyrsky, Volodymyr, 111, 1222 Krishtanov, Egor A., 1310 Kudryavtsev, Alexander A., 1310, 1319 Kuzmichev, Alexey, 1146 L Lakhno, Valery, 463 Lansberg, Alexander A., 28 Leeart, Prarot, 597, 1176 Leephaicharoen, Theera, 452 Lekburapa, Anthika, 287 Lin, Laet Laet, 570, 1038 López-Sánchez, Víctor Manuel, 357 Loy-García, Gabriel, 812 Lutskaya, Nataliia, 1252 M Madhu, Nimal, 633 Magomedov, Fakhretdin, 1156 Mahmud, Tanjim, 788 Majumder, Mrinmoy, 907 Makarevych, Svitlana, 111 Malim, Nurul Hashimah Ahamed Hassain, 823 Mamunur Rashid, Md., 801 Manshahia, Mukhdeep Singh, 3 Marmolejo, José A., 1272 Marmolejo-Saucedo, Jose-Antonio, 520, 812 Marmolejo-Saucedo, José Antonio, 357, 506 Marshed, Md. Niaz, 1213 Matic, Igor, 720 Matin, Abdul, 1097 Melikov, Izzet, 1156 Miah, Abu Saleh Musa, 801 Minatullaev, Shamil, 1156 Mitic, Peter, 164 Mitiku, Tigilu, 3 Mrsic, Leo, 708, 720 Mukta, Sultana Jahan, 393, 922 Mukul, Ismail Hossain, 1000 Munapo, Elias, 491 Munna, Ashibur Rahman, 237 Mushtary, Shakira, 546 Mustafa, Rashed, 647, 659 N Nahar, Lutfun, 379 Nahar, Nazmun, 583, 880 Nair, Gomesh, 838 Nanthachumphu, Srikul, 938 Nawal, Nafisa, 801 Nawikavatan, Auttarat, 597, 1176 Nekrasov, Alexey, 1204 Nekrasov, Anton, 1204
1331 Nesvidomin, Andriy, 1222 Nggada, Shawulu H., 297 Nikitin, Boris, 95 Niyomsat, Thitipong, 154 Noor, Noor Akma Watie Binti Mohd, 1232 Normov, Dmitry A., 1319 Novikov, Evgeniy, 205 O Okpor, James, 430 Ongsakul, Weerakorn, 53, 633 Ortega, Luz María Adriana Reyes, 520 Ottavi, Riccardo, 1168 P Panchenko, V. A., 216 Panchenko, Vladimir, 63, 73, 84, 95, 103, 196, 205, 345, 1186, 1195, 1204 Pardayev, Zokir, 103 Parvez, Saif Mahmud, 964 Pathak, Abhijit, 237, 976 Pathan, Refat Khan, 583 Pedrero-Izquierdo, César, 357 Podgorelec, Vili, 187 Prilukov, Aleksandr, 135, 369, 1156 Pringsakul, Noppadol, 176 Puangdownreong, Deacha, 145, 154, 176 Q Quan, Do Viet, 406 Quenum, José G., 297 R Rahaman, Md Masumur, 1281 Rahaman, Md. Habibur, 754 Rahman, Ataur, 1047 Rahman, Md. Mahbubur, 964 Rahman, Md. Rashadur, 1295 Rahman, Mohammad Nurizat, 1232 Rahman, Mostafijur, 735, 1026 Raj, Sheikh Md. Razibul Hasan, 393, 922 Ramasamy, Sriram, 1082 Rastimeshin, Sergey, 1146 Redwanur Rahman, Md., 801 Reong, Samuel, 1242 Reyna Guevara, Zayra M., 1272 Ripan, Rony Chowdhury, 1071 Riyana, Noppamas, 938 Riyana, Surapon, 938 Rodríguez-Aguilar, Román, 520, 812 Romsai, Wattanawong, 597, 1176 Roy, Bakul Chandra, 621 Roy, Shaishab, 1026
1332 S Saadat, Mohammad Amir, 647 Saha, Rumi, 326 Saiful Islam, Md., 989 Samarin, Gennady N., 43, 1310, 1319 Sangngern, Kasitinart, 263 Sania, Sanjida Nusrat, 237, 976 Sarker, Iqbal H., 1111 Sathi, Khaleda Akhter, 989 Saucedo Martínez, Jania A., 1272 Savchenko, Vitaliy, 111, 1222 Schormans, John, 393 Senkevich, Sergey, 135, 369, 1156 Shahidujjaman Sujon, Md., 801 Shanmuga Priya, S., 848 Sharif, Omar, 1111, 1124 Sharko, Anton A., 440 Shin, Jungpil, 801 Shtepa, Volodimir, 1252 Siddique, Md. Abu Ismail, 754 Siddiquee, Md. Saifullah, 1047 Sikder, Juel, 607, 788 Sinitsyn, Sergey, 84 Sintunavarat, Wutiphol, 287, 1262 Sinyavsky, Oleksandr, 1222 Sittisung, Suphannika, 938 Sivarethinamohan, R., 848 Skoczylas, Artur, 766, 777 Śliwiński, Paweł, 766 Sobolenko, Diana, 111 Soe, Myat Thuzar, 570 Sorokin, Nikolay S., 19, 28 Stachowiak, Maria, 766 Stefaniak, Paweł, 766, 777 Sujatha, S., 848 Sumpunsri, Somchai, 145 T Tasin, Abrar Hossain, 237, 976 Tasnim, Zarin, 546, 559 Thaiupathump, Trasapong, 954 Thammarat, Chaiyo, 145 Thomas, J. Joshua, 345 Thupphae, Patiphan, 53 Tikhomirov, Dmitry A., 19, 28
Author Index Tikhomirov, Dmitry, 1146 Tito, S. R., 681 Tofayel Hossain, Md., 801 Tolic, Antonio, 708 Tran, H. N. Tran, 823 Tripura, Khakachang, 907 Trunov, Stanislav, 1146 U Uddin, Md. Ashraf, 476 Uddin, Mohammad Amaz, 583 Ukhanova, Victoria, 1146 Uzakov, Gulom, 103 V Vasant, Pandian, 1186, 1195 Vasiliev, Aleksey Al., 43 Vasiliev, Aleksey N., 43 Vasiliev, Alexey A., 440 Vasiliev, Alexey N., 440 Vasilyev, Alexey A., 1139 Vasilyev, Alexey N., 1139, 1319 Vinogradov, Alexander V., 19, 28 Vinogradova, Alina V., 19, 28 Vlasenko, Lidiia, 1252 Voloshyn, Semen, 111 Vorushylo, Anton, 111 W Wahab, Mohammad Abdul, 880 Wee, Hui-Ming, 1242 Whah, Chin Yee, 1242 Y Yasmin, Farzana, 1071 Ydyryshbayeva, Moldyr, 463 Yudaev, I. V., 216 Yuvaraj, D., 848 Z Zahid Hassan, Md., 418, 1000 Zaiets, Nataliia, 1252 Zaman, Saika, 1011 Zhao, Mingbo, 670 Zulkifli, Ahmad Zulazlan Shah b., 1232